Asset Creation Pipeline Design

Pablo Dobarro shares an outline of the design for a new, modern asset creation pipeline to be developed over the coming years.

During the past months I’ve been working on the design of what I call the “Asset Creation Pipeline”. This project will tackle all Blender functionality related to how you create characters, props, or environments with Blender. Here, we will refer to as an asset any object that is going to be rendered in a final scene, not the datablock definition used by the Asset Browser project. The goal is to have a design and a technical implementation plan for tackling long-standing limitations of Blender, such as painting and retopology, with a modern design ready for the years to come.

This post outlines the more general ideas of the design, without going into detail on how the implementation or the final product will look. More detailed documents about the whole design and implementation are being worked on and will be published soon.


Blender is software that has always been designed for interactivity, and most of its technical innovations were made in this field. This means features like Cycles rendering directly in the viewport with all of the scene contents as they are being edited, EEVEE, modifiers and geometry nodes, and the redo panel. We can also mention planned projects that have not happened yet, such as the full interactive mode or the real-time viewport compositor.

There is another way of approaching software design, which is prioritizing the handling of arbitrarily large amounts of data. The main selling point of such software is that it can actually render and edit the data, leaving interactivity as a “nice to have” feature when it is technically possible to implement. When you are designing for any data size, you can’t assume that the tools will simply scale in performance to handle it. Software designed like this will try to make sure that no matter how many textures, video files, or polygons you want to edit, you will always be able to do it in some (usually non-interactive) way.

The core concept of the new asset creation pipeline design is embracing Blender’s interactive design instead of trying to fit a different workflow inside it. 

So, development and new technical innovations regarding the asset creation workflow will focus on having the most advanced real-time interaction possible instead of handling large amounts of data. Performance will still improve (new features will need performance in order to keep real-time interaction working), but the features and code design won’t target the highest possible poly count or the largest possible UDIM data set. The focus won’t be on how high the vertex count in Sculpt Mode can be, but on how fast Blender can deform a mesh, evaluate a geometry node network on top of it, and render it with PBR shading and lighting.

Clay brushes working with EEVEE enabled (performance prototype). Supporting sculpting tools while using a fully featured render engine is one of Blender’s strengths. Improving that workflow is one of the goals of this project.

Focusing on this new design will allow Blender to have the best possible version of features that will properly take advantage of Blender’s strengths as an entire software package. This means things like:

  • The most advanced deformation tools to control the shapes of base meshes, which can be used in combination with procedural shaders and geometry node networks for further non-destructive detailing of the assets. Current tools like the Pose, Boundary, and Cloth brushes are poorly implemented in master because they have to handle the legacy sculpt data types. Addressing these limitations will make them work as they should.
  • The best possible version of Keymesh in order to combine fully rigged and stop motion animation in the same scene. 
  • A fully customizable painting brush engine, allowing the support for procedural painting brushes for textures, concept art and illustration. 
  • The ability to use advanced painting tools to control how procedural PBR materials or procedural geometry effects are applied to the surface of an asset, manipulating surface information stored in mesh elements and textures that can control both masks or general attributes.
  • Multi-data-type tools: the same brush will be able to deform meshes, volume level sets, curves, Grease Pencil strokes, or displacement vectors stored in a texture, without the need to bake them from a mesh.
Blender allows tweaking the shape, details, and surface of objects using different systems that interact with each other, providing real-time feedback and non-destructive editing.

Handling High-poly

We also know that handling large amounts of data is important for some studio pipelines. For those tasks, the plan is to handle the data from a separate editor that does not interfere with the rest of Blender’s interactive workflow. For meshes, this could be a separate editor with its own viewport optimized solely for rendering as many polygons as possible and for non-real-time mesh processing operations. This will keep the high-poly mesh data isolated from the rest of the scene, making the tools, real-time viewports, and the rest of the features perform as they should. Keeping this kind of data in its own container will also help with features like streaming directly to render engines without affecting the performance of the scene.

Modes

To fit all planned features of the new design, some bigger changes have to be made to Blender to properly organize the new workflow. Among other changes, this means that the modes and their roles have to be redefined. Modes will contain all features that share a common purpose, regardless of the target data type or workflow stage. Workspaces will be responsible for configuring the modes so they can be used for a particular task. This will allow handling a much higher level of tool complexity and flexibility when defining custom workflows.

These are the proposed modes for all object types, with their functional purpose in the pipeline. Note that the naming of the modes is not final, but their design and intended purpose in the workflow are:

  • Object: Manages objects in the scene and their properties.
  • Freeform: Controls the base shape of organic objects.
  • CAD: Controls the base shape of hard surface and mechanical objects.
  • Paint: Controls the base surface information of objects.
  • Layout/Topology: Prepares the data for procedural tools and animation.
  • Attribute Edit: Controls the source data for the procedural systems.
  • Edit: Does low level data layout editing when needed, allowing direct manipulation of the individual elements of the data type.

Modes related to other parts of the pipeline, such as Weight Paint, Grease Pencil Draw, and Pose, are not directly part of the asset creation pipeline, so they won’t be affected by this project.

Not only can workspaces, tool presets, and editors be reorganized for various tasks, but the user also has control over this customization for their own tasks. As an example, let’s define workspaces for different use cases of painting, all based on Paint Mode:

  • A hand painting workspace uses Paint Mode. It contains a viewport with a white Workbench studio light. The UI shows brush blend presets and color gradient presets, with open UI panels for color palettes and color wheels.
  • A concept art workspace uses Paint Mode. It is similar to the hand painting workspace, but it contains a 2D viewport and 2D selection tool action presets.
  • A PBR texture workspace uses Paint Mode. It contains a viewport with EEVEE enabled in material preview mode. The UI also shows a texture node editor and an asset browser with material presets.
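The split described above, where a mode defines a feature set and a workspace merely configures it for one task, can be sketched as plain data. The following Python sketch is purely illustrative: the `Mode` and `Workspace` types, their fields, and the feature names are hypothetical, not Blender’s actual API.

```python
from dataclasses import dataclass

# Hypothetical data model: a Mode groups features by purpose,
# while a Workspace only configures a Mode for a specific task.

@dataclass(frozen=True)
class Mode:
    name: str
    features: frozenset  # everything the mode can do

@dataclass
class Workspace:
    name: str
    mode: Mode
    viewport_shading: str         # e.g. "workbench", "eevee-material"
    visible_panels: tuple = ()
    enabled_features: tuple = ()  # subset of mode.features exposed in the UI

    def __post_init__(self):
        # A workspace may only expose features its mode actually provides.
        missing = set(self.enabled_features) - self.mode.features
        if missing:
            raise ValueError(f"{self.name}: unknown features {missing}")

PAINT = Mode("Paint", frozenset({"blend_brushes", "gradients", "palettes",
                                 "texture_nodes", "2d_canvas"}))

hand_painting = Workspace("Hand Painting", PAINT, "workbench",
                          visible_panels=("palettes", "color_wheel"),
                          enabled_features=("blend_brushes", "gradients", "palettes"))

pbr_texturing = Workspace("PBR Texturing", PAINT, "eevee-material",
                          visible_panels=("node_editor", "asset_browser"),
                          enabled_features=("blend_brushes", "texture_nodes"))
```

Both workspaces point at the same `Mode` instance and only their configuration differs, which is the core of the proposed design: modes carry capability, workspaces carry task-specific setup.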

Roadmap

Despite the amount of changes this design introduces, most of the development required to achieve the proposed product has already happened (some of it is in master with a different UI, some in separate branches or disabled as experimental). The first step would be to gather and merge all that development into an MVP version. This initial version will have mostly the same features as the current Blender master branch (no functionality or performance will be lost), just organized in a different way. Hopefully, this new organization and naming will make it clearer how the workflow and tools were intentionally designed, so they can be used to their full potential. For example, after the reorganization, the same sculpting functionality will still be available as a subset of the features of Freeform Mode, which now has a much broader scope.

After that initial step, more technical development can happen. This includes things like redesigning the texture projection painting code, refactoring the tool system for tool preset management, or building better overlays and snapping for retopology. With the design clear, those tasks can now happen faster, as they fit into a well-defined big-picture design.

It is also important to note that this design includes some tasks that require technical research and innovation, like painting displacement surface details. These tasks have a high risk of taking much longer to develop, but they are not crucial for having a functional version of the asset creation pipeline.

This project also depends on other ongoing development, like the asset browser and storage or the upcoming UI workshop. More detailed designs for the final features involving the asset creation pipeline will be discussed and worked on together with those projects.

62 comments
  1. Yikes, a lot of people here are against modes. I love having modes in blender, but I am a person who also loves Vim, so maybe I just work differently compared to others. I hate programs like Maya because they aren’t modal like Blender. Overall I think these are some awesome plans. Also, please make sure that high-poly-high-detail workflows aren’t just thrown out the window. I know you mentioned the dedicated high-detail, data-heavy workspace/window but big studios need to handle big data. Just make sure Blender isn’t stuck with stylized art. It needs to be able to handle making Jaegers from Pacific Rim with over a hundred UDIMs someday. ;) Also, extremely high-detail laser scanned assets from the real world are becoming far more common, including in game development (see Megascans and Unreal Engine 5).

  2. What’s the estimated time for the MVP? At least then we (users) can have something tangible to use, and can see this new design and how it will affect Blender in general.

  3. Please, no more modes. In fact, I would rather say find a better solution to remove them from existence entirely and make jumping between the workflows seamless.
    I know it’s a bit hard to do, especially given how Blender was architected, but at least for now try to improve the current modes and their own tools, even if it means that some features might have to be sacrificed, like being able to sculpt in Eevee for example.
    Also, for retopology, it’s best to try and incorporate the tools in Edit Mode itself so we can benefit from both, just like you have experimented with Poly Build. It’s a good start, I would even say better than many addons that do mode switching and are clumsy and very slow.
    The devs shouldn’t just think “okay, it’s a different workflow, therefore it needs its own mode with its own tools”; you also have to look at its frequency of use and its speed.

  4. How high is too high? How much data is too much data? Is Blender only to be used for creating low-to-mid-poly stylized characters? Because that is only one small niche in the sea of 3D asset pipelines and the de-emphasis of “high-poly” data in this post is concerning (and vague). There are *so many* people who would like to make the switch to Blender but they need *performance* in their 3D DCC app whether that’s for sculpting surface texture on a highly-divided model, or cleaning up the import of a photogrammetry or laser scan model for retopology and baking, or arranging an architectural scene with thousands of member objects.

  5. While I personally am excited and was awaiting something similar to this approach, I am surprised by the amount of people who are so confused and spreading misinformation in the comments.

    You have to take into account novices and users that are only familiar with a few aspects of the software and are unfamiliar with the rest; I think these are the ones confused the most.

    Bright ideas; please don’t cave in to the reactions of the confused. Consider a much simpler presentation with a slow video and voice narration. Maybe in an upcoming Blender Today stream?

  6. PERFORMANCE is the only thing that Blender needs to become an industry standard. The current workflow is quite good, yet the main drawback is performance. People use ZBrush not because it’s more intuitive but because it can handle much more geometry; the same goes for other software. All areas would benefit from this: photogrammetry, rendering, etc. Hardware doesn’t make up for unoptimized code; I often see people recommending hardware upgrades, but in fact they won’t change a lot, and the real solution ends up being buying other software. I think you should focus on this. This is my own thought.

  7. I can’t say I really support this effort at all.

    And here is why. There is not a pressing need to create assets interactively in Blender if it means sacrificing performance overall. I understand that there are people who want this. I mean I completely understand the need to keep it all in Blender. And that is great. We use Substance and Zbrush and so on to interactively create assets. The production pipeline bottleneck is not there. The production bottleneck is Blender.

    So here is the scenario to propose to those who have not thought this through all the way to the end. Let’s say this project goes as intended. Now you can use Blender for your end-to-end asset production pipeline. Then what?

    Now you still have to assemble all of these assets in Blender in one scene at some point and press F12. If Blender can’t handle large data sets, it’s going to crash. And all of those wonderful assets you created happily in Blender’s interactive environment are useless to you.

    Well, welcome to the crowd.

    Renderman, Arnold and other production-ready render solutions can handle all of that data.

    Blender can’t. It will crash. I know. I tested it one for one. And you can test it too if you don’t believe me.

    So you are either going to continue to use external apps to create assets or do it all in Blender. It all ends in Blender as your final bottleneck. For Blender this is negative gain overall. That is, if you want Blender to compete with what is being used in the studios. And if you as an artist or a team want to compete on that level as well.

    • I want to clarify something that was maybe not clear enough from the article.
      The plan is not to sacrifice performance! If anything, the performance in Blender will still improve overall once devs get time to work on it (like the current mesh editing improvements), but the focus will be on restructuring the modes and toolset to make sure that performance can be achieved without sacrificing interactivity.
      The main focus will be on the 3D Viewport, which will have (and mostly already has) an interactive toolset, modes, and various shading and rendering options that all work together. This means being able to model, sculpt, paint, shade, render, and use procedural node networks at the same time in the same viewport without having it slow down.
      This means that if you plan to add or work with an extremely high amount of geometry you will have the option to do this in a separate 3D editor that has a toolset and viewport shading that is optimised for that. In that editor the toolset would be streamlined to not cause any unexpected slowdowns (like not having expensive shading or tools that are meant for lower polycounts that would freeze) and instead offer a toolset that will perform well.
      So no matter what the task is, there is an editor or mode to switch to that will provide exactly the toolset that you need and that works well.

      Blender will also focus more on separating the data that you work on. Instead of only sculpting on an ever increasing polycount to add more detail which is a very destructive workflow, the ideal workflow in Blender would be to have these data layers as the base mesh, displacement textures, modifiers, geo node setups and shader nodes that can be interactively painted and edited.
      It’s a lot of work to pull this off, but it’s the direction that plays to Blender’s strengths.

      I hope this clears things up.

      • That high-poly mode you’re speaking about sounds like a fantastic application of USD Stages and Hydra. I know you keep saying it’s too much work... but it’s less work than writing it yourself.

  8. My contribution to this discussion is that we urgently need to do something about how we handle plugins and add-ons in Blender, one of the software’s biggest ecosystems. Going forward into this initiative, I think it is extremely crucial that we find a way to better access and expose these tools, and also fix the N panel. If you’re someone who has a lot of addons, or someone who builds your own tools for a custom workflow (a point highly emphasized in this article), how will it work in this new world that we’re headed to? Plus, some of these tools only show themselves in certain scenarios and will get hidden amid all of this. Perhaps this should be a UI workshop discussion, but among all the changes coming, we really need to address plugin and addon management in Blender and make it more flexible and accessible.

  9. This is a really interesting proposal, can’t wait to see how it takes shape and evolves over the coming months!

    If my understanding is correct, you’re using something like a behavior-driven approach to define these core modes, so that each mode can be specifically tailored to its associated behaviors in UX, development, and performance. Then, on top of these core behavior-based modes, we’ll be able to create custom workspaces to suit individual/specific setup needs, i.e. your Paint Mode setup example. Assuming I haven’t totally got that wrong, all I can say is… that makes a lot of sense :D.

    Maybe this has already happened or will in the future, but I’d be really interested to see a set of user stories ( and perhaps personas ) that help tie the modes and potential workspaces to a user and the user’s behaviors and goals.

    Is there a #channel setup to follow or get involved in this work as it progresses?

    p.s. 1000 thank u’s for the awesome development and design work!

  10. Hi, everyone!

    I like it. I think organizing the functionality we already have in Blender into modes that align with “Art” Creation Pipeline tasks is what we need. This should clean up a lot of the code and boost performance.

    A mode intended for high-poly assets should address a long-lasting need for Blender users. I think giving people the option to sacrifice some visual quality to gain performance to handle high-poly assets is good.

    I don’t know why people think that switching modes is like jumping off of a cliff. I love the separation between Edit and Pose modes for Armatures. Object mode is perfect for layout, and other modes should align with other tasks as well.

    I do agree with people that we need a more detailed description of what each mode represents. The Paint mode examples were clear, though. I also hope that we don’t just focus on Asset Creation Pipeline tasks. Blender is used for a lot more. As long as we keep that in mind, I don’t think there is a reason to worry. The Blender development team has been doing a great job so far, and I completely trust them, so take my money.

  11. Hi Pablo, sculpting in Blender is fun now, but most Maya and ZBrush users are not going to switch to Blender at this point. One of the big issues is performance. I’d like to recommend Blender to people, but when they hear that things like simulation, sculpting, PBR painting, and high-poly handling are very weak, they will be very hesitant to even try it. So rather than focusing on improving interactivity, we hope that Blender can hold its own in common production workflows: it doesn’t need to beat other professional software, but it shouldn’t be too weak either. So please focus on greatly improving high-polygon performance!

  12. If the problem is an overcrowded UI, we can try to organize the tools into sets that can be easily switched between and accessed.
    We could have mesh edit, freeform, CAD, and retopo tool sets all available in the current Edit Mode.
    The UI can be made intuitive, and several sets could be active at the same time. There are DCCs with a system like this, and it’s intuitive and easy to use.

    As for the modes, in my opinion they should be kept to a minimum and only used when required for handling and optimizing a specific type of data.

  13. Thank you, Pablo, for tackling this. For a long time it has been really necessary that someone have the courage to do it. I’m 100% supporting you on this vision of an inexorable future. Any bad side effects will surely be ironed out over time; this is just the foundation for Blender to continue evolving. The people who don’t understand this shouldn’t succeed in being the show-stoppers.

    • -Posted by an architect who makes a living by using Blender for full architectural scenes modeling and visualization-

  14. Is it possible to stop shrinking Blender’s area of use to character/asset modeling?

  15. Hard surface and mechanical objects are not even CAD.

  16. Hello Blender developers. There are a lot of things in this blog post, and for the most part they sound amazing. I’m very intrigued by the new modes, and even though some people seem to be concerned about them, I personally think they are definitely going to change many things for Blender in the future, and it will get ironed out through a ton of back and forth between the community and the devs. So even though I’m seeing a lot of people freaking out about the new workflow and modes, that’s not what concerns me as much as another part of the blog post, one that has been mentioned only briefly.

    Quoting Pablo: “We also know that handling large amounts of data is important for some studio pipelines”. With all due respect, I completely disagree with that framing. Pretty much anyone using Blender for any purpose instantly notices the performance issues, whether it’s simulation, sculpting, high-poly modeling, or animation with high-poly meshes. I don’t disagree with the idea of a real-time interactive Eevee workflow, but if it means the ongoing performance issues stay (Blender crashing with over 20 million particles in the viewport, becoming very laggy with high-poly meshes, and the same with sculpting), then I’m not in for it. I was really looking forward to Blender getting fully node-based geometry creation tools, node-based physics, node-based animation and rigging, and so on, but if the devs don’t fix the performance issues with sculpting, high-poly modeling, and high-res simulations in the viewport (laggy viewports, constant crashes, 2 fps playback), then I’m not in for it. So far, all the examples in the blog post suggest it makes Blender good only for cartoon and NPR art, not a high-end, realistic movie or game asset creation tool. I don’t think making Blender an Unreal competitor is a good idea; making Blender robust and able to handle high poly counts and high-res textures, so users can go back and forth between other software and game engines, is. I don’t know how many people would choose to sculpt in the Eevee render view just to see the pink hair of an anime character, over sculpting really high-detail models with millions of polygons for an AAA game or movie. I truly do not support this direction.
    This is the exact reason people are not simply moving to Blender from ZBrush and Maya: it’s performance. Blender Guru asked on Twitter what would be the biggest improvement to get people to switch to Blender for sculpting, and 99% of the answers were performance; that’s the single thing holding people back, and similar stories go for Maya users. Even if the goal is sculpting in render view, in reality pretty much all studios and pro freelancers use a separate program for texturing, so it would be similar to having a fancy matcap while sculpting. Everything in the blog post about a real-time interactive workflow looks like making Blender the best tool for cartoon-style creation or concept art, not a serious production tool for games or film. That’s okay, but for concept artists, tools and techniques don’t matter; what matters is the final rendered image, and if a new tool comes out tomorrow with a better and faster workflow, concept artists will probably switch to it. And if Blender can’t handle high-poly meshes, people won’t be able to use it for production work, so it would become obsolete. Please don’t do this. Please. I really like the idea of a real-time interactive workflow, but if that means sacrificing detailed sculpting of high-poly meshes, high-poly hard surface modeling, or simulations with millions of particles, then I don’t support this direction of Blender. I’m sorry.

    • Agreed.

      This is the elephant in the room.

      It is not an option to choose really. You can’t just decide to choose something other than performance.

  17. The biggest pitfall of Blender official development is this:

    Big, time consuming, architectural changes made with the promise of enabling new and improved features… instead of adding new and improved features and exhaust all that the current architecture has to offer.

    Remember active tools? They would be great if only we had more and more interactive tools

    Remember the separation of UV and image editors? It would be great if only the UV editor had more significant changes

    Remember the depsgraph? Where’s animation offsetting?

    It seems the structure of the Blender Institute enables this behavior. If decisions are taken institutionally, it’s only for big architectural design changes, not for a sprint of overdue features. And if individual features get in, it’s due to interest from brilliant one-person developers, as with the sculpt tools, the new Boolean, or Line Art.

    The problem is that if feature X is long overdue, then you’re at the whims of fate as to whether such a brilliant developer exists or not. The Venn diagram of amazing developers and people interested in feature X is that narrow.

    There should be a middle ground: an institutionally sanctioned way of getting developers to work on individual features that are asked of them, not just individual features that they want, nor just big architectural or design projects.

    • Sorry, but I hope future readers pay little mind to your comments. You borderline insulted Pablo because somehow you decided, inexplicably, that he’s running a one-man show when there’s absolutely nothing to suggest that and everything to indicate that things are done as a team at Blender HQ. Please think twice before posting chicken-little style FUD. It’s not constructive at all.

  18. this is really controversial and confusing.

  19. IMHO.

    Stop all blender development for the next version, and focus purely on Core Architecture and Code Cleanup. Then you won’t have to care about “Handling High-poly”… it just will.

    Blender is still a giant cluster of small code snippets that “kind of” work well together. If you want Blender to have a true and honest impact on the VFX and CG animation industry, the core of Blender needs to be cleaned up. A new fluid solver, or sculptor, or poorly designed particle nodes, or whatever flavor-of-the-week add-on does nothing to change the fact that the internal functions and dependencies are a mess. It’s bloatware.

    At some point, someone will realize this and fork Blender at a version to make a Pro version. It will be a full overhaul of the core code and strip out all filigree. Someone will show that it’s better to do a few things VERY well than a ton of things just fine. *looks at Epic with fingers crossed

    • I agree with your opinion 100%.
      Flexible integration of workflows is a major attraction of Blender. I totally agree with the new custom workflow in the long term.
      However, it is inevitable to use different types of software in the 3D industry. A full C++ rewrite of the source code, stable geometry kernels (T86869, T89181), and standard file format specifications are more important.

    • Fully agree!!

  20. I think this is fantastic and exciting news! I’m a long time user, and I trust what you folks are working on. If I do get vocally negative, it’ll be about something I truly believe is important, but all of these changes sound exciting.

    It will be interesting to see!

  21. Obviously all of this will take a while to flesh out and feedback/back and forth discussion will nail down specifics more before we can respond to the ideas with more detail, but overall the vibe of this sounds mostly positive and a focus on asset workflow is of course absolutely great to hear.

    Personally for my needs, what I’d like to see in this focus on asset workflow is a few things:

    * Texture Painting

    Blender empowers me in so many ways when it comes to meshes, but when it comes to manipulating pixels on a texture, I sometimes feel utterly helpless in Blender, forced to work in very slow and very manual ways. What I’d love to see is the ability to specify multiple input values/texture sources and multiple texture targets for a single paint brush stroke.
    Use case: Imagine texture painting the outside hull of a spaceship with PBR textures, and being able to create a “bolt” texture brush that includes PBR textures for the bolt, including base/roughness/metal/normals/displacement/etc, and being able to paint a bolt or line of bolts onto the edge of a panel in just a brush stroke.

    * Material Layering

    Blender’s material system is powerful but it has one major limitation in my opinion. Layering.
    Let’s say I have a rock asset and I want to layer some graffiti onto it. How? Perhaps some kind of mesh decal, manually extracted and UV mapped, with a new graffiti material created for it? Or a separate instance of the rock material that includes extra nodes for blending the graffiti nodes over the top of the rock?

    Wouldn’t it be easier if we could just add more than one material to a mesh and have them automatically layer on top of each other? For sure it would be about 100x faster.

    * Baking

    It’s no secret: everyone knows Blender’s baking workflow is in desperate need of a rework. I hear positive noises that the Cycles X project will perhaps give the baking workflow some love, which is absolutely great to hear. Ideally, though, Blender should have built in some simple workflow for setting up a bunch of high-poly and low-poly objects in batches, specifying texture outputs and which values go into which channels, and choosing “Bake” to output all of them in one go. Right now, the workflow of needing to have a disconnected texture node selected in the shader editor to do a bake is comically tedious and slow, which makes having to rebake an asset’s texture maps painful. Not to mention that the outputs themselves are in many cases unhelpful, such as diffuse, which doesn’t output the base color of Principled shaders but instead the calculated diffuse value used by the render engine after taking specular and other factors into account.

    Baking is in desperate need of some love, and the saddest part is that a perfectly serviceable upgrade for baking was submitted to developer.blender.org years ago (D3203) and was ignored and sat on due to scope-creep discussions about wonderful ideas for “an even better system” that no one has since had time to implement. The upgrade could have been merged back when it was presented in 2018, and we could have been enjoying it for three years while waiting for some fancy node-based baking system to happen.

    * Asset Management

    I really love everything that’s happening with the asset browser right now, but what I think it lacks is a quick operator to send an asset from the current Blender file to an asset repository. That’s something I think should be addressed.

    * Overall

    I love the focus on asset workflow and I’m glad to see it.

    I do have some concerns over some of the blog post but I know it’s too early to get into those yet.

    I will say, though, that probably my biggest concern is the idea of Blender deciding to focus on interactivity rather than on being able to handle high polygon workflows, along with the idea of a separate mode for high polygon workflows. Interactivity is nice, but I don’t want to sacrifice being able to render and edit large scenes or high poly objects for it. I don’t want to trade away things like sculpting high poly characters with detailed skin pores just for stuff like sculpting in Eevee with real-time shadows and SSAO, which, to be honest, I might not use even when it’s possible, since it might get in the way of seeing the details I’m sculpting.

    • I agree with the concern about prioritizing interactivity over being able to handle high poly count workflows.

  22. People seem to be freaking out at the idea of more modes when, if they read the post, they would see that the final implementation hasn’t been revealed or possibly even decided on at this point. People fear change, but change is what allows software (and all of life) to survive by adapting.
    The Blender devs know what they’re doing and Blender is community driven software, people need to have a little more faith in the process.

  23. Instead of adding more modes, it would make more sense and be far less work to add a contextual UI layout for every proposed mode.

    Object: This already exists.

    Freeform: This is essentially sculpt mode.

    CAD: This would make more sense as a UI layout.

    Paint: Exists. Also surface information is not necessarily per object.

    Layout/Topology: Does not need to be a separate mode, instead it should be a node graph like geometry nodes.

    Attribute Edit: Should be added to node graph.

    Edit: ‘Low level data handling’ IS 3D modeling. Edit mode should be left as is.

    • For the most part this is exactly the plan. Most modes will still exist as they are, but with a clear design in place that allows their toolset and UI to grow and become more focused on the tasks the modes are meant for.
      The thing is that we currently have 3 modes for painting (vertex/texture/sculpt vertex colors) and this needs to be unified. Edit & sculpt mode will be almost exactly like they are, but sculpt mode will no longer be only for traditional clay sculpting, covering more advanced, free ways of transforming geometry instead. Attribute paint will be an evolved version of vertex paint & weight paint, so it will let you edit/paint any generic attribute instead of just two (this is mostly tied to the geometry nodes project).
      This design is not meant to disrupt any workflows but to finally make them clear and able to grow.

  24. Stop freaking out. Please, people, calm down.
    For example, CAD mode = Hard Ops or Fluent or DECALmachine type tools being treated as first-class citizens and not 100% relying on add-ons.

    I think sculpt and pixel/vertex paint are already planned to be joined into one mode, while weight paint is going to be enhanced and renamed to attribute paint.

    I don’t understand the people saying object AND edit AND sculpt AND paint should be a single mode. Where they do that at?

    • This. I don’t get why people are freaking out about it.
      As far as I can tell this plan just means we’re going to have more, better tools, and many tools will have broader usage (the data-agnostic modes and tools mentioned in the post).
      If you’ve used RetopoFlow or Hard Ops or whatever, you know how they essentially have their own modes, because you wouldn’t be able to just stuff their tools and visualization into edit mode with everything else.
      This plan is just facilitating better, more specific tools and workflows like those addons do, and it probably means we’re going to get tools like those addons, but natively in Blender.

      It just seems like good things to me.

  25. I do understand that the complexity of Blender’s growing feature set can make it difficult to navigate using the current UI, hence the temptation to add different object modes to simplify access to the feature set, making it quickly available to the user to speed up workflow. Complex and large data sets also introduce a speed penalty that degrades the usefulness of an interactive realtime viewport.

    Perhaps keep the current modes and add methods to optimise performance, UI consistency/complexity and contextual access to features for specific tasks and workflows, submodes if you like.

  26. The sound of breaking things up into these separate modes is a little concerning… it might be a much better experience if the mode changing happened under the hood, in the background, rather than the user having to manually switch modes.

    In my opinion, sculpt and edit mode should work in the same “mode”. Or just remove modes altogether from the user experience so from a user point of view it’s all simultaneous.

    I like the idea of having more CAD-like tools available for hardsurface, but I personally don’t want to have to switch up between modes just to access one tool. I’m already going back and forth between sculpt and edit mode for one or two tools in sculpt mode that I can’t access in edit mode, it would be MUCH easier to have them all available always.

    At least, at the user experience level. Just my opinion…

    • Also, would CAD-like tools/workflows have to have a specific object type? Maybe I like to mix subdivision surface tools/workflows with booleans/bevels? Perhaps make use of sculpting tools too for better shape refinement afterwards. And then make use of some retopo tools after that. That’s 4 modes the way it seems to be described here. Being able to access all of those beautiful tools just in edit mode would be grand!

      Though I think I understand that the data is being handled a little differently with sculpt and edit mode for performance.

  27. I like the idea of high poly being in a different context. However, the rest is really bad.

    Blender since 2.4 & Houdini since H14

  28. Pablo, be aware. The magnitude of these changes is such, that if done badly, it could bring about the demise of Blender. Not saying it will, but if something ever does, this ticks the boxes. Never have I feared for the future of Blender, not in 2.4, 2.5, or 2.8, until now.

    You’re young, maybe emboldened by being aware of your own brilliance and of how fast you learn. But don’t let the ego take you. This isn’t just your tool to fit your way of working; it’s everybody’s way of bringing food to the table.

    • Such a project is not to be tackled by a single developer. What Pablo presented here will be ironed out in an upcoming workshop in Amsterdam where the proposal will be more tangible for the end users.

      As it is, it presents the big picture for the developers, on what can (should?) be used to guide the decisions when building the tools in Blender. Basically, never sacrifice interaction and feature set.

      Once the design is fully fleshed out with a sound technical plan, it can be a foundation for the years to come, for most of the developers and artists working in the project (Blender).

      There is also nothing very surprising or even disruptive in what Pablo presents. A lot of the tools in Blender are already being developed in that direction (for example, the way geometry nodes play along with weight painting and Cycles shaders). This is mostly a formalization of that design.

      • It would be nice to have some visual information on this proposal; right now it is hard to understand what direction it will go.

  29. I think there are too many modes already. The effort should be to reduce them, probably merge sculpt and object/edit modes.

    • Indeed there are many modes. And part of the problem is that the modes are static, and we keep needing to add more modes whenever a task doesn’t fit 100% within one of the existing modes.

      In order to address this, the mode design can be seen as something customizable. This way, templates can let users pick from a more varied, pre-polished set of modes, while at the same time allowing users to create their own modes and have them be the only ones around.

      Having more modes doesn’t bloat the interface. Quite the opposite: this design forces the development to abstract the definition of modes so that artists can pick a handful of modes for all their needs (in this scenario you may even have a single mode with all the tools you use, if you are a specialist).

  30. I think we need a proper explanation of how this “mode splitting” is going to work, otherwise you guys are going to create confusion & panic, if you haven’t already.
    From what I understood, you guys want to make custom workflows tied to workspaces so people would use them more, but at the cost of adding extra modes. This idea needs to be fleshed out so we can see the pros & cons of such changes.

    • Hi, fundamentally a mode can be customized to really fit within a workflow. Effectively, a mode can include the set of tools the artist expects to use for a part of the pipeline. This builds on top of the 2.8 design, where the workspace (tools + rendering) should be tailored to the task at hand.

      • Overall this sounds really exciting, though I am not sure I entirely understand what the final result is currently thought to look like.
        Let me try to paraphrase to see if I got it.
        Basically, the editor modes will be separated by tasks/projects rather than being programmatic modes, correct?
        So Blender overall will be seen as a toolbox with everything that’s available to edit and create. Each mode will then pick tools from this toolbox and have them all available in an ideally super performant realtime viewport solution, right?
        I am not sure if I understood the thing about large data meshes right – how they are supposed to be split. Does this mean that there might be a separate “high poly mesh data” workspace, or rather a second Blender instance running on the side?

        Would this mean that users can create their own customized workspace easily, or is it going to be (ideally) so optimized that it has to be done in source code?

  31. As a representative of neither extreme, I say: ‘Okay.’

  32. These ideas seem highly irrational.

    Please stop trying to break up Edit mode. There is no benefit. We don’t want to switch modes all the time. Art doesn’t neatly fit into these CAD/organic pigeon-holes like you think.

    • Completely agree, we have enough (maybe too many?) modes now. Object mode to deal with objects, edit mode to deal with mesh editing, and the sculpt/paint modes, which should be the same IMO. Anything more than that is just overcomplicating things.

      • How would you effectively combine the edit, sculpt, and paint modes? I can’t see a way to do this that wouldn’t degrade the user experience with things like burying features deep inside sub-menus, eating up a large share of the viewport with tool icons, or removing feature sets from Blender entirely.

        • I’m not proposing to merge all the modes into one; I said there should be at most 3 modes: Object mode, Edit mode, and Sculpt/Paint mode. And if the CAD mode mentioned in the post is about real parametric design, then that of course would be a new one. Maybe the post is just too vague to have a clear idea about the actual plans.

          • Maybe I misread your post. Were you just referring to merging sculpt and paint modes then?

    • Very interesting, I’m curious to see where this will go.
      I see some concern that this will restrict and fragment workflows, but my impression of this plan (so far) is that the way we work won’t be much different, but that redefining and adding new modes will allow a much wider range of useful tools which currently don’t neatly fit into any mode of Blender (and therefore are never developed and implemented).
      I’d like to know more specifics: What tools are going where? What new tools will be created for each mode? For example, if we’re talking specifically about editing meshes, how will CAD mode be different from Edit mode?
      Will CAD mode be like, say, Sk*tchup, where you aren’t dealing exactly with “direct manipulation of individual elements” (verts, edges, faces), and Edit mode will more or less just be the same as it is now?

    • Hi Piotr, the benefits of the proposal are elaborated in the article. Let me rephrase what is presented there; hopefully this makes them clearer:

      * Better synergy between tools (e.g., the right tools in a mode for a given workflow).
      * Full interactive editable and rendering workflow (e.g., sculpting with EEVEE).
      * State-of-the-art (EEVEE, geometry nodes) procedural and parametric painting combined with advanced sculpting.

      Plus a dedicated editor for mesh editing operations that involve massive amounts of data that can’t be handled interactively.

      • I kind of had a kneejerk reaction there, apologies.

        I’ve since better understood the plan based on what was posted on both d.b.o and blender.chat and now I’m onboard with it. Good luck :)

    • Blender’s actual internal structure is not really prepared for high-poly meshes (several million polys) yet. It is possible to do some work on those, but it takes way too much time, is laggy, and leads to crashes.

      Having a separate workflow for those kinds of objects makes them manageable. It does not make sense to prepare the normal edit mode for this kind of heavy lifting; typically you will do only very specific things with these kinds of objects.

      As an analogy, it’s not really a good idea to chop a tree with a carving knife. Also, carving doesn’t really go well with a heavy chainsaw. Getting the right tool for the job does sound like a good idea to me.

  33. It is great to see asset-making being completely overhauled and I am thrilled to try things such as freeform mode and an updated texture painting workflow. Definitely going to keep an eye on that!
    However, one thing that wasn’t mentioned is the baking redesign. We really need a way to effortlessly prepare our assets for export. An asset made in Blender doesn’t always stay in Blender, and if we are talking about pipelines and such, right now baking seems like a main bottleneck, at least from where I stand.

    Thanks for all the stuff you guys develop for us to use freely!

    • I completely agree with you; baking in Blender is still a headache.

    • Hopefully rigging will soon get an improvement too, and when the rigging part gets a node system on it, boom, magic happens 😁
