In late June a Nodes workshop took place at the Blender HQ in Amsterdam. The goal of that workshop was to map out the future of the Geometry Nodes and simulation systems. Besides formalizing some design decisions for the projects, a lot was done regarding the solver design. However, there was still one big inconclusive topic: a better attributes workflow.
Now, in August, it was time for another round of meetings to bring this home.
The Geometry Nodes system has already been used in a few productions. Nonetheless, what is now in Blender (2.93) has clear shortcomings. The main one is that artists should be able to operate directly on attributes. How to do that, however, was not clear.
After the June workshop the Geometry Nodes team worked on different proposals, and opened a discussion with the community:
- Proposal for attribute socket types – Jun 11
- Fields and Static Attributes – Jun 30
- Attributes Sockets and Geometry Nodes 2.0 – Jul 1
- Expandable Geometry Socket – Jul 11
- Files + workflows and the impact of Geometry Nodes design changes – Jul 13
- Static Attributes, Attribute Sockets and Node Group Processors – Jul 16
To move the discussion forward two different designs were chosen:
- Fields gives the most flexibility when building the node-tree. Its concept is rather abstract, but very similar to the existing shader node-trees.
- Expandable Geometry Socket has a simpler metaphor. There is a single data flow in the entire node-tree (either geometry or a list of values).
A week of development was reserved for each design to build a working prototype. Both were made available to the community for testing, and were ready in time for another round of face-to-face workshops in early August. In the end, Fields was picked as the best option to move forward.
Data Flow and Function Flow
Fields introduces two distinct flows in the node tree: a data flow and a function flow. This design is closer to the underlying software design, which is then exposed to users in a true “form follows function” fashion.
Data Flow
The data flow goes from left to right and operates directly on the geometry (usually a geometry is passed to the node, which outputs a new, modified geometry).
Function Flow
The function flow is a sequence of function nodes that is passed to the geometry nodes and read backwards, as a callback.
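To make the callback idea a bit more concrete, here is a minimal sketch in plain Python. This is purely illustrative and not the actual Blender API: the field is just a function that the geometry node it gets plugged into evaluates on its own geometry, element by element.

```python
# Conceptual sketch only (plain Python, not the Blender API): a field is a
# function that the downstream geometry node evaluates on its own geometry.

def scale_by_field(geometry, scale_field):
    """A "geometry node": takes a geometry, evaluates the incoming field per
    point, and outputs a new geometry (the left-to-right data flow)."""
    factors = [scale_field(geometry, i) for i in range(len(geometry["position"]))]
    new_positions = [(x * f, y * f, z * f)
                     for (x, y, z), f in zip(geometry["position"], factors)]
    return {**geometry, "position": new_positions}

def distance_to_origin(geometry, index):
    """A "function node" output: it carries no data of its own, only a recipe
    that is resolved once a geometry node evaluates it (the function flow)."""
    x, y, z = geometry["position"][index]
    return (x * x + y * y + z * z) ** 0.5

mesh = {"position": [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 3.0)]}
result = scale_by_field(mesh, distance_to_origin)
```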
Prototype and Example
The complexity of understanding how the different flows work is overcome when the system is used hands-on. Artists familiar with the shader system may see similarities between fields and the shader node trees.
The prototype didn’t manage to cover all the upcoming design changes (e.g., to more easily distinguish between the different flows). But it helped to sell the idea of operating directly on attributes, and the flexibility of working with fields. Artists don’t need to worry about attribute domains (e.g., converting a face attribute to a vertex attribute), and are free to change the topology of the geometry while the function flow still works as expected.
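For illustration, here is a hedged sketch of the kind of domain conversion the system performs implicitly (plain Python, not the actual Blender code): a per-face value becomes a per-vertex value by averaging the faces that touch each vertex.

```python
# Illustrative only: convert a face-domain attribute to the point domain by
# averaging the values of the faces surrounding each vertex.
def face_to_vertex(face_values, faces, vertex_count):
    sums = [0.0] * vertex_count
    counts = [0] * vertex_count
    for face_index, face in enumerate(faces):
        for vertex_index in face:
            sums[vertex_index] += face_values[face_index]
            counts[vertex_index] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Two triangles sharing the edge (1, 2): those vertices average both face values.
faces = [(0, 1, 2), (1, 2, 3)]
print(face_to_vertex([1.0, 3.0], faces, vertex_count=4))  # [1.0, 2.0, 2.0, 3.0]
```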
There are still design topics that need to be addressed, but the main story of the system was better defined and iterated on during the workshop.
Definitions
- Geometry sockets/noodles: Data transport + operation order (geometry nodes).
- Attribute and field sockets/noodles: Functions “programming” (function / field nodes).
Names may change when this moves to the final documentation. But the underlying definitions are valid already.
Fields
- A function with variables (such as attribute names).
- Evaluates on a geometry.
- Evaluates to data.
- Can be combined into new Fields.
Attribute
- Geometry data stored per element (vertex, edge, …).
- Can be accessed as Fields.
- Are propagated through Geometry nodes.
- Types:
  - Built-in: always available global names (position, material_index, …).
  - Static: user data or generated (UVs, selection, …).
Geometry Node
- A node that operates on a geometry.
- Can accept Fields as inputs (evaluates input Fields on the node’s geometry).
- Can create attributes which are outputted as Fields (generated Static attributes).
- Can edit/write only to Built-in attributes.
- Outputs a new geometry.
Function Node
- Can accept Fields as inputs.
- Has no Geometry input.
- Doesn’t evaluate Fields inputs (since it has no geometry).
- Outputs Fields.
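A minimal sketch of the Function Node rules above, in plain Python rather than the actual API: combining two fields just yields a new field, and nothing is evaluated until a Geometry Node supplies a geometry.

```python
# Conceptual sketch only, not Blender code.

def attribute_field(name):
    """Field that reads a named attribute; evaluation is deferred."""
    return lambda geometry, index: geometry[name][index]

def add_fields(field_a, field_b):
    """A "function node": combines fields into a new field without evaluating
    them, since it has no geometry of its own."""
    return lambda geometry, index: field_a(geometry, index) + field_b(geometry, index)

combined = add_fields(attribute_field("weight"), attribute_field("offset"))

# Only a geometry node has a geometry to evaluate the field on:
geometry = {"weight": [0.1, 0.5], "offset": [1.0, 2.0]}
print([combined(geometry, i) for i in range(2)])  # [1.1, 2.5]
```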
Next Steps
There are still a lot of open design questions on how to integrate Fields into Blender:
- A new socket design for Fields.
- Function and Geometry nodes should look different.
- Visual distinction between the data transport and the function flow.
- Clear relation between an attribute and the geometries that can access it.
- Spreadsheet editor to show Built-in, Propagated, and Static attributes (through a Viewer node).
On top of the designs, there is work to be done for backward compatibility. This is all scheduled for Blender 3.0 and will be the focus of the Geometry Nodes module. That means the core developers and the community will prioritize this over adding new nodes/features.
After that, the focus will shift to expanding the Fields design for future node systems.
Try it Yourself
Download the experimental build (temp-geometry-nodes-fields-prototype) and the doughnut file to get a feeling for the new system. Remember that experimental builds should not be used in production, as they might be unstable or corrupt your data.
Could you add an option to work vertically for the people who cannot stand horizontal graphs, please?
Same for your updated compositing graph. The entire world is compositing vertically, please stop being stubborn with your horizontal-only nodes.
For many people, thinking with a vertical flow is essential, and the best software in the industry shares this philosophy as well. Why are you not giving your users the option??
I will be honest, the previous design was good and easy to learn. As long as this improves on that, everyone should be happy.
Excellent Work!
I was wondering whether there were any plans to add a “Script” node in geometry nodes? Users would be able to write python scripts to modify the geometry in different ways which could greatly expand the capability of the system. (Similar to the script node in Animation Nodes)
Great work as always.
A contribution from the community for this would be welcome. But at the moment there are no plans for this. Until Fields, the API was still being stabilized so it was a bit early for custom nodes.
Hi Dalai;
This note is about a “Fields” branch Physics Solver.
I’m thinking that if the “Attribute” data of each and every vertex, edge, etc can be user defined, then you have the basic structure of a physics solver.
For example, all a rigid body solver really does is take the “attributes” of an object (dimensions, volume, mass, etc), calculate the center of mass and update the position, velocity and angular momentum based on any external forces or collisions if present. It’s just code doing calculations on “internal” attribute data reacting to “external” attribute data.
To allow the new “Function Nodes” to become physics solvers, they would need the ability to solve and store the new “Attributes” for position, acceleration, etc for that specific kind of solver.
For example, a solver could calculate “wind” or “magnetic” forces, and a different solver further down the pipeline could solve for buoyancy, then another for collisions, etc. That way, rigid bodies, fluids, particles, etc. could get processed in a unified way by different solvers as the data makes its way through the pipeline.
The ability to chain custom solvers would give people the ability to create incredible simulations.
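As a rough sketch of what I mean (hypothetical Python, nothing that exists in Blender today): each solver only reads and writes per-element attributes, so solvers can be chained in any order.

```python
# Hypothetical sketch: solvers as functions over an attribute dictionary.

def gravity_solver(attributes, dt):
    attributes["velocity"] = [v - 9.81 * dt for v in attributes["velocity"]]
    return attributes

def integrate_solver(attributes, dt):
    attributes["position"] = [p + v * dt for p, v in
                              zip(attributes["position"], attributes["velocity"])]
    return attributes

def step(attributes, dt, solvers):
    # Wind, buoyancy, collisions, ... could be slotted in anywhere in the chain.
    for solver in solvers:
        attributes = solver(attributes, dt)
    return attributes

state = {"position": [0.0, 1.0], "velocity": [0.0, 0.0]}
state = step(state, 0.1, [gravity_solver, integrate_solver])
```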
The big issue is users being able to write custom physics solver nodes themselves. The previous poster Stefan Guiton had asked about adding a Python scripting node. While I fully support that, I also think there would need to be an official framework and an easy-to-follow guide for people to create custom physics nodes.
Anyway, perhaps I should create a new “Fields Physics Solver” thread and add this idea to it.
What do you think?
> I was wondering whether there were any plans to add a “Script” node in geometry nodes?
There are no specific plans for that currently. I do think that we will support custom scripts at some point, but it does not have high priority currently.
Many technical decisions have to be made before we can have one or more proper script nodes.
> This note is about a “Fields” branch Physics Solver.
The architectural difficulty with simulations (physics solvers are a special case of that) is not so much how the individual operations work (computing forces, integrating positions, …) but where the simulation state is stored, how new data gets into the simulation, and how the simulated data can be processed further after the simulation. Some more information about that can be found in a previous blog post: https://code.blender.org/2021/07/nodes-workshop-june-2021/.
The simulation step itself is then conceptually fairly straightforward: take the last simulation state and new data in, and output the new simulation state and possibly other data. For many use cases it will be great if this simulation step can just be a geometry node group that is called on the simulation state for every time step. However, for some types of simulations it might be nice to work on a different abstraction level (e.g. event-based particle systems). It’s possible to chain these different systems of course, to create even better simulations and give more flexibility.
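As a minimal sketch of that description (hypothetical names, not an existing API): the step is just a function from the previous state plus new inputs to a new state, called once per time step.

```python
# Hypothetical sketch of the simulation loop around a single step function.

def simulation_step(previous_state, new_inputs):
    # In practice this could be a geometry node group evaluated per time step.
    new_state = {**previous_state, **new_inputs}
    other_data = {"element_count": len(new_state)}
    return new_state, other_data

state = {"positions": []}          # initial simulation state
for frame in range(1, 25):
    state, other = simulation_step(state, {"frame": frame})
```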
> The architectural difficulty with simulations…
Thank you for replying Jacques.
In my very limited understanding of your “Fields” architecture, it seems that the logical place to store simulation state is in the “attributes” of each object. The “attribute” data is also where new simulation data could enter and would be available for post simulation processing.
Obviously you would have thought of that, so I am confused as to why there is a difficulty in storing simulation state. Perhaps I am assuming that the “attributes” of an object means it would carry its own data structure with it to store the simulation state for that object.
Would you be so kind as to expand on the difficulty?
When I tried to open the fields version, the startup page crashed
Try launching it with `--factory-startup`
Might also have to do with the way you extract the compressed file (happened to me). Extracting with 7-zip solved the issue
I prefer/recommend that the data flow stays inside one connection, and that the node filters what data to use as input and what to output, instead of one connection per attribute.
For a simple donut example it’s OK, but for big projects this will not scale well.
If Houdini and Nuke work this way, it’s for a reason.
Having one connection per attribute is rather powerful and we were going after this even with the “geometry socket expander” design. In other software this happens in dedicated contexts, but it is still there.
With Fields we have them in the same context as the rest of the data flow. And this is one of the biggest design challenges we still have to improve for Fields: how to effectively communicate the “context” of a certain attribute/field (i.e., which nodes a field can be connected into).
Well at least, thank me in a few years in your post “fixing the spaghetti graphs problem” :)
I’m still very sad that Geometry Nodes didn’t go with a more low-level approach, with pre-made (by developers or the community) high-level “function” nodes (basically groups) that can be exposed and modified by more experienced users (yeah, Houdini-like). I want to be able to tackle and shuffle every bit of .blend data on a per-element basis, not be limited by what’s been exposed by developers.
What kind of building blocks were you expecting?
The existing nodes are still rather low level. Proper high-level nodes are still planned once the Asset Browser project gets released with nodes support.
I believe you can still access, create and manipulate attributes, right?
Well, the most obvious example is that abstract “geometry” socket, which holds a bunch of data that I can’t dissect (yet) into separate components so that I can work with those datasets (vertex positions, face normals, face centers, etc.) as with numbers and vectors, and then, in the end, put those numbers back together and construct something new.

Simplest example: I want to construct a mesh of four vertices in specific locations. It would be great to create four “Vector” nodes, join them in some “Vector List” node, and then turn this vector list into a vertex list in some kind of “Construct Geometry” node that takes care of creating the required component attributes and passes the whole thing to a mesh datablock. Simple as that (and already implemented in AN in a similar fashion). At the moment it’s not just difficult, it’s simply impossible.

Manipulation of geometry happens in a cumbersome and extremely linear fashion (which, I know, is already being addressed by the developers). Take most of the “Attribute …” nodes. Why would I need “Attribute Math” when we have the regular “Math”? Isn’t it much more logical to create a single node that takes a Geometry and outputs a given Attribute as a bunch of numbers depending on the attribute type (Vertex Locations -> Vectors, Vertex Weights -> Floats, etc.), so that I can manipulate those numbers with low-level nodes without the countless cluttered Attribute Math, Attribute Fill, Attribute Map, Attribute Whatever nonsense, and at the end of these manipulations write them back as attributes or a completely new geometry?
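For reference, the construction I mean can already be sketched with Blender’s existing Python API (run inside Blender; the object name is just an example):

```python
import bpy

# Four "Vector" inputs joined into a list, then turned into actual geometry.
vectors = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2, 3)]  # one quad built from those four vertices

mesh = bpy.data.meshes.new("ConstructedQuad")
mesh.from_pydata(vectors, [], faces)  # vertices, edges, faces
mesh.update()

obj = bpy.data.objects.new("ConstructedQuad", mesh)
bpy.context.collection.objects.link(obj)
```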
Yes, from my perspective, I can’t know about all the design limitations you developers are facing. Maybe there is a reason GN is designed the way it is now. But it’s just odd, given that there is a proven working concept with much… prettier design solutions (given the API limitations), created by the very same person who’s pushing GN development now :))
Agree 100%. I was a bit turned off from geometry nodes once I started using them. Instead of Attribute Randomize why don’t you just have a random number generator that I can plug into anything?
Also why all this complexity with attributes, fields and functions? Why not just a simple data flow graph with data and functions that operate on that data?
I thought it would still go this route… like a particle system you could customize with some “Presets”. This would make Geometry Nodes way more user-friendly while still keeping it complex for those who need it.
This new version seems to be more intuitive this way, but some presets would still be very useful.
Might be off topic, but I’m noticing a lot of “node janitoring” in the video, picking and shuffling them around, when the nodes would fit in a swim lane chart. You’ve already arranged them like that manually.
I think that could be automated.
Auto-snapping nodes to vertical lanes, drag and drop between lanes to create new lanes and moving nodes up or down inside a lane without screwing up the node layout would save quite some time.
When I tested this, I tried Extrude and Move, assigning a random vector, but it seems to fail; maybe it is still WIP? Everything else seems pretty solid, and I quite like the Index node, Position node, etc.
The prototype is a work in progress and will only be developed a bit further before development of the real implementation happens.
As mentioned above, it is great to have a tree instead of a long train ;) Geometry Nodes has been around for 2 versions of Blender, so not that long. In many other, more established areas, Blender has broken compatibility with how previous features worked (especially with the Python API). So some compatibility with the old nodes would be good, but why not invest more time to develop the system further instead? Have all the modifiers ported, more instancing options like face instancing, and some mesh edit functions, etc. If someone wants to use the old system and has very detailed node setups, they can always go back to v2.93 or a v3.0 Alpha prior to the Fields geonodes landing in master.
I’ve always found that the beauty (and sometimes the controversy) of Blender is that there is no holding back when changing things, which is for the most part for the better! (still not happy about the single-column UI, but hey, many other areas have improved a lot)
Fantastic! I’ve been waiting for years for Blender to take on the nodal characteristics of Houdini and/or Natron! The road is created… we hope it will be completed soon!
> The main one is that artists should be able to operate directly on attributes.
Yay! Finally trees instead of one long train
This is looking quite promising. Keep up the good work!
Will Geometry Nodes and particle nodes be developed together or separately in the future?
Awesome!!! I’m loving the fields prototype, this is a great step forward for geometry nodes
I did a lot of tests with the Fields build.
It works really nicely!
I have two things to say about the current state of the workflow though:
1 – Instancing on vertices/face centers/edge centers needs more attention. So far there’s no way to convert a mesh into a point cloud type, and that can create workflow problems down the line when doing instancing. A simple mesh-to-points node would be required in my opinion.
2 – The way the instance ID of the random collection instancing works is weird. Why can’t we use this attribute to control how the instances are spawned? The goal of an instance ID attribute is this very task, but somehow Blender doesn’t let us have such control.