Animation Workshop: June 2023

The Animation & Rigging module had a workshop two weeks ago, 26-30 June 2023, at Blender HQ. It was part of the Animation 2025 project, and a continuation of last year’s workshop in October.

The main participants were Brad Clark, Brecht Van Lommel, Christoph Lendenfeld, Demeter Dzadik, Nathan Vegdahl, and Sybren Stüvel, with Nate Rupsis and MohammadHossein Jamshidi joining in remotely.

The goal of the workshop: to speed up animators (and others working with animation data) by reducing manual data management.

The Action has been Blender’s main animation data container for over two decades. In this blog post we propose a new data-block called ‘Animation’, which will eventually replace the Action as the main container of animation data.

Recap video of the workshop.

Desires

Whatever data model we come up with, it has to serve as a basis for the bigger ideas from the October 2022 workshop.

At the start of the workshop we set out some desires for the new system. We want animators to be able to:

  • gradually build up animations.
  • try different alternative takes.
  • adjust someone else’s animation (or their own a week later).
  • have a procedural animation system.
  • stream in animation from other sources like USD files or a game controller.
  • and manually animate on top of all of that.

From this it was clear we wanted to introduce animation layers.

Furthermore, we want to keep animation of related things together, which means that it should be possible to animate multiple data-blocks with one Animation. With this you could, for example, animate an adult holding a baby and put the animation of both of them in one Animation data-block.

Finally, we want all animation to be linkable between blend files. This is already possible with Action data-blocks, but the NLA data sits directly on top of the animated object, and cannot be linked without linking the object itself. Furthermore, each object has independent NLA data, so doing non-linear animation with multiple objects simultaneously is tedious. We concluded that we should move the NLA functionality into the Animation.

Animation Layers

Currently it is technically possible to work with layered animation in Blender: use the NLA, create an Action for each layer, and place those Actions on different tracks. Technically possible, but not a pleasure to work with. Not only does this create plenty of separate Actions, it also makes it very hard to perform edits across layers, for example when retiming.

To exacerbate the situation, as already described above, NLA data cannot be linked as a separate ‘thing’ like Actions can, so taking these layered animations further through the production pipeline can be problematic.

The goals for a new animation layering system are:

  • One data-block that contains all the layers.
  • Tools for cross-layer editing, much like the Dope Sheet can edit keys in multiple F-Curves across different data-blocks.
  • Open up the possibility to put non-F-Curve data on layers. These could filter / manipulate animation from lower layers, like modifiers, or introduce new animation data, like simulations.
  • Make it possible to animate on top of those non-F-Curve layers.

Multi Data-Block Animation

Wouldn’t it be great if ‘one character’ could just have ‘one animation’? The reality is that ‘one character’ consists of many data-blocks, all of which will get their own Action when they get animated. This is often worked around by driving everything from bones and custom properties on the Armature. That works, but it is clumsy and can be hard to set up.

The proposed Animation data-block can keep its animation grouped per “user”:

Through the same system, animating multiple characters is then also possible:

Closer Look

This section takes a closer look at the ideas described above.

Animation Layers

In a sense, each layer can be seen as an Action data-block (but more powerful, more on that later). Layers can contain F-Curves and other kinds of animation data, and are evaluated bottom to top.

Animation Layers can, of course, have different contents, and be keyed on different frames.

How each layer blends with the layers underneath it can be configured per layer.

This is different from Blender’s current NLA system, where the blend mode can be set per strip. Having this setting per layer makes it simpler to manage, and more straightforward to merge layers together. Each layer has a mix mode (replace, combine, add, subtract, multiply) and influence (0-100%).
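To make this concrete, here is a rough sketch of how such per-layer blending could be evaluated for a single float property. The mode names and exact semantics are illustrative, not final design; the ‘combine’ mode is omitted here because its behaviour depends on the property type (quaternions, for example, combine differently than plain floats).

```python
def blend_layers(layer_values, default):
    """Evaluate layers bottom to top for one animated float property.

    `layer_values` is a list of (value, mix_mode, influence) tuples,
    with the bottom layer first. Influence is 0.0-1.0, i.e. 0-100%.
    """
    result = default
    for value, mix_mode, influence in layer_values:
        if mix_mode == 'REPLACE':
            blended = value
        elif mix_mode == 'ADD':
            blended = result + value
        elif mix_mode == 'SUBTRACT':
            blended = result - value
        elif mix_mode == 'MULTIPLY':
            blended = result * value
        else:
            raise ValueError(f"unknown mix mode: {mix_mode!r}")
        # Influence interpolates between the underlying result and the
        # fully blended value, so 0% leaves lower layers untouched.
        result = result + (blended - result) * influence
    return result
```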

Layers can also be nested:

How these nested layers behave is determined by the parent layer’s ‘child mode’ setting:

  • Mix: Combine the children just like top-level layers.
  • Choice: Only allow a single child layer to be active at a time. This can be used to switch between different alternatives.

Whether a layer can contain both its own data and child layers is still an open question. For now it’s likely that we’ll keep things simple, and only allow one or the other.
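Building on the blending sketch above, nested layer evaluation could then look like this. Again just a sketch: `child_mode`, `active_child`, and `evaluate_own_data` are illustrative names, and this version assumes a layer has either its own data or children, not both.

```python
def evaluate_layer_tree(layer, frame, lower_result):
    """Sketch of nested layer evaluation (names are illustrative)."""
    if not layer.children:
        # A leaf layer blends its own strips onto the underlying result,
        # using the mix mode / influence blending shown above.
        return layer.evaluate_own_data(frame, lower_result)
    if layer.child_mode == 'MIX':
        # Children blend bottom-to-top, just like top-level layers.
        result = lower_result
        for child in layer.children:
            result = evaluate_layer_tree(child, frame, result)
        return result
    if layer.child_mode == 'CHOICE':
        # Only the single active child contributes; switching the active
        # child switches between alternative takes.
        return evaluate_layer_tree(layer.active_child, frame, lower_result)
    raise ValueError(f"unknown child mode: {layer.child_mode!r}")
```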

Multi-target Animation

Unlike Actions, layers can contain animation for multiple data-blocks. This is done via a system of outputs: the animation data has a number of outputs, and you can plug a data-block into each of them.

The example above has two outputs, one for Einar and one for Theo. Each of these has a set of independent F-Curves.

How exactly this will be represented in the user interface is still being designed. A connected output will likely just show the name of the connected data-block.
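As a purely hypothetical sketch of what this could look like from the Python API (none of these identifiers exist yet; `bpy.data.animations`, `outputs.new()`, and the `animation` property are all assumptions for illustration):

```python
import bpy

# Hypothetical API, for illustration only.
anim = bpy.data.animations.new("EinarAndTheo")
out_einar = anim.outputs.new(bpy.data.objects["Einar"])
out_theo = anim.outputs.new(bpy.data.objects["Theo"])

# Both objects are animated by the same data-block; each output
# has its own independent set of F-Curves.
bpy.data.objects["Einar"].animation = anim
bpy.data.objects["Theo"].animation = anim
```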

Non-Linear Editing

When we wrote “Layers can contain F-Curves and other kinds of animation data”, we simplified things a bit. Let’s dive in deeper.

The animation data of a layer is actually contained in a strip on that layer. By default there is only one strip, and it extends to infinity (and beyond). This is why we didn’t show that strip in the images above.

When desired, you can give the strip bounds and it will behave more like an NLA strip. You can move it left/right in time, repeat it, reference it from the same or from other layers, etc.

By having the animation data always sit in a strip, your data is handled in a uniform way. This avoids the big switch you would currently have to make in order to use the NLA. Tooling will also become more uniform, and add-on authors will have an easier time too.
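A minimal sketch of how a strip could map scene time to strip-local time, assuming optional bounds and a repeat option (the parameter names are illustrative):

```python
def strip_local_time(frame, start=None, end=None, use_repeat=False):
    """Map a scene frame to a strip-local frame.

    The default strip has no bounds, so time passes through unchanged.
    A bounded strip holds its first/last frame outside its range,
    or loops its content when repeating.
    """
    if start is None or end is None:
        return frame  # the default, infinite strip
    if use_repeat:
        return start + (frame - start) % (end - start)  # loop the content
    return min(max(frame, start), end)  # clamp to the strip bounds
```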

Strip Types

All layers are the same. The strips on them can be of different types, though.

These strip type names may change over the course of further design work.

  • Keyframe Strip is just like an Action, but with the multi data-block animation capabilities.
  • Reference Strip references another strip in the same Animation data-block. It can also remap the data to another output (i.e. use Cube.001 animation for Cube.002) or to another data path (after a bone got renamed, remap the FCurves to the new name).
  • Anim Strip is similar to the reference strip, except that it doesn’t point to another strip but to a different Animation data-block in its entirety.
  • Meta Strip contains a nested set of layers, with their own strips, which can also be meta strips, making it possible to have arbitrary nesting. Effectively it’s an embedded Animation data-block.

We have ideas for other strip types too; these need more work to properly flesh out. For now these are rough ideas, but still a core part of the overall design.

  • Simulation Strip simulates muscles, cloth, or other things; you can then animate on top of that in another layer, of course. This needs more work in Blender than just the animation system, though, as simulation time and animation time may be using different clocks.
  • Procedural Animation Strip has a node system to process the result of the underlying animation layers. This would split up the evaluation of the data into several parts (animation layers, then process by other code, then further animation layers), which needs support in other areas of Blender.
  • Streaming Data coming in from other sources, like Universal Scene Description (USD), Alembic files, and motion capture files. This also needs changes to the current approach of working with such files, possibly in combination with #68933: Collections for Import/Export.

Data Model

Since this is the Developer Blog, of course we have data model diagrams. The green nodes are all embedded inside the Animation data-block itself.

Animation is an ID so that it can be linked or appended from other blend files.

Other IDs can point to the Animation they are influenced by, similar to how they currently point to an Action.

Each Animation has a list of layers, and a list of outputs.

Each Output has an ID pointer, which determines what data-block that output animates. The label is automatically set to the name of the data-block, so if the pointer gets lost, there’s still the label to identify and remap the output. The id_type makes sure that different data-block types cannot be mixed, just like you cannot assign a Material Action to a Mesh data-block.

An idea we are exploring is the possibility of ‘shared’ outputs, i.e. making it possible to animate multiple data-blocks with one output, the same way you can currently have multiple Objects using the same Action.

It is not yet known whether this would actually be a desirable feature, as it would complicate working with the new system.

The diagram shows that an ID points to its Animation, and an output points back to the ID. This seems peculiar at first, and earlier designs did not have that second pointer. Instead, each output had a name, and the ID would declare the name of the output it would be animated by. Although this appeared straightforward at first, we found that such a name-based approach would likely be fragile and hard to manage. Because of this, we chose to use pointers instead, which Blender already has tools for managing.
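To make the diagram concrete, here is a rough Python sketch of the structures described above. The field names are illustrative, not final DNA:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Output:
    """Connects the animation data to one animated data-block."""
    id_pointer: Optional[object]  # the animated ID, e.g. an Object
    label: str   # copy of the ID name, for remapping if the pointer is lost
    id_type: str  # e.g. 'OBJECT'; prevents mixing data-block types

@dataclass
class Layer:
    name: str
    mix_mode: str = 'REPLACE'  # replace / combine / add / subtract / multiply
    influence: float = 1.0     # 0-100%, stored here as 0.0-1.0
    strips: List[object] = field(default_factory=list)

@dataclass
class Animation:
    """An ID data-block, so it can be linked and appended like any other."""
    name: str
    layers: List[Layer] = field(default_factory=list)
    outputs: List[Output] = field(default_factory=list)
```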

Strip Types

As in the diagram above, the green nodes are all contained inside the Animation data-block itself.

Keyframe strips define an array of animation channels (more on those below) for each output.

How exactly outputs are referenced by the strips is still an open design question. We are considering simply using the output index, but that has some fragility. Pointers could work, but they’d need remapping every time a copy is made of the Animation. This also happens for the undo system, so it’s not as rare as it might seem at first.

The reference types ReferenceStrip and AnimStrip can do two kinds of remapping:

  • Remapping Outputs: An animation for some data-block gets applied to another data-block. For example, the animation of Cube.001 gets applied to Cube.002.
  • Remapping Data Paths: An animation for some property gets applied to another property. For example, all animations for pose.bones["clavicle_left"].… get mapped to pose.bones["clavicle_L"].…. This would be done on a prefix basis, so any data path (called ‘RNA path’ internally in Blender) that starts with the remapped prefix is subject to this change; see the sketch below.
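Prefix-based data path remapping could look roughly like this (a sketch, not actual Blender code):

```python
def remap_data_path(path: str, old_prefix: str, new_prefix: str) -> str:
    """Remap an RNA data path if it starts with the given prefix."""
    if path.startswith(old_prefix):
        return new_prefix + path[len(old_prefix):]
    return path

# After renaming the bone, all its F-Curves get remapped:
remap_data_path(
    'pose.bones["clavicle_left"].rotation_quaternion',
    'pose.bones["clavicle_left"]',
    'pose.bones["clavicle_L"]',
)  # -> 'pose.bones["clavicle_L"].rotation_quaternion'
```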

Channel Types

The new animation model should be applicable to more than F-Curve keyframes. This would allow for tighter integration with Grease Pencil animation, to name one concrete example. Furthermore, the current system of using camera markers to set the active scene camera is a bit of a hack, in the sense that the system is limited to only this specific use. It would be better to have animations of the form ‘from frame X onward use thing Y’ more generalised. For these reasons, a KeyframeStrip can contain different animation channel types:

FMap is a mapping from a frame number to an index into some array. Unlike an FCurve, which has a value for every point in time, an FMap can have ‘holes’. It is intended for Grease Pencil, to define which drawing is shown for which frames.

IDChooser is a generalisation of the current camera markers. Basically it says ‘from frame X forward, choose thing Y’. It is unlikely that this can be applied to animate all data-block relations, as it could be very difficult to create a system to support all of that. We’ll likely pick a few individual properties that can be animated this way first, to gain experience with how it’s used and what the impact is. Later this can be expanded upon.
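As a sketch of how FMap evaluation could work (the data layout is an assumption based on the description above):

```python
import bisect

def fmap_evaluate(entries, frame):
    """Return the array index active at `frame`, or None inside a 'hole'.

    `entries` is a frame-sorted list of (frame, index) pairs; an index
    of None marks the start of a hole where nothing is shown.
    """
    frames = [f for f, _ in entries]
    pos = bisect.bisect_right(frames, frame) - 1
    if pos < 0:
        return None  # before the first entry: nothing to show yet
    return entries[pos][1]

# Grease Pencil example: drawing 0 from frame 1, drawing 1 from frame 10,
# a hole from frame 20, and drawing 2 from frame 30 onwards.
entries = [(1, 0), (10, 1), (20, None), (30, 2)]
assert fmap_evaluate(entries, 15) == 1
assert fmap_evaluate(entries, 25) is None
```

An IDChooser would follow the same ‘latest entry at or before the current frame’ lookup, except that the mapped value is an ID rather than an index into an array.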

To Be Designed

There is a lot still to be designed to make this a practical system. Here we list some of the open topics, so that you know these have not been forgotten:

  • Linking Behaviour & Tooling: Linking animation data from one file into another is a common occurrence. The basic flow should work well, and new tools should make more complex workflows possible.
  • Simulation Layers: Animation and simulation could use the same time source, but using different clocks for these should also be possible.
  • Procedural Animation: A lot of different things fall under the umbrella of ‘procedural animation’. This could be a full-blown node-based animation generation and mixing system, or as simple as F-Curve modifiers at the layer level.
  • Animating the Animation System: It should be possible to animate layer influence, various strip parameters, etc. Where does that animation get stored?
  • Rig Nodes: One of the big outcomes of the October 2022 workshop was ‘slappable rigs’: a control rig system that can be activated at different frame ranges. Rig Nodes is a prototype for this. How such a system would integrate with the bigger animation system needs to be designed.

Grease Pencil Collab – Ghosting

In collaboration with the Grease Pencil team, represented by Falk David and Amélie Fondevilla, we discussed ghosting. This is also known as ‘onion skinning’ in the 2D animation world; we chose ‘ghosting’ as the term for Blender as this seems to be more widely used in 3D animation, games, and other fields.

Ghosting prototype by Christoph and Falk.

Goals

The goals of the ghosting system are:

  • To show data from different points in time, overlaid on the current view.
  • A unified system that works for all object types.
  • Editable ghosts, so that you do not necessarily have to move the scene frame in order to edit a ghost.
  • Movable ghosts, to shift & trace, or the opposite, to space them apart to get a good overview of the motion.
  • Non-blocking to the rest of Blender. Playback and scrubbing should not be slowed down by the ghosting system.

Features

The following features are considered important for the system:

  • Define an object to ghost, or a subset of it, like only the hand you’re animating at that moment in time.
  • Define the time of ghosts, either relative to the current time or as absolute frames.
  • Click on a ghost to jump to that frame.
  • Offset ghosts in screen space and world space.
  • Lock ghosts so they don’t update. This can be useful to keep a reference for exploring different animation takes.

Technical Challenges

Of course there are various technical challenges to solve.

Currently selection can already be tricky. For example, when two objects share the same armature and both are in pose mode, the selection is synced between them. Selection across different points in time would likely require more copies of the data, which needs to be managed such that Blender doesn’t explode.

The dependency graph will have to be updated to account for multiple points in time being evaluated in parallel. This will likely also cause ‘completion’ states to be per frame, so that the current scene frame can be marked as ‘evaluated completely’ before the ghosts are.

Finally, there is the question of how to ensure the speed of the system. If we use caching for this, there is always the question of how to invalidate that cache.

Operations / Possibilities / Future Ideas

The workshop focused on the data model, and less on operations to support this model. The following operations were briefly discussed, and considered for inclusion. This is not an exhaustive list.

  • Split by Selection: Selected F-Curves go to another layer; the rest stay in the current layer.
  • Configurable ‘property set’ per layer that can work as a keying set. When inserting a key, these properties are always keyed on that layer. Multiple layers can each have a ‘property set’. Example: have a ‘body animation’ layer and a ‘face animation’ layer, where Blender automatically puts the keys in the right place.
  • Frequency Separation: F-Curves are split into low frequency (i.e. the broad motions) and high frequency (finely detailed motions) parts, which are then blended together to result in the same animation. These parts can then be edited separately. Such workflows are already common in photo and music editing, and could be very powerful for animation as well; a rough sketch follows after this list.
  • Streaming & Recording: Stream animation in from some system, and have it recorded as F-Curves, for example from a live motion capture system. Which could be just a game pad you use for puppeteering.
  • Combined Animation Channels: Ensuring that quaternions (always 4 values) or maybe even entire transforms (location + rotation + scale) are always fully keyed can open up interesting new ways of working. When Blender ‘knows’ that every frame has all the keys, we can bring more powerful, easier to use animation techniques to the viewport.
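To illustrate the frequency separation idea from the list above, here is a minimal sketch that uses a moving average as the low-pass filter; the filter choice is just an assumption, any smoothing operation would do. The split is lossless: adding the two parts back together reproduces the original samples.

```python
import numpy as np

def frequency_split(samples, window=9):
    """Split baked F-Curve samples into low and high frequency parts."""
    kernel = np.ones(window) / window
    low = np.convolve(samples, kernel, mode='same')  # broad motion
    high = samples - low                             # fine detail
    return low, high

# A noisy sine wave stands in for a baked F-Curve.
samples = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
samples += 0.05 * np.random.default_rng(0).standard_normal(100)
low, high = frequency_split(samples)
assert np.allclose(low + high, samples)  # lossless split
```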

Timeline

It is a bit too early to come up with a detailed timeline, so here is our goal in broad terms:

  • 4.0: might already see some of this functionality as an experimental feature.
  • 4.x: expanding the functionality & creating tooling to convert existing animations to the new system.
  • 4.y: change the default animation storage to the new system, retaining the old one.
  • 5.0: automatic conversion of the old-style animation data to the new system, and removal of the old animation system from the UI (some datastructures will have to be retained for versioning purposes).

Conclusion

This blog post has given an overview of the current state of design of a new animation data model. Although this broad direction is pretty well decided on, things are still in flux as we create prototypes and try to figure out how to integrate other ideas.

If you want to track the Animation & Rigging module, there are the weekly module meetings (meeting notes archive) and of course the discussions on the #animation-module Blender Chat channel.



13 comments
  1. I’m really excited to see the upcoming improvements in Blender’s animation capabilities.
    Below is a list of feature requests that I have summarized for Blender based on my experiences using various DCC (Digital Content Creation) software. I’m posting them here in the hope that they can be seen and considered.

    Urgently needed features:
    1. A user-friendly brush that functions like sculpting, allowing for easy usage with mouse + alt, ctrl, shift, to add weight, reduce weight, smooth weight, and so on.
    2. Animation layers! That goes without saying!
    3. In weight paint mode, more user-friendly point, face, and bone selection, as well as selection addition and subtraction. Various operators could be filtered based on the selection, and it should be possible to finely adjust the weight of selected elements.
    4. Lockable vertex weights (a rudimentary version already exists, but it can only affect itself and is only used to prevent accidental modifications). Locked weights should participate in normalization calculations and influence the computation results: when performing normalization operations, or when the brush has automatic normalization enabled, the locked weights should remain unchanged.
    5. Constraint panels categorized by transformation: move, rotate, and scale. Otherwise it is difficult to manually extend complex bindings.

    Anticipated advanced features
    1. Organizable vertex weights in hierarchical layers (vertex weights can be combined like layers in Photoshop, and advanced blending modes can be set. For example, in overlay mode, the weights of child layers will offset the weights of parent layers. When the “limit range” option is checked, the weights of child layers will not exceed the weight range of the parent layer).
    2. Shape Keys that can be dynamically altered (Shape Keys are based on targets, and when the targets are modified, the Shape Keys can change simultaneously).
    3. Editable motion trajectories. You can preview motion trajectories with handles in the viewport, and adjust keyframes or handles to modify the animation. (When you have a look-at constraint, even if the motion trajectory can only adjust the position, it will also affect the rotation of another object.)

    Other less essential but still very useful features.
    1. Bone custom shapes can inherit material appearance.
    2. Allow individual selection of each bone’s relationship line to its parent bone’s display mode (instead of the current setting in version 3.6, which is based on the entire skeleton).

    Great Blender!

  2. One more thing on ghosting/onion skinning:
    Don’t forget that when animating, there are two ways to use onion skinning/ghosting! The basic way is where you see your previous pose/frame in its place. That is perfect when animating with a fixed camera, but it no longer works with a moving camera, so we also need camera-relative ghosting, where you see your previous pose from the previous camera position!
    https://blender.community/c/rightclickselect/dZdbbc/?sorting=hot#

  3. This is a really great start – thank you all for the work you’ve clearly put into this already. :) Bringing NLA and layers and F-Curves into a single datablock definitely looks like a sensible way to go, and clearly brings a lot of benefits. Being able to drive multiple rigs with a single Animation datablock sounds like a great change too.

    One thing that I didn’t see mentioned in this proposal is constraints. I’m guessing constraints will need to become a part of this system too, to support characters holding props (think “Gene Kelly Singing-In-The-Rain” for example, where he throws his umbrella from hand to hand as he dances around). I guess in a scene like that we’d want the prop and the character to be part of the same Animation datablock, and to do that, we’d need constraints to be part of it.

    I presume that constraints would come under the umbrella of “procedural animation” that you mentioned (e.g. as a constraint layer that we could then layer keys on top of)… but since constraints weren’t explicitly included in the proposal, I thought it might be a good idea to mention it, just to make sure it doesn’t get missed.

    Apart from that, great stuff – really encouraging to see. Keep up the good work!

    • You’re partially right, we are looking at different kinds of constraints, like RigNodes.

      However, for the time being the constraint system isn’t going to change, at least not as part of the work presented here. It’s already a whole lot of work to implement this, and tacking more on top of it simply makes it even longer until we can release something useful. Of course constraint parameters will be animatable just as they are now.

  4. I’m interested in Rigging Nodes; when can we expect that in Blender? A blog post about that would be great too. Exactly what kind of rigging nodes are we talking about: would I be able to set up a rig entirely there, or is it just for the purpose mentioned here, to change when/how entire rigs work?

    Also with ghosting (onion skinning is still the better name, btw), can we also please have editable motion paths? If we had that in the short term, or at the very least if they were updated automatically, that would be great. That is the biggest hindrance in my animation workflow at the moment, and it pains me that motion paths aren’t receiving any attention :(

  5. To infinity and beyond… 😂
    Really exciting, seeing things take shape little by little. I’m glad the animation module is in such good pairs of hands.

    1. Each layer strip can be shifted in time, but can the entirety of the layer strips contained in the Animation data-block be moved at once, without having to create a meta-strip? I think you’d want to be able to move these things in bulk.

    2. I’m not sure what these different strip types bring. It’s not clear what they contain in terms of data. If it’s not keyframe data, what is it? You mention simulations, but wouldn’t these likely exist as internal or external geometry caches?

    3. It sure sounds good that keying sets would survive in the form of “configurable property sets”, but I consider the autokeying workflow to be the most in need of improvements. Blender should let the animator go from block start to polish not having to change settings halfway through the animation process.

    4. Frequency separation: some kind of Fourier transform?

    5. Besides that, I’m particularly interested in procedural animation and remote-controlled characters. Procedural/parametric animation is ubiquitous in game development, and there’s a lot to pull from there. I have worked as an animator on feature-length films that required a lot of walk-stop-run transitions over uneven terrain; with current tools that is a pain to achieve, because you want to make use of your painstakingly-authored animation cycles, but then the blending between those has to be explicitly authored, which is a destructive process that goes to the bin if there is ever a change in shot direction.
    Now with a solver within the armature taking care of layer blending, coupled with the interactive mode, we could record shot animations by driving characters with a game controller and a few actions mapped to them, and use that as a base for further work. I’ve animated by recording mouse movements before in Blender, it could be expanded on and made into an actual workflow. Of course you’d have to write the character skeleton simulation routine, but with a few high-level nodes… it doesn’t seem too far-fetched.

    • 1. That’s the idea, yeah. Just like you can do now in the dope sheet (although there it’s across different objects, not layers, but you get the gist).

      2. That’s why these things are in the “to be defined further” section. Also rigid body simulations wouldn’t use any geometry caching, as that’s just about transforms.

      3. Yup.

      4. Yup, something like that. Again, still very much in the “this is a cool idea” phase.

      5. I think that describes pretty much what we have in mind too.

  6. On the onion skinning, I really don’t like that they’re showing the object at every frame. The ghosts need to be displayed with transparency, to make it clear which ones are the ghosts and which one is the actual object. And each successive ghost needs to be more and more transparent the further it moves away from the current keyframe.

  7. A solution is needed to link an animation from one character to another: that is the basis of retargeting.
    If the Animation data-block is not able to do that, and it is expected to be handled by the procedural animation system, then that part of the procedural animation system has to be ready before a merge into the main branch.
    So I suggest you either find a solution for multiple data-blocks with one output, or start a minimal design of the procedural animation system as quickly as possible.
