A look at point cache

I’m the main developer of the point cache functionality and I recently did quite a big reorganization of the code. I think the system is starting to get quite stable, so I thought of writing a short series about the point cache in general, the current implementation, what the new improvements offer, and what might lie in the future. Some parts might be more technical than others, but I’ll try to keep the text readable for non-coders too.

So what is the point cache and why is it needed?

The way most physical simulations work is to apply physically based rules (for example gravity) to the dynamic data (like a particle’s location) in small time steps. Based on the state of the physical system on frame 1 we can get the state on frame 2 by applying the rules for a duration of one frame, and from frame 2 to frame 3 by applying the rules for one frame again. Theoretically we could of course get the same result for frame 3 by applying the rules to frame 1 for a duration of 2 frames, but in practice the bigger the time step, the bigger the errors in the calculations will be. To keep the simulation stable and accurate the time steps need to be relatively small (the default in Blender is one frame), so in order to be able to view any frame of the simulation after the calculations there needs to be a way to save the dynamic data for all frames. This is basically the whole idea of the point cache. It stores the dynamic data (location, velocity, etc.) for all simulation points (particles, cloth vertices, smoke cells, etc.) and for all frames of the simulation, so that after the simulation has been calculated once we can view any frame without recalculating the simulation again. In addition it offers a common way for all physics simulations to store and recall their dynamic data.
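To make this concrete, here’s a minimal sketch (plain C, not actual Blender code, with gravity as the only “rule”) of stepping a point forward one frame at a time:

```c
/* A minimal sketch, not actual Blender code: each step applies the
 * physical rules (here just gravity) for one frame's duration.
 * The bigger dt gets, the bigger the integration error becomes. */
typedef struct Point {
	float loc[3];
	float vel[3];
} Point;

static void simulate_step(Point *pt, float dt)
{
	const float gravity[3] = {0.0f, 0.0f, -9.81f};

	for (int i = 0; i < 3; i++) {
		pt->vel[i] += gravity[i] * dt; /* apply forces to velocity */
		pt->loc[i] += pt->vel[i] * dt; /* move the point */
	}
}

/* Get from one frame to a later one by taking one-frame steps. */
static void simulate_to_frame(Point *pt, int from, int to, float frame_time)
{
	for (int frame = from; frame < to; frame++)
		simulate_step(pt, frame_time);
}
```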

During a general frame change (animation playing, or user changes frame) the basic interaction between the simulation and point cache is as follows. First the current frame is checked against the cached frames to see if valid data exists for the current frame. If it exists, it’s read from the cache to the simulation’s internal data. If there is no data in the cache the rules of the simulation are applied to whatever the old simulation data is, and after that the new simulation data is stored into the cache as the data for the current frame. This allows for a very flexible workflow, as the actual simulation code doesn’t really have to know where the previous simulation data comes from. At any time it could be straight from the previous simulation step, or it might have been read from the cache. For example, every time the user changes a simulation setting in the middle of the simulation, all cached frames after the current frame can just be cleared and the simulation will automatically start calculating new data based on the new settings. To get a completely accurate cache the calculations will of course need to be started from the first frame of the simulation at some point, but the important thing is that the user never has to wait for recalculations from the beginning while playing with the settings.
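In rough pseudo-C the frame change logic looks something like this (all types and helper names here are hypothetical stand-ins, not Blender’s actual API):

```c
/* Sketch of the simulation/cache interaction on a frame change.
 * Everything here is a hypothetical stand-in; the real logic
 * lives in the simulation code and in pointcache.c. */
typedef struct Cache Cache;     /* opaque, hypothetical */
typedef struct SimData SimData; /* opaque, hypothetical */

typedef struct Simulation {
	Cache *cache;
	SimData *data;
} Simulation;

int  cache_has_valid_frame(Cache *cache, int cfra);
void cache_read(Cache *cache, SimData *data, int cfra);
void cache_write(Cache *cache, SimData *data, int cfra);
void simulation_step(Simulation *sim, int cfra);

void simulation_update(Simulation *sim, int cfra)
{
	if (cache_has_valid_frame(sim->cache, cfra)) {
		/* Valid data exists: read it into the simulation's
		 * internal data, no calculation needed. */
		cache_read(sim->cache, sim->data, cfra);
	}
	else {
		/* No cached data: apply the rules to whatever the old
		 * simulation data is (previous step or a cache read)... */
		simulation_step(sim, cfra);

		/* ...and store the result as the current frame's data. */
		cache_write(sim->cache, sim->data, cfra);
	}
}
```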

Sometimes a full recalculation from the beginning would be preferred though. If for example particles need to reach a certain location during their lifetime it’s nice to see the particles all the time at the end of their life while adjusting the simulation parameters. This kind of functionality can be achieved with the quick cache option, which sacrifices accuracy to gain fast calculations. With this option enabled the simulation is calculated from the beginning with cache-step-sized simulation steps up to the current frame on every change made to the simulation settings. The length of simulation that can be interactively tweaked via this method depends on many factors, like the amount of simulation points and the speed of the computer running the simulation. It’s also worth noting that the results achieved with quick caching may not represent the correct result, but usually give a reasonable feeling of how the simulation behaves. The option is only available for simulations that don’t necessarily need to be calculated in one-frame steps.
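In the same hypothetical style as the sketch above, quick caching boils down to something like this:

```c
/* Sketch of the quick cache idea: on every settings change,
 * recalculate from the start using coarse, cache-step-sized
 * simulation steps. Fast, but only approximate. Types and
 * helpers are the hypothetical ones from the previous sketch. */
void cache_clear_all(Cache *cache);
void simulation_reset(Simulation *sim);
void simulation_step_duration(Simulation *sim, float frames);

void quick_cache_update(Simulation *sim, int startframe, int cfra, int step)
{
	cache_clear_all(sim->cache);
	simulation_reset(sim); /* back to the initial state */

	for (int frame = startframe; frame <= cfra; frame += step) {
		simulation_step_duration(sim, (float)step); /* one big step */
		cache_write(sim->cache, sim->data, frame);
	}
}
```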

After suitable simulation settings have been found the simulation can be baked. This means that the whole simulation is calculated in one-frame steps from start to end, and the resulting cache is marked as protected from changes by the simulation. For simulations that store actual point-like data (particles, softbody and cloth) the baked cache can be edited in particle mode by directly manipulating the cached point locations.

Depending on the number of points in the simulation and the amount of dynamic data associated with each point every cached frame will take a certain amount of space, either in memory or on disk. To be accurate the simulation needs to be calculated for every frame, but often enough we can save some space by skipping some frames from the cache. If the simulation data doesn’t change too much during the skipped frames it can be interpolated nearly perfectly from the frames that weren’t skipped. Again this is all handled by the point cache and the simulation code can just concentrate on doing the thing it does best.
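Roughly, reading a skipped frame looks like the sketch below (the cache can also use the stored velocities for better interpolation; plain linear interpolation of locations is shown for simplicity, and the names are again illustrative):

```c
typedef struct Point { float loc[3]; } Point;
typedef struct Cache Cache; /* opaque, hypothetical */

int cache_frame_before(Cache *cache, float cfra);
int cache_frame_after(Cache *cache, float cfra);
const Point *cache_get_point(Cache *cache, int frame, int index);

/* Reconstruct one point of a non-cached frame from the two
 * nearest cached frames. */
void cache_read_interpolated(Cache *cache, Point *result, int index, float cfra)
{
	int frame1 = cache_frame_before(cache, cfra);
	int frame2 = cache_frame_after(cache, cfra);
	float t = (cfra - frame1) / (float)(frame2 - frame1);

	const Point *pt1 = cache_get_point(cache, frame1, index);
	const Point *pt2 = cache_get_point(cache, frame2, index);

	for (int i = 0; i < 3; i++)
		result->loc[i] = (1.0f - t) * pt1->loc[i] + t * pt2->loc[i];
}
```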

Currently the point cache supports a simple cache step parameter, which determines the frame step between each stored cache frame. Big cache steps can lead to big savings in space, but this doesn’t come without a price. If for example particles are colliding with a horizontal plane (with high damping) they quickly come to a stop on the plane. However there are still forces acting on the particles (gravity and collisions), so the particle velocities aren’t necessarily zero for the whole duration of the frame even if the particles seem stationary. If all simulation frames aren’t cached (step > 1) these velocities are interpolated into movement for the non-cached frames. The result will look like the particles are oscillating around the collision location. So for example for accurate reproduction of collisions the cache step has to be set to 1.

Any dynamics simulation can have multiple caches, although only one of them can be active at a time. A cache is made active by simply selecting it from the cache list. This allows for example testing some simulation settings in one cache and other settings in a different cache without losing precious simulations.

The new improvements:

For most simulations the point cache can be stored either in memory (default) or on disk. Memory caching is very fast and enables editing of the baked data, but for larger simulations the system memory usage can grow quite large (smoke simulation data can easily get very big, which is why the smoke simulation doesn’t even allow storing to memory). Hard disks usually have a much higher capacity than memory, so storing the cached frames on disk leaves the memory free for other uses, but can be quite a bit slower due to the needed disk write and read operations.

The smoke simulation uses the point cache in a slightly different way compared to other simulations, and it had separate options to compress the cache files that were written to disk. Now these options are available to the other simulations too, and it appears that a little compression isn’t bad at all. Compressing the data before writing it to disk takes some time of course, but light compression makes the data small enough that time is actually gained overall from the shorter read/write operations, as can be seen in the graph below.

The graph was made from a simple particle simulation of 100 000 particles with a cache step of 1, and the values are the average “worst case” times per frame (maximum amount of particles stored/read). In the graph the dotted line is the time of the actual simulation calculations per frame without any caching, and serves as a starting point for the write time (red line). Writing the cache to memory takes next to no time at all compared to the time taken for the simulation calculations. With disk cache the write time is actually bigger than the simulation time, so the combined time of simulation and writing to cache is over double the pure simulation time for no/light compression (note the logarithmic y-axis). With heavy compression the write time is over 10 times longer than the simulation time!

The disk cache read speeds, on the other hand, compare much better to the pure memory cache. With light compression the read speed is actually really close to reading straight from memory! The read speed for heavy compression is still quite horrible, but on a positive note the file sizes are under half of what the simulation would have taken in memory or without any compression. In summary it’s quite clear that if there are no other requirements than to use the disk cache, then light compression is the optimal choice.

The other new addition might not be very visible to users, but will allow for much greater usage of the point cache in the future. The original design of the point cache only included writing a certain amount of points with a fixed amount of data associated to each point. This works well for most cases, but makes storing more complicated things impossible. A “more complicated thing” might for example be a map of springs between the points. The springs are defined by two points, and their amount can’t directly be derived from the amount of points. For these kinds of situations the point cache can now store extra data, which can be fully customized as needed by a simulation.
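For illustration, such an extra data block could look something like this (a hypothetical layout; the actual structs in pointcache.c differ in detail):

```c
/* Hypothetical sketch of customizable extra data: a block with
 * its own type and element count, independent of the number of
 * simulation points. */
typedef struct SpringData {
	int point1, point2;   /* the two points the spring connects */
	float rest_length;
} SpringData;

typedef struct CacheExtra {
	struct CacheExtra *next;
	unsigned int type;    /* what the block contains, e.g. springs */
	unsigned int totdata; /* number of elements in the block */
	void *data;           /* e.g. an array of SpringData */
} CacheExtra;
```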

Some basic implementation notes:

PTCacheID

In order to use the point cache each simulation has to fill a point cache ID with data and functions that will be used to translate the actual simulation data into point cache data and vice versa. For example in the simulation code the ID is filled with the call BKE_ptcache_id_from_[simulation type](…). This ID can then be used to query the cache for cached frames for that simulation. After calculating new simulation data the same ID can be used to write the current frame into the cache. The read, write, and interpolation functions can be fully customized to suit the different simulations and their possible variations (normal particles vs. boid particles for example). The ID also allows for cache operations without specific knowledge about the simulations in a specific object. A call to BKE_ptcache_ids_from_object(..) creates a list of all the IDs of the simulations in an object. These IDs can then be used to, for example, clear all cached data or just flag the cache as outdated after the object has been edited or otherwise updated.
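For example, clearing a particle system’s cache after the current frame could look roughly like this (the signatures are written from memory and vary between Blender versions, so treat this as a sketch rather than exact API documentation):

```c
/* Sketch of typical PTCacheID usage. */
#include "BKE_pointcache.h"

void clear_psys_cache_after(Object *ob, ParticleSystem *psys, int cfra)
{
	PTCacheID pid;

	/* Fill the ID with the data pointers and read/write/interpolate
	 * callbacks for this particular simulation type. */
	BKE_ptcache_id_from_particles(&pid, ob, psys);

	/* A generic cache operation through the ID: drop everything
	 * cached after the given frame. */
	BKE_ptcache_id_clear(&pid, PTCACHE_CLEAR_AFTER, cfra);
}
```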

Memory and disk cache

With the exception of stream data (smoke sim) all operations that access simulation data go through a memory cache frame, so that the operations are as fast and simple as possible. In the case of disk caching the memory cache is only a temporary destination, which is created from the disk frame before read operations and from which write operations are done to disk. The data in a memory cache is always in separate streams, but since operations in memory are fast a simulation can write the data either point by point or by data type (all locations, then all velocities) without a big hit on performance. For disk caching these streams can then be either compressed as streams or written point by point to a file. The ability to convert easily between memory and disk caches also means the user can change where the cache is stored at any time without losing the cached data.
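Illustrated with hypothetical structs (not Blender’s actual ones), the separate streams make both write styles cheap:

```c
#include <string.h>

/* Sketch: a memory cache frame keeps each data type in its own
 * stream (array). */
typedef struct MemFrame {
	unsigned int totpoint;
	float *loc; /* 3 floats per point */
	float *vel; /* 3 floats per point */
	/* ...one array per data type... */
} MemFrame;

/* Writing by data type: copy whole streams at once. */
void write_by_type(MemFrame *frame, const float *locs, const float *vels)
{
	memcpy(frame->loc, locs, sizeof(float[3]) * frame->totpoint);
	memcpy(frame->vel, vels, sizeof(float[3]) * frame->totpoint);
}

/* Writing point by point: fill one point's slice of each stream. */
void write_point(MemFrame *frame, unsigned int i,
                 const float loc[3], const float vel[3])
{
	memcpy(frame->loc + 3 * i, loc, sizeof(float[3]));
	memcpy(frame->vel + 3 * i, vel, sizeof(float[3]));
}
```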

For disk caches the actual data is stored in a “blendcache_[.blend file name]” folder under the folder where the actual .blend file is saved. The file names are made unique by using an identifier derived from the object that contains the simulation, the “internal index number” of the simulation, and the frame for which the data was saved. The file will then be of the form “[identifier]_[frame]_[index].bphys”. A user-chosen name can also be selected in place of the identifier generated from the object. By using a user-chosen name most baked caches (or caches created outside of Blender) can also be loaded into a different scene/simulation by using them as external caches. In these cases there will be no actual simulation, just a representation of the data that the cache contains.
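Putting the naming convention together (the six-digit frame and two-digit index paddings below match the cache files described in the comments, but double-check against your own files):

```c
#include <stdio.h>

/* Sketch: build the path of one cache file, following the
 * "[identifier]_[frame]_[index].bphys" convention described above. */
void ptcache_filepath(char *out, size_t maxlen, const char *blendname,
                      const char *identifier, int frame, int index)
{
	/* e.g. "blendcache_myfile/part_000010_00.bphys" */
	snprintf(out, maxlen, "blendcache_%s/%s_%06d_%02d.bphys",
	         blendname, identifier, frame, index);
}
```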

The actual content of the cache files has to start with a header describing the data, with the first 8 bytes being ‘BPHYSICS’, followed by an unsigned int with the lower 2 bytes defining the simulation type and the higher 2 bytes defining additional flags for the following data, such as compression or extra data after the main data. What comes next can be customized by the simulation in its point cache ID, but usually there is an unsigned int defining the specific data streams that will follow, and an unsigned int for the amount of points each data stream has. After this the actual data streams can start. By following these guidelines (and taking a quick look in DNA_object_force.h and pointcache.c for the basic data stream definitions) it should be relatively straightforward to create an exporter/importer from/to any program dealing with physics data that suits the simulations in Blender, and for example use the exported data as an external cache for visualizing it in Blender.
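As a starting point, reading the header of an uncompressed cache file could look like this sketch (it assumes native endianness and 4-byte unsigned ints, and leaves most error handling out; verify the exact layout against pointcache.c before relying on it):

```c
#include <stdio.h>
#include <string.h>

/* Sketch: read the .bphys header described above. */
int bphys_read_header(FILE *f, unsigned int *r_type, unsigned int *r_flags,
                      unsigned int *r_data_types, unsigned int *r_totpoint)
{
	char magic[8];
	unsigned int typeflag;

	if (fread(magic, 1, 8, f) != 8 || memcmp(magic, "BPHYSICS", 8) != 0)
		return 0; /* not a point cache file */

	if (fread(&typeflag, sizeof(unsigned int), 1, f) != 1)
		return 0;

	*r_type = typeflag & 0xFFFF;          /* lower 2 bytes: simulation type */
	*r_flags = (typeflag >> 16) & 0xFFFF; /* higher 2 bytes: flags */

	/* The usual (simulation-customizable) continuation: which data
	 * streams follow, and how many points each stream has. */
	if (fread(r_data_types, sizeof(unsigned int), 1, f) != 1 ||
	    fread(r_totpoint, sizeof(unsigned int), 1, f) != 1)
		return 0;

	return 1;
}
```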

Possible future plans:

Currently one of the biggest problems in the point cache is the static cache step. A cache step of 1 will store the dynamics accurately, but it’s almost never necessary to store every point for every frame, as the interpolated data is just as good. On the other hand a bigger cache step will almost surely miss some critical data for some points, unless the dynamics are extremely simple. The solution to all this would be a “dynamic per point step size” in the cache. This would mean that a point only needs to be stored to a certain cache frame if its dynamic data has changed enough since the point was last saved for the interpolation to become inaccurate. The good thing is that only what’s actually needed is stored, without sacrificing accuracy; the bad thing is that writing and especially reading become much more complicated, as there’s no way to know in advance which frames will need to be checked in order to get the data of a point for a certain frame. One other nice thing about this would be that the amount of cached data would reflect the dynamics (a lot of collisions -> a lot of cached data), so the cached frame sizes could easily be represented, for example, in the timeline as a graph of “simulation activity”. I’m still in the very early stages of implementing this whole thing, but so far so good!
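The core test could be as simple as this sketch (illustrative names only; the real check would go through the cache’s own interpolation callbacks, as those determine what the viewer will actually see):

```c
/* Sketch: a point's middle cache frame is redundant if interpolating
 * its neighbours reproduces it within a small error threshold. */
typedef struct Point { float loc[3]; } Point;

int point_frame_removable(const Point *prev, const Point *mid,
                          const Point *next, float t, float max_error)
{
	float err_sq = 0.0f;

	for (int i = 0; i < 3; i++) {
		/* linear interpolation between the neighbouring frames */
		float interp = (1.0f - t) * prev->loc[i] + t * next->loc[i];
		float d = interp - mid->loc[i];
		err_sq += d * d;
	}
	return err_sq <= max_error * max_error;
}
```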

One other problem is the current state of the baking controls. Having a single bake button for the simulation in its own cache panel is fine, but a better “dynamics baking interface” for the other buttons would be much clearer and easier to use. This interface could also list all the different dynamics in a scene and their relations, but currently there’s no good proposal of how and where this all should go, so I’m open to all suggestions.

Point caches are basically just containers of raw data, which is processed into actual simulation data for any requested time. In this regard baked point caches are quite a bit like NLA strips, and I’d like to investigate the possibilities of actually using them as such to re-time baked simulations, or even to blend the different caches of simulations. No work on this front yet, but the possibilities are certainly intriguing.


18 comments
  1. Many thanks for your effort, any news about baking the cache (clustering) across several computers at the same time? Thx.

  2. Has dynamic cache got anywhere? I ask because I have version 2.78 and still no dynamic cache.

  3. I would love to render my scientific simulation data by converting it to a Blender-readable voxel format or particle point cache. Anybody willing to make a basic tutorial for non-programmers? Thanks!

  4. Cheers,
    a hint towards the type:
    the data type, which is a bit flag: which flags are set can be determined by converting the type number to binary. Combined with this:
    data flag contains
    ————————————————
    index (1<<0) 1 int (index of current point)
    location (1<<1) 3 float
    velocity (1<<2) 3 float
    rotation (1<<3) 4 float (quaternion)
    avelocity (1<<4) 3 float (used for particles)
    xconst (1<<4) 3 float (used for cloth)
    size (1<<5) 1 float
    times (1<<6) 3 float (birth, die & lifetime of particle)
    boids (1<<7) 1 BoidData

    that would mean 111 = up to velocity = type 7 (the decimal of binary 111)
    1111 = up to rotation = type 15 (which kinda contradicts the previous post…)

  5. Nevermind. I found some relevant info in the source code, in blender/source/blender/blenkernel/intern/pointcache.c

    I am posting it here for other interested people. I wish this was all more documented. Right now, I know only a couple of things about the file format, but it was enough for us to make the pointcache files directly from our software, and make Blender believe it was a pointcache it had created.

    The Blender file format for the pointcache is

    Header:

    8x char = BPHYSICS
    3x uint = 1 n t

    where n is the number of points described in the file, and t is the type. So far, I have figured out types 7 and 31. Type 7 only has info on position and velocity, and type 31 includes info on rotation. The number 1 is always there.

    After the header, the particle data comes and each particle is described as

    1xuint // Data point index
    3xfloat // Point location
    3xfloat // Point velocity (type 7 is up to here)
    4xfloat // Point Rotation (type 31 is up to here)
    3xfloat // Point angular velocity/data xconst (don't know what this is)
    1xfloat // Point size
    3xfloat // Times (current time, die time, lifetime)
    Boid Data // Point boids (don't know how this works)

    Apparently particle spring info comes later but I don't yet know how it works.

    There is one file for each frame (actually that's how I do it because I don't want Blender interpolating my results), the naming convention is
    name_xxxxxx_00.bphys where xxxxxx is a six-digit number that represents the frame. There usually is a 000000 file that, as far as I can tell, contains the max number of particles that will be simulated (so n=1000 or something like that), and the data (position, velocity, etc.) is all garbage, just the indexes are correct.

    Hope this helps someone.

  6. Jahka, we develop scientific software to simulate fluids, particles, solids, and more, on supercomputers around Europe. We are currently looking into exporting the results of our simulations into 3D software like Blender. The way we do it is through a few Python scripts that make objects and keyframes, but this becomes very inefficient for a few thousand objects/particles. I was wondering if you have available the file structure of the point caches; we'd love to generate them directly from our software and thus import into Blender directly as a particle system. We have already done this for voxel data and it works really nicely. Thanks!

  7. Does anyone know the type of compression used in the .bphys files? I’d like to have a look at the file structure, to see if I can query the smoke density at a particular 3D coordinate.

  8. Nice! :)

  9. Very fun read. Thank you.

  10. Imbusy & Matthias: Currently I’m comparing three cached frames with the actual cache interpolation function to see if the middle one can be removed. There’s still a lot of work to be done, but from the initial tests it seems this is going to work really well!

  11. That’s quite impressive… I was just yesterday discussing the ability to mix different caches via NLA.
    The problem was related to the ability to change the hairstyle after a first pass of baking and then mix the result with the first pose…
    I’ve sent the link to this really interesting article to the artist who’s working on that problem… maybe he can write a much more complete explanation of his idea!
    Great!

  12. Thanks for this article, I really dig your thoughts on NLA-like editing of point cache data…

  13. Fitting curves is too slow to encode and it wouldn’t work for discontinuities.
    Did you consider storing raw keyframes and compressing the residual of the linearly predicted data?

  14. Thanks for this article, point cache methodology has always been a bit of a black box to me. This has helped me understand it a lot better.

    J

  15. One could also try fitting curves to the data to minimize memory usage, allowing some minimal error – just like animation data in games.

  16. Jahka, many thanks for your effort in making the point-cache thing much more clear. It’s so important to have devs like you who introduce their work to the people interested in the inner workings of Blender.

    So again, great article and thank you
