Warning, this will get very technical, very quickly. Not for the faint of brain. You have been warned.
Reusing vertex data with indexed drawing
Blender uses tessellation to convert quads and ngons to triangles, the only diet a GPU can consume. Currently Blender writes all data associated with a triangle into a buffer and draws the buffer with a single command. For simplicity, let's consider an ngon with 5 vertices, each carrying position and normal data. The problem is that this introduces quite a lot of data duplication, as you can see in the following picture:
As is evident, Position 1/Normal 1, Position 3/Normal 3 and Position 4/Normal 4 are duplicated for each triangle that uses them. Notice that these data are identical. Blender does allow a vertex to have different normals in different polygons, but within the same polygon the positions and normals of a vertex stay the same. Therefore they can be reused.
OpenGL has an easy way to reuse data, called "indexed drawing". Using this, we upload all vertex and normal data once and then use indices to build triangles from these data. This looks like this:
This not only de-duplicates vertex data but has another benefit. GPUs have a small cache, keyed by vertex index, where vertices that have been transformed by the vertex shader are stored. Every time a new vertex index is encountered, the GPU checks if the index exists in its cache and, if it does, the GPU avoids all shader work on that vertex and reuses the cached result. So not only have we eliminated data duplication, but every time a duplicated vertex is encountered we get it (almost) for free.
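To make the idea concrete, here is a minimal, hypothetical OpenGL sketch (not Blender's actual code) of drawing the 5-vertex ngon above with an index buffer. The vertex data is uploaded once and the three triangles of the fan only reference it by index (the sketch assumes 0-based indices and a loader such as GLEW providing the buffer functions):

```c
#include <GL/glew.h>  /* assumption: GLEW (or any loader) provides the buffer API */

/* 5 unique vertices: position (x, y, z) followed by normal (nx, ny, nz). */
static const GLfloat vertex_data[5 * 6] = { /* ... positions and normals ... */ };

/* 3 triangles fanned from vertex 0; vertices 0, 2 and 3 are reused by index
 * instead of being stored again (the picture above counts vertices from 1). */
static const GLushort indices[9] = {
    0, 1, 2,
    0, 2, 3,
    0, 3, 4,
};

void draw_ngon_indexed(GLuint vbo, GLuint ibo)
{
    /* Upload the unique vertex data once. */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_data), vertex_data, GL_STATIC_DRAW);

    /* Upload the index buffer describing the triangles. */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    /* ... set up vertex attribute pointers for position and normal ... */

    /* Draw all 3 triangles from the shared vertex data. */
    glDrawElements(GL_TRIANGLES, 9, GL_UNSIGNED_SHORT, 0);
}
```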
Vertex indices require a small amount of storage (1 integer per triangle corner, or 3 per triangle), so they are not free. Also, they only save us memory if we use quads and ngons. For a pure triangle mesh we end up using more memory (remember, these tricks only work for triangles that belong to the same ngon). However, given that good topology is based on quad meshes and that the savings get more substantial with more complex vertex formats, the benefits by far outweigh the issues. Here we only considered a simple position-normal format, but if we include UV layers, tangents and whatnot, the cost per vertex is much higher, and so is the cost of data duplication and the savings we get when we avoid it.
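To put rough numbers on this (my own back-of-the-envelope figures, assuming 3 floats for position, 3 for normal and 4-byte indices, not measurements from the branch): one vertex costs 24 bytes, and the 5-vertex ngon tessellates to 3 triangles. Non-indexed, that is 9 vertices x 24 bytes = 216 bytes; indexed, it is 5 vertices x 24 bytes plus 9 indices x 4 bytes = 156 bytes. For a lone triangle the same math gives 72 bytes non-indexed versus 84 bytes indexed, which is exactly the "pure triangle meshes use more memory" case mentioned above.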
When benchmarking indexed drawing at the Blender Institute, I found a pleasant surprise: even though pure triangle meshes need some extra storage for indices and will not benefit from the transformed-vertex cache, the NVIDIA driver on my GPU still draws such meshes faster. This is quite weird, because more data is actually sent to the GPU, but it is still a positive indication to go on with this design.
Finally, we can go even further and merge vertices from nearby polygons whose data are exactly the same. This gets much more complex with heavier vertex formats and could lead to slower data upload due to the CPU overhead of detecting identical vertices. It also breaks individually uploading loop data (see below), because any change in any data layer can invalidate those merged indices.
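As a rough sketch of what such CPU-side vertex merging could look like (hypothetical code, not what the branch does; the `Vertex` layout and `dedup_vertices` helper are made up for illustration, and a real implementation would use a hash table rather than the quadratic search shown here):

```c
#include <string.h>

typedef struct Vertex {
    float position[3];
    float normal[3];
    /* ... UVs, tangents, etc. make comparisons (and duplication) costlier ... */
} Vertex;

/* Welds bit-identical vertices: writes the unique vertices to 'out',
 * fills 'remap' with the new index for each input corner, and returns
 * the number of unique vertices. */
unsigned int dedup_vertices(const Vertex *in, unsigned int in_count,
                            Vertex *out, unsigned int *remap)
{
    unsigned int out_count = 0;
    for (unsigned int i = 0; i < in_count; i++) {
        unsigned int j;
        /* Look for an identical vertex we have already emitted. */
        for (j = 0; j < out_count; j++) {
            if (memcmp(&in[i], &out[j], sizeof(Vertex)) == 0) {
                break;
            }
        }
        if (j == out_count) {
            out[out_count++] = in[i];  /* new unique vertex */
        }
        remap[i] = j;  /* this becomes the index buffer entry for the corner */
    }
    return out_count;
}
```

The comparison loop is where the CPU overhead mentioned above comes from, and any edit to any data layer changes the comparison result, which is why per-layer uploads no longer work once vertices are merged this way.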
Testing this optimization in a local branch gives about a 25% reduction in render time compared to master, and these optimizations will be part of Blender 2.76.
Easy hiding, easy showing
Using indexed drawing is not only useful for speed and memory savings. It also allows us to rearrange the order in which triangles are drawn at will. This is especially useful if we want to draw polygons with the same material together: instead of rearranging their data, we just rearrange the indices (less data to move around). But there is one use case where indexed drawing can really help: hidden faces.
A lot of Blender's drawing code checks whether a face is hidden before displaying it. This check is done every frame, for every face. However, only a few tools actually invalidate those hiding flags, so we don't need to do the check every frame; we can instead cache the result and reuse it.
By using indexed drawing, we can arrange for the indices of hidden triangles to be placed last in the triangle list. This makes it quite easy to draw only the visible triangles: we just draw up to the point where the hidden triangle indices begin. Blender master now employs such an optimization in wireframe drawing, which reduces drawing overhead by about 40%.
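A hedged sketch of what drawing only the visible prefix could look like in OpenGL (illustrative only, not Blender's actual implementation; the function and parameter names are made up):

```c
#include <GL/glew.h>  /* assumption: a loader provides the buffer API */

/* If the index buffer is sorted so that hidden triangles come last,
 * hiding and showing faces only changes how many indices we draw. */
void draw_visible_triangles(GLuint index_buffer,
                            GLsizei total_tris, GLsizei hidden_tris)
{
    const GLsizei visible_tris = total_tris - hidden_tris;

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);

    /* Hidden triangle indices live at the tail of the buffer, so drawing
     * only the visible ones is just a matter of shortening the count. */
    glDrawElements(GL_TRIANGLES, visible_tris * 3, GL_UNSIGNED_INT, 0);
}
```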
Update data when needed
Another big issue with Blender is that we upload data to the GPU too often. A scene frame change will cause every GPU buffer to be freed, triggering a full scene upload to the GPU. Obviously we don't want that. Instead we want a system where certain actions invalidate certain GPU data layers. For example, if UV data are manipulated, it makes no sense to re-upload position or normal data to the GPU. If the modifier stack consists of deform-only modifiers, there should be a way to re-upload only position data to the GPU for final display.
For this we need a system like the dependency graph, where certain operations trigger an update of GPU data. Without such a system, the only way to ensure that we see a valid result is to upload all data to the GPU again and again, every frame, which is pretty much what is happening right now in Blender for the GLSL/material mode.
GLSL material mode basically iterates through every face of every mesh in the scene every time the window is refreshed and gathers the same data over and over, regardless of whether it has changed or not. If we want to avoid this, we need to cache those data. But if we do that, we also need to be able to invalidate them when an operation changes them, so that they are uploaded to the GPU properly and the user sees the result of that operation.
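As a very rough sketch of the kind of per-layer invalidation described here (all names and flags below are hypothetical, not an existing Blender API): operations tag only the GPU data layers they touch, and the draw code re-uploads only the tagged layers instead of everything, every frame.

```c
/* Bitmask of GPU data layers that can be invalidated independently. */
enum {
    GPU_LAYER_POSITION = (1 << 0),
    GPU_LAYER_NORMAL   = (1 << 1),
    GPU_LAYER_UV       = (1 << 2),
};

typedef struct GPUMeshCache {
    unsigned int dirty_layers;  /* layers that need re-uploading */
    /* ... handles to the GPU buffers for each layer ... */
} GPUMeshCache;

/* Called by tools: e.g. a UV editing operator tags only the UV layer,
 * a deform-only modifier evaluation tags only positions (and normals). */
void gpu_cache_tag_dirty(GPUMeshCache *cache, unsigned int layers)
{
    cache->dirty_layers |= layers;
}

/* Called from the draw code: upload only what actually changed. */
void gpu_cache_update(GPUMeshCache *cache)
{
    if (cache->dirty_layers & GPU_LAYER_POSITION) {
        /* ... re-upload the position buffer ... */
    }
    if (cache->dirty_layers & GPU_LAYER_UV) {
        /* ... re-upload the UV buffer ... */
    }
    /* ... other layers ... */
    cache->dirty_layers = 0;
}
```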
This is not the result of crappy programming; rather, it's due to the history of how Blender's drawing code evolved from an immediate-mode drawing pipeline, where the only way to draw meshes was to basically re-upload all data every frame.
Bottom line: a fast GLSL view means having such a system in place. Fast PBR materials and workflow shaders from the Mangekyo project imply fast GLSL, which means having such a system. So it is no wonder that we have to tackle this target first if we want a fancier viewport with decent performance.
Really glad to hear!
So… which version will come with these improvements? Is there any idea?
Most of the improvements are already in Blender, grab yourself a build from here: https://builder.blender.org/download/. So, in 2.76.
I too would like to know more about that project :)
Great work btw. As an animator I can't wait to get faster playback in the viewport. Sculpting needs it badly as well.
Hi Antony,
that was a very informative read and it was very well written… absolutely not too technical. As this will be the base for all our future efforts towards bringing Blender into the new century, this should be carefully designed and thought out…
Blender has a bright future – I see it every time I create a sneak peek :)
Many greetings and keep up the awesome work!
Thomas
Mangekyo project = ?
Wow, all things point back to depsgraph improvements :D
Thank you for your work on the viewport project. These improvements are most welcome. An immediate speedup on heavy scenes can be achieved by reforming the “OUTLINE SELECTED” option – disabling it gives massive improvements of the responsiveness of the viewport. An option for drawing only the bounding box of the selected object would be much appreciated.
Thank you and good luck!
… Mangekyo project imply fast GLSL …
Please share with us more infos about this secret project…
Would improvements in this area impact Cycles rendering speed too?
Or is this only for the Blender game engine and Blender Internal?
If it's not for Cycles, then why put dev time in here, since Blender Internal is at the end of its development and the Blender game engine will probably be merged into Cycles, or (sadly) starve as well, since there are engines that are optimized to be only game engines and thus will always be more optimal for the task.
Or will we get a more feature-rich* GLSL render mode?
(as a new render engine) (* glass, smoke, reflections etc.)
Once in a while I do short animations that easily take 30 min per frame. That's why I hope it will improve Cycles, but I can't tell from the article. Also, what's the current state on this, are we near to this or is it a far-future wish?
I don't think it only improves Blender Internal or the game engine… I suppose this will speed up your viewport for sculpting, editing and displaying models…
Bert
Mangekyo project? Tell us more, please :)
dito
yeah, typos.. ‘ditto’ was meant to be typed. :p
Is this 'new drawing' already happening in Blender or is this a future plan?
Bert
Really happy to hear these improvements and have been following the development closely.
Note: Not heavy read. Pretty Basic stuff ;)