Accelerating Cycles using NVIDIA RTX

Over the past few months, NVIDIA worked closely with Blender Institute to deliver a frequent user request: adding hardware-accelerated ray tracing to Cycles. To do this, we created a completely new backend for Cycles with NVIDIA OptiX, an application framework for achieving optimal ray tracing performance on NVIDIA RTX GPUs. Now, Cycles can fully utilize available hardware resources to considerably boost rendering performance.

If you’d like to try it out, the source code is currently undergoing review before being merged and is available for anyone to download, build and run. I’ve provided instructions in the linked review.

In this blog post, I’ll discuss the technical approach taken, review the performance you can expect, and share a possible future direction.

What was done?

Cycles already supports a range of hardware, with options for both CPU and GPU rendering. To produce consistent images across these options, most of the rendering code is shared.

NVIDIA OptiX is a domain-specific API designed for accelerating ray tracing. It provides a complete package with programmable ray generation, intersection and shading while using RT Cores on NVIDIA RTX GPUs for accelerating Bounding Volume Hierarchy (BVH) traversal and ray/triangle intersection testing.

Our approach was to implement a new backend for Cycles that uses the OptiX API to manage acceleration structures and ray intersection, with OptiX’s programmable parts calling into Cycles’ existing code for ray generation and shading.

The Blender preferences expose this as a new device type, which lists the supported RTX GPUs in the system and supports both single- and multi-GPU rendering:

Almost all of Cycles’ GPU-supported features (hair, volumes, subsurface scattering, motion blur, etc.) already work with the OptiX backend, so improving render times is as simple as flipping the switch in the settings.
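
For readers who drive Blender from scripts, the same switch can be flipped through the Python API. Here is a minimal sketch, assuming a build that includes the OptiX backend; it selects OptiX as the compute device type, enables the detected OptiX GPUs, and sets the scene to render on the GPU.

import bpy

# Minimal sketch: assumes a Blender build that includes the OptiX backend.
# Select OptiX as the Cycles compute device type in the user preferences.
cycles_prefs = bpy.context.preferences.addons["cycles"].preferences
cycles_prefs.compute_device_type = "OPTIX"

# Refresh the device list and enable every detected OptiX device.
cycles_prefs.get_devices()
for device in cycles_prefs.devices:
    device.use = (device.type == "OPTIX")

# Render the current scene with Cycles on the GPU.
scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.cycles.device = "GPU"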

How much faster is Blender Cycles with RTX?

The new OptiX backend is showing some significant speedups compared to the existing options. Below is a graph showing the render times measured for several Cycles benchmark scenes on a CPU, with CUDA and with OptiX (smaller is better):

These speedups are now also possible in thin and light laptops and mobile workstations. NVIDIA’s RTX Studio laptops feature NVIDIA GeForce and Quadro RTX GPUs and are the first to support real-time ray tracing and advanced AI capabilities.

What else is happening?

The OptiX SDK includes an AI denoiser that uses a trained neural network to remove noise from rendered images, allowing images to be rendered with fewer samples and thus reducing render times. OptiX performs this operation at interactive rates by taking advantage of Tensor Cores, specialized hardware designed for the tensor/matrix operations that are the core compute function used in deep learning. A change is in the works to add this feature to Cycles as another user-configurable option, in conjunction with the new OptiX backend.
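
As a rough sketch of what enabling it could look like from Blender’s Python API once the option ships: the property names below follow later Blender releases that expose the OptiX denoiser and may differ in the experimental build, so treat this as an illustration rather than the final interface.

import bpy

scene = bpy.context.scene
scene.render.engine = "CYCLES"

# Denoise final renders with the OptiX AI denoiser
# (property names assumed from releases that ship the OptiX denoiser).
scene.cycles.use_denoising = True
scene.cycles.denoiser = "OPTIX"

# Denoise the interactive viewport preview as well.
scene.cycles.use_preview_denoising = True
scene.cycles.preview_denoiser = "OPTIX"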

This can be especially useful when doing viewport rendering, so stay tuned!

72 comments
  1. Hi, I need to download Blender 2.5.

  2. Can you use OptiX plus CPU rendering?

  3. Hi, I found something unusual in Blender 2.8. My RTX 2060 suddenly stops being detected by the desktop while rendering, and I need to shut the computer down and turn it on again before it is detected again; a restart won’t work. It also broke my laptop’s GTX 1050 Ti, which needed its motherboard replaced because it wouldn’t power on. I was only rendering 3 images at 128 samples and 14000 x 14000. I hope this can be solved; it scares me away from 2.8 and back to 2.79b. I hope the Blender devs resolve this – the application is very useful.

  4. It doesn’t seem to work for my RTX 2080 Ti. I installed the drivers and OptiX, but there’s still no option for OptiX, only CUDA and OpenCL.

    • With version 8.3, OptiX shows up as an option (noted as experimental) and recognized my GeForce RTX 2070 Super, and denoising can be selected in Render Settings – Sampling – Viewport Denoising: OptiX AI Accelerated. It is working just fine at 256 samples and 5100 x 3300 resolution with the simple render test scene I have. Please note I am a total Blender newb, and for some reason I could not toggle the camera view correctly until after I had opened the Render tab (a step which was not shown in the tutorial video I was watching).

  5. Things are really getting exciting, but we need Eevee support too, once there is a Vulkan implementation. I’ll upgrade my hardware to the latest and greatest. I know 2.8x still has more work to be done, but the future is bright.

  6. NVIDIA has also put optimisations for Blender 2.8 into their Studio set of drivers. I happened to mention Blender to them during a support case on another issue, in regard to choosing the best driver option from them.

    This was during May to July of this year – it likely got their attention. So it’s worth running their up-to-date release if you want to really see Blender fly.

    • That sounds cool. I tested today on Windows 10 with a Quadro rtx400, but the difference between CUDA and RTX isn’t there yet.

      So how do I get the NVIDIA drivers for Blender 2.8, and how do I optimize it?

  7. How to Buy Retail Software?

  8. Does this update support overclocking? Currently I can’t overclock my RTX 2070 Super or I’ll get Cycles rendering errors like cuCtxCreate: illegal address. I’ve seen people overclock their GTX cards, so I assume this is an RTX-only problem. If anyone has a 2070 Super, are you able to overclock it?

    • If you get Cycles errors when OC’d, that means the OC settings are not stable. Blender may be the program giving you the error, but games and other software using the GPU with the OC’d settings will also have problems, even if you don’t notice them. (Stutters, restarts, FPS lags – up to crashes, etc.)

      Try pulling back your OC settings just a bit, until Blender can easily render any scene without failing.

      As for this build, it may or may not fail, as it uses different parts of the GPU more, and the areas that CUDA uses a bit less.

      (Blender is one of the tools I use when testing OCing video cards, as it depends on stability for calculation precision and pushes the GPU to its limits.)

  9. Thanks, but can you add RTX to Eevee next?

    SSR sucks compared to RTX :D

  10. Hey guys, I’m having trouble. After installing new drivers, OptiX, etc., Blender is unable to recognize my RTX 2080. I did a clean install and removed the old drivers, and it seems to work in other software like Maya and Houdini, but not in Blender, even in the official release. Has somebody got the same issue? Any idea what could be happening?

  11. I’m wondering:

    When CUDA is active, there’s a CPU + GPU option. Is this a CUDA-only option, or could the CPU be added to the OptiX method as well? In that case, I hope that will follow soon for even more rendering speed.

  12. I hope this OptiX won’t add like 100 MB of API files to Blender.

  13. Is there documentation showing what shaders are supported right now? Every project I open that worked in Cycles is now giving me an error: “shader does not support ray tracing with Optix yet”. I just use Principled shaders for everything, so this really is stumping me.

  14. Does it work under Linux, or is it a Windows-only thing?
    That said, this is awesome news!

  15. Good news, but I get a bad result with my RTX 2060. My render times are 02:29 with the OptiX Blender build and 01:53 with Blender 2.8 stable on my test scene.

  16. How much memory is required for RTX support? Does it work with the shared CPU–GPU memory concept which is available now in Blender 2.8?

    I mean, I have a Blender project using 25 GB. I have 32 GB of memory in my computer and 8 GB on my graphics card, and the GPU is usable now. Can I use, for example, a GeForce RTX 2080 Ti with 11 GB for my project which uses 25 GB of RAM and take advantage of the RTX support? (Assuming that I still have 32 GB of main memory.)

  17. Hello,
    Thanks for this! I did find one thing while testing the current build. It seems like you can have hair or subsurface scattering, but not both. If I turn off subsurface scattering, the hair shader works fine, but if I turn on subsurface scattering, that works but I lose the hair.

  18. I think it’s cool, but it would be way better to also implement ray tracing capabilities in Eevee, to have a half-rasterized, half-ray-traced engine, a sort of merger of Cycles and Eevee, for much faster (dare I say real-time?) rendering.

    • This is how the Unreal RTX implementation works, and it gives brilliant results, at great speed!

      https://www.youtube.com/watch?v=Qjt_MqEOcGM

    • Think about it: an unbiased path tracer with the maximum speed possible, to compare against ground truth, is an important partner for every rasterisation development.
      Most important is to bake it down for diffuse and specular contribution to reach maximum speed and quality.
      Even when we have full Vulkan RT features in Eevee, precalculation always gives you more quality and speed.
      Eevee would not be so great if Cycles had not been there before.)

  19. I think first of all there should be a big thank you to Nvidia for some months of work to align Cycles with the competition.

    Nvidia is no charity, of course, but they forget a little that most of their customers are nerdy fans who have a clear view and understanding of the marketing strategies of the last few years.

    I have been a fan for more than 10 years, but the last generation’s marketing campaign stopped that.

    However.
    Some small criticisms.
    There is also the pure CUDA option for Cycles. There is also some big optimisation potential when you compare ground-truth CUDA Cycles with E-Cycles.
    Giving some support for this optimisation also reaches millions of Blender users who cannot afford an RTX card for now, but who will remember.

    So: start again to make more fans.
    There is also an ROI.

  20. I tried it and got slightly slower renders with Optix compared to CUDA on an RTX 2080 Ti. I’m assuming it’s Nvidia that created the benchmark chart…

  21. Please stop supporting nVidia’s money-hatting and industrial sabotage.
    As Blender is a free, open-source, multi-platform tool, it doesn’t make sense for you to be engaged in this company’s attempts at cornering the market.
    I get that, as a donation-run project, support via video cards, money, and tech support is nice. But the graphics industry has been working on standards like OpenCL for a reason. Allowing nVidia to use the Blender Foundation as its personal marketing tool is an affront to the free and open-source spirit of Blender.

    • nVidia is like most other companies, with Intel and AMD also having a history of being horrible.

      However, the base technologies of real-time RT and new AI/ML features are not nVidia’s, and won’t be locked to nVidia – as the next AMD GPUs will have similar hardware support. (See PS5/XBox 2020)

      As for ‘cornering the market’ – RTX features and technologies are already in Vulkan and other fully OSS frameworks. These are NOT nVidia-specific, even though nVidia uses their CUDA and OptiX technologies in the implementation.

      The technology nVidia is using in the RTX GPUs is hardware based on Microsoft’s research technologies that were publicly released as DXR technologies for ray tracing and WinML(+) for ML/AI in early 2018. nVidia RTX cards are built from Microsoft’s work, and it isn’t exclusive to nVidia at all.

      This might sound more dubious, but remember Microsoft in the 2010s is not Microsoft from the 00s – and even though they had a lot of OSS projects and contributions in the old days, they now are shoving tons of proprietary code out with unrestricted OSS licensing.

      Which is why it is important to notice that Microsoft has also been working to help Vulkan use the DXR/WinML technologies – making the technologies fully cross platform. *Except for OS X that is still trying to break/beat Vulkan while also killing support for OpenGL or OpenCL.

      OpenCL was rather good, but even after Apple pushed it, it was a problem for them, as OS X couldn’t handle ubiquitous OpenCL calls through the OS or in several applications. I also don’t like to see OpenCL go away, but right now it has problems, as it can’t support the faster GPU technologies that are only being implemented through Vulkan and DX12. OpenCL needs a revamp, or we all need to move to Vulkan.

      Linux also suffers from the issues Apple had with heavy OpenCL and GPU usage. Linux and OS X both need full GPU preemption and SMP features like Windows has offered since 2007. It is this core lack of GPU technologies in all non-Windows operating system technologies that has handed the graphical future to Microsoft. Windows can hit faster GPU performance while still allowing tons of GPU code to execute in tons of applications and throughout the OS without the worry of locks or cooperative multitasking scheduling like Linux and OS X hit.

      I digress…
      So, the ‘technologies’ that the RTX hardware is showcasing aren’t technically purely nVidia’s, and nVidia also doesn’t have control over them. This is how/why AMD is planning RT and AI hardware for their next GPUs next year, along with basic support in upcoming drivers, just as Intel is planning GPUs and embedded support in the future. And like nVidia’s, both Intel’s and AMD’s technologies come from Microsoft Research, which Microsoft has provided to the entire hardware industry for free to use.

      The XBox and PS5 stories map out AMD’s RT and ML/AI upcoming features, as those AMD GPUs will have the RT technologies, via Vulkan on PS5 and both DX12 and Vulkan on XBox.

      So OptiX and CUDA are nVidia implementations, but the originating concepts exist outside nVidia for anyone to use, for free, and are already really strong and doing well in the Vulkan implementation.

      Take care of yourself, this isn’t worth worrying about. Also note you will be able to do this stuff on AMD and possibly Intel GPUs next year – so just hold on and let nVidia do the heavy lifting, which seems to be AMD’s strategy as well.

    • No, I don’t think it has to do with money. Right now, with the RTX technology, nVidia RTX cards are THE best-performing GPUs for rendering on the entire market. If Blender doesn’t support the newest and fastest technology, then why would people choose Blender?

      However, you are right about OpenCL being a nice open standard, and I’m sure AMD and possibly Intel will also have great hardware ray tracing products in the near future. I would be happy to see AMD continue their tradition of being very powerful at computing (like Vega) while also adding ray tracing cores. At that point, Cycles will surely add support for those technologies, because if not – then again, why would someone want to use Blender?
      The point is that right now nVidia has the fastest cards, especially with RT cores. Let’s hope the competition in the high-end market will be back in the future.

      Take care.

    • Blender is open source; if Nvidia wants to implement better support for their products, I don’t see why they should be denied that, especially when the majority of the industry is running on their hardware.

      AMD is free to support Blender in the same way.

  22. Hi
    Are you going to be able to use OptiX and CUDA together? Like using RTX cards and GTX 10xx cards to render an image at the same time.
    Also, is there any plan to bring NVLink memory pooling on RTX 2070 Super and above cards to Cycles? Puget Systems showed that the NVLink on RTX cards is able to handle memory pooling, but it needs software support.
    These two could be game changers for future GPU rendering.

  23. Any demos for the peasants like us who don’t have RTX cards yet? And what about Eevee – is it planned after porting to Vulkan?

  24. What about AMD?

    Ray tracing APIs are cross-platform. I presume this implementation, though, isn’t?

    • AMD don’t have any raytracing GPUs yet. They won’t until next year at the earliest with Navi 20.

      • Yes, it’s not available right now, but it doesn’t make sense to use a proprietary API now instead of something that will also work with AMD when it releases.

  25. Nice to see that NVidia supports the Blender community this way. That’s customer care done right.)
    Hope you get the chance for further improvements.

  26. Wow, that’s awesome. Up to double performance for free, it seems. Can’t wait to test it!

  27. This is great news!

    I have a question. If we have a multi-GPU setup of one 1080 and one 2080, and the “Cycles Render Devices” is set to “OptiX”, would Blender only take advantage of 2080? Or would 1080 still help with rendering?

  28. Will OptiX rendering work in combined mode with CPU rendering?

  29. Cool, now we just need to get baking in Eevee going.

  30. With the Optix backend in place, I’m assuming it should be relatively straightforward to add MDL support now, right?

  31. E-Cycles does the same or even more without the need for an RTX GPU. Please consider optimizing what can be optimised.
    https://blendermarket.com/products/e-cycles

  32. Great news! When can we expect Optix to be a part of the daily builds?

  33. My RTX 2060 doesn’t show up under OptiX devices.
    Is this a bug, or will it not be supported?

    • My apologies, I didn’t read the instructions correctly!

      • RTX 2080, same problem. Did I also fail to read the instructions? Really, such an answer is empty – useless and annoying. I want to know why it isn’t working for me.

    • My RTX 2070 does not show up under OptiX either. I have 3 graphics cards installed in my system; two are GTX 970 cards. I wonder if they are interfering with OptiX being detected.

  34. It would be an extremely impressive improvement if it weren’t thanks to dedicated hardware acceleration.

    You would expect to get 10x out of a custom ASIC, and that’s kind of what it does. I guess it can trace ~10x more rays than a GPU without RT cores, but shading is the bottleneck, so all those rays don’t make it that much faster in the end.

    I think everyone expected more from RTX, considering the miracles PowerVR Wizard was showing years ago using a few watts of power. Either PowerVR was misleading us or Nvidia shipped something relatively less effective.

  35. Another improvement of the benchmark overview could be some multi-GPU additions.

    For now we get strange outputs
    for
    1xTitan RTX
    vs
    2x Titan RTX
    NV Link installed or removed
    Some recommendation here would be nice.

    1x Quadro 6000/8000 vs
    2x Quadro 6000/8000

    At the moment we delegate different jobs to separate Blender instances with cuda_0 and cuda_1 settings to get close to linear scaling.
    For 4GPU setups cuda_0 to cuda_3.

    It would be nice to keep an eye on such use cases in further optimisations and to add them to your benchmark scenarios.

  36. This is such great news, and I am amazed that you guys pulled it off as a side job during the crazy work on 2.8! It just shows that the Blender Foundation and Cycles are the best thing ever! <3

  37. Now for Eevee to use the power of the GPUs.

  38. Great news.
    Could we expect a similar speedup in Cycles baking with the OptiX device type?

    For further OptiX AI denoising work, it would also be cool to take an extra look at denoising low-sampled bake output.

    All in all, it would be nice if these optimisations also got an additional Cycles bake view and a benchmark scene.

    • Baking is one of the few features not yet implemented in the patch, so I don’t have numbers for it. It’s in the works though.

      • Thanks for the info. A lot of baking is happening these days on multi-GPU setups in Blender on Quadro RTX and Titan RTX.
        So if you need a business case, I could give you some.)

  39. Does OptiX work like RTX on Pascal, Turing and Volta GTX cards too?

    • Don’t think so. GTX cards have no RT cores, so they just use their Cuda cores as before. RTX backwards compatibility is basically just emulating RT core functionality in Cuda cores, which doesn’t make it faster, but just makes the API compatible. Since Cycles already renders well on Cuda, there’s no reason to add further levels of complexity without added performance.

      • Pretty much. The OptiX device in Cycles is currently disabled for non-Turing RTX cards since there is little benefit over just using CUDA there.

        • Is there any benefit for Turing non-RTX cards, such as the GTX 1650 or the Quadro T2000? They lack the Tensor Cores (though they still have separate FP16 ALUs) and lack the RT Cores, too.

      • I’m currently getting slightly faster renders using a GTX 1080 Ti & an RTX 2070 Super in CUDA mode than when using the single RTX card in OptiX mode. Even if it were emulating the RT cores, it would be great if both cards could be used in OptiX mode. I imagine some people with more than one GPU didn’t upgrade all of them to RTX.

        • Hello, I have a question about how to activate OptiX: do I simply have to enable the option in Blender, or as a regular user do I have to know how to compile it independently? Thanks for your help.

          • Hello, I have a question about how to activate the optix, that is, I simply have to activate the option in blender or as a common user, do I have to know how to compile independently? Thanks for your help.

  40. I cannot wait for a potential Eevee + RTX integration later! What an exciting time!

  41. YES!!! WELL DONE!! So will AI viewport denoising be available with Blender? I heard that it’s not GPL-compatible?

    • You can get the OptiX denoiser now as an add-on. I’m sure the OptiX RTX stuff will work in a similar way: the hooks are built in and the libraries are downloaded separately. Maybe. That’s a guess.
