Hey there,

I’m updating my workstation and toolset, and I’ve been playing around with a lot of software.

The recent release of Arnold with its beta GPU feature really put the renderer back on my radar, so I decided to get a trial license and see how it runs.

I’ve been using Gaffer at home for the tests. If you’ve never heard of it, here’s a brief introduction:

It’s an incredible open-source tool developed at Image Engine, focused on lighting and look development. It offers a powerful and intuitive node-based system, similar to Katana in many ways, with its own pros and cons. Gaffer was originally developed by John Haddon and now receives contributions from many developers around the globe.

You can use Arnold, with most of its features, directly in Gaffer.

Since Gaffer uses the SDK version of Arnold, I couldn’t find a way to get rid of the Arnold watermark with a trial license. Sorry about that.

First impressions:

I’m using a really simple scene containing a ground plane, Gaffer’s robot, an Arnold skydome light, and a quad light.

This first image is the result of a CPU render.

frame time: 9m43s

With the same render settings, but rendering on the GPU, I got this:

frame time: 1m56s!!!

That’s just 20% of the CPU time. SO FAST!!
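Converting both frame times to seconds makes that ratio easy to check:

```python
# Convert both frame times to seconds and compare them.
cpu_seconds = 9 * 60 + 43  # 9m43s
gpu_seconds = 1 * 60 + 56  # 1m56s

fraction = gpu_seconds / cpu_seconds * 100  # GPU time as % of CPU time
speedup = cpu_seconds / gpu_seconds         # how many times faster

print(f"GPU took {fraction:.1f}% of the CPU time ({speedup:.1f}x faster)")
# → GPU took 19.9% of the CPU time (5.0x faster)
```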

Great, Right?

Well, not so fast…

Even without touching the sampling settings, the image generated by the GPU came out much noisier, as you can see in the zoomed detail:

My first reflex was to increase the samples to try to match the result I had on the CPU.

frame time: 7m50s

It feels like something is not quite there yet. The render time was nearly the same as the CPU’s, and I could still notice more noise coming from indirect light.

This is expected: Autodesk announced with the beta release that some features are still a work in progress, and that Arnold GPU only uses camera (AA) sampling, together with the adaptive sampling mode.

Running the same test scene with adaptive sampling was a great improvement. The noise went away, and with a faster render time I got a much better result than the CPU:

GPU with adaptive sampling:
frame time: 5m44s, almost 4 minutes faster than the CPU!
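For reference, these are the Arnold options involved, shown as they would appear in a `.ass` options block. The values below are illustrative assumptions, not the exact settings I used:

```
options
{
 render_device GPU            # CPU or GPU (GPU is the 5.3 beta feature)
 AA_samples 3                 # base camera (AA) samples
 enable_adaptive_sampling on  # keep refining only where the image is still noisy
 AA_samples_max 8             # upper bound for adaptive refinement
 AA_adaptive_threshold 0.015  # lower threshold = cleaner image, longer render
}
```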

Hardware bump

I’ve been using an NVIDIA GTX 1070 Ti for this test, which is a reasonably powerful graphics card, and I still didn’t get that huge performance gain from GPU renders in Arnold. But I can’t blame Autodesk or Solid Angle for my disappointment: they are focused on developing Arnold GPU for the latest NVIDIA graphics cards, the RTX family.

I wrote this article during my upgrade from a GTX 1070 Ti to an RTX 2070.

The hardware specs of the two graphics cards are not that far apart. Both have 8GB of memory, and the older card even has around 100 more CUDA cores than the newer RTX. The big difference is the new Turing architecture and all the software improvements that let the RTX family execute some impressive tasks, like real-time ray tracing and Deep Learning Super Sampling (DLSS).

Unfortunately, DLSS is not supported by Arnold in Gaffer, though Arnold for Maya has a similar feature that uses machine learning to denoise IPR images on the fly.

Flying with RTX

I increased the complexity of my test scene a bit by adding more lights and an atmospheric volume.

I lit the scene using the IPR with the GPU enabled.

And holy s***, it is smooth!!

You can tell right away that the RTX refreshes the image interactively, with a great performance improvement.

Here is the comparison for the first render test:

CPU frame time: 24:07.06

GPU frame time: 5:25.17

A surprising inconsistency in the look… For some reason, the GPU render didn’t trace all the reflection rays from the red light. So when I finally rendered with the CPU, I got a lot of unexpected red highlights on the back of the FG robot and the chest of the BG robot. I couldn’t find any mention of that kind of problem in the Arnold GPU beta’s release notes. But it’s alright, it’s a beta… and the speed! RTX made it so much faster: roughly 20% of the CPU time, with a similar level of noise.

Just to validate the comparison, I decided to run a new test with the same images but without reflection rays coming from the red light. This time the look came out consistent. I also ran the same render on the GTX 1070 Ti. Check the results:

CPU:
frame time: 20m10s

GTX 1070 Ti:
frame time: 10m40s

RTX 2070:
frame time: 4m35s
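Converting the three frame times to seconds makes the scaling easier to see (a quick sketch; the labels just mirror the results above):

```python
# Frame times from the consistent-look test, in seconds.
times = {
    "CPU": 20 * 60 + 10,          # 20m10s
    "GTX 1070 Ti": 10 * 60 + 40,  # 10m40s
    "RTX 2070": 4 * 60 + 35,      # 4m35s
}

cpu = times["CPU"]
for name, seconds in times.items():
    print(f"{name:>11}: {seconds:4d} s  ({cpu / seconds:.2f}x vs CPU)")
```

On this scene, the RTX 2070 comes out roughly 4.4x faster than the CPU and about 2.3x faster than the GTX 1070 Ti.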

Look dev is Autodesk’s initial goal with Arnold GPU; they mentioned in their release video that the GPU delivers ALMOST production quality. But I believe the GPU is totally fine for any of my personal lighting/comp projects. I may need to check the CPU render now and then, just to be sure nothing weird is going on, like the red specks on the robots.

On the IPR side, I felt the CPU was more stable than the GPU. Running the IPR on the GPU is clearly faster, but I had to restart the render more often because some lights would turn off out of the blue. On the other hand, the responsiveness of the IPR while moving lights and objects makes the GPU an easy choice for interactive lighting. Arnold GPU is really making the lighting process more fun.

"GPU renderers powered by RTX are a definite improvement to a lighter’s work"

I think we CG artists will benefit even more than end users like gamers. I’ve seen a lot of online reviews complaining about RTX FPS benchmarks in a certain game, coming from people who never really cared about the idea, and complexity, of bouncing light rays around a 3D scene. For lighters and other CG artists, this is a constant challenge in our day-to-day work.

It means our renders will get done faster, in some cases much faster. RTX graphics cards are a big step forward in technology, and steps like that don’t happen often.

It is still very hard to render photoreal, ray-traced images of complex, feature-film-like assets in real time inside a game engine like UE. Most of the time, the asset optimization that requires ends up more expensive than rendering everything with an offline renderer, which is quite a bummer. On the other hand, it’s great to see Arnold also benefiting from the improvements that came with real-time ray tracing on the RTX cards. A path tracer’s ability to render ultra-complex scenes is still unrivaled, and I think it will be a long while before the technology allows us to completely replace offline renderers with real-time engines.

A final note on my computer tech specs:

I’ve been using a three-year-old CPU (an i7-6800K @ 3.40GHz, 6 cores / 12 threads), which really doesn’t help the CPU times, while the RTX 2070 was released only a year ago.
The OS I’m using is Fedora 30.
