GPU Rendering vs. CPU Rendering – Empirical approach for comparing render times

As GPU render engines become more popular and feature-rich, you may be thinking that it’s time to integrate GPUs into your final-frame rendering workflow. The driving force behind a migration to GPU rendering has always been speed. In fact, BOXX customers frequently ask, “How much faster is GPU rendering compared to CPU rendering?” This is a tricky question to answer because of the many variables involved.

The goal of this article is to provide a better understanding of image quality with respect to the render times of different rendering engines using different compute devices. I'll also propose a method to accurately compare CPU rendering to GPU rendering.

Note: The image analysis technique described below works very well on the BOXX Benchmark Scene which you can download here. However, we have noticed that the image quality calculator utility is not as reliable on certain scene files. Users are encouraged to experiment on their own with the image analysis techniques discussed in this article.

Benchmarking
Most likely, you are familiar with CPU render benchmarks. The concept is simple: set up a benchmark scene and evaluate render times where the only variable is the CPU model being tested. The scene file, meshes, lighting, shaders, and render settings remain constant throughout the testing. When comparing CPU rendering to GPU rendering, benchmarking becomes more complex. For example, Chaos Group’s V-Ray has a popular GPU rendering engine (V-Ray RT for GPUs). While it’s very similar from a usability standpoint to the traditional CPU-based V-Ray Advanced, V-Ray RT is still a completely different rendering engine with its own unique settings, and it renders slightly different results within the same scene.

Because of this, many users will attempt to match the GPU output to the visual image quality of the CPU renderer. But doing this by observation alone is subjective and unreliable for benchmarking. Looking at a GPU-rendered image, you might say, “This looks as good as what I’m getting on the CPU,” and use that render time to compare against the CPU’s render time. A more empirical approach is necessary for a fair comparison.

Which image is better visually?

Calculating Image Quality
In general, anyone who has spent time processing digital photographs has dealt with noise in their images. Amateur astrophotography, a specific branch of digital photography, suffers particularly from excessive noise. Astronomers only want to work with the highest-quality images when stacking image data, so to help them weed out poor-quality (excessively noisy) images from their data sets, an image quality analysis needs to be performed on each image.

To accomplish this, the University of Manitoba’s Physics and Astronomy Department developed an Image Quality Calculator. This utility, a plugin for ImageJ, does a great job of analyzing the quality (amount of noise) of 3D renderings as well. To demonstrate its ability to analyze noise in 3D renderings, I incrementally increased the render time using V-Ray RT for GPUs and plotted the image quality value for 30 rendered images.

This produced the curve one would expect: the amount of measurable noise decreased continuously as render time increased. The curve also mirrors observation, in that the visual improvement becomes harder to appreciate as render time increases (the curve asymptotes at an image quality value of approximately 11.3).
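If you want to reproduce this kind of plot from your own measurements, here is a minimal sketch; the CSV file name and its column layout are assumptions for illustration, not part of the original workflow.

```python
# Minimal sketch: plot measured image-quality values against render time.
# The file "iq_vs_render_time.csv" and its column names are hypothetical.
import csv
import matplotlib.pyplot as plt

times, quality = [], []
with open("iq_vs_render_time.csv", newline="") as f:
    for row in csv.DictReader(f):
        times.append(float(row["render_time_seconds"]))
        quality.append(float(row["image_quality"]))

plt.plot(times, quality, marker="o")
plt.xlabel("Render time (seconds)")
plt.ylabel("Image quality value")
plt.title("V-Ray RT GPU: image quality vs. render time")
plt.show()
```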

Computing The Image Quality Factor
The factor is computed by taking the sum of the differences of adjacent horizontal and vertical pixels. This is done at three different scales (original size, half size, and quarter size). Images that contain a lot of contrast and sharp edges should have a higher quality factor using this method.
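As a rough illustration of that description, the sketch below sums absolute differences of adjacent pixels at the three scales. It is only an approximation of the ImageJ plugin's calculation, assuming absolute differences on a grayscale copy; the plugin's exact normalization may differ, and the input file name is hypothetical.

```python
# Approximate image-quality factor: sum of adjacent-pixel differences at
# original, half, and quarter size. This is a sketch of the idea described
# above, not the ImageJ plugin's exact implementation.
import numpy as np
from PIL import Image

def adjacent_pixel_differences(gray):
    """Sum of absolute differences between horizontally and vertically adjacent pixels."""
    horiz = np.abs(np.diff(gray, axis=1)).sum()
    vert = np.abs(np.diff(gray, axis=0)).sum()
    return horiz + vert

def image_quality(path):
    """Sum the adjacent-pixel differences of a grayscale copy at three scales."""
    img = Image.open(path).convert("L")
    total = 0.0
    for scale in (1, 2, 4):  # original size, half size, quarter size
        resized = img.resize((img.width // scale, img.height // scale))
        total += adjacent_pixel_differences(np.asarray(resized, dtype=np.float64))
    return total

print(image_quality("render_frame_030.png"))  # hypothetical rendered frame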

Establishing Baseline Image Quality
Now that we have an accurate method to measure image quality between two different engines, we can more confidently gauge the performance of GPU rendering vs. CPU rendering. To do this, an image quality baseline needs to be established which satisfies your project’s final deliverable requirements. For example, if you are producing an animation, the baseline image quality would be based on render settings that generate acceptably noise- and flicker-free animation sequences.

The underlying assumption in this article is that users are migrating from V-Ray Advanced to V-Ray RT GPU. In the example below, I used a rendered image generated with V-Ray Advanced, with render settings that yield satisfactory results for animation. The baseline IQ came out to approximately 11.35.

Note – The baseline image quality you establish is subjective. Only you can determine the acceptable amount of noise, flickering, and GI artifacts in your final product.

Comparing Results
With the baseline IQ value established, I now have a target for the GPU rendering engine to match. To compare the CPU to the GPU, I simply note how long the GPU engine takes to match the baseline image quality. In this example, the baseline image rendered on the CPU took 19 minutes and 11 seconds, while the GPU took 3 minutes and 4 seconds to match the baseline. Thus, the GPU (2688 CUDA cores) was roughly 6.2 times faster than the CPU.
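For completeness, here is the speedup arithmetic spelled out, using the render times quoted above:

```python
# Speedup arithmetic for this example (times taken from the text above).
cpu_seconds = 19 * 60 + 11  # CPU baseline render: 19 min 11 s -> 1151 s
gpu_seconds = 3 * 60 + 4    # GPU time to match the baseline IQ: 3 min 4 s -> 184 s
print(f"GPU speedup: {cpu_seconds / gpu_seconds:.2f}x")  # ~6.26x, i.e. roughly 6.2 times faster
```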

Choosing the right hardware
Whether you rely on CPUs or GPUs in your rendering pipeline, BOXX has you covered. We offer dedicated CPU-based rendering solutions like renderPRO and renderBOXX, as well as the GPU-dense APEXX 4 workstation with support for up to four GPUs.

For more information and to discuss your specific hardware needs call 1-877-877-2699.
