nVidia's LOD Effect on HWBOT Benchmarks

Published by Christian Ney on 09.01.13

Last October NVIDIA published its first 310.xx series driver, which brought a lot of improvements. Kepler-based cards in particular benefited significantly when it comes to gaming performance. In addition to the performance improvements, new SLI profiles and bug fixes, nVidia added a "new" setting to the driver that went under the radar. The setting in question is called "LOD bias" and is certainly known to most of you, since it has been present in the drivers for a long time. However, it only affected DirectX 9 games and applications; the LOD bias wasn't working with DirectX 10 and 11. nVidia has now found a way to make it work in DirectX 10 and 11, and here we will look at its impact on the HWBOT approved 3D benchmarks.


LOD bias: What is it?

In order to understand what the LOD bias is, you have to know that a game keeps several texture levels for each and every surface. If the camera is 500 m from an object, the texture applied is not the same as the one applied when the camera is only 2 m away. The further away the camera, the less detailed and the more blurred the texture becomes. This way the graphics card isn't overloaded with huge textures you don't need.
The developer chooses when the texture changes, and filters ensure that the transition between levels is barely visible (mipmapping). The technique that makes the transition almost invisible is called trilinear filtering.

The LOD bias (Level of Detail bias) can delay or accelerate this process. Increasing the LOD bias makes the driver use lower-detail textures while the camera is still close to the object, so the object appears more blurred, with less detail.
Decreasing the LOD bias has the opposite effect: higher-quality textures are used for longer, giving the impression of greater sharpness, but with a pixelation effect.
So decreasing the LOD made some older games with heavy mipmapping, such as Quake 3 or Serious Sam, look more "beautiful".
I think it was only benchers and overclockers who actually increased this setting, to boost benchmark scores a bit by using less complex textures. In the meantime, though, the LOD bias became less and less significant, since DirectX 10 and 11 didn't support altering this setting.
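The mip-level selection described above can be sketched in a few lines. This is an illustrative toy model, not an actual driver API: `mip_level` and its parameters are hypothetical names, and real GPUs derive the texel footprint from screen-space derivatives rather than taking it as an argument. It shows how a positive bias shifts the selection toward blurrier (higher-numbered) mip levels and a negative bias toward sharper ones:

```python
import math

def mip_level(texels_per_pixel, lod_bias, max_level):
    """Pick a mipmap level, roughly the way a GPU does.

    texels_per_pixel: how many base-texture texels fall under one
    screen pixel (this grows as the camera moves away).
    lod_bias: positive values select blurrier (smaller) mip images,
    negative values select sharper (larger) ones.
    max_level: index of the smallest mip image available.
    """
    # Base level: log2 of the texel footprint, never below level 0.
    level = math.log2(max(texels_per_pixel, 1.0)) + lod_bias
    # Clamp to the levels the texture actually has.
    return min(max(level, 0.0), float(max_level))
```

For example, a footprint of 4 texels per pixel normally selects level 2; with a bias of +1 it selects level 3 (blurrier and cheaper to filter), which is exactly why a large positive LOD bias can speed up benchmarks.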

Pictured below: screenshots taken at LOD bias values of -20, -10, -3, -1, 10 and 27.

How to enable it?

nVidia is in fact "cheating" and changing the rules. The LOD setting we used to see disappeared in the latest 310.xx driver series and has been replaced with something similar. Similar, because while the setting keeps the same name, it now needs another setting in order to work. That setting is called Sparse Grid SuperSampling (SGSSAA), a form of anti-aliasing that acts at the post-processing level. Visually, the result looks like the normal DX9 LOD.
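As a rough illustration of the supersampling idea (a toy sketch, not NVIDIA's actual SGSSAA implementation): the renderer shades several sample points inside each pixel and averages them, which smooths edges and texture shimmer. The function names and sample offsets here are hypothetical:

```python
def supersample_pixel(shade, x, y, offsets):
    """Average several shaded sample points within one pixel.

    shade:   function (x, y) -> color intensity for a sample point.
    x, y:    pixel origin.
    offsets: sub-pixel sample positions (a sparse grid in SGSSAA's case).
    """
    samples = [shade(x + dx, y + dy) for dx, dy in offsets]
    return sum(samples) / len(samples)

# A pixel straddling a hard black/white edge at x = 0.5 comes out grey
# instead of snapping to one side - that is the anti-aliasing effect.
edge = lambda x, y: 1.0 if x < 0.5 else 0.0
grid = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
```

Because every extra sample is a full shading pass, supersampling is expensive, which is why the performance cost of enabling SGSSAA matters for the benchmark results that follow.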

To adjust the "LOD bias" with the 310.xx driver series, you need to change the following settings in nVidia Inspector.

What we tested:

We wanted to know whether increasing the LOD has a measurable impact on performance in all HWBOT approved 3D benchmarks, and also what its visual impact is.
To measure performance, we used the test setup below and ran the different benchmarks with different SGSSAA and LOD values. The results can be found on the following pages.
We also captured all the benchmarks with Fraps to show you the visual difference between LOD 0 and LOD 15/27. The videos are on the same page as the corresponding benchmark results.

Some details:

There is only a very tiny performance difference between SGSSAA x8 and x2 in most benchmarks. By default I used x8, but where the results tables also list x2, it is because in that specific benchmark the difference was no longer negligible. For HWBOT we recommend using x2, as it was always as fast as or faster than x8, never slower.
The maximum LOD bias for DX10 and DX11 is 27.
The maximum LOD bias for DX9 is still 9.
I also tried to see whether DX9 benchmarks with a 30x.xx driver plus LOD were faster than the 310.90 driver plus LOD plus SGSSAA. Unfortunately, the 310.90 driver was always faster than the 306.xx driver I used, so no LOD comparison was possible.

Test Setup

  • ASUS P8P67 Pro (BIOS 2303)
  • ASUS Maximus V Gene (BIOS 1408) (3DMark01)
  • Intel Core i7-2600K @ 3.4 GHz (Turbo Off / HT On)
  • Intel Core i7-3770K @ 3.5 GHz (Turbo Off / HT Off / 2C/2T) (3DMark01)
  • G.Skill RipjawsZ Dual Channel 4x4GB CL9-9-9-24-DDR3-1600 MHz
  • Crucial Ballistix Smart Tracer 2x4GB CL9-9-9-27-DDR3-1866 MHz (3DMark01)
Graphics Card (Driver)
  • nVidia GeForce GTX 680 (MSI Twin Frozr)
  • ForceWare 310.90 WHQL (Windows 7 x64)
  • ForceWare 310.90 WHQL (Windows XP) (3DMark01)
  • nVidia Inspector
  • OCZ Octane 512 GB
  • Seasonic Platinum SS-1000XP / 1000 Watts

Page 1 - Introduction
Page 2 - 3DMark2001 SE
Page 3 - 3DMark03
Page 4 - 3DMark05
Page 5 - 3DMark06
Page 6 - 3DMark Vantage
Page 7 - 3DMark 11
Page 8 - AquaMark 3
Page 9 - Unigine Xtreme
Page 10 - Unigine Basic
Page 11 - The End!


