Influence of ASIC Quality on Max Boost Clock - EVGA GeForce GTX 980 Ti Reference

Published by Marc Büchel on 18.02.16

Testing more than ten identical GeForce GTX 980 Ti graphics cards at the same time isn’t trivial to arrange for a review site like ocaholic. The likelihood of this many high-end pixel accelerators falling off a lorry just as we happen to be standing in the right place at the right time is very close to zero. Since waiting to win the lottery wasn’t going to get the job done, we had to find another way to solve the supply issue in order to analyze the connection between ASIC quality and maximum boost clock on NVIDIA GeForce GTX 980 Ti graphics cards.

Thanks to a binning session at MIFCOM GmbH we had the chance to get our hands on 11 identical GeForce GTX 980 Ti graphics cards, all of which are based on NVIDIA’s reference design.

What does ASIC stand for and what is ASIC Quality?

ASIC is simply an abbreviation for application-specific integrated circuit. A look at the corresponding Wikipedia article quickly makes clear that any chip designed to perform a specific task is an ASIC. Essentially every CPU, every SoC found in mobile phones and smartphones, and every GPU is an ASIC, designed to tackle a specific problem in a highly efficient way.
But what is ASIC quality all about? To explain the connection properly we need to dig a bit deeper. Today’s processors, no matter whether CPUs or graphics processors, are made from ultrapure silicon. A complex lithography process creates the tiny transistors that are the building blocks of all modern microchips; in the end a graphics processor consists of billions of them. As with any engineered part, a transistor is manufactured within certain tolerances. The thickness of the insulating layers is especially important, since these layers keep electrons from tunneling through and thereby limit leakage current. Leakage grows whenever high voltages drive high currents through a chip.

Think of a water pipe: the voltage is the pressure and the current is the water being pushed through. The pressure stresses the pipe walls, and if it gets too high the walls can crack or, in the worst case, burst. Now imagine the insulating layer of a transistor as the pipe wall and the analogy works quite well: the lower the voltage a transistor needs, the less stress there is on the insulating layer, and the lower the leakage.

Therefore chip designers are keen on keeping voltages as low as possible, since that lets them build more efficient chips with lower power consumption, which actually brings us to the next topic. NVIDIA’s recent graphics chips feature a so-called power target, and understanding it brings us even closer to understanding the link between ASIC quality and maximum GPU boost clock speeds.

What's the link between NVIDIA's Power Target and clock speeds?

For quite a few years now NVIDIA has been using a so-called "Power Target" with their GPUs: every NVIDIA GPU is only allowed to draw a certain amount of power. The power a GPU actually needs depends on the number of switching transistors, the clock speed and the voltage. A chip with higher ASIC quality leaks less, which in turn allows it to run at a lower voltage. The boost mechanism then raises clock speeds until the GPU hits its maximum allowed power consumption. In the end this means: higher ASIC quality leads to lower leakage, lower leakage leads to lower power consumption at a given clock speed, and that allows for higher clocks within a given power envelope.
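The mechanism above can be sketched with a toy model: hold total board power fixed and treat it as leakage plus dynamic power that scales with clock speed and the square of the voltage. All numbers below (the power target, the leakage figures, the dynamic-power coefficient) are made-up illustrations, not NVIDIA's actual values or formula; the point is only that a lower-leakage chip leaves more budget for clock speed.

```python
# Toy model (illustrative, not NVIDIA's formula): with a fixed power
# target, a lower-leakage chip can clock higher before hitting the cap.

POWER_TARGET_W = 250.0  # hypothetical board power limit
K_DYN = 0.14            # assumed dynamic-power coefficient, W per (MHz * V^2)

def max_boost_mhz(leakage_w, voltage_v=1.15):
    """Highest clock (MHz) that keeps leakage + dynamic power under the target."""
    dynamic_budget_w = POWER_TARGET_W - leakage_w
    return dynamic_budget_w / (K_DYN * voltage_v ** 2)

low_leakage_clock = max_boost_mhz(leakage_w=25.0)    # high-ASIC-quality chip
high_leakage_clock = max_boost_mhz(leakage_w=40.0)   # low-ASIC-quality chip
```

With these assumed numbers the low-leakage chip ends up roughly 80 MHz ahead, even though both chips respect the same power target.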

And in reality?

After quite a lot of text we can finally talk about the test results. Having looked at 11 different GeForce GTX 980 Ti reference graphics cards, we gathered quite a few interesting results. The lowest ASIC value we measured was 56.4 % and the highest was 80.2 %. In our opinion this spread is huge, and it surprised us. The difference is a whopping 23.8 percentage points, and things become especially worrying when looking at the corresponding clock speeds: the card with an ASIC quality of 56.4 % had a maximum boost clock of 1139 MHz, whereas the 80.2 % card ran at 1240 MHz.
Here are all the results:

ASIC Quality | Max Boost Clock
56.4 %       | 1139 MHz
56.6 %       | 1140 MHz
59.5 %       | 1152 MHz
61.0 %       | 1164 MHz
64.7 %       | 1177 MHz
68.6 %       | 1189 MHz
68.6 %       | 1189 MHz
69.3 %       | 1202 MHz
70.9 %       | 1202 MHz
73.0 %       | 1215 MHz
80.2 %       | 1240 MHz

And here the maximum boost clock as a function of the ASIC quality:

In the end we believe there are two really interesting takeaways from these tests. First, vendors sell one and the same graphics card, yet clock speeds vary by up to 100 MHz or even more. Depending on your luck, you can therefore end up with a graphics card that is almost 8.9 % faster than another one that is supposed to be the very same card. Apart from that, it is interesting to see a roughly linear relationship between ASIC quality and boost clock speed, which we measured using GPU-Z and 3DMark Fire Strike.
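The linear relationship can be checked directly from the eleven measurements in the table above with an ordinary least-squares fit; this is our own sketch using only the Python standard library, not something the original test setup used.

```python
# Least-squares fit of max boost clock against ASIC quality,
# using the eleven (ASIC %, MHz) pairs measured above.

asic_quality = [56.4, 56.6, 59.5, 61.0, 64.7, 68.6, 68.6, 69.3, 70.9, 73.0, 80.2]
boost_mhz    = [1139, 1140, 1152, 1164, 1177, 1189, 1189, 1202, 1202, 1215, 1240]

def linear_fit(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, y_mean - slope * x_mean

slope, intercept = linear_fit(asic_quality, boost_mhz)
# The fit lands at roughly 4.3 MHz of boost clock per percentage point
# of ASIC quality.

# The gap between the best and worst card in the sample:
speedup = boost_mhz[-1] / boost_mhz[0] - 1  # ~0.089, i.e. the 8.9 % mentioned above
```

The residuals of this fit are small across all eleven cards, which is what makes the relationship look linear rather than merely monotonic.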

At this point we’re going to close this chapter. In another article we’ll have a look at GTX 980 Ti graphics cards from MSI, or to be a bit more precise, we’re very curious about the ASIC quality spread of MSI’s high-end custom gaming graphics card, the MSI GeForce GTX 980 Ti Gaming 6G. Hopefully those results are going to look better.
