Conclusion
As we already mentioned, this is the first article in a series of SLI scaling analyses with different CPUs, different numbers of graphics cards and different graphics card models. At this point we want to quickly explain what a scaling factor is. Basically it describes how many times faster a certain configuration is compared to another one. We could also write that the SLI setup is 80 percent faster than the non-SLI setup, but it's shorter to write that with game XY we see a scaling factor of 1.80.
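For those who prefer to see it spelled out, here is a minimal sketch of that calculation in Python; the fps values are purely made up and only serve to illustrate how the factor is derived.

```python
# Hypothetical fps values, purely to illustrate how a scaling factor is derived.
single_card_fps = 50.0  # average fps with one graphics card
sli_fps = 90.0          # average fps with two cards in SLI

scaling_factor = sli_fps / single_card_fps
print(f"Scaling factor: {scaling_factor:.2f}")  # 1.80 -> the SLI setup is 80 percent faster
```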
So far there are already some rather interesting findings. One rather obvious one is that consistent scaling in terms of GPU performance can only be found in theoretical benchmarks. In 3DMark the overall score is a factor of 1.59 higher than with one card. Translated into the raw GPU score we see a scaling factor of 1.81, respectively 1.83 (depending on our CPU preset), which is usually about the maximum scaling you get with SLI. A closer look at Unigine Heaven reveals that with the Basic preset the scaling factor is as low as 1.24 at stock CPU frequencies and 1.13 with the CPU at 4.5 GHz. With the Extreme preset the GPU has much more influence again, which is actually rather obvious, since the load on the VGA(s) is much higher.
Games, on the other hand, are a completely different story. Some of you might be asking why in the name of ... we ran an SLI setup at low resolution. Well, browsing through all the results shows that there is actually some scaling going on, and we find it interesting to see that even at low resolutions certain games benefit from a second card. Another point is that adding a second card makes dead sure the VGAs aren't limiting in any case, meaning CPU limitation becomes even more visible. But let's get back to the results. Scaling factors at low resolutions range from 1 to 1.52. A factor of 1 basically means the fps didn't change at all, while 1.52 is quite a massive jump, which raises the question why. The game in question is Call of Duty: Black Ops 2, which we know puts very little load on the graphics cards. A closer look at the scores shows that this jump occurs with the CPU running at stock clocks.
Now let's focus on high resolution and high details, where the games put quite some load on the two graphics cards. In this case the lowest scaling factor is found with Sleeping Dogs and the CPU at 4.5 GHz, where fps are a factor of 1.04 higher than with one card. On the other hand there is Crysis 3, which produces values that look rather ridiculous at first glance. If we look at the scaling factor with the Intel Core i7-3960X at stock clocks we find a factor of 2.80, and at 4.5 GHz this goes up to 2.94. We've cross-checked this value about 20 times in the meantime and we always get the same result. Our explanation goes as follows: since there must be a multiplying effect, we assume that a CPU as well as a GPU bottleneck is being removed. We think that once we add two cards to the system the benchmark is no longer VGA limited, which allows the CPU to kick in. We think this is actually plausible, since the scaling factor is even higher when we overclock the CPU to 4.5 GHz, which means the CPU must have a rather drastic effect on the end result.
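To make that reasoning a bit more tangible, here is a back-of-the-envelope sketch of how a factor above 2 can come about. This is purely an illustrative model, not something we measured: it assumes that with a single card the CPU and GPU work largely one after the other on each frame, while with two cards in alternate frame rendering the work overlaps and each GPU only has to render every second frame. All frame times are hypothetical.

```python
# Back-of-the-envelope model with purely hypothetical frame times (milliseconds).
# Assumption: with one card, CPU and GPU work mostly one after the other per frame;
# with two cards (alternate frame rendering) the work overlaps and each GPU only
# handles every second frame.

def fps_single(t_cpu, t_gpu):
    # one card: frame time is roughly CPU time plus GPU time
    return 1000.0 / (t_cpu + t_gpu)

def fps_sli(t_cpu, t_gpu):
    # two cards: throughput is limited by whichever stage is slower,
    # and each GPU effectively gets twice the time per frame
    return 1000.0 / max(t_cpu, t_gpu / 2.0)

t_gpu = 22.0  # hypothetical GPU time per frame
for label, t_cpu in (("stock", 12.0), ("4.5 GHz", 10.0)):
    single = fps_single(t_cpu, t_gpu)
    sli = fps_sli(t_cpu, t_gpu)
    print(f"{label}: {single:.1f} fps -> {sli:.1f} fps, factor {sli / single:.2f}")
# stock:   ~29 fps -> ~83 fps, factor ~2.83
# 4.5 GHz: ~31 fps -> ~91 fps, factor ~2.91
```

In such a model the factor climbs further with the CPU overclock because the SLI result reacts more strongly to the shorter CPU time than the single-card result does, which matches the trend we see in Crysis 3.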
To end this conclusion we'll try to leave you guys with at least some sort of buying advice. Basically, if you have the funds to buy two high-end graphics cards and run them in SLI, it (apparently) makes sense to combine these two cards with a powerful CPU, since you're quite likely to run into CPU limitation in certain games. On the other hand the question remains how much sense it makes to buy two high-end cards in the first place, since not too many games are actually well optimized for multi-card setups. Overall you'll definitely end up with a highly future-proof setup, but that benefit comes at quite a cost.