Another year passes; another high-end card enters the market, and another fight breaks out between AMD and Nvidia for the graphics crown. While the previous generation's 'fastest graphics card' title belonged entirely to Nvidia's GTX 580, the HD 6990 vs. GTX 590 debacle notwithstanding, this time around things have gotten quite exciting.
Last year AMD released its flagship HD 7970 graphics card, which bested Nvidia's GTX 580, though it was neither an easy victory nor, as you'll soon find out, a long-lived one. Today I'll be looking at the Nvidia GTX 680, a GPU many PC enthusiasts have been waiting for since early this year.
Like AMD's HD 7000 series, the first thing to note about Nvidia's new Kepler architecture is that the GTX 680 runs on a 28nm core. It has a relatively low TDP of 195W and requires just two 6-pin connectors to power it; a minimum 550W PSU is recommended.
Slightly shorter than the GTX 580, the GTX 680 comes in at 10 inches. The rest of the GTX 680's specs are listed below.
On paper the AMD HD 7970 seems technically superior to the GTX 680: it has greater memory bandwidth, more transistors and a higher shader count, though Nvidia's champion does have higher clock speeds. We'll soon find out whether the Kepler architecture is better than Tahiti or not.
Nvidia has poured a lot of resources into the GTX 680; the end result isn't just a card that runs cooler and quieter than the previous generation (although that is largely down to the die shrink), but also a card with some cool new features.
The short video below gives a good look at all of these, with Nvidia neatly summing up and visually demonstrating how the GTX 680's new features work in real life.
The GPU Boost feature can be thought of as Nvidia's take on Intel's and AMD's Turbo technologies. Whenever the GPU isn't drawing its full power budget, there's enough headroom for a minor overclock to boost in-game performance. All of this is done automatically, with core clock speeds going from the 1006MHz base up to 1059MHz. In the future, as Nvidia teams up with more game developers, we should see this technology used more efficiently, to the point where people may not even need to overclock their GPUs anymore.
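To make the idea concrete, here is a minimal sketch of GPU Boost-style behavior; this is purely an illustration of the concept described above, not Nvidia's actual firmware algorithm, and the step size is a hypothetical value:

```python
# Illustrative sketch of GPU Boost-style logic: nudge the core clock up
# in small steps while the board stays under its power target, and back
# it off toward the base clock when the target is exceeded.

BASE_CLOCK_MHZ = 1006   # GTX 680 base clock (from the specs above)
MAX_BOOST_MHZ = 1059    # typical boost clock quoted above
STEP_MHZ = 13           # hypothetical step size for this sketch
POWER_TARGET_W = 195    # the card's TDP

def next_clock(current_mhz: int, board_power_w: float) -> int:
    """Return the core clock for the next sampling interval."""
    if board_power_w < POWER_TARGET_W and current_mhz < MAX_BOOST_MHZ:
        return min(current_mhz + STEP_MHZ, MAX_BOOST_MHZ)
    if board_power_w >= POWER_TARGET_W and current_mhz > BASE_CLOCK_MHZ:
        return max(current_mhz - STEP_MHZ, BASE_CLOCK_MHZ)
    return current_mhz

# Simulated power readings: the clock ramps up while power is low,
# then steps back down once the samples exceed the power target.
clock = BASE_CLOCK_MHZ
for power in (150, 160, 170, 180, 200, 210):
    clock = next_clock(clock, power)
    print(f"{power}W -> {clock}MHz")
```

The point of the sketch is that the driver, not the user, owns the clock: the card constantly trades clock speed against its power budget, which is exactly why manual overclocking behaves oddly, as described next.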
Sadly this is where actual overclocking suffers: increasing the core clock up to 1200MHz resulted in either exactly the same performance as stock, or in certain cases a drop of a frame or two. The GTX 680 just continuously offsets clock speeds against the voltage, or the TDP to be more precise. I'm sure that in the near future certain manufacturers, namely MSI, ASUS and perhaps EVGA, will release updated overclocking software that allows further tweaking; but for now, everyone is stuck with Nvidia's overclocking nanny.
Another cool new feature is Adaptive VSync, whereby the Kepler architecture dynamically toggles VSync based on frame rate. With regular VSync left on, frame rates that can't be maintained at the 60fps lock take a nose dive, producing the stuttering effect many people notice in games under heavy GPU load. Adaptive VSync simply turns VSync off whenever frame rates dip below 60fps, avoiding the issue altogether; once frame rates climb back above 60, VSync is re-enabled so that screen tearing doesn't occur as frame rates swing from 60 up to 80 or more and back down again.
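The decision rule behind Adaptive VSync can be sketched in a few lines; again, this is a conceptual illustration of the behavior described above, not Nvidia's driver code:

```python
# A minimal sketch of Adaptive VSync's decision rule: keep VSync
# enabled only while the GPU can sustain the display's refresh rate,
# and disable it below that, trading a little tearing for smoothness.

REFRESH_HZ = 60  # typical display refresh rate assumed in this sketch

def vsync_enabled(current_fps: float) -> bool:
    """Enable VSync only when the GPU keeps up with the refresh rate."""
    return current_fps >= REFRESH_HZ

# As the frame rate dips below 60fps, VSync is switched off rather
# than letting the locked frame rate nose-dive and stutter.
for fps in (80, 65, 58, 45, 62):
    state = "on" if vsync_enabled(fps) else "off"
    print(f"{fps:>3} fps -> VSync {state}")
```

With plain VSync the 58fps and 45fps cases would be forced down to the next whole divisor of the refresh rate, which is the stutter Adaptive VSync avoids.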
The last new feature worth mentioning is Nvidia's new antialiasing technique, called TXAA. TXAA mixes regular antialiasing with a special 'CG film style AA resolve', resulting in extremely smooth edges without hitting the GPU as hard as regular MSAA. In fact, Nvidia claims that 1xTXAA provides better quality than 8xMSAA while only costing the equivalent of 2xMSAA; similarly, 2xTXAA carries the performance cost of 4xMSAA. Currently there are nine games in development that will use TXAA on the GTX 680, to be released later this year. Specific titles include Borderlands 2 and MechWarrior Online, while Epic's Unreal Engine 4 as well as Crytek's next engine will also support TXAA.
The testbed below was used for testing the GTX 680.
For comparisons I have used the ASUS HD 7970 and HD 7950, plus the Nvidia GTX 580. All graphics cards were running at stock speeds and, like the GTX 680, are original reference units.
The benchmarks were all run at the settings below, at resolutions of 1920x1200 and 2560x1440.
Unigine Heaven v2.5
Batman Arkham City
Temperature & Noise
Our test GTX 680 was just as well behaved as Nvidia predicted, with idle temperatures remaining at 34°C, while under full load HW Monitor didn't record anything over 76°C. Noise, as Nvidia also predicted, remained very low, although I didn't notice much difference between the GTX 680 and HD 7970 at idle. Under stress, though, the GTX 680 is definitely less noisy than the HD 7970, but even then both cards are respectably quiet.
The Nvidia GTX 680 is without a doubt a fast card. It beats the AMD HD 7970 in every benchmark, with an overall average lead of 12%. The GTX 680 also holds a considerable lead over the HD 7950 and GTX 580, ahead by an average of 36% and 39% respectively.
Running a lot cooler and quieter than the previous generation, the GTX 680 had already proven its worth; but the fact that it easily beats every single-GPU solution, both from Nvidia's previous lineup (GTX 580) and from AMD's current champions (the HD 7900 series), proves that Nvidia is once again the speed king this generation.
Now let's see what the GTX 690 and HD 7990 bring to the table. In the meantime, AMD will have to cut prices on the HD 7900 series, especially the HD 7970, in order to remain competitive, because at the same price the GTX 680 simply outclasses it.