
NVIDIA SLI Performance: GeForce GTX 660 Ti vs GTX 670


Benchmark Reviews Compares SLI Performance Between NVIDIA GeForce GTX 660 Ti and GeForce GTX 670.

By Olin Coles

Back in May 2012, NVIDIA released their $400 GeForce GTX 670 video card, securing the number two position in their single-GPU product stack. Just three short months later, the GeForce GTX 660 Ti graphics card arrived on the market and filled store shelves at the $300 price point. With a substantial $100 price difference between these two products, consumers might (incorrectly) presume there’s a significant difference in hardware or performance. To the surprise of many, the GeForce GTX 670 and GTX 660 Ti are nearly the same card. Both feature the identical 28nm NVIDIA ‘Kepler’ GK104 graphics processor, complete with 1344 CUDA cores clocked to the same 915 MHz base and 980 MHz Boost speeds. Additionally, the GTX 670 and GTX 660 Ti feature the exact same 2GB GDDR5 video memory buffer, clocked to 1502 MHz on both cards. The only physical difference between these two products resides in the memory subsystem: the GeForce GTX 670 receives four 64-bit memory controllers (a 256-bit interface), while the GeForce GTX 660 Ti is designed with three (a 192-bit interface).
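
To put the bus-width difference in concrete numbers, a theoretical peak memory bandwidth can be worked out from the interface width and the effective GDDR5 data rate (1502 MHz quad-pumped to 6008 MT/s). The short Python sketch below is only an illustration built from the specifications listed above:

    # Theoretical peak memory bandwidth = (bus width in bytes) x (effective data rate).
    # GDDR5 moves four bits per pin per clock, so 1502 MHz works out to 6008 MT/s.

    def peak_bandwidth_gbs(bus_width_bits, effective_rate_mtps):
        """Return theoretical peak memory bandwidth in GB/s."""
        return (bus_width_bits / 8) * effective_rate_mtps / 1000.0

    EFFECTIVE_RATE_MTPS = 1502 * 4  # identical memory clock on both cards

    gtx_660_ti = peak_bandwidth_gbs(192, EFFECTIVE_RATE_MTPS)  # ~144.2 GB/s
    gtx_670 = peak_bandwidth_gbs(256, EFFECTIVE_RATE_MTPS)     # ~192.3 GB/s

    print(f"GTX 660 Ti: {gtx_660_ti:.1f} GB/s")
    print(f"GTX 670:    {gtx_670:.1f} GB/s")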

So will memory bandwidth amount to any real difference in video game performance? Most PC gamers have a PCI-Express 2.0 compatible motherboard inside their computer system, although the most recent motherboards and all of NVIDIA’s GeForce GTX 600-series cards feature PCI-Express 3.0 compliance. Additionally, nearly all high-performance video cards feature at least a 256-bit memory interface or wider (such as the 384-bit bus on some AMD Radeon HD 7000-series graphics cards). However, most testing with these high-end graphics cards has shown little indication that bottlenecks actually occur at the PCI-Express level, even while playing the most demanding DirectX 11 video games. It’s more likely that a bandwidth bottleneck occurs at the video card’s memory subsystem, where the GPU may be capable of sending more information than the video frame buffer can accept. We discovered some evidence of this in our recent testing of the ultra-overclocked ASUS GeForce GTX 660 Ti DirectCU-II TOP, which maintained performance parity with a stock GTX 670 in all but the most demanding video games featuring large maps or virtual worlds.
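
For a sense of scale, the PCI-Express link itself moves far less data than either card’s local frame buffer, which is one reason the bottleneck tends to show up at the memory subsystem rather than on the bus. A rough calculation using the standard per-lane signaling rates and encoding overhead (approximations, not measured figures):

    # Approximate one-way PCI-Express x16 bandwidth, after encoding overhead.
    # PCI-E 2.0: 5.0 GT/s per lane with 8b/10b encoding
    # PCI-E 3.0: 8.0 GT/s per lane with 128b/130b encoding

    pcie2_x16_gbs = 16 * 5.0 * (8 / 10) / 8      # ~8.0 GB/s
    pcie3_x16_gbs = 16 * 8.0 * (128 / 130) / 8   # ~15.8 GB/s

    print(f"PCI-E 2.0 x16: {pcie2_x16_gbs:.1f} GB/s")
    print(f"PCI-E 3.0 x16: {pcie3_x16_gbs:.1f} GB/s")
    # Either figure is a small fraction of the 144-192 GB/s of on-card
    # GDDR5 bandwidth computed earlier.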

Obviously the GPU plays the star role in creating a bottleneck, and while less powerful processors lack the raw throughput to saturate the memory subsystem, this isn’t so difficult for powerful Kepler-based GeForce GTX 600-series products. Since there are times when the GeForce GTX 660 Ti’s 192-bit memory interface can become a bottleneck, what would happen if we split the demand between two cards joined in an SLI set? In theory, combining two graphics cards with SLI technology essentially doubles the memory bandwidth available to standard workloads. Given that 256-bit memory configurations suffer very few bandwidth bottlenecks while 192-bit configurations have occasional limitations, it seems plausible. Furthermore, the results could really help gamers decide between building an SLI set of GTX 660 Ti or GTX 670 graphics cards. That’s the purpose of this article.
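
Extending the same arithmetic: with alternate-frame rendering, each GPU in an SLI pair works from its own 2GB buffer over its own memory bus, so the theoretical bandwidth available across the pair roughly doubles. A back-of-the-envelope sketch (peak figures only; real-world scaling depends on the game, the driver, and how well SLI scales):

    # Aggregate theoretical memory bandwidth for a two-card SLI set.
    single_660_ti = 144.2   # GB/s, 192-bit interface at 6008 MT/s
    single_670 = 192.3      # GB/s, 256-bit interface at 6008 MT/s

    sli_660_ti = 2 * single_660_ti   # ~288.4 GB/s across the pair
    sli_670 = 2 * single_670         # ~384.6 GB/s across the pair

    print(f"GTX 660 Ti SLI: {sli_660_ti:.1f} GB/s aggregate")
    print(f"GTX 670 SLI:    {sli_670:.1f} GB/s aggregate")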


                          GeForce GTX 660 Ti     GeForce GTX 670
    GPU Cores             1344                   1344
    Core Clock (MHz)      915                    915
    Boost Clock (MHz)     980                    980
    Memory Clock (MHz)    1502                   1502
    Memory Amount         2048MB GDDR5           2048MB GDDR5
    Memory Interface      192-bit                256-bit

The new and improved Kepler GPU architecture with NVIDIA GPU Boost technology is only the start, because GeForce GTX 600-series video cards deliver plenty of additional refinements to the user experience. Smoother FXAA and Adaptive VSync technology result in less chop, stutter, and tearing in on-screen motion. Overclockers might see their enthusiast experiments threatened by the presence of NVIDIA GPU Boost technology, but its dynamically adjusted power and clock-speed profiles can still be supplemented with additional manual overclocking. Adaptive VSync, on the other hand, is a welcome addition for all users, from the gamer to the casual computer user. This technology disables vertical sync whenever the frame rate drops too low to sustain the monitor’s refresh rate, then re-enables it once performance recovers, thereby reducing stutter and tearing artifacts. Finally, NVIDIA is introducing TXAA, a film-style anti-aliasing technique that mixes hardware anti-aliasing, a custom CG film-style AA resolve, and an optional temporal component for better image quality.
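
Conceptually, Adaptive VSync acts like a per-frame toggle: vertical sync stays enabled while the renderer keeps pace with the display, and switches off the moment it cannot. The sketch below is purely illustrative of that behavior and is not NVIDIA’s driver implementation:

    # Illustrative model of Adaptive VSync behavior (not actual driver code).
    def adaptive_vsync(current_fps, refresh_rate_hz):
        if current_fps >= refresh_rate_hz:
            # Renderer keeps pace with the display: sync the frame to avoid tearing.
            return "vsync ON"
        # Renderer falls behind: skip the sync rather than stall for a full
        # refresh interval, which is what causes the visible stutter.
        return "vsync OFF"

    print(adaptive_vsync(72, 60))   # vsync ON
    print(adaptive_vsync(45, 60))   # vsync OFF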

