Unleash the tiger!
EnthusiastPC

Graphics card basics



Which specifications matter?

This is the question that most frequently pops up when people ask about videocards. It's not really the right question though, because the honest answer is that most of them do :-)

To make matters even more complicated, most of them only matter in the context of the other specifications. Below is a list of the specifications most commonly listed by both retailers and manufacturers:

  • GPU clock frequency
  • GPU shader clock
  • Number of shaders/streamprocessors/cores
  • Memory clock frequency
  • Effective memory clock
  • Memory type
  • Memory bandwidth
  • Memory size
  • TDP (Thermal Design Power)
  • Maximum board power

There are probably other parameters listed that won't be of much use to you without knowing what they mean and how or why they matter. The ones listed above are among the most important ones to look for, however. Let's start off by highlighting some of these and explaining what they mean. Once you understand some of them, the others will automatically start to make sense.
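
To give you a taste of how these specifications depend on each other, take the "effective memory clock": it is simply the actual memory clock multiplied by the number of data transfers the memory type performs per clock, so the number only means something once you also know the memory type. Below is a rough sketch of that relationship (the multipliers are the usual figures quoted on spec sheets, and the example clock is a round number rather than the exact value of any particular card):

```python
# Rough relationship between memory type, memory clock and effective memory clock.
TRANSFERS_PER_CLOCK = {
    "GDDR3": 2,  # double data rate
    "GDDR5": 4,  # quad data rate, as usually quoted on spec sheets
}

def effective_memory_clock(memory_clock_mhz, memory_type):
    """Effective memory clock as typically advertised by retailers."""
    return memory_clock_mhz * TRANSFERS_PER_CLOCK[memory_type]

# A GDDR5 card running its memory at roughly 1000 MHz is advertised
# with an effective memory clock of about 4000 MHz.
print(effective_memory_clock(1000, "GDDR5"))  # 4000
```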

The GPU

Before we start talking about the clock frequency, let's get some terminology straight: the board that you stick into your PCI Express slot is the videocard, and the processor on that board which renders your 3D game world, desktop and other stuff is called the GPU. The GPU really is a processor, just like the Intel or AMD processor that sits on your mainboard. It is however a highly specialized processor, designed specifically for the types of calculations that rendering a 3D world requires. For this purpose the processor has multiple cores, just like the CPU on your mainboard.

A "GTX 560" is not a GPU, it is a videocard.

The GPU on the GTX 560 videocard is called the GF114. Since the introduction of the GTX 560 Ti 448 Core model this distinction is even more important, because on the 448 core model the GPU is a GF110 and not a GF114. The GF110 is capable of much more than the GF114, the most important difference in this example being that it has a 320-bit memory bus as opposed to the 256-bit bus on the GF114. The naming scheme is a bit misleading though: the GF110 is superior to the GF114, even though the latter has the higher model number.

To make it a bit easier to understand what a videocard is, here is a picture of the reference GTX 580 with the cooler removed:

GTX 580 Circuit Board


The large NVidia chip in the middle is the GPU (in this case a GF110). The 12 smaller chips that surround it are the videoram chips. The GTX 580 has a memory bus width of 384 bits and each GDDR5 chip has a bus width of 32 bits, so the card needs 12 GDDR5 chips (12 x 32) to get to that bus width. This is the reason why the GTX 580 comes in 1.5GB (128MB per chip x 12) or 3GB (256MB per chip x 12). It is also the reason that the GTX 560 comes in either 1GB or 2GB and the GTX 570 in 1.25GB or 2.5GB: it all comes down to the physical number of chips needed to reach a bus width of 256 bits (GTX 560, 8 chips) or 320 bits (GTX 570, 10 chips).
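
If you want to check this bookkeeping yourself, the arithmetic is simple enough to put in a few lines. The sketch below just reproduces the chip counts and memory sizes from the examples above (assuming the usual 32-bit interface per GDDR5 chip):

```python
# Each GDDR5 chip exposes a 32-bit memory interface, so the number of chips
# on the board follows directly from the bus width of the GPU.
GDDR5_CHIP_BUS_WIDTH = 32  # bits per chip

def chips_needed(bus_width_bits):
    return bus_width_bits // GDDR5_CHIP_BUS_WIDTH

def total_memory_mb(bus_width_bits, mb_per_chip):
    return chips_needed(bus_width_bits) * mb_per_chip

# GTX 580: 384-bit bus -> 12 chips -> 1536MB (1.5GB) or 3072MB (3GB)
print(chips_needed(384), total_memory_mb(384, 128), total_memory_mb(384, 256))
# GTX 570: 320-bit bus -> 10 chips -> 1280MB (1.25GB) or 2560MB (2.5GB)
print(chips_needed(320), total_memory_mb(320, 128), total_memory_mb(320, 256))
# GTX 560: 256-bit bus -> 8 chips -> 1024MB (1GB) or 2048MB (2GB)
print(chips_needed(256), total_memory_mb(256, 128), total_memory_mb(256, 256))
```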

Basically, a videocard is a computer inside your computer, with its own processor, videoram and other stuff you find on your mainboard, like voltage regulation circuits.

Back to the Graphics Processor (GPU)

Because of the specific application the GPU was designed for, it has many more cores than your average CPU, but these cores are much simpler and much smaller, so that many more of them fit on a small piece of silicon. Because of this high core count, a lot of GPUs actually have more transistors on their die than most CPUs. A side effect of this is that a GPU can run a lot hotter than your average CPU. The other reason the GPU runs hotter is that it processes an enormous amount of data, and under gaming load it does so almost continuously: it chews through far more data than the CPU does while you play a game, and that obviously produces a lot more heat as well.

Shaders, streamprocessors and cores. So what are these?

Shaders, streamprocessors and cores are different words for the same thing. Remember the "many smaller and simpler cores" in the previous paragraph? Well, these cores go by three different names. Some people call them shaders, some call them streamprocessors and some simply call them cores. The naming spaghetti stems partially from the previous generations of graphics cards where these cores could only really be used for a very specific purpose: coloring (or "shading") pixels depending on how the light in your 3D game world would hit them. That is why they were initially called "pixel shaders" or simply "shaders".
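
To get a feel for why this kind of work suits lots of small cores, remember what a pixel shader actually does: it runs the same small calculation for every pixel on screen, and each pixel can be computed independently of all the others. Here is a deliberately simplified toy version of that idea (plain Python, not how a real shader is written):

```python
# Toy "pixel shader": darken each pixel based on how much light reaches the
# surface at that pixel. Every pixel is computed independently of the others.
def shade_pixel(base_color, light_intensity):
    r, g, b = base_color
    return (int(r * light_intensity), int(g * light_intensity), int(b * light_intensity))

def shade_frame(pixels, intensities):
    # On a GPU, each of these iterations would run on its own shader core, in parallel.
    return [shade_pixel(color, light) for color, light in zip(pixels, intensities)]

frame = shade_frame([(255, 128, 0)] * 4, [1.0, 0.75, 0.5, 0.25])
print(frame)
```

Because none of the pixels depend on each other, the frame can be cut into as many pieces as there are cores, which is exactly the kind of workload where hundreds of simple cores beat a handful of complex ones.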

Because of the awesome calculating power a graphics processor is capable of, NVidia has started to make these simple shaders a little more complex with their "CUDA" architecture, so that the power of the GPU can be used for more than just 3D gaming. The extreme speed at which GPUs can do certain calculations has also recently found them a home in supercomputers.

The 2011 product lines of AMD and NVidia still have significant differences in architecture; the most distinctive difference is this:

  • NVidia has fewer cores but they are capable of more complex tasks than the AMD cores.
  • AMD has more cores that can significantly speed up some calculations but are less flexible.

This is the main reason that you can't simply compare "core count" between AMD and NVidia GPUs to estimate their relative performance. Comparing core counts between GPUs from the same manufacturer, however, is a fair performance indicator. Obviously this comparison works best when comparing cards within the same generation (for example comparing the core count of an AMD HD 6870 with that of an HD 6950).

You can compare these between generations as well, but with less accuracy (for instance the HD 6870 versus the HD 5850). Oftentimes the differences between generations like these are minor optimizations. These optimizations also aren't necessarily performance optimizations, but can be optimizations that affect power consumption and heat output. An example is the comparison between NVidia's GTX 460 and GTX 560: both have the same core count (336), but the GTX 560 has some optimizations that limit its power consumption and heat output. As a consequence, the GTX 560 can be (and is) clocked higher.

There are of course also other differences, but for the purpose and scope of this writeup I'll just mention these two for now. It is worth mentioning however that on the current NVidia architecture these cores run at twice the clock speed of the rest of the GPU. This is why specifications for NVidia cards often mention the shader clock in addition to (or in place of) the GPU clock.
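
To make the two points above a little more concrete, a popular back-of-the-envelope number is the theoretical shader throughput: core count times shader clock times the number of operations each core can do per clock. The sketch below uses approximate reference clocks and should not be read as a game benchmark, but it shows why within-vendor comparisons work reasonably well and why the raw numbers mislead across vendors:

```python
# Very rough theoretical throughput: cores x shader clock x 2 operations per clock.
# This ignores the architecture entirely, which is exactly why it misleads across vendors.
def theoretical_gflops(cores, shader_clock_mhz, ops_per_clock=2):
    return cores * shader_clock_mhz * ops_per_clock / 1000.0

# NVidia Fermi cards: the shader clock is twice the GPU clock.
print(theoretical_gflops(336, 1350))  # GTX 460 (675 MHz GPU clock) -> ~907 GFLOPS
print(theoretical_gflops(336, 1620))  # GTX 560 (810 MHz GPU clock) -> ~1089 GFLOPS

# AMD HD 6870: many more, but much simpler, streamprocessors.
print(theoretical_gflops(1120, 900))  # -> ~2016 GFLOPS on paper, yet the card is
                                      #    nowhere near twice as fast as a GTX 560 in games.
```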

So which is better? Many simple streamprocessors or fewer more complex ones?

Hoping for a simple answer right about now? Sorry to disappoint :-) The truth is that comparing AMD and NVidia on this is a bit like comparing apples and oranges. When it comes to performance, the AMD design wins in some scenarios while the NVidia design wins in others.

The NVidia design does offer a little more flexibility when it comes to non-graphical usage scenarios. GPU-accelerated PhysX is a prime example of what the NVidia architecture has to offer besides pure graphics. AMD is fast making up that difference with their new GCN (Graphics Core Next) architecture, which will hit the virtual store shelves in just a few days with the new HD7970.


