
AMD vs NVIDIA – Two figures that can tell a whole story

[Image: three NVIDIA GTX Titan cards]

Update September ’13: AMD is bringing out its new “Volcanic Islands” GPUs with GCN 2.0 in October. For this reason the HD 7970’s price has dropped to €250. This shakes up some of the things described in this article.

Update June ’14: It has become clear that the Titan is not a consumer device and should be categorised as a “Quadro for compute”. All consumer devices from both AMD and NVIDIA show relatively low GFLOPS for double precision.

Update July ’14: Graphs updated with the GTX Titan Z and R9 290X.

AMD/ATI has always had the fastest GPU out there. Yes, there were plenty of times when NVIDIA approached the throne, or even held the crown for a while (at least theoretically), but in the end it was Radeon that had the rightful claim.

Nevertheless, some things have changed:

  • AMD has focused more on its new architecture, making it easier to program while keeping the GFLOPS the same.
  • AMD is betting on its A-series APUs with integrated GPUs.
  • NVIDIA has increased both memory bandwidth and GFLOPS at a steady pace.
  • NVIDIA has pulled the nitro-trick for double precision.

With the GTX Titan (see three of them in the image above), NVIDIA snatched victory from the jaws of defeat.

I’m not saying you should jump to CUDA now; there’s more to it than GFLOPS. We should also think of costs and of preventing vendor lock-in. Above all, I would like to show how unpredictable the market for accelerator processors is.

Let’s take a look at the figures.

Below are the fastest consumer-targeted GPUs. As you can see, AMD’s line is flat, while NVIDIA has increased performance at a fast pace.

Note that the dates are actual release dates, not announcement dates. Also, there are far more differences between the dots than just GFLOPS: architecture, memory bandwidth, memory size, PCIe version, etc.

Update July ’14: Both AMD and NVIDIA are now at 5.1 TFLOPS. AMD’s R9 290X costs $650, while the GTX Titan costs $1000+.

[Figure: single-precision GFLOPS of the fastest consumer GPUs over time, July ’14]

NVIDIA’s line for double precision makes a huge jump. No, this is not a mistake: NVIDIA decided to put double precision into consumer GPUs. Only the Titan has it; the rest still runs at 1/8th of the single-precision rate.

Update July ’14: AMD decided to go to 1/8th too, ending up at just over 600 DP GFLOPS.


[Figure: double-precision GFLOPS of the fastest consumer GPUs over time, July ’14]
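For those who want to check the arithmetic: the double-precision peak follows directly from the single-precision peak and the DP:SP ratio. Below is a minimal sketch, using the ~5.1 TFLOPS quoted above and assuming the Titan’s 1/3 DP rate (the exact peaks vary with clock speeds, so take the outputs as ballpark figures):

```python
# Double-precision peak, derived from the single-precision peak and the DP:SP ratio.
def peak_dp_gflops(sp_gflops: float, dp_ratio: float) -> float:
    return sp_gflops * dp_ratio

# GTX Titan: DP enabled at an assumed 1/3 of the SP rate.
print(peak_dp_gflops(4500, 1 / 3))  # ~1500 DP GFLOPS
# An R9 290X-class card at ~5.1 TFLOPS SP, at the 1/8 ratio mentioned above.
print(peak_dp_gflops(5100, 1 / 8))  # ~638 DP GFLOPS, i.e. "just over 600"
```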

For the professional accelerator market see the “answer to” series on this blog.

Costs

When it comes to cost, there is a big difference between the two competitors. This could be an effect of the vendor lock-in created by CUDA, or simply a void in the market.

  • Radeon HD 8970 (4 GB): €550
  • Radeon HD 7970 Extreme edition (6 GB): €570
  • NVIDIA GTX Titan (6 GB): €970

… and they all have 288 GB/s memory bandwidth.
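That identical 288 GB/s is no coincidence. A quick sketch of the arithmetic, assuming the 384-bit memory bus and 6,000 MHz effective GDDR5 clock these cards share:

```python
# Peak memory bandwidth: memory transfers per second times bus width in bytes.
def bandwidth_gb_per_s(effective_clock_mhz: float, bus_width_bits: int) -> float:
    transfers_per_s = effective_clock_mhz * 1e6
    bytes_per_transfer = bus_width_bits / 8
    return transfers_per_s * bytes_per_transfer / 1e9

print(bandwidth_gb_per_s(6000, 384))  # 288.0 GB/s
```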

AMD Radeon 8970 XT

As you can read on the Guru3D forum, there is speculation that AMD will take back the crown. Below is a comparison between the 7970 and the 8970 XT (source); these are unofficial specs.

| Spec | 8970 XT | HD 7970 |
|---|---|---|
| Clock speed | 1,050 MHz | 925 MHz |
| Floating-point performance | 5,376 GFLOPS | 3,789 GFLOPS |
| Pixel rate | 50.4 GPixel/s | 29.6 GPixel/s |
| Texture rate | 168 GTexel/s | 118.4 GTexel/s |
| Render output processors | 48 | 32 |
| Effective memory clock speed | 6,000 MHz | 5,500 MHz |
| Shading units | 2,560 | 2,048 |
| Texture mapping units | 160 | 128 |
| Compute units | 40 | 32 |
| Memory clock speed | 1,500 MHz | 1,375 MHz |

The source mentions the normal 8970, but that is a mistake. See the [official specs of the 8970 here [PDF]](http://www.amd.com/us/Documents/AMD_Radeon_HD_8970_Feature_Summary.pdf).
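The floating-point numbers in this table are easy to verify: each shading unit executes one fused multiply-add (two FLOPs) per clock cycle, so the single-precision peak is 2 × shading units × clock. A small sketch with the table’s values plugged in:

```python
# Theoretical single-precision peak: 2 FLOPs (one FMA) per shading unit per cycle.
def peak_sp_gflops(shading_units: int, clock_mhz: float) -> float:
    return 2 * shading_units * clock_mhz / 1000

print(peak_sp_gflops(2560, 1050))  # 8970 XT (rumoured): 5376.0 GFLOPS
print(peak_sp_gflops(2048, 925))   # HD 7970: 3788.8, rounded to 3789 GFLOPS
```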

Titan II / Ultra

NVIDIA is known for whispering specs of upcoming products long before they launch. Take the example of 3D-stacked memory for products in 2016.

But when will the Titan II arrive? Nobody knows for sure. That the card will actually show up has already been planted as a controlled rumour. Under what name it will appear (2, II, Ultra), or with what specs, is also very hard to tell. We will probably know more by late 2013 or early 2014.

What is sure is that this battle will continue until the discrete GPU market vanishes.

Share your thoughts! How long do you think NVIDIA can hold on to power?