NVIDIA announces new data center GPU "A100": AI performance 20 times that of the V100

On May 14th, NVIDIA announced the "NVIDIA A100" GPU (hereinafter "A100"), built on the new "Ampere" GPU architecture for data centers. The company says it achieves about 20 times the AI performance of the current "Tesla V100": the V100 peaks at 15.7 TFLOPS in FP32, while the A100 reaches 312 TFLOPS using its new TF32 Tensor Core format with structural sparsity.
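As a quick sanity check, the "about 20 times" figure follows directly from the two quoted throughput numbers. A minimal Python sketch using only the values above:

```python
# Ratio of the quoted peak throughputs:
# V100 FP32 vs. A100 TF32 Tensor Core with sparsity.
v100_fp32_tflops = 15.7
a100_tf32_sparse_tflops = 312.0

speedup = a100_tf32_sparse_tflops / v100_fp32_tflops
print(f"{speedup:.1f}x")  # -> 19.9x, i.e. "about 20 times"
```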

(Image: NVIDIA A100 GPU)

The A100 is a 7nm processor with more than 54 billion transistors, built for scientific computing, cloud graphics and data analytics. It carries 40GB of HBM2 memory with 1.6TB/s of memory bandwidth. The interconnect technology "NVLink" is now in its third generation, and multiple A100s can be operated together as if they were a single GPU.
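These specs can also be read back from software. The article does not mention any particular framework; the following is just a minimal sketch, assuming a host with an A100 and a CUDA-enabled PyTorch install:

```python
import torch

# Query the properties of the first visible GPU.
props = torch.cuda.get_device_properties(0)
print(props.name)                         # e.g. an A100 40GB part
print(props.total_memory / 1e9, "GB")     # roughly 40 GB of HBM2
print(torch.cuda.device_count(), "GPUs")  # more than one on multi-A100 systems
```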

The company also announced the "DGX A100," a system that combines eight A100s over NVLink to deliver 5 PFLOPS of AI performance. NVIDIA, which announced its acquisition of Mellanox in March last year, has added nine of the company's 200Gb/s network interfaces to the DGX A100. The price is $199,000.
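The 5 PFLOPS figure is consistent with eight A100s if it refers to FP16 Tensor Core throughput with structural sparsity (624 TFLOPS per A100, NVIDIA's published peak). That mapping is an assumption on this article's part; the arithmetic itself is straightforward:

```python
# Assumed per-GPU peak: FP16 Tensor Core with sparsity (624 TFLOPS per A100).
a100_fp16_sparse_tflops = 624
gpus_per_dgx = 8

total_pflops = a100_fp16_sparse_tflops * gpus_per_dgx / 1000
print(f"{total_pflops:.1f} PFLOPS")  # -> 5.0 PFLOPS
```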


The A100 will be adopted by AWS, Cisco Systems, Dell Technologies, Fujitsu, Google Cloud, Microsoft Azure, Oracle and others in their products and services.

NVIDIA had planned to hold its GPU-focused event "GTC 2020" in San Jose in March, but it was canceled due to the COVID-19 pandemic. Instead, CEO Jensen Huang delivered the announcement in a keynote recorded in the kitchen of his home.

The keynote has been split into nine parts and published on YouTube; the following is the part covering the A100.
