Nvidia announces Volta-based Tesla V100 card

Volta's second-generation NVLink interconnect boosts GPU-to-GPU bandwidth by increasing the number of links from four to six and upping the data rate per link to 25 GB/second.
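Taken at face value, and assuming the quoted 25 GB/second is per link per direction, the back-of-the-envelope arithmetic works out to roughly 300 GB/second of aggregate bidirectional bandwidth:

```python
# Back-of-the-envelope NVLink bandwidth estimate (illustrative arithmetic only).
# Assumption: the quoted 25 GB/s figure is per link, per direction.
links = 6                  # up from four links on the previous generation
gb_per_s_per_link = 25     # per direction

per_direction = links * gb_per_s_per_link   # 150 GB/s each way
aggregate = 2 * per_direction                # 300 GB/s bidirectional

print(f"{per_direction} GB/s per direction, {aggregate} GB/s aggregate")
```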

Nvidia is holding its annual GPU Technology Conference on Wednesday, where the company is expected to unveil additional details about its next-generation graphics architecture, known as Volta, which targets deep-learning workloads.

In other words, Nvidia is stealing a march on Intel's machine-learning efforts: the x86 goliath is desperately trying to stop Nvidia and others from pushing it out of the artificial-intelligence processing space.

Before the Tesla V100 showcase, the capabilities of Nvidia's latest architecture were unclear.

Companies in many industries such as health care and finance are investing in machine-learning infrastructure.

The combination of the new graphics processor, the GPU cloud and AI software bundled and delivered in containers is seen by market watchers as a new way to boost AI development.

According to the announcement, HPE and NVIDIA will jointly tackle GPU integration and deep-learning expertise challenges to accelerate the adoption of technologies that deliver real-time insights from massive data volumes. But the gist is that Volta is the most advanced graphics technology available right now.

The very profitable portion of its business embodied in the Tesla and GRID product lines, which are used to accelerate simulation, modeling, machine learning, databases, and virtual desktops, has been almost tripling in recent quarters and shows no signs of slowing down.

NVIDIA today launched Volta™, the world's most powerful GPU computing architecture, created to drive the next wave of advancement in artificial intelligence and high-performance computing. In place of the old design, the architecture introduces new cores, most notably Tensor Cores aimed at deep-learning matrix operations.
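For readers wondering what the new cores actually do: Tensor Cores execute a fused matrix multiply-accumulate with half-precision inputs and single-precision accumulation. A minimal CPU-side sketch of the same numerics (illustrative only, not actual GPU or CUDA code) looks like this:

```python
import numpy as np

# Tensor Cores compute D = A x B + C on small matrix tiles, with FP16 inputs
# and FP32 accumulation. This reproduces the numerics on the CPU for illustration.
a = np.random.rand(4, 4).astype(np.float16)   # FP16 input tile
b = np.random.rand(4, 4).astype(np.float16)   # FP16 input tile
c = np.zeros((4, 4), dtype=np.float32)        # FP32 accumulator

d = a.astype(np.float32) @ b.astype(np.float32) + c
print(d.dtype)  # float32: products are accumulated at single precision
```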

"To make one chip work per 12-inch wafer, I would characterise it as unlikely", said Huang, " so the fact that this is manufacturable is an incredible feat". And this year, Nvidia plans to train 100,000 developers to use deep learning.

The Nvidia GPU Cloud will give developers greatly increased access to new tools and cloud-based machine learning via their PC, an NVIDIA DGX system, or the cloud. My job is to ensure people use the Azure Cloud, and people want to use what's available immediately, without waiting.

Meanwhile, Google disclosed last summer that it was already using an in-house chip customized for AI, called a Tensor Processing Unit, or TPU. Nvidia's automotive revenue, for its part, grew from $113 million a year ago to $140 million.

Three virtual figures (pictured) appeared on a big screen along with a virtual Koenigsegg Regera, the $1.9 million high-performance sports car, set against the familiar grid outline of Star Trek's holodeck.

While Nvidia is showing impressive advances in server computing, we have yet to see any announcements for consumer-grade GPUs. Whether this enormous version of Volta appears in a future GeForce lineup depends on Nvidia's willingness to break new ground.