FREMONT, CA: NVIDIA has announced the NVIDIA Tesla P100 accelerator for the deep learning field and HPC applications. The new offering is built for hyperscale datacenter applications.
The Tesla P100 platform combines the company’s Pascal GPU architecture with the latest memory, semiconductor process and packaging technologies, targeting top-tier graphics and compute performance.
Nvidia will be manufacturing the chip using 16nm FinFET manufacturing technology. The Tesla P100’s chip will have over 15 billion transistors and will include 16GB of second generation die stacked High Bandwidth Memory (HBM2). Nvidia has made it easy to link multiple P100 GPUs using its NVLink technology.
The new Pascal architecture will make server nodes faster and more efficient. According to Nvidia, Pascal delivers up to a 12x improvement in neural network training performance compared with previous-generation Nvidia GPUs based on the Maxwell architecture.
The P100 also supports NVIDIA’s NVLink technology, a proprietary interconnect that allows multiple GPUs, or supported CPUs, to connect directly to each other at higher bandwidth than current PCI Express 3.0 slots. NVLink also supports up to eight GPU connections, versus the four possible with PCIe and SLI.
The GPU targets deep learning workloads, where algorithms work by correlating and classifying huge chunks of data. Nvidia is also taking a new approach to memory design, which it calls Chip on Wafer on Substrate; the new design provides a three-fold boost in memory bandwidth. Datacenters can expect 5.3 teraflops of double-precision performance, 10.6 teraflops of single-precision performance and 21.2 teraflops of half-precision performance with NVIDIA GPU Boost technology.
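The precision figures quoted above follow the familiar pattern of throughput doubling each time numeric precision is halved. A quick arithmetic check (plain Python, using only the numbers stated in this article):

```python
# Tesla P100 peak throughput with GPU Boost, in teraflops, per the quoted specs.
fp64_tflops = 5.3                 # double precision (64-bit)
fp32_tflops = 2 * fp64_tflops     # single precision (32-bit) -> 10.6 TFLOPS
fp16_tflops = 2 * fp32_tflops     # half precision (16-bit)   -> 21.2 TFLOPS

print(f"FP64: {fp64_tflops}  FP32: {fp32_tflops}  FP16: {fp16_tflops}")
```

This 2:1 scaling per precision step is why half-precision figures matter for deep learning, where training can often tolerate reduced precision.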
"Our greatest scientific and technical challenges -- finding cures for cancer, understanding climate change, building intelligent machines -- require a near-infinite amount of computing performance," says Jen-Hsun Huang, CEO and co-founder, Nvidia. "We designed the Pascal GPU architecture from the ground up with innovation at every level. It represents a massive leap forward in computing performance and efficiency, and will help some of the smartest minds drive tomorrow's advances."