The NVIDIA P100 is NVIDIA's first GPU to use HBM2 memory. Since it belongs to the NVIDIA Tesla series, it is designed for the workstation environment. Gamers will have to wait about a year before the first consumer NVIDIA cards with HBM2 memory appear.
NVIDIA appears to be preparing several models based on the P100 GPU that differ in the amount of memory. So far there will be a card with 16GB of HBM2 memory and another with 12GB of high-bandwidth memory. The 16GB version achieves 732 GB/s of memory bandwidth using four 4GB HBM2 stacks, while the 12GB variant manages 549 GB/s with three 4GB stacks.
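The two bandwidth figures are consistent with the same per-stack throughput; a quick back-of-the-envelope check, assuming memory bandwidth scales linearly with the number of HBM2 stacks:

$$
\frac{732~\text{GB/s}}{4~\text{stacks}} \approx 183~\text{GB/s per stack}, \qquad 3 \times 183~\text{GB/s} = 549~\text{GB/s}
$$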
Apart from that, the P100 is going to support NVLink, an interconnect that allows up to eight P100 GPUs to be combined in a single server. According to NVIDIA, eight Tesla P100s are almost 50x faster than a dual-CPU server based on Intel's Xeon E5-2698 v3. In addition, the Tesla P100 might also help reduce the total cost of ownership of running a data center, thanks to lower power consumption at equal performance.
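For developers, NVLink is largely transparent: peer-to-peer access between GPUs in such a server is set up with the same standard CUDA runtime calls used over PCIe. The following is only a minimal sketch (assuming a machine with at least two CUDA-capable GPUs), not NVIDIA's benchmark setup:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: enumerate the GPUs in a multi-GPU server and enable
// peer-to-peer access from GPU 0 to the others where supported.
// Whether the link underneath is NVLink or PCIe, the runtime API is the same.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("GPUs found: %d\n", count);

    for (int dev = 1; dev < count; ++dev) {
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, dev);  // can GPU 0 reach 'dev' directly?
        if (canAccess) {
            cudaSetDevice(0);
            cudaDeviceEnablePeerAccess(dev, 0);       // flags argument must be 0
            printf("Peer access enabled: GPU 0 -> GPU %d\n", dev);
        }
    }
    return 0;
}
```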
Source:
TechPowerUp