
@Xi Computer Corporation | Every Data Center Should be Equipped with GPUs

NVIDIA Tesla is the world’s leading platform for accelerated data centers.

From scientific discovery to artificial intelligence, HPC is an important pillar that fuels the progress of humanity. Modern HPC data centers are solving some of the greatest challenges facing the world today. With traditional CPUs no longer delivering the performance gains they used to, the path forward for HPC data centers is GPU-accelerated computing.

NVIDIA® Tesla® is the leading platform for accelerated computing, powering some of the largest data centers in the world and delivering significantly higher throughput while saving money. The NVIDIA Tesla P100, powered by the NVIDIA® Pascal™ architecture, is the computational engine for scientific computing and artificial intelligence. Here are three powerful reasons to deploy NVIDIA Tesla P100 GPUs in your data center.

Reason 1: Be Prepared for the AI Revolution
The AI revolution is here, and every data center should be equipped for it. AI is the engine behind consumer services we use every day, like web searches and video recommendations. In HPC, AI is enabling new ways to solve complex scientific challenges in bioinformatics, drug discovery, and high-energy physics.

NVIDIA Tesla P100 is the computational engine driving the AI revolution and enabling HPC breakthroughs. For example, researchers at New York’s Icahn School of Medicine at Mount Sinai are using deep learning to analyze over 100,000 patient health records to predict patients likely to develop serious illnesses and provide treatment up to one year before traditional diagnoses.

Reason 2: Top Applications are GPU-Accelerated
Over 450 HPC applications are already GPU-optimized in a wide range of areas including quantum chemistry, molecular dynamics, climate and weather, and more.

In fact, an independent study by Intersect360 Research shows that 70% of the most popular HPC applications, including all of the top 10, have built-in support for GPUs.

With the most popular HPC applications and all major deep learning frameworks GPU-accelerated, every HPC customer would see the majority of their data center workload benefit from GPU-accelerated computing.

Reason 3: Boost Data Center Productivity & Throughput
Data center managers all face the same challenge: how to meet demand for computing resources that often exceeds the cycles available in the system.

The NVIDIA Tesla P100 dramatically boosts the throughput of your data center with fewer nodes, completing more jobs and improving data center efficiency.

A single server node with P100 GPUs can replace up to 20 CPU nodes. For example, for MILC, a single node with four P100s does the work of 10 dual-socket CPU nodes, while for HOOMD-blue a single P100 node can replace 21 CPU nodes. With less overhead for networking and cabling, strong nodes deliver high application throughput at substantially reduced cost.
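The arithmetic behind these consolidation claims is simple to sketch. The snippet below is illustrative only (the function name is ours, not from any NVIDIA tool), using the 10x and 21x per-node replacement ratios quoted above for MILC and HOOMD-blue:

```python
import math

# Hypothetical consolidation sketch: if one 4x P100 server node does the
# work of `ratio` dual-socket CPU nodes, how many GPU nodes are needed to
# cover a fleet of `cpu_nodes` CPU nodes for a fixed workload?
def gpu_nodes_needed(cpu_nodes: int, ratio: float) -> int:
    """CPU-node count divided by the per-node replacement ratio, rounded up."""
    return math.ceil(cpu_nodes / ratio)

print(gpu_nodes_needed(100, 10))  # MILC, 10x ratio: 100 CPU nodes -> 10 GPU nodes
print(gpu_nodes_needed(100, 21))  # HOOMD-blue, 21x ratio: 100 CPU nodes -> 5 GPU nodes
```

Fewer, stronger nodes are where the networking and cabling savings come from: every replaced CPU node is a switch port, cables, and rack space no longer needed.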

NVIDIA Tesla P100 for Strong-Scale HPC
Tesla P100 with NVIDIA NVLink technology enables lightning-fast nodes to substantially accelerate time-to-solution for strong-scale applications. A server node with NVLink can interconnect up to eight Tesla P100s at 5X the bandwidth of PCIe. It's designed to help solve the world's most important challenges that have near-infinite compute needs in HPC and deep learning.

NVIDIA Tesla P100 for Mixed-Workload HPC
Tesla P100 for PCIe enables mixed-workload HPC data centers to realize a dramatic jump in throughput while saving money. For example, a single GPU-accelerated node powered by four Tesla P100s interconnected with PCIe replaces up to 32 commodity CPU nodes for a variety of applications. Completing all the jobs with far fewer, more powerful nodes means that customers can save up to 70% in overall data center costs.
Specification | Tesla P100 (NVLink) | Tesla P100 (PCIe 12 GB) | Tesla P100 (PCIe 16 GB)
GPU / Form Factor | Pascal GP100 / SXM2 | Pascal GP100 / PCIe | Pascal GP100 / PCIe
SMs | 56 | 56 | 56
TPCs | 28 | 28 | 28
FP32 CUDA Cores / SM | 64 | 64 | 64
FP32 CUDA Cores / GPU | 3584 | 3584 | 3584
FP64 CUDA Cores / SM | 32 | 32 | 32
FP64 CUDA Cores / GPU | 1792 | 1792 | 1792
Base Clock | 1328 MHz | 1126 MHz | 1126 MHz
GPU Boost Clock | 1480 MHz | 1303 MHz | 1303 MHz
FP32 GFLOPS | 10608 | 9340 | 9340
FP64 GFLOPS | 5304 | 4670 | 4670
Texture Units | 224 | 224 | 224
Memory Interface | 4096-bit HBM2 | 3072-bit HBM2 | 4096-bit HBM2
Memory Bandwidth | 732 GB/s | 549 GB/s | 732 GB/s
Memory Size | 16 GB | 12 GB | 16 GB
L2 Cache Size | 4096 KB | 4096 KB | 4096 KB
Register File Size / SM | 256 KB | 256 KB | 256 KB
Register File Size / GPU | 14336 KB | 14336 KB | 14336 KB
TDP | 300 W | 250 W | 250 W
Transistors | 15.3 billion | 15.3 billion | 15.3 billion
GPU Die Size | 610 mm² | 610 mm² | 610 mm²
Manufacturing Process | 16 nm | 16 nm | 16 nm
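The GFLOPS figures in the table follow directly from the CUDA core counts and boost clocks, since each core can retire one fused multiply-add (two FLOPs) per cycle. A quick sketch of that calculation (the function name is ours, for illustration):

```python
# Peak-throughput sketch: GFLOPS = CUDA cores x 2 FLOPs/cycle (FMA) x boost clock.
def peak_gflops(cuda_cores: int, boost_mhz: int) -> float:
    """Theoretical peak in GFLOPS, counting a fused multiply-add as 2 FLOPs."""
    return cuda_cores * 2 * boost_mhz / 1000.0

# P100 SXM2 (NVLink): 3584 FP32 / 1792 FP64 cores at 1480 MHz boost
print(peak_gflops(3584, 1480))  # 10608.64 -> the table's 10608 FP32 GFLOPS
print(peak_gflops(1792, 1480))  # 5304.32  -> the table's 5304 FP64 GFLOPS

# P100 PCIe: same core counts at 1303 MHz boost
print(peak_gflops(3584, 1303))  # 9339.904 -> the table's 9340 FP32 GFLOPS
```

The same formula explains the 2:1 FP32-to-FP64 ratio: GP100 carries half as many FP64 cores per SM as FP32 cores.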
TESLA Family:
NVIDIA TESLA P100 12GB PCIE - 900-2H400-0010-000
NVIDIA TESLA P100 16GB PCIE - 900-2H400-0000-000
NVIDIA TESLA P100 16GB SXM2 - 900-2H403-0000-000
Optimized GPU Servers
Xi® NetRAIDer™ 64XLT Network RAID Server
Xi® NetRAIDer™ 64XE Network RAID Server
Xi® BladeRAIDer™ 64X-1U Cluster Blade Server


All our computers are customized, built, tested, and supported in Orange County, California.


All trademarks and brands mentioned on this website may be legally registered in the U.S. and/or other countries. They are subject without restriction to the terms of applicable registered trademark rights and the ownership rights of the respective registered owners. The mention of a trademark should not be taken to indicate that such a trademark is not subject to third-party rights.
© 1996- @Xi® Computer Corporation | All rights reserved.