2805 Bowers Ave, Santa Clara, CA 95051 | 408-730-2275
sales@colfax-intl.com

NVIDIA Hopper Architecture

NVIDIA H100 Tensor Core GPU

The NVIDIA H100 Tensor Core GPU delivers an order-of-magnitude leap for large-scale AI and HPC, with unprecedented performance, scalability, and security for every data center, and includes the NVIDIA AI Enterprise software suite to streamline AI development and deployment. With the NVIDIA® NVLink® Switch System providing direct communication between up to 256 GPUs, H100 accelerates exascale workloads, and its dedicated Transformer Engine targets trillion-parameter language models. For smaller jobs, H100 can be partitioned into right-sized Multi-Instance GPU (MIG) partitions, and with Hopper Confidential Computing this scalable compute power can secure sensitive applications on shared data center infrastructure. Together with the included NVIDIA AI Enterprise suite, which reduces development time and simplifies deployment of AI workloads, this makes H100 the most powerful end-to-end AI and HPC data center platform.

Coming Soon with Colfax Systems






                                H100 SXM                         H100 PCIe
FP64                            34 teraFLOPS                     26 teraFLOPS
FP64 Tensor Core                67 teraFLOPS                     51 teraFLOPS
FP32                            67 teraFLOPS                     51 teraFLOPS
TF32 Tensor Core                989 teraFLOPS*                   756 teraFLOPS*
BFLOAT16 Tensor Core            1,979 teraFLOPS*                 1,513 teraFLOPS*
FP16 Tensor Core                1,979 teraFLOPS*                 1,513 teraFLOPS*
FP8 Tensor Core                 3,958 teraFLOPS*                 3,026 teraFLOPS*
INT8 Tensor Core                3,958 TOPS*                      3,026 TOPS*
GPU Memory                      80GB                             80GB
GPU Memory Bandwidth            3.35TB/s                         2TB/s
Decoders                        7 NVDEC, 7 JPEG                  7 NVDEC, 7 JPEG
Max Thermal Design Power (TDP)  Up to 700W (configurable)        300-350W (configurable)
Multi-Instance GPUs             Up to 7 MIGs @ 10GB each         Up to 7 MIGs @ 10GB each
Form Factor                     SXM                              PCIe, dual-slot air-cooled
Interconnect                    NVLink: 900GB/s;                 NVLink: 600GB/s;
                                PCIe Gen5: 128GB/s               PCIe Gen5: 128GB/s
Server Options                  NVIDIA HGX™ H100;                Colfax NVIDIA-Certified
                                NVIDIA DGX™ H100 with 8 GPUs;    Systems™ with 1-8 GPUs
                                Colfax NVIDIA-Certified
                                Systems™ with 4 or 8 GPUs
NVIDIA AI Enterprise            Add-on                           Included

* Shown with sparsity; specifications are one-half lower without sparsity.
** Preliminary specifications, subject to change.
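The sparsity footnote can be sanity-checked with a few lines of Python (a minimal sketch; the variable names are illustrative, and the figures are copied from the Tensor Core rows of the table above):

```python
# Sparse Tensor Core throughput from the table (teraFLOPS, H100 SXM / H100 PCIe).
# Per the footnote, dense (non-sparse) throughput is one-half of these values.
sparse_tflops = {
    "TF32":     (989, 756),
    "BFLOAT16": (1979, 1513),
    "FP16":     (1979, 1513),
    "FP8":      (3958, 3026),
}

# Halve each entry to recover the dense figures.
dense_tflops = {fmt: (sxm / 2, pcie / 2) for fmt, (sxm, pcie) in sparse_tflops.items()}

for fmt, (sxm, pcie) in dense_tflops.items():
    print(f"{fmt}: {sxm:g} teraFLOPS (SXM), {pcie:g} teraFLOPS (PCIe) dense")
```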

NVIDIA H100 CNX Converged Accelerator

Experience the unprecedented performance of converged acceleration. NVIDIA H100 CNX combines the power of the NVIDIA H100 Tensor Core GPU with the advanced networking capabilities of the NVIDIA® ConnectX®-7 smart network interface card (SmartNIC) to accelerate GPU-powered, input/output (IO)-intensive workloads, such as distributed AI training in the enterprise data center and 5G processing.

Coming Soon with Colfax Systems


Specifications

GPU Memory: 80GB HBM2e
Memory Bandwidth: > 2.0TB/s
MIG Instances: 7 instances @ 10GB each; 3 instances @ 20GB each; or 2 instances @ 40GB each
Interconnect: PCIe Gen5, 128GB/s
NVLink Bridge: 2-way
Networking: 1x 400Gb/s or 2x 200Gb/s ports, Ethernet or InfiniBand
Form Factor: Dual-slot, full-height, full-length (FHFL)
Max Power: 350W
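As a rough back-of-the-envelope check on the I/O budget above (a minimal sketch with illustrative names, using decimal units and ignoring encoding and protocol overhead), the card's network ports can be compared against its PCIe Gen5 host link:

```python
# H100 CNX I/O budget: ConnectX-7 network bandwidth vs. PCIe Gen5 host bandwidth.
GBITS_PER_GBYTE = 8

nic_gbps = 400                              # 1x 400Gb/s (or 2x 200Gb/s) from the spec list
nic_gbytes = nic_gbps / GBITS_PER_GBYTE     # network traffic in GB/s
pcie_gen5_gbytes = 128                      # PCIe Gen5 figure from the spec list

print(f"NIC: {nic_gbytes:g} GB/s vs PCIe Gen5: {pcie_gen5_gbytes} GB/s")
```

By this estimate the fully saturated network ports consume well under half of the PCIe Gen5 bandwidth, which is consistent with the card's positioning for IO-intensive workloads.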

What can we help you with?

As an Elite NVIDIA Partner, we have established a reputation for solving the most complex problems and securing outstanding results for businesses across different sectors.

Get your project rolling with quick answers from technical sales to address your unique challenges.

Tell us about your challenge

All product, brand, or trade names used on this page are the trademarks or registered trademarks of their respective owners.