
NVIDIA H100 80GB PCIe
Hopper-architecture accelerator for LLM training and inference.
MSRP $30,000
Details
Related datacenter GPUs:
- Universal GPU for AI inference, training, and graphics workloads.
- 192 GB HBM3 accelerator for LLM training and large-context inference.
- Hopper refresh with 141 GB HBM3e, the largest memory in the H-class.