
Datacenter GPUs
AMD Instinct MI300X
192 GB HBM3 accelerator for LLM training and large-context inference.
MSRP $27,000
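To see why the 192 GB capacity matters for large-context inference, a back-of-the-envelope memory check helps. The sketch below is illustrative arithmetic, not vendor sizing guidance; the 70B-parameter model size and byte-per-parameter figures are assumptions.

```python
# Rough sketch: estimate whether a model's weights alone fit in one
# accelerator's memory. Ignores KV cache, activations, and framework
# overhead, which add further headroom requirements.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed for weights alone, in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A hypothetical 70B-parameter model:
fp16_gb = weight_memory_gb(70, 2)  # FP16: 2 bytes/param -> 140 GB
fp8_gb = weight_memory_gb(70, 1)   # FP8:  1 byte/param  ->  70 GB
print(fp16_gb, fp8_gb)
```

At FP16 the weights consume 140 GB, so a 192 GB part leaves roughly 50 GB for KV cache and activations, which is what makes long-context inference on a single device plausible.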
NVIDIA H100
Hopper-architecture accelerator for LLM training and inference.
80 GB HBM3 memory at 2 TB/s. Designed for Transformer Engine workloads and FP8 inference.
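The 2 TB/s bandwidth figure sets a ceiling on single-stream decode speed, since generating each token requires streaming the full weight set from HBM at least once. A minimal sketch of that bound, under the assumption of FP8 weights for a hypothetical 70B-parameter model:

```python
# Rough upper bound on autoregressive decode rate for batch size 1:
# tokens/s <= memory bandwidth / bytes of weights read per token.
# Real throughput is lower (attention, KV-cache reads, kernel overhead).

def decode_tokens_per_sec(weight_gb: float, bandwidth_gb_s: float) -> float:
    """Bandwidth-bound token rate, assuming one full weight pass per token."""
    return bandwidth_gb_s / weight_gb

# 70 GB of FP8 weights on a 2 TB/s (2000 GB/s) part:
rate = decode_tokens_per_sec(70, 2000)
print(rate)  # ~28.6 tokens/s ceiling
```

This is why memory bandwidth, not just FLOPs, is the headline spec for inference-oriented parts.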

NVIDIA H200
Hopper refresh with 141 GB HBM3e, the largest memory capacity in the H-class.

NVIDIA L40S
Universal GPU for AI inference, training, and graphics workloads.