Hewlett Packard Enterprise has shipped its first NVIDIA GB200 NVL72, a rack-scale AI system based on the NVIDIA Blackwell architecture. Designed to meet the growing demand for high-performance AI infrastructure, the system uses advanced direct liquid cooling to improve power efficiency and support scaling to large AI clusters.
Scaling AI with Liquid Cooling
The increasing adoption of AI-driven workloads across Asia has amplified the need for advanced cooling technologies. Joseph Yang, General Manager of HPC and AI, APAC and India at HPE, highlighted the importance of efficiency in AI infrastructure: “As the demand for faster and more efficient AI workload processing surges across Asia, the need for advanced liquid cooling technology has never been greater to support the region’s rapidly growing power and computing requirements. The new NVIDIA Grace Blackwell system is designed to help scale AI workloads, maximise performance, and unlock AI’s full transformative potential – while addressing critical infrastructure challenges and energy efficiency needs.”
HPE’s leadership in direct liquid cooling technology, spanning over five decades, has positioned the company as a key enabler of AI computing, giving service providers and enterprises the tools to build and scale AI clusters efficiently.
Extreme Performance for AI Model Training
The NVIDIA GB200 NVL72 is built to handle AI models exceeding one trillion parameters within a unified memory space. Key technical specifications include:
- 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs interconnected via high-speed NVIDIA NVLink
- Up to 13.5 TB of HBM3e memory with 576 TB/sec bandwidth
- HPE direct liquid cooling technology for enhanced thermal efficiency
With this architecture, the system delivers extreme compute power for large-scale generative AI (GenAI) model training, inferencing, and scientific computing workloads.
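To put the memory figures above in perspective, here is a rough back-of-envelope sketch of how a trillion-parameter model relates to the rack’s 13.5 TB of unified HBM3e memory. The only figures taken from the specifications are the 13.5 TB capacity and the 72 GPUs; the assumption of 2 bytes per parameter (FP16/BF16 weights) is illustrative, not a statement about any particular deployment.

```python
# Illustrative arithmetic only: checks whether the weights of a
# one-trillion-parameter model fit in the GB200 NVL72's unified memory.
# 13.5 TB and 72 GPUs come from the specs above; 2 bytes per parameter
# is an assumed FP16/BF16 storage format.

PARAMS = 1e12                 # one trillion parameters
BYTES_PER_PARAM_FP16 = 2      # FP16/BF16 storage per parameter (assumption)
UNIFIED_MEMORY_TB = 13.5      # HBM3e capacity quoted for the NVL72 rack
NUM_GPUS = 72                 # Blackwell GPUs per rack

weights_tb = PARAMS * BYTES_PER_PARAM_FP16 / 1e12
print(f"FP16 weights for a 1T-parameter model: ~{weights_tb:.1f} TB")
print(f"Share of the rack's {UNIFIED_MEMORY_TB} TB unified memory: "
      f"{weights_tb / UNIFIED_MEMORY_TB:.0%}")
print(f"Per-GPU slice if weights are sharded evenly: "
      f"~{weights_tb * 1000 / NUM_GPUS:.0f} GB of "
      f"~{UNIFIED_MEMORY_TB * 1000 / NUM_GPUS:.0f} GB")
```

Under these assumptions the weights alone occupy roughly 2 TB, about 15% of the rack’s unified memory, which is what leaves headroom for optimiser state, activations, and inference KV caches within a single NVLink domain.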
AI Infrastructure at Global Scale
As AI model builders and service providers race to deploy scalable, high-performance AI solutions, Trish Damkroger, Senior Vice President and General Manager of HPC & AI Infrastructure Solutions at HPE, underscored HPE’s advantage: “AI service providers and large enterprise model builders are under tremendous pressure to offer scalability, extreme performance, and fast time-to-deployment. As builders of the world’s top three fastest systems with direct liquid cooling, HPE offers customers lower cost per token training and best-in-class performance with industry-leading services expertise.”
Bob Pette, Vice President of Enterprise Platforms at NVIDIA, added that the HPE-NVIDIA collaboration is enabling breakthrough computing: “Engineers, scientists and researchers need cutting-edge liquid cooling technology to keep up with increasing power and compute requirements. HPE’s first shipment of NVIDIA GB200 NVL72 will help service providers and large enterprises efficiently build, deploy and scale large AI clusters.”
Industry-Leading AI Services and Support
Beyond hardware, HPE delivers a full suite of AI support services to help enterprises deploy and manage AI clusters:
- Onsite engineering resources – Expert resident engineers ensure optimal system performance and availability.
- Performance benchmarking – AI specialists fine-tune system configurations to maximise efficiency.
- Sustainability services – Energy and emissions monitoring to reduce AI’s environmental footprint.
HPE’s proven supercomputing expertise is reflected in its delivery of eight of the top 15 systems on the Green500 list of the world’s most energy-efficient supercomputers. It has also built seven of the top 10 fastest supercomputers globally, reinforcing its position as a leader in AI infrastructure solutions.
Why It Matters
As AI adoption accelerates, enterprises need powerful, scalable, and energy-efficient computing to handle next-generation workloads. The NVIDIA GB200 NVL72 by HPE offers a cutting-edge solution for GenAI model training, inferencing, and scientific computing, combining extreme performance with sustainable cooling technologies.