ASUS unveils the latest AI POD with NVIDIA GB300 NVL72, offering groundbreaking performance for AI workloads. The new solution powers the future of AI infrastructure.
*ASUS launches the AI POD with the NVIDIA GB300 NVL72 platform at GTC 2025, showcasing industry-leading AI solutions and scaling infrastructure for enterprises. Image: ASUS*
San Jose, California, USA — March 19, 2025:
At the GTC 2025 event in San Jose, California, ASUS unveiled its latest AI POD featuring the powerful NVIDIA GB300 NVL72 platform, a cutting-edge solution designed to address the most demanding AI challenges. The company, serving as a diamond sponsor at the event, also announced that it has already secured significant order placements, marking a major milestone in the AI industry.
The ASUS AI POD integrates the immense capabilities of the NVIDIA GB300 NVL72 platform, delivering exceptional processing power with 72 NVIDIA Blackwell Ultra GPUs and 36 Grace CPUs. The rack-scale design provides up to 40TB of high-speed memory per rack, making it well suited to AI workloads that demand massive memory and compute resources. Additionally, the platform includes NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet networking solutions, providing top-tier connectivity and scalability.
In addition to the AI POD, ASUS also showcased its range of AI servers within the Blackwell and HGX™ families. These include the ASUS XA NB3I-E12, powered by NVIDIA B300 NVL16, and the ASUS ESC NB8-E11, featuring the NVIDIA HGX B200 8-GPU configuration. These servers are built for high-performance AI inference and training, optimized with the latest NVIDIA AI Enterprise and Omniverse platforms.
“We are excited to drive the next wave of innovation in data centers, leveraging the powerful capabilities of NVIDIA’s Blackwell Ultra platform,” said Kaustubh Sanghani, Vice President of GPU Products at NVIDIA. “ASUS’s leading-edge servers are accelerating AI reasoning, agentic AI, and video inference applications, helping enterprises unlock new opportunities in AI development.”
The AI POD is a game-changer in the AI infrastructure landscape, built on a high-performance, liquid-cooled architecture for maximum stability and efficiency. The system also supports trillion-parameter large language model (LLM) inference and training, positioning ASUS as a leader in providing AI solutions that accelerate time-to-market for businesses worldwide.
ASUS also revealed additional AI server products like the ASUS ESC8000 series, which incorporates the latest NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. These servers are fully compatible with the NVIDIA MGX architecture, providing scalable and efficient solutions for dynamic IT environments.
Among the notable announcements, ASUS introduced the Ascent GX10 AI supercomputer, powered by the NVIDIA GB10 Grace Blackwell Superchip. This compact supercomputer is capable of 1,000 AI TOPS (Tera Operations Per Second), providing a petaflop-scale computing experience to AI developers, researchers, and data scientists. The Ascent GX10 is designed to handle demanding AI workloads with exceptional performance and memory capacity.
ASUS also showcased its Edge AI solutions, including the PE2100N with NVIDIA Jetson AGX Orin™, delivering 275 TOPS for generative AI and robotics. The rugged PE8000G is designed for real-time perception AI, excelling in autonomous vehicles and intelligent video analytics.
ASUS’s AI infrastructure solutions are available globally, and customers can contact local ASUS representatives for further details on pricing and availability. With these groundbreaking innovations, ASUS continues to push the boundaries of AI infrastructure, empowering enterprises to meet the ever-growing demands of AI technology.