ASUS has introduced its latest AI and HPC infrastructure solutions at SC24, showcasing a wide array of servers and advanced cooling systems aimed at revolutionizing AI computing.
The ASUS server lineup is designed to drive digital transformation, catering to diverse AI and HPC workloads. The AI POD rack solution, equipped with the NVIDIA GB200 NVL72 platform, integrates GPUs, CPUs, and switches with cutting-edge NVIDIA Grace Blackwell Superchips and NVLink technology. Offering liquid-to-air and liquid-to-liquid cooling options, this system optimizes the training of trillion-parameter LLMs and real-time inference.

Additional highlights include the ESC8000A-E13P HPC server, which supports eight NVIDIA H200 NVL cards and benefits from the NVIDIA MGX modular architecture, enabling efficient deployment and scalability. For generative AI, ASUS offers advanced servers such as the ESC N8-E11V with the NVIDIA HGX H200 and the ESC N8A-E13 with the NVIDIA Blackwell platform, delivering scalable performance across applications.
ASUS also emphasizes energy-efficient cooling, addressing the substantial power consumption of data centers. Partnering with global cooling providers, ASUS offers solutions that span from individual racks to full-scale facilities, achieving up to 95% heat dissipation through liquid-to-air or liquid-to-liquid systems. Rigorous testing of thermal designs, power delivery, networking, and GPUs ensures optimal server performance and reliability in real-world scenarios.
Additionally, in partnership with Ubilink, ASUS has completed the construction of a supercomputing center in Taiwan with a capacity of 45.82 PFLOPS. The data center supports public cloud services and AI computing rentals, with renewable energy options available to customers. Completed within three months, the project has showcased its computational power through AI-powered avatar and robotics use cases.