ASUS has introduced its “All in AI” initiative, marking a significant entry into the server market with a comprehensive AI infrastructure solution tailored to AI-driven applications and AI supercomputing.
To deliver a wide range of AI infrastructure solutions, ASUS has partnered with NVIDIA, Intel, and AMD on everything from entry-level AI servers and machine-learning solutions to large-scale supercomputing deployments. Key offerings include the ESC AI POD, built on the NVIDIA GB200 NVL72 platform with NVIDIA Grace CPUs, for high-performance LLM training and inference.
The company also provides eight-GPU systems for generative AI, including NVIDIA Blackwell HGX, AMD MI300X, and Intel Gaudi 3 solutions, designed for scalability and capable of handling diverse workloads such as digital twins, HPC, AI infrastructure, and cloud services.
On top of that, ASUS offers customizable turnkey AI platforms and software solutions featuring cluster deployment, billing systems, and generative AI tools, which can be deployed in as few as eight weeks.
Furthermore, the brand has demonstrated its AI supercomputing expertise through successful projects such as Taiwania 2, Taiwania 4 (Forerunner 1), Ubilink.AI, and Yotta in India. Notably, the Forerunner 1 project achieved industry-leading energy efficiency with a PUE of 1.17, underscoring the brand's commitment to sustainability and innovative AI solutions.
For more details on ASUS AI server solutions, see this link.