Support for AMD Instinct™ MI350 Series GPUs lets ASUS ESC A8A-E12U servers deliver next-gen AI and HPC performance

Our flagship high-density AI server, the ASUS ESC A8A-E12U, already gave enterprises, research institutions, and cloud providers the capability to accelerate their AI and HPC workloads. Now, we're able to offer even more: these 7U dual-socket GPU servers support the AMD Instinct™ MI350 Series GPUs for next-gen AI and HPC performance, all while preserving compatibility with existing infrastructure.

Enhanced performance and seamless upgrade path for next-generation AI and HPC deployments 

Built on the 4th Gen AMD CDNA™ architecture, AMD Instinct MI350 series GPUs deliver powerful new capabilities, including 288GB of HBM3E memory and up to 8TB/s of bandwidth. This enables faster, more energy-efficient execution of large AI models and complex simulations. With expanded support for low-precision compute formats such as FP4 and FP6, the Instinct MI350 series significantly accelerates generative AI, inference, and machine-learning workloads.

Importantly, Instinct MI350 series GPUs maintain drop-in compatibility with existing AMD Instinct MI300 series-based systems, such as those running Instinct MI325X, offering customers a cost-effective and seamless upgrade path. These innovations reduce server resource requirements and simplify scaling and workload management, making Instinct MI350 series GPUs an ideal choice for efficient, large-scale AI deployments. 

ASUS ESC A8A-E12U with AMD Instinct MI350X delivers scalable, high-performance AI and HPC 

The newly enhanced ASUS ESC A8A-E12U is a high-performance 7U server built to meet the growing demands of AI and HPC workloads. Equipped with dual AMD EPYC™ 9005 processors, the ESC A8A-E12U server leverages the MI350X architecture to deliver outstanding performance across a range of demanding scenarios, including training and inferencing large language models (LLMs), fine-tuning generative AI, and running scientific computing simulations.

With 288GB of high-bandwidth memory per GPU and up to 8TB/s of bandwidth, fewer GPUs are needed to support larger models, reducing the number of servers required and simplifying infrastructure planning. This allows organizations to scale more efficiently, minimize power consumption, and reduce total cost of ownership in AI-driven data centers. 

In addition, the MI350X platform integrates advanced security features such as Secure Boot, DICE attestation, SR-IOV for secure multi-tenant virtualization, and GPU-to-GPU communication encryption, making the ESC A8A-E12U an ideal solution for multi-tenant environments where data confidentiality and regulatory compliance are critical, such as in the government, finance, and healthcare sectors.  

“As AI workloads continue to evolve in complexity and scale, ASUS is committed to delivering infrastructure solutions that help our customers move faster,” said Paul Ju, General Manager of the ASUS Server Business Unit. “With support for AMD Instinct MI350X GPUs, our ESC A8A-E12U server provides a robust, future-ready platform for next-era AI and HPC deployments.” 

The ESC A8A-E12U continues ASUS leadership in enterprise and cloud infrastructure, providing scalable, reliable platforms that support the latest compute-acceleration technologies. ASUS remains committed to delivering innovations that unlock new possibilities in AI, HPC, and advanced data center workloads. 
