AI applications are booming. The sudden popularity of ChatGPT and other AI-powered tools has proven the transformative potential of artificial intelligence, and for enterprises across a wide range of industries, that potential is too significant to ignore. To accelerate the development of AI and data science, we’re introducing our most advanced eight-GPU AI server yet: the ASUS ESC N8-E11. With its premium combination of raw throughput, exceptional energy efficiency, and user-friendly design elements, this NVIDIA HGX H100 server paves the way for large-scale AI and HPC development.
Immense performance that’s ready to scale
High-performance computing (HPC) workloads like AI benefit from systems that offer immense scalability. Tasks like training an accurate conversational AI model can take months to complete, even on supercomputers. As an NVIDIA® HGX H100 AI server, the high-end ASUS ESC N8-E11 provides the multi-GPU support and blazing-fast interconnects needed to shorten training times for large-scale AI models and HPC workloads.
This 7U dual-socket server is powered by 4th Gen Intel® Xeon® Scalable processors and eight NVIDIA H100 Tensor Core GPUs. Its dedicated one-GPU-to-one-NIC topology supports up to eight NICs for the highest throughput in compute-intensive workloads. Fourth-generation NVLink and NVSwitch technology, NVIDIA ConnectX-7 SmartNICs enabling GPUDirect® RDMA and GPUDirect Storage with NVIDIA Magnum IO™, and NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, all work together to accelerate the development of AI and data science.
For even more efficient scaling, direct GPU-to-GPU interconnects via NVLink deliver up to 900GB/s of bandwidth, and a PCIe 5.0 switch board provides faster connections between storage, GPUs, and NICs. A total of 32 DIMM slots allows for a deep pool of DDR5 RAM, and 10 hot-swappable drive bays make it convenient to install large, fast storage arrays.
Ready to reduce data-center PUE
To reduce operating costs, the ASUS ESC N8-E11 pairs effective cooling with efficient components. 80 PLUS Titanium power supplies in a redundant 4+2 or 3+3 arrangement deliver copious power at a high level of efficiency. A two-level CPU and GPU sled design with independent airflow tunnels for each keeps the system thermally efficient without compromising performance or scalability. Liquid cooling is an option, too: we're prepping the ESC N8-E11 for direct-to-chip (D2C) liquid cooling, a highly effective strategy for reducing a data center's overall power-usage effectiveness (PUE).
The ESC N8-E11’s modular design benefits both assembly and cooling. It greatly reduces the number of cables required, cutting the time spent on cable routing during system assembly. Fewer cables also lower the risk of obstructed airflow, ensuring optimal thermal performance.
Your partner for AI infrastructure
The ESC N8-E11 joins our full lineup of NVIDIA-certified GPU servers. With Intel- and AMD-based options ranging from four to eight GPUs, we have solutions at the ready for everything from small enterprises to massive, unified AI training clusters. We also optimize select servers for NVIDIA OVX so that they excel in rendering and digital twin applications.
As one of the world’s leading AI and HPC solution providers, ASUS is well-positioned to deliver a comprehensive AI solution through our own all-dimensional resources. Between the ASUS server business unit, Taiwan Web Service (TWS), and ASUS Cloud — all part of the ASUS group — we can deliver in-house AI server design, data-center infrastructure, and AI software-development capabilities, in addition to a diverse ecosystem of industrial hardware and software partners. Click here to browse our full selection of servers and workstations.