
ASUS offers a total AI server solution that no other manufacturer can match

If your organization is planning to build out the server infrastructure necessary to take advantage of the groundbreaking power of generative AI, you’ll find plenty of vendors willing to sell you the necessary hardware. Yet hardware is just one piece of the puzzle. Operating at a low power usage effectiveness (PUE), even in challenging environments, is another, as is fine-tuning a large language model for your needs. And all of that comes before you actually deploy your AI infrastructure.

This is where ASUS offers a key advantage over any other server vendor in the market. Not only do we offer a top-to-bottom hardware solution for AI, ranging all the way from server systems to end-user devices, but we also have first-hand experience with every stage of designing and operating a data center for AI applications. What’s more, we’re also innovating in large language model (LLM) development, especially for enterprises and governments developing LLMs for languages other than English.

When it comes to AI supercomputing, ASUS is your domain expert.

Proven expertise in professional server tuning

ASUS has over 25 years of expertise in the server industry. In the years since we joined SPEC.org, a prominent performance standardization body, our servers have set 1,959+ world records.

The Taiwania 2 supercomputer
Photo courtesy of National Center for High-performance Computing

One of our greatest success stories in this field is the Taiwania 2 supercomputer. In 2018, along with the National Center for High-Performance Computing (NCHC) of Taiwan and other industry partners, we built this state-of-the-art system with public cloud services that scale resources on demand to run AI workloads efficiently. Taiwania 2 provides effective and intuitive AI and big-data cloud services and tools that let AI developers and data scientists quickly and easily configure, build, and manage development and production environments.

Too often, performance and energy efficiency are treated as opposing goals. ASUS excels at delivering both. Taiwania 2 debuted at #10 on the Green500 list for its energy efficiency and #20 on the TOP500 supercomputer list for its powerful performance.

One of our latest projects just claimed the 31st spot on the TOP500 and, thanks to its highly efficient design, the 44th position on the Green500. We worked with Ubilink to construct Taiwan’s largest supercomputing center. Built in just three months, this ambitious project is equipped with 128 NVIDIA HGX H100 servers and 1,024 GPUs, achieving an impressive 45.82 PFLOPS.

A supercomputer in a data center

With this solution, we achieved an efficiency (Rmax / Rpeak) of 66.08%. These results were reported to Top500.org. Under the same GPU and high-speed network card conditions, the ASUS solution is more efficient, delivering 1.23X the overall performance of competitors.*
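As a quick illustration of how that efficiency figure is derived (the Rpeak value below is back-calculated from the published numbers, not an official specification):

```python
# TOP500 computing efficiency:
# efficiency = Rmax (achieved PFLOPS) / Rpeak (theoretical peak PFLOPS)

rmax_pflops = 45.82    # published Linpack result for the Ubilink system
rpeak_pflops = 69.34   # assumed theoretical peak, back-calculated from 66.08%

efficiency = rmax_pflops / rpeak_pflops
print(f"Efficiency: {efficiency:.2%}")  # → Efficiency: 66.08%
```

The closer this ratio gets to 100%, the less of the installed GPU capacity is wasted on interconnect and software overhead.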

1.17 PUE: ASUS leads the way in data center energy efficiency

Today, work is underway for the Forerunner 1 supercomputer, and once again, ASUS technologies are key to its development. The data center that we’ve built for Forerunner 1 is designed from the ground up to address a key engineering challenge for any data center in Taiwan: the local environmental conditions. High temperatures and high levels of humidity are quite common on the island of Taiwan. Installations in regions with a comparable environment often target a PUE of 1.5 to accommodate increasing cooling costs. For Forerunner 1, we were able to shatter this expectation with a stunningly low PUE of 1.17.
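PUE is simply total facility power divided by the power delivered to IT equipment, so a lower PUE means less overhead spent on cooling and power distribution. A rough sketch of what that difference means in practice (the 1,000 kW IT load is an illustrative assumption):

```python
# PUE = total facility power / IT equipment power
it_load_kw = 1000.0  # hypothetical IT equipment load

typical_total = it_load_kw * 1.5       # facility power at the regional target PUE of 1.5
forerunner_total = it_load_kw * 1.17   # facility power at Forerunner 1's PUE of 1.17

savings_kw = typical_total - forerunner_total
print(f"Overhead saved: {savings_kw:.0f} kW per {it_load_kw:.0f} kW of IT load")
# → Overhead saved: 330 kW per 1000 kW of IT load
```

At data-center scale, that overhead reduction compounds into substantial operating-cost savings year after year.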

The Forerunner 1 supercomputer
Photo courtesy of National Center for High-performance Computing

Our proven expertise in constructing highly efficient supercomputers and server systems, troubleshooting the software stack, and building middleware makes ASUS an invaluable partner for organizations that need a total AI solution. Already, we’ve served more than 100 customers in Taiwan, including government research centers and enterprises across a variety of industries.

Innovative LLM approaches

Large language models have taken the world by storm, but the first generation of LLMs was trained primarily, if not entirely, on English-language text. To empower businesses across the globe and democratize the potential of AI, ASUS subsidiary TWSC launched the Formosa Foundation Model, or FFM-Llama2.

Built on Llama 2, an openly available large language model created by Meta, FFM-Llama2 harnesses AIHPC supercomputing, parallel computing, and local language data to elevate its proficiency in Traditional Chinese. We designed this LLM to support fine-tuning and customization to broaden its impact. Already, we’ve created a version that supports Hakka, a language group local to Taiwan.

What does this mean for your organization? If you’re looking to create your own LLM in a different language, you won’t need to start from scratch with a new hardware installation and the long processing time required for training. Instead, we can help you fine-tune our existing models. We already have this experience, and we’re ready to put it to work for you.
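One reason fine-tuning an existing model is so much cheaper than pretraining is that parameter-efficient methods such as LoRA train only a small low-rank update on top of frozen weights. The sketch below (plain NumPy, purely illustrative, and not a description of TWSC's actual pipeline) shows the parameter savings for a single weight matrix:

```python
import numpy as np

# LoRA-style idea: instead of updating a full d_out x d_in weight matrix W,
# train two small matrices A (d_out x r) and B (r x d_in) and use W + A @ B.
d_out, d_in, rank = 1024, 1024, 8   # scaled-down dimensions for illustration

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = np.zeros((d_out, rank))                   # trainable adapter, starts at zero
B = rng.standard_normal((rank, d_in)) * 0.01  # trainable adapter

W_adapted = W + A @ B   # effective weight after adaptation

full_params = d_out * d_in
lora_params = d_out * rank + rank * d_in
print(f"Trainable parameters: {lora_params:,} of {full_params:,} "
      f"({lora_params / full_params:.2%} of full fine-tuning)")
# → Trainable parameters: 16,384 of 1,048,576 (1.56% of full fine-tuning)
```

Training a few percent of the parameters, rather than all of them, is what makes adapting a foundation model to a new language feasible without a massive new hardware installation.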

Unbeatable TCO for total AI infrastructure solutions

Considering the total cost of ownership (TCO) of operating a data center, not just the initial hardware costs, demonstrates the value of ASUS solutions. ASUS stands out from every other manufacturer on the market with the total AI infrastructure solution that we’re able to offer.

Not only do we manufacture servers, but we also have proven experience in designing supercomputers and data centers for optimal efficiency and robust integration with your existing IT infrastructure. Our fine-tunable LLM is ready to be adapted to your needs, and our teams stand ready to assist with consultation, customized support, server installation and validation, and more.

All that adds up to a lower TCO for you. Other vendors can sell you hardware. Only ASUS has proven experience to guide you through the complete deployment of an AI infrastructure solution. To learn more about how we can help you stay ahead in the AI race, reach out to our server team for more information.


*Based on data from Top500.org. All data used in this analysis is sourced from the published TOP500 list of June 2024. We assess the performance of each data center by comparing computing efficiency, represented as the ratio of Rmax to Rpeak.
