AMD provided a deep-dive look at its latest AI accelerator arsenal for data centers and supercomputers, as well as consumer client devices, but software support, optimization and developer adoption will be key.
Advanced Micro Devices held its Advancing AI event in San Jose this week, and in addition to launching new AI accelerators for the data center, supercomputing, and client laptops, the company laid out its software and ecosystem enablement strategy with an emphasis on open-source accessibility. Market demand for AI compute resources currently outstrips supply from incumbents like Nvidia, so AMD is racing to provide compelling alternatives. Underscoring this emphatically, AMD CEO Dr. Lisa Su noted that the company is raising its TAM forecast for AI accelerators from the $150 billion it projected a year ago at this time to $400 billion by 2027, a compound annual growth rate of more than 70%. Artificial intelligence is obviously a massive opportunity for the major chip players, but the true potential market demand is anybody's guess. AI will be so transformational that it will impact virtually every industry in some way. Regardless, the market will likely welcome these new AI silicon engines and tools from AMD.
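For readers who want to check the growth math, a quick back-of-the-envelope calculation is below. Note that the ~$45 billion 2023 baseline is an assumption drawn from AMD's keynote commentary, not a figure stated in this article:

```python
# Sanity-check the implied growth rate behind AMD's $400B-by-2027 TAM forecast.
# Assumption: a ~$45B AI accelerator market in 2023 (cited in AMD's keynote,
# not in the article above).
base_2023 = 45e9      # assumed 2023 AI accelerator TAM, in USD
target_2027 = 400e9   # AMD's projected 2027 TAM, in USD
years = 4             # 2023 -> 2027

# Compound annual growth rate: (end / start)^(1/years) - 1
implied_cagr = (target_2027 / base_2023) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")
```

Under that assumed baseline, the implied CAGR comes out to roughly 73%, which lines up with the 70%-plus figure AMD quoted.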
AMD’s data center group formally launched two major product families this week, the MI300X and the MI300A, aimed at the enterprise/cloud AI and supercomputing markets, respectively. The two products are purpose-built for their respective applications but share a similar chiplet-based design, combining advanced 3D packaging techniques with a mix of optimized 5nm and 6nm semiconductor fab processes. AMD’s high-performance computing accelerator is the Instinct MI300A, which combines the company’s CDNA 3 data center GPU architecture with Zen 4 CPU chiplets (24 EPYC Genoa cores), 128GB of shared, unified HBM3 memory accessible to both the GPU accelerators and the CPU cores, and 256MB of Infinity Cache. The chip packs a whopping 146 billion transistors and offers up to 5.3 TB/s of peak memory bandwidth, with its CPU, GPU, and I/O interconnect handled by AMD’s high-speed serial Infinity Fabric.
The MI300A can operate either as a PCIe-connected add-in device or as a root-complex host CPU. All in, the company is making bold claims for the MI300A in HPC, with up to a 4X performance lift versus Nvidia’s H100 accelerator in applications like OpenFOAM for computational fluid dynamics, and up to a 2X performance-per-watt uplift over Nvidia’s GH200 Grace Hopper Superchip. The MI300A will also power HPE’s El Capitan at the Lawrence Livermore National Laboratory, where it is expected to surpass Frontier (also powered by AMD) as the world’s fastest, most powerful supercomputer, and reportedly the first to reach two exaflops.