AMD has unveiled Helios, an integrated rack-scale AI infrastructure platform designed for frontier model training and large-scale AI inference. CEO Lisa Su presented Helios at the Advancing AI conference, highlighting a unified architecture that combines CPUs, GPUs, Pensando NICs, and ROCm software for demanding AI workloads. With Helios, expected in 2026, AMD aims to set a new standard for AI infrastructure, citing advantages in memory capacity and bandwidth that it says will translate into faster model training and stronger inference performance.
"Helios is truly a game changer," said Su during her keynote at the event. "For the first time, we've architected every part of the rack as a unified system that's combining our CPUs, our GPUs, our Pensando NICs and our ROCm software all together in one platform, and it's really purpose built for the most demanding AI workloads."
"[Helios] will set the new standard for rack-scale AI (and) we expect Helios to deliver significant advantages over the best 2026 rack-scale solutions from our competition," said Andrew Dieckmann.
"These advantages translate directly into faster model training and better inference performance and hugely advantaged economics for our customers," Dieckmann continued.
Helios is designed for both training and inference on models with hundreds of billions to trillions of parameters. Each GPU delivers 40 petaflops of FP4 performance and 20 petaflops of FP8, with 432 gigabytes of HBM and almost 20 TB/s of HBM memory bandwidth.
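The per-GPU figures above can be put in context with some back-of-the-envelope arithmetic. The sketch below scales AMD's published per-GPU numbers to a hypothetical rack and estimates the memory footprint of a trillion-parameter model at low precision; the GPU count per rack is an assumed illustrative value, not a figure from the announcement.

```python
# Per-GPU figures from the announcement.
FP4_PFLOPS_PER_GPU = 40    # petaflops at FP4
FP8_PFLOPS_PER_GPU = 20    # petaflops at FP8 (half the FP4 rate)
HBM_GB_PER_GPU = 432       # gigabytes of HBM per GPU
HBM_TBPS_PER_GPU = 20      # ~20 TB/s HBM bandwidth per GPU

# Assumption for illustration only: GPUs in one Helios rack.
GPUS_PER_RACK = 72

def rack_totals(gpus: int) -> dict:
    """Aggregate the per-GPU figures across a hypothetical rack."""
    return {
        "fp4_exaflops": gpus * FP4_PFLOPS_PER_GPU / 1000,
        "fp8_exaflops": gpus * FP8_PFLOPS_PER_GPU / 1000,
        "hbm_tb": gpus * HBM_GB_PER_GPU / 1000,
        "hbm_pbps": gpus * HBM_TBPS_PER_GPU / 1000,
    }

def model_weights_gb(params: float, bytes_per_param: float) -> float:
    """Memory needed just for model weights, in gigabytes."""
    return params * bytes_per_param / 1e9

totals = rack_totals(GPUS_PER_RACK)
print(totals)

# A 1-trillion-parameter model at FP4 (0.5 bytes/param) needs ~500 GB
# for weights alone -- more than one GPU's 432 GB of HBM, which is why
# such models must be sharded across GPUs at rack scale.
print(model_weights_gb(1e12, 0.5))
```

Under these assumptions, a 72-GPU rack would aggregate 2.88 exaflops of FP4 compute and roughly 31 TB of HBM, and the weights-only estimate shows why trillion-parameter models are inherently multi-GPU workloads.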