AMD's Ryzen AI 300 Crushes Intel in AI Benchmarking Test

AMD Ryzen 7 3700X
Credit: Olivier Collet on Unsplash | Free use under the Unsplash License

AMD's latest Ryzen AI 300 Series processors have significantly outperformed Intel's Core Ultra line in AI benchmarking tests, particularly when handling large language models (LLMs). A recent whitepaper released by AMD highlights that the Ryzen AI 9 HX 375 processor outperformed Intel's Core Ultra 7 258V by up to 27% in tokens per second, a crucial metric for LLM speed.

Not only does AMD's processor outperform Intel's in throughput, it also posts a staggering 3.5x faster "time to first token." This means it can begin generating model responses more quickly, reducing latency and offering a significant advantage for real-time AI applications.
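
For context on these two metrics, the sketch below shows one common way to compute tokens per second and time to first token from a streaming generation loop. The `measure_llm_latency` helper and the dummy token stream are illustrative assumptions, not part of AMD's benchmark harness.

```python
import time

def measure_llm_latency(stream):
    """Compute time-to-first-token and tokens/sec from a token stream.

    `stream` is any iterable that yields generated tokens one at a time,
    e.g. the streaming output of a local LLM runtime (hypothetical here).
    """
    start = time.perf_counter()
    time_to_first_token = None
    token_count = 0

    for _ in stream:
        if time_to_first_token is None:
            # Latency from request start until the first token arrives.
            time_to_first_token = time.perf_counter() - start
        token_count += 1

    elapsed = time.perf_counter() - start
    return {
        "time_to_first_token_s": time_to_first_token,
        "tokens_per_second": token_count / elapsed if elapsed > 0 else 0.0,
    }

if __name__ == "__main__":
    # Dummy stream standing in for real model output.
    print(measure_llm_latency(iter(["Ryzen", " AI", " 300", " Series"])))
```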

llama.cpp
Credit: AMD | Free use for promotional purposes

This performance boost comes from AMD's XDNA architecture-based neural processing unit (NPU) and integrated GPU (iGPU), both of which are leveraged for high-efficiency AI workloads. To further improve LLM throughput and overall speed, AMD also includes Variable Graphics Memory (VGM), a feature that lets the iGPU access more system RAM. The combination has proven beneficial in applications like LM Studio, which is built on the llama.cpp framework and runs AI models across devices without depending on a dedicated GPU, supporting efficient model use even on consumer-grade laptops.
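
To make that llama.cpp-based, GPU-optional workflow concrete, here is a minimal sketch using the llama-cpp-python bindings to the same llama.cpp runtime that LM Studio builds on. The model file path is a placeholder, and this is not AMD's or LM Studio's exact configuration.

```python
# Minimal sketch: running a quantized GGUF model through llama-cpp-python,
# the Python bindings for the llama.cpp runtime that LM Studio also uses.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the iGPU if a GPU backend is available;
                      # set to 0 to run entirely on the CPU
    n_ctx=4096,       # context window size
)

# Stream the completion so time to first token is visible to the caller.
for chunk in llm("Explain what an NPU does in one sentence.",
                 max_tokens=64, stream=True):
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```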

Consumer LLM
Credit: AMD | Free use for promotional purposes

Despite underwhelming Q3 results, AMD's AI advancements in the Ryzen AI 300 Series show promise that could reshape the company's position in the AI and semiconductor sectors. AMD's goal is to make advanced AI more accessible by providing comprehensive, locally deployable AI capabilities, serving everyday tasks and apps as well as developers looking for faster, real-time AI performance.