E-commerce giant Amazon has launched two new chips for its cloud computing service: Graviton4, a general-purpose processor, and Trainium2, an artificial intelligence chip. Amazon Web Services (AWS) Chief Executive Adam Selipsky announced that Trainium2, the second generation of the chip family, is designed specifically for training AI systems.
According to the company, the Graviton4 processors are based on Arm architecture and consume less energy than chips from Intel or AMD.
Graviton4 provides up to 30 per cent better compute performance, 50 per cent more cores, and 75 per cent more memory bandwidth than current-generation Graviton3 processors.
Trainium2, meanwhile, is designed to deliver up to 4x faster training than first-generation Trainium chips and can be deployed in EC2 UltraClusters of up to 100,000 chips, making it possible to train foundation models (FMs) and large language models (LLMs) in a fraction of the time while improving energy efficiency by up to 2x.
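As a back-of-envelope illustration, the quoted generation-over-generation multipliers can be applied to a baseline like this. Only the multipliers come from AWS's announcement; the baseline figures below are hypothetical placeholders, not published benchmark numbers.

```python
# Sketch applying the improvements quoted in AWS's announcement.
# The multipliers are from the announcement; the baseline values are
# hypothetical, illustrative units only.

GRAVITON4_VS_G3 = {
    "compute_performance": 1.30,  # "up to 30 per cent better compute performance"
    "core_count": 1.50,           # "50 per cent more cores"
    "memory_bandwidth": 1.75,     # "75 per cent more memory bandwidth"
}

TRAINIUM2_VS_T1_TRAINING_SPEEDUP = 4.0  # "up to 4x faster training"
ULTRACLUSTER_MAX_CHIPS = 100_000        # "EC2 UltraClusters of up to 100,000 chips"

def scaled_specs(baseline: dict, multipliers: dict) -> dict:
    """Apply the quoted multipliers to a hypothetical baseline spec sheet."""
    return {key: baseline[key] * m for key, m in multipliers.items()}

# Hypothetical Graviton3 baseline.
g3 = {"compute_performance": 100.0, "core_count": 64, "memory_bandwidth": 100.0}
g4 = scaled_specs(g3, GRAVITON4_VS_G3)
print(g4)

# A job that took 40 hours on first-generation Trainium would, at the
# quoted "up to 4x" speedup, take 10 hours on Trainium2.
training_hours_t1 = 40.0
training_hours_t2 = training_hours_t1 / TRAINIUM2_VS_T1_TRAINING_SPEEDUP
print(training_hours_t2)  # 10.0
```

This only restates the announcement's relative claims; absolute performance would depend on the workload, which is why AWS hedges each figure with "up to".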
“Graviton4 marks the fourth generation we’ve delivered in just five years, and is the most powerful and energy-efficient chip we have ever built for a broad range of workloads. And with the surge of interest in generative AI, Trainium2 will help customers train their ML models faster, at a lower cost, and with better energy efficiency,” David Brown, vice president of Compute and Networking at AWS, said.
The AWS move comes weeks after Microsoft announced its own AI chip called Maia. The Trainium2 chip will also compete against AI chips from Alphabet’s Google, which has offered its Tensor Processing Unit (TPU) to its cloud computing customers since 2018.
More than 50,000 AWS customers are already using Graviton chips. Startup Databricks and Amazon-backed Anthropic, an OpenAI competitor, plan to build models with the new Trainium2 chips, Amazon said.
Additionally, Amazon and NVIDIA announced an expansion of their strategic collaboration to deliver advanced infrastructure, software, and services to power customers’ generative AI innovations.