Meta adopts AWS Graviton chips to build out large-scale AI infrastructure
Summary
To handle the massive computing demand generated by its global user base, Meta has secured hundreds of thousands of Graviton chips, the general-purpose processors from Amazon Web Services (AWS). The focus is on efficiently handling CPU-intensive workloads such as refinement (post-training) and inference after AI models have been trained. Meta has already signed large-scale contracts for Nvidia GPUs, but the Graviton adoption is read as a strategic move to shore up the foundation of its AI infrastructure in terms of cost efficiency and energy savings.
Key Points
- Meta is expanding its AI infrastructure by securing hundreds of thousands of general-purpose Graviton chips through a long-term deal with AWS.
- Unlike a GPU, Graviton plays a role closer to a CPU, and it is particularly strong at CPU-intensive workloads such as refinement (post-training) and inference for large models.
- AWS emphasizes that Graviton delivers the best performance for a given price while cutting power consumption by 60%.
- Meta's move shows that AI-era computing architecture is expanding from GPU-centric systems to CPU-based platforms.
Meta adopts hundreds of thousands of AWS Graviton chips for AI infrastructure
Around 3.6 billion people use Meta's applications every day, and with the completion of a new data center in Oklahoma, the social networking company will be operating 32 of them to handle the load. But that's not enough.
Amazon's cloud unit said Friday that Meta has agreed to use Amazon's general-purpose Graviton chips in a deal that will run for at least three years.
The arrangement demonstrates Meta's willingness under CEO Mark Zuckerberg to splurge so it can meet high computing demand, alongside technology peers such as Alphabet and Microsoft. In recent weeks Meta has signed deals worth a combined $48 billion with CoreWeave and Nebius, both of which rent out access to Nvidia graphics processing units, or GPUs, that run AI models.
Amazon didn't disclose the value of its Meta deal.
Meta is counterbalancing infrastructure expansions with head count reductions. On Thursday the company announced plans to lay off around 8,000 employees, or 10% of its workforce.
Unlike Nvidia GPUs, Arm-based Graviton processors from Amazon Web Services, the leading cloud provider, can take care of a wide assortment of computing tasks, similar to central processing units, or CPUs, from Intel or AMD. But Graviton can still come in handy for AI workloads, specifically for refinements, or post-training, after models have been trained on large amounts of data using large-scale computing clusters.
"Graviton is one of the most used platforms for pre training by a lot of foundation model companies, and Meta is now one the newest one," said Nafea Bshara, an AWS vice president and distinguished engineer.
Bshara co-founded chip company Annapurna Labs, which Amazon acquired in 2015. Since then, Amazon has developed special-purpose chips for training and running AI models, among other components. Graviton has become a breakout hit, gaining adoption from Adobe, Apple and Snowflake. Earlier this week, Amazon-backed AI model builder Anthropic announced plans to use Graviton processors as well.
AWS says Graviton delivers the best performance for a given price of all computing options available through the EC2 computing service, while using 60% less energy.
Meta has used Graviton chips on a small scale, and now it will tap hundreds of thousands of the chips, making it one of the top five Graviton customers, Bshara said. The company has rented Nvidia GPUs from AWS since 2017, he said.
On Thursday Intel CEO Lip-Bu Tan told analysts that demand exceeds supply for its Xeon server chips.
"For the last few years, the story around high performance computing was almost exclusively about GPU and other accelerators," Tan said. "In recent months, we have seen clear signs that the CPU is reinserting itself as the indispensable foundation of the AI era."
But Meta did not choose Graviton because other kinds of CPUs were unavailable, Bshara said.
"Expanding to Graviton allows us to run the CPU-intensive workloads behind agentic AI with the performance and efficiency we need at our scale," Santosh Janardhan, Meta's head of infrastructure, was quoted as saying in a statement.
AI-generated content
This content was automatically summarized, translated, and analyzed by AI from the original CNBC Technology article. Copyright remains with the original author; please consult the original article for the definitive account.