Meta Platforms Unveils Next-Generation AI Accelerator Chip: Artemis
Meta Platforms, formerly known as Facebook, has unveiled details of its next-generation in-house artificial intelligence accelerator chip, named “Artemis.” The new chip is part of Meta’s broader custom-silicon effort and is designed to meet the growing demand for computing power needed to run AI products across its platforms, including Facebook, Instagram, and WhatsApp.
According to Reuters, Meta’s Artemis chip aims to reduce the company’s reliance on Nvidia’s AI chips and lower its overall energy costs. The chip’s architecture is optimized for serving ranking and recommendation models, providing a balance of compute, memory bandwidth, and memory capacity.
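To see why that balance matters, consider that ranking and recommendation models spend much of their serving time on embedding-table lookups, which read far more bytes from memory than they compute on. The short sketch below is purely illustrative: it applies a simple roofline estimate with placeholder hardware numbers and arithmetic intensities (none of them published Artemis or MTIA specifications) to show how low arithmetic intensity makes memory bandwidth, rather than peak compute, the limiting factor for this class of workload.

```python
# Illustrative roofline sketch (placeholder numbers, not Artemis specs):
# recommendation serving mixes bandwidth-bound embedding lookups with
# compute-bound dense layers, which is why a balanced design matters.

PEAK_FLOPS = 350e12        # hypothetical peak compute, FLOP/s
PEAK_BANDWIDTH = 1.0e12    # hypothetical memory bandwidth, bytes/s

def roofline_flops(arithmetic_intensity: float) -> float:
    """Attainable FLOP/s for a kernel with the given FLOPs-per-byte ratio."""
    return min(PEAK_FLOPS, PEAK_BANDWIDTH * arithmetic_intensity)

# Embedding lookups do roughly one add per 4-byte value read, so their
# arithmetic intensity is tiny; dense MLP layers reuse weights across a
# batch, so theirs is far higher. Both figures here are illustrative.
kernels = [
    ("embedding lookup", 0.25),   # FLOPs per byte
    ("dense MLP", 500.0),         # FLOPs per byte
]

for name, intensity in kernels:
    attainable = roofline_flops(intensity)
    bound = "bandwidth-bound" if attainable < PEAK_FLOPS else "compute-bound"
    print(f"{name:>16}: {attainable / 1e12:.1f} TFLOP/s attainable ({bound})")
```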
Taiwan Semiconductor Manufacturing Co will produce the Artemis chip on its “5nm” process, and Meta says the new chip delivers three times the performance of its first-generation processor. The chip has already been deployed in Meta’s data centers, where it is currently serving AI applications.
Meta’s CEO, Mark Zuckerberg, has announced significant investments in AI chips from Nvidia and other suppliers. The company plans to acquire approximately 350,000 of Nvidia’s flagship H100 chips this year and, counting hardware from other suppliers, to amass compute capacity equivalent to around 600,000 H100s.
In addition to hardware development, Meta has been investing in software to make efficient use of its infrastructure. The company has several programs in progress to expand the scope of the Meta Training and Inference Accelerator (MTIA) chip, including support for generative AI workloads.
Overall, Meta’s unveiling of the Artemis chip marks a significant step in the company’s efforts to enhance its AI capabilities and reduce its reliance on external chip suppliers. With a focus on performance, energy efficiency, and scalability, Meta is positioning itself for continued innovation in the AI space.