Meta Begins Testing In-House Chips for AI Training

Meta, the parent company of Facebook, Instagram, and WhatsApp, is testing its first in-house chip for artificial intelligence (AI) training, as reported by Reuters. This marks a major development as the company pushes to develop its own custom silicon, aiming to reduce expenses and minimize reliance on third-party suppliers such as Nvidia.
According to sources, Meta has begun a limited deployment of the chip and plans to ramp up production for wider use if the pilot test succeeds. The firm views home-grown chip development as a long-term strategy to lower infrastructure costs as it invests heavily in AI for future growth.
Reining In Infrastructure Costs
Meta has estimated its overall 2025 expenses at between $114 billion and $119 billion, with up to $65 billion going towards capital spending, mostly for AI infrastructure. Building in-house chips could offset these costs by improving efficiency and reducing dependence on third-party hardware vendors.
One of the sources told Reuters that Meta's AI training chip is a dedicated accelerator, designed to handle only AI-specific tasks rather than general computing. This custom design can make it more power-efficient than the general-purpose GPUs typically used for AI workloads. The company has partnered with Taiwan-based semiconductor giant TSMC to manufacture the chip.
Progress in Silicon Development
Meta has completed its first tape-out of the chip, sending the finished design to a factory for manufacturing—a critical milestone in semiconductor development. A tape-out can cost tens of millions of dollars and take several months. If the chip does not work as hoped, Meta would need to diagnose the problem and repeat the step.
The company’s in-house chip development falls under its Meta Training and Inference Accelerator (MTIA) program, which, as Reuters notes, has experienced setbacks in the past. Meta previously scrapped a chip at a similar development stage but later introduced an MTIA inference chip to enhance its recommendation algorithms on Facebook and Instagram.
Future Plans for AI Training and Generative AI
Meta executives have said they aim to deploy proprietary chips for AI training by 2026. The new training chip will first enhance recommendation systems and later support generative AI applications, including the Meta AI chatbot.
Chris Cox, Meta’s Chief Product Officer, described the company’s approach to chip development as a gradual process, comparing it to "kind of a walk, crawl, run situation." He noted that the first-generation inference chip was deemed a major success, setting the stage for more advanced AI-focused hardware in the future.
Balancing Custom Chips and Nvidia GPUs
Meta remains one of Nvidia’s biggest customers, spending billions on GPUs even as it develops its own AI-focused chips. Reuters reports those chips are essential for powering AI-driven recommendations, ad-targeting systems, and the Llama foundation models.
However, recent developments in AI research have raised questions about whether continually scaling large language models with additional computing power remains effective. DeepSeek’s AI models suggest an alternative approach, emphasizing efficiency and optimization rather than ever-larger datasets and compute budgets.