Why Anthropic wants to break free from Nvidia

Anthropic, a rival to ChatGPT maker OpenAI, is exploring a 2027 deal with UK chip startup Fractile to break Nvidia's AI hardware stranglehold. The move targets specialized inference chips that could slash the cost of running AI models like Claude.

Anthropic is in early discussions to purchase AI inference chips from UK-based startup Fractile when those chips become available in 2027. As reported by The Information in January 2026, the potential deal represents Anthropic's push to diversify its chip supply beyond Nvidia.

Key Takeaways

  • Anthropic is in early talks to purchase AI inference chips from UK startup Fractile, expected in 2027.
  • Fractile develops specialized inference silicon using in-memory compute technology.
  • The move represents Anthropic's strategy to diversify chip suppliers beyond Nvidia.
  • The deal signals growing industry interest in inference-optimized hardware over general-purpose GPUs.

What is Fractile building?

Fractile is developing specialized AI inference chips that utilize in-memory compute technology designed specifically for AI inference workloads rather than training. The UK startup's approach differs from traditional GPU architectures by performing computations directly within memory, potentially reducing data movement and improving efficiency for running AI models.
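The data-movement argument can be made concrete with a rough back-of-envelope sketch. The figures below are illustrative assumptions, not Fractile or Nvidia specifications: they show why single-stream inference tends to be limited by memory bandwidth rather than raw compute, which is the imbalance in-memory compute is meant to attack.

```python
# Back-of-envelope sketch of why inference is memory-bound.
# All numbers are illustrative assumptions, not vendor specs.

def arithmetic_intensity(params: int, batch: int = 1) -> float:
    """FLOPs per byte moved for one decode step of a dense model.

    Assumes ~2 FLOPs per parameter per token, fp16 weights
    (2 bytes each), and every weight read from memory once per step.
    """
    flops = 2 * params * batch
    bytes_moved = 2 * params  # weight traffic dominates at small batch
    return flops / bytes_moved

# A hypothetical 70B-parameter model decoding one token at a time
# delivers only ~1 FLOP per byte moved:
intensity = arithmetic_intensity(70_000_000_000, batch=1)
print(f"arithmetic intensity: {intensity:.0f} FLOPs/byte")

# An accelerator with (say) ~1000 TFLOPS of fp16 compute and ~3 TB/s
# of memory bandwidth needs ~333 FLOPs per byte to keep its compute
# units busy -- far above what single-stream decoding supplies.
balance_point = 1000e12 / 3e12
print(f"balance point: ~{balance_point:.0f} FLOPs/byte")
```

Under these assumed numbers the chip spends most of its time waiting on memory, not computing. Performing the multiply-accumulate inside the memory arrays, as in-memory compute does, removes much of that weight traffic entirely.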

This technology targets a specific gap in the AI hardware market. While companies like Nvidia have dominated with general-purpose GPUs suitable for both training and inference, Fractile focuses exclusively on the inference side — the process of running trained AI models to generate responses.

The 2027 timeline suggests Fractile is still in development phases, likely working through chip design, fabrication partnerships, and testing cycles required for commercial AI silicon.

Why Anthropic is looking beyond Nvidia

According to industry analysts, Anthropic's interest in alternative chip suppliers reflects a broader trend among AI companies seeking to reduce dependence on Nvidia's ecosystem. The Claude developer joins companies exploring specialized inference hardware that could offer better performance-per-watt or cost advantages for specific workloads.

Nvidia's H100 and upcoming Blackwell chips excel at training large language models, but they're also expensive and designed for multiple use cases. Anthropic's recent focus on AI agents prioritizes running large models in production rather than training them, a workload well suited to chips designed specifically for inference.

As Claude usage grows, inference costs become a larger share of operational expense, raising the appeal of dedicated inference silicon.

What this means for AI hardware

The Fractile discussions signal a maturing AI hardware market where inference and training workloads are diverging toward specialized silicon. While Nvidia built its dominance on versatile GPUs that handle both functions, startups like Fractile are betting that dedicated inference chips will capture market share through efficiency gains.

This shift mirrors historical patterns in computing, where general-purpose processors eventually gave way to specialized chips for graphics, networking, and storage.

Timeline and availability

Fractile's AI inference chips are expected to become commercially available in 2027, assuming successful development and fabrication partnerships. The extended timeline reflects the complexity of developing competitive AI silicon and securing manufacturing capacity.

No pricing details have been disclosed for either Fractile's chips or the potential Anthropic deal. Early-stage talks typically focus on technical specifications and supply commitments rather than final commercial terms.

The 2027 target puts Fractile's chips in competition with next-generation offerings from established players like Nvidia, AMD, and Intel, all of whom are developing inference-focused silicon for the same timeframe.

Frequently Asked Questions

What is Fractile building?

Fractile is developing AI inference chips that utilize in-memory compute technology, designed specifically for efficient AI inference workloads rather than training.

Why is Anthropic looking for new chip suppliers?

Anthropic is exploring chip diversification to reduce reliance on Nvidia and potentially access more specialized, cost-effective hardware optimized for AI inference rather than general-purpose training.

When will Fractile's chips be available?

Fractile's AI inference chips are expected to become commercially available in 2027, pending successful development and manufacturing partnerships.
