Long-term deal highlights growing shift toward in-house AI processors as cloud giants seek chip independence.
The race to build specialised artificial intelligence chips is accelerating: Broadcom has signed a long-term agreement to develop custom AI processors for Google, signalling a deeper industry move toward tailored silicon.
Under the multi-year deal extending through 2031, Broadcom will help design and supply Google’s next generation of custom AI chips, widely known as Tensor Processing Units (TPUs). These chips are engineered specifically for AI training and inference workloads inside large data centres.
The partnership reflects a growing industry challenge: dependence on general-purpose GPUs is becoming expensive and strategically limiting for hyperscale cloud companies. Custom chips allow firms like Google to optimise performance, reduce power consumption, and lower long-term infrastructure costs.
Unlike traditional processors, AI-specific silicon focuses on accelerating neural network calculations, enabling faster model training and more efficient deployment of generative AI systems. As AI workloads expand, chip architecture is becoming a critical competitive differentiator.
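To give a sense of why this matters, the sketch below estimates the arithmetic cost of a single dense neural network layer. The figures are illustrative assumptions, not details from the deal, but they show why workloads dominated by matrix multiplication reward hardware built specifically for it:

```python
# Illustrative only: neural network layers are dominated by matrix
# multiplication, which is exactly what AI-specific silicon accelerates.
# A dense layer mapping n_in inputs to n_out outputs computes y = W @ x + b,
# costing roughly 2 * n_in * n_out floating-point operations (one multiply
# and one add per weight).

def dense_layer_flops(n_in: int, n_out: int) -> int:
    """Approximate FLOPs for one dense layer applied to a single input."""
    return 2 * n_in * n_out

# A hypothetical 4096-wide layer, a size typical of large models:
flops = dense_layer_flops(4096, 4096)
print(f"~{flops:,} FLOPs per input for one layer")
```

Multiply that by hundreds of layers and billions of inputs, and the appeal of silicon optimised for exactly this operation becomes clear.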
Broadcom’s role goes beyond chip design. The company will also provide networking components required to connect thousands of AI processors inside large computing clusters, an increasingly vital element as AI systems scale.
The deal underscores a broader transition in the semiconductor industry. Technology giants are no longer relying solely on external chip suppliers but are investing heavily in proprietary processors to gain control over performance, supply chains, and AI innovation cycles.
As demand for AI computing surges globally, custom silicon development is emerging as the next battleground, shifting competition from standard chips toward specialised AI hardware ecosystems.