Despite testing Google’s TPUs, OpenAI is not switching gears just yet; NVIDIA GPUs still power its AI workloads, and a custom chip is also in the pipeline.
OpenAI has confirmed it is not planning to adopt Google’s artificial intelligence (AI) chips on a wide scale, despite ongoing testing and recent media speculation suggesting otherwise.
According to a Mint report, the company, best known for developing ChatGPT, clarified that while limited trials with Google’s tensor processing units (TPUs) are underway, there is no intention to transition away from its current infrastructure.
This statement follows a series of reports claiming OpenAI was shifting some of its AI workloads to Google’s hardware in response to growing demand. However, the company continues to rely primarily on NVIDIA’s graphics processing units (GPUs), which are widely used across the industry for machine learning and large-scale model training.
OpenAI is also using advanced chips from AMD to support its expanding needs.
Meanwhile, the company is actively developing its own AI chip, with the design phase expected to be completed later this year. This in-house effort is part of a longer-term strategy to reduce dependency on third-party providers.
Although OpenAI has reportedly begun using some Google Cloud services, most of its cloud-based operations are still run through CoreWeave, a specialist infrastructure firm offering high-performance GPU computing.
While it is common for AI companies to test hardware from multiple vendors, shifting to a new chip architecture such as TPUs would require major changes to software systems and infrastructure, a costly and time-consuming undertaking.
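To make that cost concrete, the sketch below is a minimal, illustrative example (not OpenAI’s actual stack): TPUs are typically programmed through frameworks such as JAX, which compile model code via the XLA compiler for whichever accelerator is attached. Code written this way can run on either GPUs or TPUs, but a production system built around CUDA-specific kernels and NVIDIA-tuned tooling cannot simply be pointed at a TPU backend, and that gap is where the migration effort lies.

```python
# Illustrative sketch only: JAX code that XLA can compile for GPU or TPU alike.
# The migration cost lies in everything this sketch omits: CUDA-specific
# kernels, NVIDIA-tuned pipelines, and the surrounding infrastructure.
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for whichever backend is attached (CPU, GPU, or TPU)
def forward(weights, x):
    return jnp.tanh(x @ weights)

key = jax.random.PRNGKey(0)
weights = jax.random.normal(key, (512, 512))
x = jax.random.normal(key, (8, 512))

print(jax.default_backend())      # "gpu" on NVIDIA hosts, "tpu" on TPU VMs
print(forward(weights, x).shape)  # (8, 512) on either backend
```

Portable frameworks like this lower the switching cost in principle, but rewriting an existing CUDA-based stack to such an abstraction is itself the kind of costly, time-consuming undertaking the report describes.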
Google, which recently opened up its TPUs to external clients, has seen interest from other major players, including Apple and AI startups like Anthropic and Safe Superintelligence.
However, for now, OpenAI remains focused on its established compute ecosystem and internal chip development plans.