OpenAI has clarified that it has no current plans to adopt Google’s in-house AI chips for its products, following reports suggesting the company might turn to Google’s tensor processing units (TPUs) to handle rising computing demand. While the AI lab confirmed limited testing with TPUs, a spokesperson said the chips are not being deployed at scale at this time. The statement came two days after Reuters and other outlets reported on a potential collaboration between the two companies.
Although testing different hardware is standard practice in the AI industry, deploying TPUs at scale would require significant changes to OpenAI’s system architecture and software stack. OpenAI currently relies heavily on Nvidia GPUs and AMD AI chips to support its operations. In parallel, the company is developing its own custom chip, which is expected to reach the “tape-out” phase, the design milestone at which a chip’s finalized layout is sent for manufacturing, by the end of the year.
Earlier this month, Reuters reported that OpenAI had signed on to use Google Cloud services to meet growing compute demands, a move that surprised many given their rivalry in the AI space. However, most of OpenAI’s compute needs are still being met by GPU-powered servers from CoreWeave, a fast-growing cloud infrastructure provider.
Google, for its part, has been expanding external access to its TPUs, an effort that has already attracted tech heavyweights such as Apple, as well as startups including Anthropic and Safe Superintelligence, both founded by former OpenAI leaders. The push signals Google’s broader ambition to compete more aggressively in the AI infrastructure market.
As competition intensifies among AI firms, OpenAI’s strategic choices about computing hardware and cloud providers remain closely watched. While testing Google’s chips signals flexibility, the firm’s continued reliance on Nvidia, AMD, and its own chip development project underscores its intent to maintain independence from major rivals.
Source: Reuters