OpenAI is reportedly working with Broadcom to develop custom silicon designed to handle its large AI inference workloads, and it has secured manufacturing capacity with TSMC, according to sources speaking to Reuters. OpenAI has reportedly built a chip development team of about 20 people, including lead engineers who previously worked on Google's Tensor AI processors.
Still, on its current timeline, the custom-designed hardware may not start production until 2026.
In the meantime, the sources also said OpenAI is incorporating AMD chips into its Microsoft Azure setup. AMD introduced its MI300 chips last year, and they were a big factor in the company's announcement this summer that its data center business had doubled in a single year as it chases market leader Nvidia.
The Information reported in July that OpenAI was in discussions with Broadcom and other semiconductor designers about developing its own AI chip. Earlier this year, Bloomberg reported that OpenAI was working to build its own network of foundries, but according to Reuters, those plans have been put on ice due to the cost and time involved.
The reported strategy puts OpenAI on a track similar to other tech companies trying to manage costs and access to AI server hardware through custom chip designs. But Google, Microsoft, and Amazon are all already a few generations into their efforts, and OpenAI may need significantly more funding to become a true competitor.