Elon Musk Acknowledges xAI Trained on OpenAI Models

Summary
– Elon Musk testified in federal court that xAI may have used OpenAI’s models for distillation.
– Distillation involves training a smaller AI model to mimic a larger one, making it cheaper and faster while preserving performance.
– OpenAI has taken steps to prevent competitors like DeepSeek from distilling its models, citing concerns about China appropriating US innovation.
– The Trump administration has also moved to block Chinese companies from distilling American AI models, and has pledged to share intelligence about foreign distillation activity with U.S. firms.
– In the increasingly hostile AI landscape, labs such as Anthropic have blocked rivals like OpenAI and xAI from accessing its models for coding.
During a federal court appearance on Thursday, Elon Musk appeared to acknowledge that his artificial intelligence venture, xAI, may have relied on OpenAI’s models to train its own systems. The admission came during cross-examination in the ongoing legal dispute between Musk and the creator of ChatGPT, offering a rare glimpse into the competitive practices among top AI labs.
The exchange, as closely transcribed by WIRED, unfolded as follows:
OpenAI attorney William Savitt asked Musk, “Do you know what distillation is?”
Musk replied, “It means to use one AI model to train another AI model.”
Savitt then pressed, “Has xAI done that with OpenAI?”
Musk responded, “Generally all the AI companies [do that].”
Savitt followed up, “So that’s a yes.”
Musk clarified, “Partly.”
Distillation is a process in which a smaller, more efficient AI model learns from a larger, more capable one, allowing the smaller model to replicate much of the larger model’s performance at a lower cost and with faster operation. This technique is widely used across the industry to speed up development and reduce computational expenses.
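The core mechanics of distillation can be sketched in a few lines: the teacher's output logits are softened with a temperature parameter, and the student is trained to match the resulting distribution, typically by minimizing a KL-divergence loss. The sketch below is purely illustrative; the logits, class count, and temperature are invented for the example and do not reflect any lab's actual training setup.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative confidence across wrong answers ("dark knowledge").
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q is from the teacher's p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for a 3-class toy problem (illustrative values only).
teacher_logits = [6.0, 2.0, 1.0]
student_logits = [4.0, 2.5, 1.5]

T = 4.0  # distillation temperature, an assumed hyperparameter
teacher_soft = softmax(teacher_logits, T)
student_soft = softmax(student_logits, T)

# The distillation loss the student would minimize during training.
loss = kl_divergence(teacher_soft, student_soft)
print("teacher soft targets:", [round(p, 3) for p in teacher_soft])
print("distillation loss:", round(loss, 4))
```

In a real training loop this loss would be backpropagated through the student model over many batches of teacher outputs; the point of the sketch is only that the student learns from the teacher's full probability distribution rather than from hard labels.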
Savitt then asked directly whether OpenAI’s technology had been used to build xAI.
“Has OpenAI technology been used in any way to develop xAI?” Savitt inquired.
Musk answered, “It is standard practice to use other AIs to validate your AI.”
Neither OpenAI nor xAI responded to WIRED’s requests for comment.
OpenAI has actively worked to prevent rivals from distilling its models, especially targeting the Chinese AI lab DeepSeek. In a February 2026 memo to a House committee, OpenAI stated it has “taken steps to protect and harden our models against distillation.” The company emphasized its goal of maintaining a competitive environment where “China can’t advance autocratic AI by appropriating and repackaging American innovation.”
The Trump administration has also taken action to curb foreign distillation of American AI technology. Michael Kratsios, director of the White House Office of Science and Technology Policy, issued an April 2026 memo pledging to share intelligence with U.S. AI firms about foreign distillation activities. Kratsios later posted on X that the “US government is committed to the free and fair development of AI technologies across a competitive ecosystem.”
While U.S. AI labs have historically used each other’s models for benchmarking and safety testing, the competitive climate has grown increasingly hostile. In August 2025, Anthropic cut off OpenAI’s access to its Claude coding models after accusing OpenAI of violating its terms of service. More recently, Anthropic also blocked xAI from using its models for coding tasks.
Throughout the cross-examination, Savitt has probed Musk’s past efforts to take control of OpenAI and his subsequent push to surpass the company. On Wednesday, Savitt presented emails and text messages from 2017 suggesting that Musk may have squeezed OpenAI by withholding funding and poaching key researchers.
(Source: Wired)
