Access to Future AI Models in OpenAI’s API May Require a Verified ID

▼ Summary
– OpenAI plans to implement an ID verification process called Verified Organization for accessing advanced AI models.
– Verification requires a government-issued ID from supported countries, with one ID verifying only one organization every 90 days.
– The initiative aims to ensure responsible use of AI while maintaining broad accessibility.
– The verification process seeks to enhance security and prevent misuse, including malicious activity by groups reportedly based in North Korea.
– It also aims to guard against intellectual property theft, as in a reported incident in which a group linked to a Chinese AI lab allegedly extracted data from OpenAI’s API.
OpenAI may soon implement an ID verification process for organizations seeking access to certain future AI models, as detailed on a recently published support page on the company’s website.
The initiative, named Verified Organization, is designed to enable developers to access the most advanced models and features on the OpenAI platform. Verification requires a government-issued ID from one of the countries that OpenAI’s API supports. Notably, a single ID can verify only one organization every 90 days, and not all organizations will qualify.
OpenAI emphasizes its commitment to ensuring that AI remains widely accessible while being used responsibly. The support page notes, “A small minority of developers intentionally misuse the OpenAI APIs, violating our usage policies. By introducing this verification process, we aim to reduce unsafe usage while still providing advanced models to the wider developer community.”
This verification process is likely intended to enhance security measures around OpenAI’s increasingly sophisticated and powerful products. The company has consistently reported on its efforts to identify and counteract the malicious use of its models, including activities by groups purportedly based in North Korea.
Another possible objective of this process could be to safeguard against intellectual property theft. Bloomberg reported earlier this year that OpenAI was investigating whether a group associated with DeepSeek, a Chinese AI lab, had extracted large amounts of data via its API in late 2024, potentially to train its own models, which would be a clear violation of OpenAI’s terms.
In response to such concerns, OpenAI restricted access to its services in China last summer.
(Source: TechCrunch)