OpenAI Launches Advanced Security Mode for At-Risk Accounts

▼ Summary
– OpenAI announced Advanced Account Security, an optional feature that enforces strict access controls to prevent account takeover attacks.
– The feature requires users to replace passwords with two physical security keys or passkeys, eliminating email and SMS recovery routes.
– Users enabling the feature cannot seek account recovery help from OpenAI’s support team, preventing social engineering attacks on support portals.
– Advanced Account Security enforces shorter sign-in sessions, produces login alerts, and automatically opts users out of having ChatGPT conversations used for model training.
– Members of OpenAI’s Trusted Access for Cyber program must enable the feature by June 1 or attest that they use phishing-resistant authentication through an enterprise single sign-on system.
For anyone concerned that their ChatGPT or Codex accounts could fall into the wrong hands, OpenAI has introduced a new optional security layer called Advanced Account Security. Announced Thursday, this feature enforces strict access controls designed to make account takeovers significantly more difficult.
While the concept isn’t new (Google’s Advanced Protection Program has offered similar safeguards for nearly a decade), the launch comes as AI tools become deeply embedded in daily life. OpenAI frames this as part of a broader cybersecurity strategy unveiled earlier this month. The rationale is clear: as these platforms handle increasingly sensitive data, stronger defenses are no longer optional.
“People are turning to AI for deeply personal questions and increasingly high-stakes work,” OpenAI wrote in a blog post. “Over time, a ChatGPT account can hold sensitive personal and professional context, and sit at the center of connected tools and workflows. For some people, like journalists, elected officials, political dissidents, researchers, and those who are especially security-conscious, the stakes are even higher.”
Under Advanced Account Security, users can no longer rely on traditional passwords. Instead, they must register two physical security keys or passkeys, which drastically reduces phishing risk. The feature also eliminates email- and SMS-based account recovery routes; recovery now requires recovery keys, backup passkeys, or physical security keys. To ease adoption, OpenAI has partnered with Yubico to offer discounted YubiKey bundles to eligible users.
A critical component of the system is that once it is enabled, users cannot turn to OpenAI’s support team for account recovery. Support staff no longer have access to or control over recovery options, which prevents attackers from using social engineering tactics against support portals to compromise accounts.
The feature also enforces shorter sign-in sessions, requiring users to re-authenticate more frequently on each device. An alert is generated every time someone logs into the protected account, directing the user to a dashboard that tracks active ChatGPT and Codex sessions. Additionally, while all users can opt out of having their conversations used for model training, this exclusion is turned on by default for Advanced Account Security users.
Starting June 1, members of OpenAI’s Trusted Access for Cyber program (which provides cybersecurity professionals and researchers with early access to new models) will be required to enable Advanced Account Security or submit an alternative attestation that they use phishing-resistant authentication through an enterprise single sign-on system.
(Source: Wired)
