
OpenAI and Yubico partner to bring hardware security keys to ChatGPT

Summary

– OpenAI launched Advanced Account Security (AAS), opt-in protections for ChatGPT users, and partnered with Yubico to release two co-branded YubiKeys to prevent phishing.
– AAS is aimed at high-risk individuals like political dissidents, journalists, researchers, and elected officials, but is available to any user.
– Security keys are hardware devices with a unique cryptographic identifier that restrict account access to the physical key holder.
– Cybercriminals increasingly target chatbot users for extortion, as intimate conversations provide valuable data for both personal and enterprise accounts.
– A tradeoff exists: if the security key is lost, OpenAI cannot help recover account access, potentially resulting in permanent loss of conversations.

OpenAI is stepping up its account security game with a new initiative aimed at protecting high-risk users from targeted cyberattacks. The company officially launched Advanced Account Security (AAS) on Thursday, a set of optional protections designed for ChatGPT users who handle sensitive information, though the feature is open to anyone willing to opt in.

As part of this rollout, digital security firm Yubico has announced a partnership with OpenAI to integrate two new hardware security keys directly with ChatGPT accounts. The collaboration focuses on combating phishing threats, which experts say are increasingly targeting AI chatbot users. Yubico is releasing a pair of co-branded YubiKeys, the YubiKey C NFC and the YubiKey C Nano, specifically tailored for this purpose.

OpenAI has positioned AAS as an ideal solution for political dissidents, journalists, researchers, and elected officials: individuals whose work carries significant political or personal risk. Enterprise users, who often store corporate secrets in ChatGPT sessions, are also likely to benefit from the added layer of protection.

“Ultimately, our intent is to drastically reduce the threat of unauthorized access to sensitive data in OpenAI accounts worldwide,” said Yubico CEO Jerrod Chong in a press release announcing the deal.

Security keys are compact hardware devices that plug into a computer's USB port (the YubiKey C NFC can also be tapped against an NFC reader). Each key holds a unique cryptographic credential, ensuring that only the person physically holding the key can log into the linked account. Because the credential never leaves the device and the key only signs login challenges for the legitimate site, stolen passwords and phishing pages are useless without the physical key.
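The login flow behind this can be sketched as a simple challenge-response exchange. The following is a minimal illustration, not Yubico's or OpenAI's actual implementation: real security keys speak the FIDO2/WebAuthn protocol, but the core idea is the same. Here an Ed25519 keypair (via Python's `cryptography` package) stands in for the key's on-device credential, and all names are illustrative.

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the key generates a per-account credential. The private half
# never leaves the device; only the public half is stored by the service.
device_private_key = Ed25519PrivateKey.generate()
registered_public_key = device_private_key.public_key()

# Login attempt: the server issues a fresh random challenge...
challenge = os.urandom(32)

# ...the hardware key signs it locally (this is why possession is required)...
signature = device_private_key.sign(challenge)

# ...and the server verifies the signature against the enrolled public key.
try:
    registered_public_key.verify(signature, challenge)
    print("login approved")
except InvalidSignature:
    print("login rejected")
```

Because each challenge is random and signed on the device, a phishing site that captures a password (or even one signed challenge) cannot replay it to gain access, which is what makes these keys phishing-resistant.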

While the idea of a phished ChatGPT account might seem far-fetched to some, a growing body of evidence shows that bad actors are increasingly targeting chatbot users. Cybercriminals seek out extortion-worthy information, and given the intimate nature of many chatbot conversations, ranging from personal advice to corporate strategy, there is ample material for exploitation.

The broader AI industry is also turning its attention to digital security. Just weeks ago, Anthropic unveiled a new cybersecurity model called Mythos. Not to be outdone, OpenAI has made several security-related announcements in recent days, including a new digital defense framework. Thursday’s Yubico partnership is the latest in that series.

Of course, relying on a hardware security key comes with a notable tradeoff. If the key is lost, OpenAI will not be able to help recover access to the account. In practical terms, that means any saved conversations could be lost permanently. Users must weigh the benefit of stronger protection against the risk of losing their data.

(Source: TechCrunch)
