US Cyber Chief Leaked Secrets to ChatGPT by Mistake

▼ Summary
– The acting director of CISA, Madhu Gottumukkala, accidentally uploaded sensitive agency documents to the public version of ChatGPT, triggering internal cybersecurity warnings.
– The incident occurred after Gottumukkala obtained special permission to use ChatGPT, which is blocked for most DHS staff who must use approved, secure internal tools instead.
– The leaked documents were marked “for official use only,” a designation for unclassified material whose unauthorized disclosure could harm personal privacy, individual welfare, or federal programs vital to national interests.
– There is concern that the uploaded sensitive information could now be used to answer prompts from ChatGPT’s vast user base, as data in public AI tools can be retained or repurposed.
– The Department of Homeland Security launched an investigation into potential security harm, which could lead to disciplinary actions ranging from a formal reprimand to loss of security clearance.
A significant security lapse occurred when the acting head of the United States’ primary cybersecurity agency inadvertently uploaded sensitive government documents to a public version of ChatGPT. The incident, involving Madhu Gottumukkala of the Cybersecurity and Infrastructure Security Agency (CISA), has raised serious questions about the risks of using consumer artificial intelligence tools for official business. The uploads, which contained contracting documents marked as sensitive, triggered internal alerts specifically designed to prevent the unauthorized release of government material.
The event took place shortly after Gottumukkala assumed his role and obtained special permission to access OpenAI’s chatbot, a tool that is blocked for most Department of Homeland Security personnel. Standard agency protocol directs staff to use approved, internally configured AI tools such as DHSChat, which are built to ensure that no queries or documents leave secure federal networks. Why Gottumukkala needed to bypass these safeguards and use the public ChatGPT platform remains unclear; one official suggested he essentially compelled the agency to grant him access and then misused it.
While the leaked information was not classified, it carried a “for official use only” designation. This label is applied to unclassified material that is nonetheless sensitive: its unauthorized disclosure could violate personal privacy, compromise individual welfare, or disrupt federal programs considered vital to national interests. The central worry is that this data, once fed into the AI, could be used to generate responses for any of the platform’s users, a base estimated in the hundreds of millions.
Cybersecurity experts have consistently warned that feeding data into public AI models carries tangible dangers: information can be retained by the company, exposed in a data breach, or used to train the model, thereby informing its answers to other users’ prompts. In response to the incident, the Department of Homeland Security launched an internal investigation to assess any potential harm to government security. The findings could lead to a range of administrative or disciplinary actions against Gottumukkala, from a formal reprimand and mandatory retraining to suspension or revocation of his security clearance.
(Source: Ars Technica)