
Google AI Security Expert’s Forbidden Chatbot Secrets

Originally published on: December 14, 2025
Summary

– Harsh Varshney, a Google engineer, uses AI tools daily for tasks like research, coding, and note-taking but is highly aware of the associated privacy risks.
– He advises treating public AI chatbots like a public postcard, never sharing sensitive personal information such as addresses or medical history with them.
– It is crucial to distinguish between public AI tools and secure enterprise-grade models, using the latter for confidential work to prevent data leakage.
– He recommends regularly deleting chat history and using temporary chat modes to protect privacy, as AI can retain and recall personal details from past conversations.
– Sticking to well-known AI tools with clear privacy frameworks, and reviewing their privacy settings to control whether your data is used for model training, is essential for safety.

For anyone integrating artificial intelligence into their daily workflow, safeguarding personal and professional information is no longer optional; it's a critical necessity. The convenience these tools offer for coding, research, and communication is undeniable, yet that very utility demands a disciplined approach to privacy. Drawing from experience in software engineering and security, particularly in roles focused on user data protection and browser security, several key practices have proven essential for using AI safely without sacrificing its benefits.

A fundamental rule is to treat interactions with public AI chatbots as if you were writing on a postcard. There's often an illusion of private conversation, but sensitive details like credit card numbers, Social Security information, home addresses, or medical history should never be shared. Data you provide can be used to train future model iterations, potentially leading to “training leakage,” where personal information is memorized by the model and inadvertently revealed to other users. The ever-present risk of data breaches also means anything shared could be exposed. If you wouldn't write it on a public document, don't entrust it to a public AI tool.
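The postcard rule can be partially automated on the client side. As a minimal illustrative sketch (not something from the article, and no substitute for judgment), a redaction pass can mask a few obviously sensitive patterns such as card numbers, Social Security numbers, and email addresses before a prompt ever leaves your machine; the pattern names and the `redact` helper here are hypothetical:

```python
import re

# Illustrative patterns for a few obvious identifiers. Real PII
# detection needs far more than regexes (names, addresses,
# medical terms), so treat this as a last-line safety net only.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(prompt: str) -> str:
    """Mask known-sensitive patterns before the prompt is sent anywhere."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("My SSN is 123-45-6789, card 4111 1111 1111 1111."))
```

A scrubber like this catches careless paste accidents, but the safer habit remains the one described above: simply never put the sensitive detail into the prompt in the first place.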

Understanding the environment you’re working in is equally important. Distinguish between public AI tools and enterprise-grade solutions. Public models may use conversation data for training, creating a risk similar to discussing confidential matters in a crowded cafe. In contrast, enterprise versions, which companies typically pay for, are generally designed not to train on user dialogues, making them a safer space for discussing work projects or proprietary information. There have been notable incidents of employees accidentally leaking company data to public chatbots, a risk that is minimized by using secured enterprise platforms. For any work-related task, even simple email edits, opting for an enterprise model is a prudent choice, though it’s still wise to limit the personal data shared.

Make a habit of regularly clearing your chat history, even on enterprise platforms. While these tools offer convenience through memory features, this functionality can retain information you may have forgotten you shared. In one instance, an enterprise chatbot recalled a precise address from a prior conversation where it was mentioned in an email draft. To prevent such retention, regularly delete your history as a precaution against potential account compromises. For queries you prefer not to be stored, utilize features analogous to incognito mode, such as “temporary chat,” which neither saves history nor uses the data for model training.

Finally, prioritize well-known AI tools with transparent privacy frameworks. Established providers are more likely to have robust security measures and clear policies in place. It’s advisable to review a tool’s privacy policy to understand how your data may be used. Often, settings include an option to “improve the model for everyone”; ensuring this is disabled prevents your conversations from being used in training datasets. While the power of AI is transformative, its responsible use hinges on our vigilance in protecting our digital identities and sensitive information.

(Source: Business Insider)
