AI Cybersecurity

Google AI Security Expert Reveals Top 4 Tips to Protect Your Data When Using Chatbots

Google AI security expert Harsh Varshney shares four essential tips to safeguard your data while using AI chatbots, highlighting critical privacy risks and best practices.

Harsh Varshney, a 31-year-old software engineer at Google, emphasizes the transformative role of artificial intelligence (AI) in everyday life, especially in his work. Since joining the company in 2023, he has been involved in various initiatives that focus on privacy and security, first as part of the privacy team and now within the Chrome AI security team. Varshney highlights the importance of safeguarding user data against malicious threats, particularly as AI tools become increasingly integrated into daily tasks such as research, coding, and note-taking.

The growing reliance on AI, however, brings with it significant privacy concerns. Varshney cautions users against sharing sensitive personal information with AI tools, likening interactions with public AI chatbots to writing on a postcard. He notes that while AI companies may strive to enhance privacy features, users should remain vigilant about what they disclose. Information shared with public chatbots can inadvertently contribute to “training leakage,” where personal data may be memorized and later reused inappropriately. Therefore, he advocates a cautious approach, advising against sharing details such as credit card information, Social Security numbers, or medical history.
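That habit can even be partially automated. As a rough illustration only (the redact_prompt helper and the regex patterns below are assumptions for this sketch, not a Google or Varshney tool), a short Python routine can scrub obvious identifiers such as card numbers, Social Security numbers, and email addresses from a prompt before it is ever pasted into a public chatbot:

```python
import re

# Hypothetical pre-send scrubber illustrating the "postcard" advice:
# strip patterns that look like sensitive data before a prompt leaves your machine.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_prompt(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = "My card 4111 1111 1111 1111 was double-charged; SSN 123-45-6789, email jane@example.com"
    print(redact_prompt(prompt))
    # Prints the prompt with the card number, SSN, and email replaced by placeholders.
```

A pattern-based scrub like this is deliberately crude; it will not catch free-form details such as medical history, so it complements rather than replaces the cautious mindset Varshney describes.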

Varshney also stresses the importance of being aware of the type of AI tool in use. For instance, conversations held on enterprise-grade AI platforms typically do not feed into training data for future models, providing a safer environment for employees to discuss work-related matters. This contrasts sharply with public AI models, where the use of shared data can be less predictable. Varshney applies this understanding in his work, opting for enterprise AI models even for simple tasks like email editing to minimize the risk of revealing proprietary information.

Regularly deleting conversation history is another habit Varshney has adopted to maintain his privacy. AI chatbots often retain user interactions, and that stored context can resurface unexpectedly. He recalls an instance where an enterprise chatbot was able to retrieve his address, which he had previously shared during a different conversation. To mitigate such risks, he routinely purges his chat history and uses special modes akin to incognito browsing, where interactions are not stored. Features that allow temporary chats help limit the information retained, giving users more control over their data.
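For readers who script their own chatbot workflows, the same habit can be mirrored in code. The sketch below is a minimal illustration, not a real client: EphemeralChat and its placeholder send method are hypothetical stand-ins for an actual chatbot API call. It keeps conversation context only in memory for the current session and discards it explicitly when the task is done, much like a temporary-chat mode:

```python
from dataclasses import dataclass, field

@dataclass
class EphemeralChat:
    """Hypothetical session wrapper that never writes conversation history to disk."""
    history: list = field(default_factory=list)

    def send(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        # Placeholder for a real model call; only the in-memory history is used as context.
        reply = f"(model reply to: {user_message!r})"
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def purge(self) -> None:
        """Drop every stored turn so nothing from this session is retained."""
        self.history.clear()

if __name__ == "__main__":
    chat = EphemeralChat()
    chat.send("Draft a polite follow-up email for me.")
    chat.purge()  # explicit cleanup once the task is done
    assert chat.history == []
```

The point of the design is simply that retention is opt-in and short-lived: nothing persists beyond the session unless the user deliberately keeps it.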

Finally, Varshney advocates for using well-known AI tools that adhere to established privacy policies. He personally favors Google’s offerings, along with OpenAI’s ChatGPT and Anthropic’s Claude, both of which are reputed to have robust privacy frameworks. He encourages users to review the privacy policies of any AI tools they utilize, particularly focusing on settings that prevent their conversations from being used for training purposes. This proactive approach is crucial as AI continues to evolve, and the potential for misuse of personal data remains a pressing concern.

As the integration of AI tools in the workplace and daily life deepens, the importance of responsible usage cannot be overstated. Varshney’s insights illuminate the need for individuals and organizations to prioritize data privacy as they navigate the complexities of AI technology. With safeguards in place, users can harness the power of AI while ensuring their identities and personal information remain protected.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

