

Google AI Security Expert Reveals Top 4 Tips to Protect Your Data with Chatbots

Google AI security expert Harsh Varshney shares four essential tips to safeguard your data while using AI chatbots, highlighting critical privacy risks and best practices.

Harsh Varshney, a 31-year-old software engineer at Google, emphasizes the transformative role of artificial intelligence (AI) in everyday life, especially in his work. Since joining the company in 2023, he has been involved in various initiatives that focus on privacy and security, first as part of the privacy team and now within the Chrome AI security team. Varshney highlights the importance of safeguarding user data against malicious threats, particularly as AI tools become increasingly integrated into daily tasks such as research, coding, and note-taking.

The growing reliance on AI, however, brings significant privacy concerns. Varshney cautions users against sharing sensitive personal information with AI tools, likening interactions with public AI chatbots to writing on a postcard. He notes that while AI companies may strive to enhance privacy features, users should remain vigilant about what they disclose. Information shared with public chatbots can contribute to “training leakage,” in which personal data is memorized by a model during training and later resurfaces in its responses. He therefore advocates a cautious approach, advising against sharing details such as credit card numbers, Social Security numbers, or medical history.

Varshney also stresses the importance of being aware of the type of AI tool in use. For instance, conversations held on enterprise-grade AI platforms typically do not feed into training data for future models, providing a safer environment for employees to discuss work-related matters. This contrasts sharply with public AI models, where the use of shared data can be less predictable. Varshney applies this understanding in his work, opting for enterprise AI models even for simple tasks like email editing to minimize the risk of revealing proprietary information.

Regularly deleting conversation history is another habit Varshney has adopted to protect his privacy. AI chatbots often retain user interactions, and that stored context can resurface unexpectedly. He recalls an instance in which an enterprise chatbot retrieved his address, which he had shared in an earlier conversation. To mitigate such risks, he routinely purges his chat history and uses temporary-chat modes akin to incognito browsing, in which interactions are not stored. Such features limit what is retained and give users more control over their data.

Finally, Varshney advocates for using well-known AI tools that adhere to established privacy policies. He personally favors Google’s offerings, along with OpenAI’s ChatGPT and Anthropic’s Claude, both of which are reputed to have robust privacy frameworks. He encourages users to review the privacy policies of any AI tools they utilize, particularly focusing on settings that prevent their conversations from being used for training purposes. This proactive approach is crucial as AI continues to evolve, and the potential for misuse of personal data remains a pressing concern.

As the integration of AI tools in the workplace and daily life deepens, the importance of responsible usage cannot be overstated. Varshney’s insights illuminate the need for individuals and organizations to prioritize data privacy as they navigate the complexities of AI technology. With safeguards in place, users can harness the power of AI while ensuring their identities and personal information remain protected.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

