AI Finance

More Than 55% of Americans Use AI for Financial Advice, Risking Personal Data Exposure

More than 55% of Americans now turn to AI tools for financial advice, risking personal data exposure despite rising privacy concerns.

More than half of Americans appear willing to accept financial guidance from artificial intelligence (AI), despite concerns over privacy and data security. A 2026 survey from TD Bank found that 55% of respondents reported using AI tools for money management, a significant increase from just 10% the previous year. This trend raises questions about the implications of relying on AI—particularly large language models (LLMs) like ChatGPT—for sensitive financial advice.

According to MIT finance professor Andrew Lo, who co-authored a 2023 paper on the subject, these LLMs can be likened to “a human sociopath” because they lack empathy. This characterization underscores the potential risks of relying on AI for financial guidance. While Americans increasingly turn to AI for help with financial literacy, savings plans, stock market tips, and retirement planning, the core issue remains what personal information users are disclosing in the process.

A 2024 Cisco survey revealed that nearly 29% of AI users share sensitive information such as account numbers, despite being aware that their data may be shared or collected. Concerns extend beyond mere data sharing. Cybercriminals have developed sophisticated methods to extract personal information from LLMs, enabling them to steal identities and money. Researchers at Stanford University examined six major U.S. LLMs and concluded that any sensitive data shared with these models could be collected for training purposes, a significant privacy concern due to the lack of transparency surrounding these practices.

Jennifer King, the lead author of the Stanford study, emphasized the gravity of the findings, noting that inadequate research exists on the privacy practices of these emerging AI tools. Security firms like NordPass have warned that breaches can allow malicious actors to access users’ entire chat histories, including any sensitive data shared with the AI. Moreover, there is a risk that personal information uploaded to these models may be reproduced verbatim, inadvertently exposing individuals to further security threats.

Cybercriminals can also exploit indirect prompt injection techniques, embedding malicious prompts in various digital formats, such as web pages or documents. This means that a user could unwittingly upload a file that instructs the LLM to divulge sensitive data like passwords, which could then be accessed by the attacker. Norton, a cybersecurity company, reported that such tactics could lead to serious data breaches, compromising the financial information users trust to these AI tools.

Once criminals obtain sensitive information, they can engage in a range of malicious activities, from identity theft to incurring debts in the victim’s name or selling that information on the dark web. As the use of AI in financial contexts becomes more prevalent, experts are advising users to exercise caution when interacting with these models, particularly regarding what personal information they choose to disclose.

Experts recommend that individuals avoid sharing personal financial data with AI, including bank account numbers, Social Security numbers, passwords, and even specific amounts of debt. These precautions are critical, as a breach could link a user’s conversation back to their identity, potentially enabling scammers to pose as legitimate banking representatives.

Beyond financial details, individuals should also refrain from providing general personal information, such as names, addresses, and birthdays. Norton cautioned that sharing creative works with LLMs could result in loss of ownership, leaving users vulnerable to legal and financial complications. Users are advised to verify URLs to ensure they are accessing legitimate LLM sites, and to utilize privacy settings that may limit data sharing.

When seeking advice on personal matters, experts suggest using fictitious yet realistic data to receive approximate guidance without revealing sensitive information. Additionally, users should regularly clear their chat histories to further safeguard their privacy. Importantly, any financial advice received from AI should be double-checked against reliable sources, as these tools, while increasingly popular, lack the emotional intelligence critical for sound financial decision-making.
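The “fictitious yet realistic data” advice above can also be approximated mechanically. As a minimal sketch (the patterns and placeholder labels here are illustrative assumptions, not any vendor’s tool), the idea is to mask obviously sensitive values before a prompt ever leaves the user’s machine:

```python
import re

# Illustrative patterns only; reliable PII detection needs far more than regex.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US Social Security numbers
    "account": re.compile(r"\b\d{8,17}\b"),           # bare account-like digit runs
    "dollar": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),  # specific dollar amounts
}

def scrub(prompt: str) -> str:
    """Replace sensitive-looking values with neutral placeholders
    before the text is sent to any third-party AI service."""
    out = PATTERNS["ssn"].sub("[SSN]", prompt)
    out = PATTERNS["account"].sub("[ACCOUNT]", out)
    out = PATTERNS["dollar"].sub("[AMOUNT]", out)
    return out

print(scrub("My SSN is 123-45-6789 and I owe $12,340.50 on account 000123456789."))
```

The AI still sees the shape of the question and can give approximate guidance, while the real identifiers never enter the chat history.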

The rapid rise in AI usage for financial management reflects a broader trend in technology adoption, but it also underscores the necessity for heightened awareness regarding privacy and security. As Americans increasingly seek assistance from these tools, understanding the potential risks associated with sharing sensitive information remains crucial.

Written by Marcus Chen

At AIPressa, my work focuses on analyzing how artificial intelligence is redefining business strategies and traditional business models. I've covered everything from AI adoption in Fortune 500 companies to disruptive startups that are changing the rules of the game. My approach: understanding the real impact of AI on profitability, operational efficiency, and competitive advantage, beyond corporate hype. When I'm not writing about digital transformation, I'm probably analyzing financial reports or studying AI implementation cases that truly moved the needle in business.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.