More than half of Americans appear willing to accept financial guidance from artificial intelligence (AI), despite concerns over privacy and data security. A 2026 survey from TD Bank found that 55% of respondents reported using AI tools for money management, a significant increase from just 10% the previous year. This trend raises questions about the implications of relying on AI—particularly large language models (LLMs) like ChatGPT—for sensitive financial advice.
In a 2023 paper he co-authored, MIT finance professor Andrew Lo likened these LLMs to “a human sociopath” because of their lack of empathy, a characterization that underscores the risks of turning to AI for financial advice. While Americans increasingly ask AI for help with financial literacy, savings plans, stock market tips, and retirement planning, the core issue is what personal information users disclose in the process.
A 2024 Cisco survey revealed that nearly 29% of AI users share sensitive information such as account numbers, despite being aware that their data may be shared or collected. Concerns extend beyond mere data sharing. Cybercriminals have developed sophisticated methods to extract personal information from LLMs, enabling them to steal identities and money. Researchers at Stanford University examined six major U.S. LLMs and concluded that any sensitive data shared with these models could be collected for training purposes, a significant privacy concern due to the lack of transparency surrounding these practices.
Jennifer King, the lead author of the Stanford study, emphasized the gravity of the findings, noting how little research exists into the privacy practices of these emerging AI tools. Security firms like NordPass have warned that breaches can allow malicious actors to access users’ entire chat histories, including any sensitive data shared with the AI. Moreover, personal information uploaded to these models may later be reproduced verbatim, inadvertently exposing individuals to further security threats.
Cybercriminals can also exploit indirect prompt injection, embedding malicious instructions in web pages, documents, or other digital content. A user could unwittingly upload a file that instructs the LLM to divulge sensitive data such as passwords, which the attacker could then retrieve. Norton, a cybersecurity company, reported that such tactics could lead to serious data breaches, compromising the financial information users entrust to these AI tools.
Once criminals obtain sensitive information, they can engage in a range of malicious activities, from identity theft to incurring debts in the victim’s name or selling that information on the dark web. As the use of AI in financial contexts becomes more prevalent, experts are advising users to exercise caution when interacting with these models, particularly regarding what personal information they choose to disclose.
Experts recommend that individuals avoid sharing personal financial data with AI, including bank account numbers, Social Security numbers, passwords, and even specific amounts of debt. These precautions are critical, as a breach could link a user’s conversation back to their identity, potentially enabling scammers to pose as legitimate banking representatives.
Beyond financial details, individuals should also refrain from providing general personal information, such as names, addresses, and birthdays. Norton cautioned that sharing creative works with LLMs could result in loss of ownership, leaving users vulnerable to legal and financial complications. Users are advised to verify URLs to ensure they are accessing legitimate LLM sites, and to utilize privacy settings that may limit data sharing.
When seeking advice on personal matters, experts suggest using fictitious yet realistic data to receive approximate guidance without revealing sensitive information. Additionally, users should regularly clear their chat histories to further safeguard their privacy. Importantly, any financial advice received from AI should be double-checked against reliable sources, as these tools, while increasingly popular, lack the emotional intelligence critical for sound financial decision-making.
The rapid rise in AI usage for financial management reflects a broader trend in technology adoption, but it also underscores the necessity for heightened awareness regarding privacy and security. As Americans increasingly seek assistance from these tools, understanding the potential risks associated with sharing sensitive information remains crucial.