
Financial Advisers Embrace AI Tools: 60% Adopt ChatGPT and Copilot Amid Security Concerns

Financial advisers are rapidly adopting AI tools, with 60% now using platforms such as ChatGPT and Microsoft Copilot, even as 35% lack confidence in those tools' data security.

Financial advisers are increasingly integrating artificial intelligence tools into their operations, according to recent research. A study by the lang cat has revealed that 60% of advice professionals are now utilizing AI tools, a significant rise from the 29% reported in the previous edition of the study. This growing trend indicates that advisers are not only adopting specialist software but also turning to more generic platforms like ChatGPT and Microsoft Copilot.

Among those advisers currently using AI, the uptake of generic tools is now comparable to that of Saturn, one of the most popular AI platforms tailored specifically for advisory firms. Supporting this trend, Dimensional’s 2025 Global Advisor Study found that 41% of firms in the EMEA region reported using ChatGPT, while 24% utilized Saturn and 17% employed Copilot, based on responses from 109 firms.

The data from the lang cat also underscores the growing significance of AI within advisory practices, with 81% of respondents indicating that AI has become a more critical area for investment and focus than any other aspect of their business. Nearly a quarter (24%) of these advisers ranked AI as the best piece of technology in their operations, second only to cashflow modelling, which topped the list.

However, the rising adoption of AI is accompanied by heightened concerns over data security and compliance. More than a third (35%) of advisers said they lacked confidence in the data security of the AI tools they currently use, while a further 31% reported feeling neutral on the issue. Steve Nelson, insight director at the lang cat, emphasized the dual nature of AI adoption, highlighting both opportunities and risks.

“There is clearly demand from advisers to be using LLM AI tools to help them with day-to-day work,” Nelson stated. He noted that while many tools are specifically designed for advisory firms, a significant number of advisers are still opting for more generic solutions like Copilot, Claude, and ChatGPT.

This trend reflects a desire among advisers to enhance efficiency and alleviate back-office burdens. However, Nelson cautioned that this eagerness to adopt new technology must be balanced with a commitment to security and compliance. “On the one hand, this demonstrates that advisers are trying to evolve and improve—it’s great they are looking to be more efficient,” he said. “On the other, it raises immediate questions about data security, compliance risk, and client confidentiality.”

With the financial advisory landscape changing rapidly, advisers are urged to exercise caution when selecting AI tools. Nelson stressed that security and responsible usage should be paramount in the due diligence process for any technology, especially generic AI platforms. While AI can deliver real gains in operational efficiency, it also poses significant risks that the industry must navigate carefully.

As the integration of AI tools becomes a norm rather than an exception for financial advisers, the industry will need to address these concerns head-on. The balance between leveraging cutting-edge technology and safeguarding client information is a critical issue that will shape the future of financial advisory services.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.