According to the World Economic Forum’s Cyber Risk in 2026 outlook, artificial intelligence (AI) is expected to be the most consequential factor shaping cybersecurity strategies this year, cited by 94% of surveyed executives as a force multiplier for both defense and offense. The report, released on January 12, highlights how generative AI technologies are expanding the attack surface, contributing to unintended data exposure and more complex exploitation tactics that outpace the capacity of purely human-led teams.
Cyberdefense has traditionally focused on remediation after losses occur. However, AI is enabling earlier intervention in the attack cycle by identifying coordinated behavior and emerging risk signals before fraud scales. Companies are ramping up their use of AI to guard against suspicious activity, even as they face rising risks from shadow AI, third-party agents, and apps that could expose their businesses to cyber threats, as reported by PYMNTS.
Security firms and financial institutions are increasingly employing machine learning to correlate activity across multiple systems rather than relying on isolated alerts. One example is Group-IB’s Cyber Fraud Intelligence Platform, which analyzes behavioral patterns across participating organizations to identify signs of account takeover, authorized push payment scams, and money-mule activity while schemes are still developing. Instead of waiting for confirmed losses, institutions can flag suspicious behavior based on early indicators such as repeated credential reuse or low-value test transactions.
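Group-IB does not publish its platform's internals, but the early indicators described above can be illustrated with a simple rule sketch. In the hypothetical Python below, the Event fields, thresholds, and the flag_early_indicators helper are all illustrative assumptions, not the vendor's actual detection logic:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    account_id: str
    credential_hash: str  # hash of the credential presented, never the raw value
    amount: float         # 0.0 for non-payment events such as logins

# Thresholds are illustrative, not values from any vendor platform.
CREDENTIAL_REUSE_LIMIT = 3  # same credential seen across this many accounts
TEST_TXN_LIMIT = 5          # low-value transactions before flagging
TEST_TXN_CEILING = 1.00     # "test" payments used to probe stolen cards are tiny

def flag_early_indicators(events: list[Event]) -> set[str]:
    """Return account IDs showing early fraud indicators, before losses occur."""
    accounts_per_credential = defaultdict(set)
    low_value_counts = defaultdict(int)
    flagged = set()
    for e in events:
        # Indicator 1: one credential reused across many accounts.
        accounts_per_credential[e.credential_hash].add(e.account_id)
        if len(accounts_per_credential[e.credential_hash]) >= CREDENTIAL_REUSE_LIMIT:
            flagged.update(accounts_per_credential[e.credential_hash])
        # Indicator 2: a burst of low-value "test" transactions.
        if 0 < e.amount <= TEST_TXN_CEILING:
            low_value_counts[e.account_id] += 1
            if low_value_counts[e.account_id] >= TEST_TXN_LIMIT:
                flagged.add(e.account_id)
    return flagged
```

Production platforms replace hand-set thresholds like these with learned models, but the principle is the same: act on the indicator, not the loss.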
Fraud prevention is increasingly reliant on shared intelligence and behavioral analysis rather than static rules. By correlating signals across platforms, institutions can detect coordinated activity that may not appear risky within a single organization. AI is also expanding into visual risk detection, with Truepic’s shared intelligence platform applying machine learning to analyze images and videos submitted as identity or compliance evidence. This system can flag AI-generated or altered media that might otherwise pass manual review by identifying reused or manipulated visual patterns.
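Truepic has likewise not detailed its models, but one widely used building block for spotting reused media is perceptual hashing, which survives resizing, recompression, and light edits. The sketch below uses the open-source Pillow and imagehash libraries; the find_reused_images helper and the distance threshold are illustrative assumptions, not Truepic's method:

```python
from PIL import Image  # pip install pillow imagehash
import imagehash

REUSE_THRESHOLD = 6  # Hamming distance; small values mean near-identical images

def find_reused_images(paths: list[str]) -> list[tuple[str, str, int]]:
    """Flag pairs of submitted images that are perceptual near-duplicates.

    Perceptual hashes collide even when file bytes differ, so the same stolen
    or generated photo resubmitted across applications is still caught.
    """
    hashes = [(p, imagehash.phash(Image.open(p))) for p in paths]
    matches = []
    for i in range(len(hashes)):
        for j in range(i + 1, len(hashes)):
            distance = hashes[i][1] - hashes[j][1]  # Hamming distance in bits
            if distance <= REUSE_THRESHOLD:
                matches.append((hashes[i][0], hashes[j][0], distance))
    return matches
```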
Moreover, AI is being applied at the identity and session level, where behavioral analytics focus on how a user interacts with a system rather than solely on the credentials they present. Tools like keystroke dynamics analysis, device fingerprinting, session velocity tracking, and behavioral biometrics measure signals such as typing cadence, mouse movement, touchscreen pressure, IP stability, device configuration, and navigation patterns across a session. These signals help security systems distinguish legitimate users from attackers who may already possess valid credentials, an increasingly common scenario as AI-generated phishing and credential harvesting improve.
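As a rough illustration of how such signals can be combined, the sketch below scores a live session against a user's own historical baseline using per-feature z-scores. The feature names, the cutoff, and the session_anomaly_score helper are assumptions for illustration; production behavioral-biometrics systems use far richer models:

```python
import statistics

# Illustrative stand-ins for the session signals described above.
FEATURES = ["keystroke_interval_ms", "mouse_speed_px_s", "pages_per_minute"]

def session_anomaly_score(history: list[dict], current: dict) -> float:
    """Mean absolute z-score of the current session against the user's baseline.

    Higher scores mean the session looks less like this account's established
    behavior, even when the credentials presented are valid. Assumes `history`
    holds at least two prior sessions for the same user.
    """
    total = 0.0
    for f in FEATURES:
        values = [s[f] for s in history]
        mean = statistics.fmean(values)
        stdev = statistics.stdev(values) or 1e-9  # guard against zero variance
        total += abs(current[f] - mean) / stdev
    return total / len(FEATURES)

# Example policy: step up authentication above an (illustrative) cutoff.
# if session_anomaly_score(user_history, live_session) > 3.0:
#     require_mfa()
```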
Predictive AI models broaden this approach by detecting fraud patterns that emerge before transactions or approvals occur. In documented cases cited by Group-IB, financial institutions used predictive AI to identify more than 1,100 fraudulent loan applications in which attackers tried to bypass identity verification with AI-generated or manipulated biometric images, including deepfake photos. The systems flagged the activity not through document inspection alone, but by identifying inconsistencies across device reuse, session behavior, application timing, and interaction patterns that diverged from legitimate customer behavior. This allowed institutions to halt the applications before approval rather than discovering the fraud after disbursement.
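One way to approximate this kind of pre-approval screening is unsupervised anomaly detection over session and device features. The sketch below uses scikit-learn's IsolationForest on synthetic data; the feature columns and values are invented, and this is not Group-IB's model:

```python
from sklearn.ensemble import IsolationForest
import numpy as np

# Each row is one loan application described by the kinds of signals the
# article mentions. All feature names and values here are synthetic.
# Columns: apps_from_device, session_seconds, hour_of_day, fields_edited
legit = np.array([
    [1, 420, 14, 9], [1, 380, 10, 11], [2, 510, 16, 8],
    [1, 445, 11, 10], [1, 390, 15, 9], [2, 470, 13, 12],
])

model = IsolationForest(random_state=0).fit(legit)

# A deepfake-backed application often shows session-level inconsistencies:
# one device submitting many applications, implausibly fast form completion,
# odd hours, and almost no field corrections.
suspect = np.array([[14, 35, 3, 1]])
print(model.predict(suspect))  # [-1] means anomaly: hold for manual review
```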
Using AI to Disrupt Crime
AI-driven defense is no longer confined to private fraud platforms; governments are integrating AI directly into cybercrime and economic crime enforcement. The UAE Ministry of Interior has deployed AI and advanced analytics within its Cybercrime Combating Department to support investigations into digital and financial crimes. Officials state that AI systems help analyze large volumes of digital evidence, identify links between cases, and trace the origins of cyber incidents more swiftly than manual methods.
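The ministry has not disclosed its tooling, but "identifying links between cases" is commonly implemented as clustering over a graph of shared artifacts. A minimal sketch with the networkx library and invented evidence data:

```python
import networkx as nx

# Invented evidence: (case, artifact) pairs, where artifacts are wallet
# addresses, phone numbers, or device IDs extracted from digital evidence.
evidence = [
    ("case-101", "wallet:0xab12"), ("case-101", "phone:+971500000001"),
    ("case-207", "wallet:0xab12"), ("case-207", "device:D-778"),
    ("case-331", "device:D-778"),  ("case-442", "phone:+971550000002"),
]

# Cases that share an artifact become connected in a bipartite graph.
G = nx.Graph(evidence)

# Each connected component containing multiple cases suggests one operation.
for component in nx.connected_components(G):
    cases = sorted(n for n in component if n.startswith("case-"))
    if len(cases) > 1:
        print("Linked cases:", cases)  # case-101, case-207, case-331
```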
At the enterprise level, large technology providers are embedding AI into financial crime and security workflows. Oracle, for instance, employs AI-based investigation tools to assist analysts by gathering evidence, connecting related cases, and highlighting higher-risk activity. Smaller companies are also adopting AI defensively. In the U.S. Midwest, cybersecurity firms are deploying AI tools to monitor network traffic, email, and user behavior to detect phishing attempts and unauthorized access in real time. These systems emphasize early anomaly detection to prevent incidents from escalating.
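As a concrete example of the real-time detection these tools perform, one basic primitive is sliding-window burst detection: flagging a source that generates too many events, such as failed logins, in a short span. The BurstDetector class, window, and limit below are illustrative assumptions, not any vendor's implementation:

```python
from collections import deque
import time

class BurstDetector:
    """Flag a source when its event rate exceeds a limit within a time window.

    A simple real-time primitive behind early anomaly detection; the window
    and limit defaults below are illustrative, not vendor values.
    """

    def __init__(self, window_s: float = 60.0, limit: int = 20):
        self.window_s = window_s
        self.limit = limit
        self.events: dict[str, deque] = {}

    def observe(self, source: str, ts: float | None = None) -> bool:
        """Record one event (e.g., a failed login) and return True on a burst."""
        ts = time.time() if ts is None else ts
        q = self.events.setdefault(source, deque())
        q.append(ts)
        while q and ts - q[0] > self.window_s:  # expire events outside window
            q.popleft()
        return len(q) > self.limit

# Usage: feed events from a log stream; alert when observe() returns True.
# detector = BurstDetector(window_s=60.0, limit=20)
# if detector.observe("203.0.113.7"):
#     print("possible credential stuffing from 203.0.113.7")
```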
The growing reliance on AI reflects a simple constraint: human analysts cannot keep pace with the attack volumes generated by automated tools. National security agencies, including the U.K.’s National Cyber Security Centre, warn that AI will continue to increase the speed and effectiveness of cyber threats through at least 2027, particularly in social engineering and fraud. Enterprise adoption data already reflects this reality; as PYMNTS has reported, 55% of surveyed chief operating officers are relying on generative AI-driven solutions to improve cybersecurity management.