Cybersecurity researchers at ESET have identified a significant escalation in cybercrime tactics: hackers are now deploying AI-generated malware to compromise payments made through Near Field Communication (NFC)-enabled devices. The malware can intercept sensitive payment card data, facilitate fraudulent online purchases, and even enable unauthorized withdrawals from automated teller machines (ATMs). The findings underscore a pivotal shift in how cybercriminals are leveraging artificial intelligence to increase both the scale and the sophistication of their attacks.
This alarming trend signals that threat actors are expanding their use of AI beyond traditional attacks such as ransomware. Notably, AI-powered ransomware such as PromptLock has already demonstrated the ability to scan, encrypt, or destroy data on infected systems. Now, attackers are taking a more elaborate approach, using generative AI (GenAI) to craft malware designed specifically for financial fraud, targeting the digital payment systems that are increasingly integral to modern commerce.
Previously, ESET reported another concerning use of GenAI by cybercriminals: attackers employing AI tools to create highly convincing phishing scams. These scams, built with accessible open-source and commercial AI platforms such as Google Gemini, OpenAI’s ChatGPT, and Anthropic’s Claude, have made phishing attacks far harder to detect. The realistic language these tools generate poses heightened risks for individuals and organizations alike.
In light of these developments, incident response and cybersecurity teams are urged to adopt proactive defenses against evolving threats. Basic security hygiene remains among the most effective countermeasures: keeping operating systems and applications fully updated, ensuring browsers carry the latest security patches, deploying reputable endpoint protection, and running automated system scans regularly to catch suspicious activity early.
Alongside technical safeguards, employee training plays a crucial role in enhancing an organization’s security posture. Educating staff about emerging cyber threats, phishing methods, and safe digital practices can significantly reduce the chances of successful attacks. When employees are informed and vigilant, they form a robust first line of defense against AI-driven cybercrime, enabling organizations to maintain resilience in an increasingly complex threat landscape.
See also
Cybersecurity Teams Cautiously Adopt AI Tools, 70% Report Improved Effectiveness
Cybersecurity Teams Cautiously Adopt AI Tools: 30% Already Implemented, 44% See No Hiring Impact
Cowbell Projects 2026 Rise in AI-Driven Cyber Threats for UK Businesses
AI-Driven Automation Set to Transform Cybercrime Landscape by 2026, Warns Trend Micro
Parrot OS 7.0 Launches with Advanced AI Tools and New Penetration Testing Features