Artificial intelligence is reshaping the landscape of cybercrime, enhancing the sophistication and efficiency of attacks while maintaining their core intent: to exploit human trust. In a recent conversation with iTNews Asia, Righard Zwienenberg, Senior Research Fellow at cybersecurity firm ESET, emphasized that while AI has transformed the methodologies of cyberattacks, the ultimate target remains the same—human vulnerability.
AI has enabled attackers to streamline various stages of the cyberattack lifecycle. At the reconnaissance phase, malicious actors can now gather detailed victim profiles within minutes by conducting automated analyses of social media, leaked databases, and public records. This capability significantly reduces the time spent on research, allowing for more personalized and convincing phishing attempts.
Generative AI further amplifies this trend, as it allows for the crafting of messages that closely mimic the local language, tone, and writing style of the target demographic. This level of personalization makes attacks more credible and harder to recognize as fraudulent. Zwienenberg noted a shift in execution methods, where attackers have moved away from obvious malicious links to more subtle tactics, such as browser-based manipulation and AI-assisted business email compromise. These techniques guide victims through actions that seem legitimate, further blurring the lines between real and fake communications.
During the extraction phase, AI continues to play a significant role by automating credential harvesting and adapting attack strategies to better influence victim behavior. Zwienenberg highlighted that the critical danger lies in the lure and execution stages, which increasingly rely on familiarity, urgency, and perceived trust to manipulate victims’ judgment.
One of the most pervasive misconceptions in organizations is the belief that modern scams are easily identifiable. Many firms still expect malicious messages to feature obvious signs—poor grammar or suspicious links. However, contemporary scams are crafted to integrate seamlessly into daily business communication, exploiting human judgment rather than technical vulnerabilities. “The real danger today is not a technical failure, but a human decision made under pressure or false trust,” Zwienenberg cautioned.
Despite significant investments in security and ongoing awareness campaigns, phishing and scam-driven attacks continue to dominate the global threat landscape. These attacks exploit psychological triggers such as trust and urgency, factors that technology alone cannot eliminate. Phishing remains particularly attractive for cybercriminals due to its low cost, scalability, and relatively minimal risks compared to technical exploitation.
As long as human decision-makers play a critical role in most business processes, phishing will likely remain one of the most effective attack techniques. The emergence of voice cloning technology adds another layer of concern, enabling cybercriminals to create convincing voice impersonations from short audio samples. This trend lowers the barrier for executing impersonation attacks, making it easier to mimic executives or colleagues.
To defend against these evolving threats, organizations must rethink their identity verification processes. Recognizing a familiar voice may no longer suffice as proof of identity. Zwienenberg advocated for a shift toward process-based trust, which includes independent verification channels, structured approval workflows, and rehearsed response playbooks for urgent requests. Looking ahead, he anticipates that multi-channel impersonation attacks—coordinated efforts involving email, voice calls, and messaging—will become more common, reinforcing deception while pressuring victims to act quickly.
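The process-based trust Zwienenberg describes can be sketched in code. The following is a minimal, illustrative Python sketch, not a real product or ESET recommendation: the action names, the `Request` shape, and the policy itself are assumptions made up for the example. The key idea is that a high-risk request is never approved on the strength of the inbound channel alone, since a voice or email identity can now be cloned; only confirmation over an independent, pre-registered channel counts.

```python
from dataclasses import dataclass

# Hypothetical risk categories; in practice these come from policy, not code.
HIGH_RISK_ACTIONS = {"change_bank_details", "wire_transfer", "grant_admin_access"}

@dataclass
class Request:
    action: str
    requester: str
    channel: str                    # channel the request arrived on, e.g. "email", "voice"
    verified_out_of_band: bool = False  # confirmed via an independent channel?

def approve(request: Request) -> bool:
    """Process-based trust: approval of a sensitive action depends on the
    verification step having been completed, never on how convincing the
    inbound request looked."""
    if request.action not in HIGH_RISK_ACTIONS:
        return True  # low-risk actions follow the normal workflow
    # The inbound channel is deliberately ignored here: a familiar voice or
    # a plausible email is not proof of identity.
    return request.verified_out_of_band
```

In this sketch, a wire-transfer request that arrived by phone is rejected until someone calls the requester back on a separately stored number and records the confirmation; the same request with `verified_out_of_band=True` passes.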
Modern scams are also shifting away from traditional malicious files and attachments. In many cases, attackers manipulate victims directly within their web browser, guiding them through seemingly legitimate actions such as copying commands, verifying accounts, or resolving technical errors.
The ‘ClickFix’ scam technique exemplifies this trend, using fake error messages or CAPTCHA prompts to trick users into actions that install malware or reveal credentials. Such attacks exploit human behavior rather than identifiable malicious files, allowing them to bypass conventional email security and antivirus systems designed to detect harmful attachments.
Another emerging threat is a polluted AI ecosystem, where AI systems operate amidst unreliable data inputs, including misinformation and synthetic content. This situation presents risks, as organizations may mistakenly treat AI outputs as trustworthy shortcuts. If the underlying data is flawed, AI systems could provide misleading advice, impacting critical decisions related to security operations or financial approvals.
To build resilience in this new era, Zwienenberg argues that organizations should not solely focus on detecting technical threats but also recognize that modern scams are fundamentally trust events. He identifies three key priorities: ensuring decision integrity through independent verification steps for sensitive actions, monitoring behavioral telemetry for unusual requests rather than just malware indicators, and conducting regular drills focused on social engineering scenarios.
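The second priority, monitoring behavioral telemetry rather than malware indicators, can be illustrated with a toy rule. This is a deliberately simplified sketch under assumed inputs (the keyword lists and the `sender_is_new` signal are invented for illustration); a real system would use richer signals, but the principle is the same: flag the *shape* of a request, combining urgency language, a sensitive ask, and an unfamiliar sender, instead of scanning for a malicious attachment.

```python
# Illustrative keyword lists; a production system would use far richer signals.
URGENCY_CUES = {"urgent", "immediately", "asap", "right now"}
SENSITIVE_ASKS = {"payment", "credentials", "gift card", "bank details"}

def risk_flags(message: str, sender_is_new: bool) -> list[str]:
    """Flag behavioral indicators of a social-engineering attempt.

    Returns a list of flags rather than a verdict: the point is to surface
    unusual requests for human review, not to block them automatically."""
    text = message.lower()
    flags = []
    if any(cue in text for cue in URGENCY_CUES):
        flags.append("urgency_language")
    if any(ask in text for ask in SENSITIVE_ASKS):
        flags.append("sensitive_request")
    if sender_is_new:
        flags.append("unfamiliar_sender")
    return flags
```

A message like "URGENT: please update our bank details immediately" from a first-time sender would raise all three flags, while a routine report from a known colleague raises none; the combination, not any single signal, is what warrants an independent verification step.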
In conclusion, Zwienenberg emphasizes that the true measure of cybersecurity success lies in an organization’s ability to contain and recover from incidents quickly, alongside fostering a culture where staff routinely report suspicious requests. As cybercriminals continue to exploit trust, organizations must adapt their defenses not only to verify identities but to confirm authenticity before trust becomes a vulnerability.


















































