
AI Powers Evolving Cyberattacks, Exploiting Human Trust, Warns ESET’s Righard Zwienenberg

AI-driven cyberattacks are increasingly exploiting human trust, warns ESET’s Righard Zwienenberg, emphasizing the urgent need for enhanced identity verification methods.

Artificial intelligence is reshaping the landscape of cybercrime, enhancing the sophistication and efficiency of attacks while maintaining their core intent: to exploit human trust. In a recent conversation with iTNews Asia, Righard Zwienenberg, Senior Research Fellow at cybersecurity firm ESET, emphasized that while AI has transformed the methodologies of cyberattacks, the ultimate target remains the same—human vulnerability.

AI has enabled attackers to streamline various stages of the cyberattack lifecycle. At the reconnaissance phase, malicious actors can now gather detailed victim profiles within minutes by conducting automated analyses of social media, leaked databases, and public records. This capability significantly reduces the time spent on research, allowing for more personalized and convincing phishing attempts.

Generative AI further amplifies this trend, as it allows for the crafting of messages that closely mimic the local language, tone, and writing style of the target demographic. This level of personalization makes attacks more credible and harder to recognize as fraudulent. Zwienenberg noted a shift in execution methods, where attackers have moved away from obvious malicious links to more subtle tactics, such as browser-based manipulation and AI-assisted business email compromise. These techniques guide victims through actions that seem legitimate, further blurring the lines between real and fake communications.

During the extraction phase, AI continues to play a significant role by automating credential harvesting and adapting attack strategies to better influence victim behavior. Zwienenberg highlighted that the critical danger lies in the lure and execution stages, which increasingly rely on familiarity, urgency, and perceived trust to manipulate victims' judgment.

One of the most pervasive misconceptions in organizations is the belief that modern scams are easily identifiable. Many firms still expect malicious messages to feature obvious signs—poor grammar or suspicious links. However, contemporary scams are crafted to integrate seamlessly into daily business communication, exploiting human judgment rather than technical vulnerabilities. “The real danger today is not a technical failure, but a human decision made under pressure or false trust,” Zwienenberg cautioned.

Despite significant investments in security and ongoing awareness campaigns, phishing and scam-driven attacks continue to dominate the global threat landscape. These attacks exploit psychological triggers such as trust and urgency, factors that technology alone cannot eliminate. Phishing remains particularly attractive for cybercriminals due to its low cost, scalability, and relatively minimal risks compared to technical exploitation.

As long as human decision-makers play a critical role in most business processes, phishing will likely remain one of the most effective attack techniques. The emergence of voice cloning technology adds another layer of concern, enabling cybercriminals to create convincing voice impersonations from short audio samples. This trend lowers the barrier for executing impersonation attacks, making it easier to mimic executives or colleagues.

To defend against these evolving threats, organizations must rethink their identity verification processes. Recognizing a familiar voice may no longer suffice as proof of identity. Zwienenberg advocated for a shift toward process-based trust, which includes independent verification channels, structured approval workflows, and rehearsed response playbooks for urgent requests. Looking ahead, he anticipates that multi-channel impersonation attacks—coordinated efforts involving email, voice calls, and messaging—will become more common, reinforcing deception while pressuring victims to act quickly.
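The process-based trust that Zwienenberg describes can be illustrated with a short sketch. The idea is that approval of a sensitive request depends on completing independent verification steps, not on recognizing a voice or an email style. The channel names and approval rule below are hypothetical, chosen for illustration rather than drawn from any ESET recommendation:

```python
# Illustrative sketch of process-based trust: an urgent request is approved
# only after confirmation over independent channels, regardless of how
# convincing the original message or voice seems. Channel names are
# hypothetical examples.
from dataclasses import dataclass, field

# Channels that must independently confirm the request (assumed policy).
REQUIRED_CHANNELS = {"callback_known_number", "ticketing_system"}

@dataclass
class Request:
    requester: str
    action: str
    confirmations: set = field(default_factory=set)

def confirm(req: Request, channel: str) -> None:
    """Record that one independent channel has verified the request."""
    req.confirmations.add(channel)

def approved(req: Request) -> bool:
    """Approval depends on the process being followed, not on familiarity."""
    return REQUIRED_CHANNELS <= req.confirmations

req = Request(requester="cfo@example.com", action="wire_transfer")
confirm(req, "callback_known_number")
print(approved(req))   # one channel alone is not enough
confirm(req, "ticketing_system")
print(approved(req))
```

The point of the sketch is that even a perfect voice clone cannot satisfy the workflow: deception of one channel does not produce approval.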

"Modern scams are also shifting away from traditional malicious files and attachments. In many cases, attackers manipulate victims directly within their web browser, guiding them through seemingly legitimate actions such as copying commands, verifying accounts, or resolving technical errors."

– Righard Zwienenberg, Senior Research Fellow at cybersecurity firm ESET

The 'ClickFix' scam technique exemplifies this trend, using fake error messages or CAPTCHA prompts to persuade users to take actions that install malware or reveal credentials. Such attacks exploit human behavior rather than identifiable malicious files, allowing them to bypass conventional email security and antivirus systems designed to detect harmful attachments.
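Because ClickFix lures instruct the victim to copy a "fix" command into a terminal or Run dialog, one defensive angle is to inspect what users are being told to paste. The heuristic below is a minimal sketch, assuming clipboard contents are available to a monitoring agent; the patterns are illustrative examples of known pipe-to-shell and encoded-command tricks, not a complete or official signature set:

```python
# Hedged sketch: flag clipboard text that resembles ClickFix-style "fix"
# commands. Patterns below are illustrative, not an exhaustive ruleset.
import re

SUSPICIOUS_PATTERNS = [
    r"powershell\s+-enc",            # encoded PowerShell payloads
    r"mshta\s+https?://",            # remote HTA execution
    r"curl\s+[^|]+\|\s*(sh|bash)",   # download piped straight into a shell
]

def looks_like_clickfix(copied_text: str) -> bool:
    """Return True if the copied text matches a known lure pattern."""
    text = copied_text.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

print(looks_like_clickfix("powershell -enc SQBFAFgA..."))  # True
print(looks_like_clickfix("please review the attached invoice"))  # False
```

Such a check catches the behavior (a user coaxed into running a command) rather than a file signature, which is exactly the gap ClickFix exploits.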

Another emerging threat is a polluted AI ecosystem, where AI systems operate amidst unreliable data inputs, including misinformation and synthetic content. This situation presents risks, as organizations may mistakenly treat AI outputs as trustworthy shortcuts. If the underlying data is flawed, AI systems could provide misleading advice, impacting critical decisions related to security operations or financial approvals.

To build resilience in this new era, Zwienenberg argues that organizations should not solely focus on detecting technical threats but also recognize that modern scams are fundamentally trust events. He identifies three key priorities: ensuring decision integrity through independent verification steps for sensitive actions, monitoring behavioral telemetry for unusual requests rather than just malware indicators, and conducting regular drills focused on social engineering scenarios.
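The second priority, monitoring behavioral telemetry for unusual requests rather than malware indicators, can be sketched as a simple baseline comparison. The baseline fields and thresholds here are assumptions for illustration, not a description of any ESET product:

```python
# Minimal sketch of behavioral telemetry: flag a request that deviates from
# a user's baseline (first-time payee, off-hours activity, outsized amount)
# instead of scanning for malware signatures. Fields and the 3x-median
# threshold are hypothetical.
def flag_request(request: dict, baseline: dict) -> list:
    reasons = []
    if request["payee"] not in baseline["known_payees"]:
        reasons.append("first-time payee")
    start, end = baseline["work_hours"]
    if not (start <= request["hour"] < end):
        reasons.append("outside normal hours")
    if request["amount"] > 3 * baseline["median_amount"]:
        reasons.append("unusually large amount")
    return reasons  # non-empty list -> escalate for independent verification

baseline = {
    "known_payees": {"acme-supplies"},
    "work_hours": (8, 18),
    "median_amount": 1200,
}
print(flag_request({"payee": "new-vendor", "hour": 22, "amount": 50000}, baseline))
```

A request that trips any of these checks would feed back into the first priority: it is routed through an independent verification step before money moves.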

In conclusion, Zwienenberg emphasizes that the true measure of cybersecurity success lies in an organization’s ability to contain and recover from incidents quickly, alongside fostering a culture where staff routinely report suspicious requests. As cybercriminals continue to exploit trust, organizations must adapt their defenses to not only verify identities but also ensure authenticity before trust turns into vulnerability.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

