Kaspersky experts forecast significant transformations in the cybersecurity landscape by 2026, driven by the rapid advancement of artificial intelligence (AI). Both individual users and businesses face new challenges as large language models (LLMs) strengthen defensive capabilities while simultaneously creating opportunities for cybercriminals.
The rise of deepfake technology has reached a tipping point, and companies are increasingly aware of the risks posed by synthetic content. Organizations are proactively training employees to recognize and mitigate deepfake threats, which have diversified in both format and accessibility. As consumers encounter fake content more frequently, their understanding of these threats improves, elevating deepfakes to a critical item on the security agenda, one that demands systematic internal policies and training.
The quality of deepfakes is expected to keep improving, particularly in audio realism, as the barrier to entry for content creation continues to fall. While the visual fidelity of deepfakes is already advanced, audio remains the key area for improvement. With user-friendly tools available, even people without technical expertise can produce mid-quality deepfakes in a few clicks. This democratization of the technology is likely to make sophisticated deepfake content more easily accessible, turning it into a valuable tool for cybercriminals.
Despite their ongoing evolution, real-time deepfakes remain primarily tools for advanced users. Improvements in live face- and voice-swapping technologies are notable, yet setting them up still demands technical skill. Broad adoption of these tools may therefore be limited; the risk from targeted attacks, however, is escalating as such manipulations grow more realistic and convincing.
Efforts to establish reliable systems for labeling AI-generated content are ongoing, but no unified criteria yet exist for identifying synthetic materials. Current labeling methods are often easy to circumvent or remove, particularly where open-source models are involved, so new technical and regulatory initiatives are likely to emerge to address these challenges.
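To make the fragility of current labeling concrete, here is a minimal sketch; it is an illustration only, not any specific watermarking or provenance standard, and the tag name, file names, and use of the Pillow library are assumptions made for the example. It shows how a label stored as plain image metadata disappears after an ordinary re-encode:

```python
# Minimal sketch: a provenance label kept only in image metadata does not survive re-encoding.
# Assumptions: Pillow is installed; the tag name "ai_provenance" is purely illustrative.
from PIL import Image, PngImagePlugin

# Create a toy "AI-generated" image and attach a provenance label as PNG text metadata.
img = Image.new("RGB", (64, 64), color="gray")
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_provenance", "synthetic; generator=example-model")
img.save("labeled.png", pnginfo=meta)

# The label is readable as long as the file is left untouched...
print(Image.open("labeled.png").text)   # {'ai_provenance': 'synthetic; generator=example-model'}

# ...but a routine conversion to JPEG silently discards it.
Image.open("labeled.png").convert("RGB").save("stripped.jpg", "JPEG")
print(getattr(Image.open("stripped.jpg"), "text", {}))  # {} - the provenance label is gone
```

More robust approaches embed watermarks in the content itself or attach signed provenance records, which is one direction such technical and regulatory initiatives can take.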
The capabilities of open-weight models are advancing rapidly, approaching the performance of top closed models on a range of cybersecurity tasks. This shift widens the opportunities for misuse, since closed models typically include stricter controls and safeguards against abuse. The growing functionality of open-weight systems, combined with their fewer restrictions, is blurring the line between proprietary and open models, both of which can be exploited for malicious purposes.
The line between legitimate and fraudulent AI-generated content is blurring, and the two are becoming increasingly difficult to tell apart. AI is adept at crafting convincing scam emails, creating realistic visual identities, and generating high-quality phishing pages. At the same time, prominent brands are adopting synthetic materials for advertising, making AI-generated content feel familiar and acceptable. This convergence complicates the task of distinguishing real from fake, for users and automated detection systems alike.
AI is anticipated to serve as a tool applied across the entire attack chain, influencing various stages of the attack lifecycle. Cybercriminals are already using LLMs to write code, build infrastructure, and automate operational tasks. As the technology advances, AI's role in supporting diverse attack phases, from preparation and communication to vulnerability probing and tool deployment, will likely expand. Attackers may also take steps to obscure signs of AI involvement, complicating analysis and response efforts.
“While AI tools are being used in cyberattacks, they are also becoming increasingly prevalent in security analysis, influencing how Security Operations Center (SOC) teams operate,” said Vladislav Tushkanov, Research Development Group Manager at Kaspersky. He noted that agent-based systems are capable of continuously scanning infrastructure, identifying vulnerabilities, and gathering contextual information for investigations, thereby reducing the burden of manual tasks. This shift allows specialists to focus on decision-making based on curated data rather than sifting through raw information. Concurrently, security tools are evolving to feature natural-language interfaces, enabling users to issue prompts instead of relying on complex technical queries.
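As an illustration of what such a natural-language interface over a security tool can look like, here is a minimal sketch. It is not any vendor's actual product: the use of the OpenAI Python SDK, the model name, the prompt wording, and the SQL-flavored log schema are all assumptions made for the example.

```python
# Minimal sketch of a natural-language front end for a log-search tool.
# Assumptions: the OpenAI Python SDK as the LLM backend, the model name, and the
# auth_logs schema are illustrative choices, not part of any real security product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Translate the analyst's request into a single SQL query against the table "
    "auth_logs(timestamp, username, source_ip, result). Return only the SQL."
)

def prompt_to_query(request: str) -> str:
    """Turn a plain-English analyst request into a structured log query."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    # Might yield something like:
    # SELECT * FROM auth_logs WHERE result = 'failure' AND timestamp >= NOW() - INTERVAL '1 hour';
    print(prompt_to_query("Show failed logins from the last hour"))
```

In a production SOC workflow, the generated query would typically be validated and shown to the analyst before execution rather than run directly against the data, keeping the human in the decision-making role Tushkanov describes.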
As AI continues to reshape the cybersecurity landscape, both organizations and individuals must adapt to an environment where the stakes are higher and the challenges more complex. The interplay between technological advancements and security measures will determine the effectiveness of responses to emerging threats in the years ahead.