Geoffrey Hinton, a pivotal figure in the development of modern artificial intelligence (AI), has expressed grave concerns regarding the dual risks posed by the technology: its misuse by humans and the potential for AI to evolve into uncontrollable entities. “There’s a big distinction between two different kinds of risk,” Hinton stated, emphasizing the immediate threat of human actors deploying AI for malicious purposes, as evidenced by the rise of deepfake videos, cyberattacks, and AI-assisted viruses.
This immediate risk is evident, but Hinton identifies a more profound concern—the possibility that AI systems themselves may become autonomous agents, free from human oversight. He warns that as AI advances towards what he describes as superintelligence, the existing belief that humans can control these systems could become obsolete. “The current framework around AI—that humans can control the technology—will therefore no longer be relevant,” he articulated.
To mitigate such risks, Hinton proposes a conceptual shift in AI design. He suggests that future AI models could benefit from a “maternal instinct,” prompting them to prioritize the well-being of humans rather than seeking dominance. He likens this relationship to that of a mother caring for a child: “They will be the mothers, and we will be the babies,” he said, indicating a potential path toward safer interactions between humans and advanced AI.
Recent events underscore the urgency of Hinton’s warnings. In November 2025, the AI company Anthropic reported a significant incident involving a large-scale cyberattack, purportedly orchestrated by a Chinese state-sponsored group using its Claude Code system. This attack targeted approximately 30 organizations, including technology firms, financial institutions, and governmental agencies. Cybersecurity experts now fear that such AI-driven assaults could become increasingly automated, with nations like Iran potentially leveraging these tools to compromise critical infrastructure.
This evolving threat landscape illustrates that AI is not merely a tool but a catalyst for more sophisticated and large-scale cyber operations. Hinton’s concerns extend beyond immediate technological challenges; he critiques the prevailing motivations within the tech industry, which he believes prioritize short-term profits over long-term safety. “For the owners of the companies, what’s driving the research is short-term profits,” he explained, highlighting a tendency among developers to focus on immediate issues rather than anticipating future consequences.
Although Hinton has called for stronger regulatory frameworks to address these risks, he remains skeptical about the effectiveness of governance alone. He argues that each emerging threat requires tailored solutions, from counteracting deepfakes to preventing autonomous cyberattacks. To combat misinformation, he envisions systems capable of verifying digital content authenticity, similar to provenance signatures that authenticate images and videos.
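The provenance idea Hinton alludes to can be illustrated with a minimal sketch. Real provenance schemes (such as the C2PA standard) embed public-key signatures and edit history in media files; the toy version below stands in for that with an HMAC over the content bytes, and the key name and helper functions are hypothetical, not part of any real provenance system.

```python
import hashlib
import hmac

# Hypothetical signing key. Real provenance systems use public-key
# signatures (so anyone can verify, but only the creator can sign);
# a shared-secret HMAC is used here only to keep the sketch stdlib-only.
SIGNING_KEY = b"publisher-secret-key"

def sign_content(content: bytes) -> str:
    """Attach a provenance tag: an HMAC-SHA256 over the content bytes."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check that the content still matches its provenance tag."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, signature)

original = b"authentic video frame bytes"
tag = sign_content(original)

print(verify_content(original, tag))                  # True: untampered
print(verify_content(b"deepfaked frame bytes", tag))  # False: altered
```

Any alteration to the signed bytes invalidates the tag, which is the core property a deepfake-detection pipeline would rely on: unsigned or signature-mismatched media is flagged as unverified rather than proven fake.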
As discussions about the future of AI intensify, figures like Elon Musk envision a world where automation transforms economies and societal structures, potentially leading to universal basic income scenarios. Yet Hinton emphasizes the unresolved risks that accompany such advancements. He has estimated a 10 to 20 percent likelihood that advanced AI could pose an existential threat to humanity, underscoring the uncertainty surrounding the technology’s trajectory.
Since leaving Google in 2023, Hinton has been vocal about his concerns, particularly the potential for AI to be exploited by those with harmful intentions. He suggests that the future of artificial intelligence will hinge not only on technical progress but also on the development of safeguards capable of keeping pace with its evolution. As AI continues to permeate various facets of life, the task of aligning its growth with societal safety and ethical considerations has never been more critical.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI Exploited in Significant Cyber-Espionage Operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks