AI Cybersecurity

Geoffrey Hinton Warns of AI Misuse and Existential Risks Amid Rapid Advancements

Geoffrey Hinton estimates a 10 to 20 percent chance that advanced AI poses an existential threat, urging regulatory frameworks amid rising AI-driven cyberattacks such as the large-scale incident recently reported by Anthropic.

Geoffrey Hinton, a pivotal figure in the development of modern artificial intelligence (AI), has expressed grave concerns regarding the dual risks posed by the technology: its misuse by humans and the potential for AI to evolve into uncontrollable entities. “There’s a big distinction between two different kinds of risk,” Hinton stated, emphasizing the immediate threat of human actors deploying AI for malicious purposes, as evidenced by the rise of deepfake videos, cyberattacks, and AI-assisted viruses.

This immediate risk is evident, but Hinton identifies a more profound concern—the possibility that AI systems themselves may become autonomous agents, free from human oversight. He warns that as AI advances towards what he describes as superintelligence, the existing belief that humans can control these systems could become obsolete. “The current framework around AI—that humans can control the technology—will therefore no longer be relevant,” he articulated.

To mitigate such risks, Hinton proposes a conceptual shift in AI design. He suggests that future AI models could benefit from a “maternal instinct,” prompting them to prioritize the well-being of humans rather than seeking dominance. He likens this relationship to that of a mother caring for a child: “They will be the mothers, and we will be the babies,” he said, indicating a potential path toward safer interactions between humans and advanced AI.

Recent events underscore the urgency of Hinton’s warnings. In November 2025, the AI company Anthropic reported a significant incident involving a large-scale cyberattack, purportedly orchestrated by a Chinese state-sponsored group using its Claude Code system. This attack targeted approximately 30 organizations, including technology firms, financial institutions, and governmental agencies. Cybersecurity experts now fear that such AI-driven assaults could become increasingly automated, with nations like Iran potentially leveraging these tools to compromise critical infrastructure.

This evolving threat landscape illustrates that AI is not merely a tool but a catalyst for more sophisticated and large-scale cyber operations. Hinton’s concerns extend beyond immediate technological challenges; he critiques the prevailing motivations within the tech industry, which he believes prioritize short-term profits over long-term safety. “For the owners of the companies, what’s driving the research is short-term profits,” he explained, highlighting a tendency among developers to focus on immediate issues rather than anticipating future consequences.

Although Hinton has called for stronger regulatory frameworks to address these risks, he remains skeptical about the effectiveness of governance alone. He argues that each emerging threat requires tailored solutions, from counteracting deepfakes to preventing autonomous cyberattacks. To combat misinformation, he envisions systems capable of verifying digital content authenticity, similar to provenance signatures that authenticate images and videos.
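The provenance idea Hinton alludes to can be sketched in miniature. Production systems such as C2PA bind media to cryptographically signed manifests using X.509 certificate chains; the toy stand-in below (all names and the plain hash-comparison scheme are illustrative assumptions, not any real system's API) only shows the basic shape: record a digest of the content at creation time, then recompute and compare it later.

```python
import hashlib

# Toy sketch of content-provenance checking. Real systems (e.g. C2PA) use
# asymmetric signatures and certificate chains rather than a bare hash;
# this stand-in only illustrates binding media bytes to a creation record.

def fingerprint(media_bytes: bytes) -> str:
    """Hash the raw media bytes; any pixel-level edit changes the digest."""
    return hashlib.sha256(media_bytes).hexdigest()

def make_manifest(media_bytes: bytes, creator: str) -> dict:
    """At creation time, record who produced the content and its digest."""
    return {"creator": creator, "digest": fingerprint(media_bytes)}

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """Later, a verifier recomputes the digest and compares it."""
    return fingerprint(media_bytes) == manifest["digest"]

original = b"\x89PNG...raw image bytes..."  # placeholder media content
manifest = make_manifest(original, creator="Newsroom Camera #7")

assert verify(original, manifest)             # untouched content passes
assert not verify(original + b"x", manifest)  # any tampering fails
```

The limitation, and the reason real provenance schemes sign the manifest itself, is that a forger who can rewrite the manifest can trivially match it to altered content; the digest alone only proves integrity relative to a trusted record.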

As discussions about the future of AI intensify, figures like Elon Musk envision a world where automation transforms economies and societal structures, potentially leading to universal basic income scenarios. Yet Hinton emphasizes the unresolved risks that accompany such advancements. He has estimated a 10 to 20 percent likelihood that advanced AI could pose an existential threat to humanity, underscoring the uncertainty surrounding the technology’s trajectory.

Since leaving his position at Google in 2023, Hinton has been vocal about his concerns, particularly the potential for AI to be exploited by those with harmful intentions. He suggests that the future of artificial intelligence will hinge not only on technical progress but also on the development of safeguards capable of keeping pace with its evolution. As AI continues to permeate various facets of life, the task of aligning its growth with societal safety and ethical considerations has never been more critical.

Written By Rachel Torres


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.