
AI Cybersecurity

Geoffrey Hinton Warns of AI Misuse and Existential Risks Amid Rapid Advancements

Geoffrey Hinton warns that advanced AI carries a 10–20 percent chance of posing an existential threat and calls for regulatory frameworks, as AI-driven cyberattacks such as the recent incident reported by Anthropic escalate.

Geoffrey Hinton, a pivotal figure in the development of modern artificial intelligence (AI), has expressed grave concerns regarding the dual risks posed by the technology: its misuse by humans and the potential for AI to evolve into uncontrollable entities. “There’s a big distinction between two different kinds of risk,” Hinton stated, emphasizing the immediate threat of human actors deploying AI for malicious purposes, as evidenced by the rise of deepfake videos, cyberattacks, and AI-assisted viruses.

This immediate risk is evident, but Hinton identifies a more profound concern—the possibility that AI systems themselves may become autonomous agents, free from human oversight. He warns that as AI advances towards what he describes as superintelligence, the existing belief that humans can control these systems could become obsolete. “The current framework around AI—that humans can control the technology—will therefore no longer be relevant,” he articulated.

To mitigate such risks, Hinton proposes a conceptual shift in AI design. He suggests that future AI models could benefit from a “maternal instinct,” prompting them to prioritize the well-being of humans rather than seeking dominance. He likens this relationship to that of a mother caring for a child: “They will be the mothers, and we will be the babies,” he said, indicating a potential path toward safer interactions between humans and advanced AI.

Recent events underscore the urgency of Hinton’s warnings. In November 2025, the AI company Anthropic reported a significant incident: a large-scale cyberattack, purportedly orchestrated by a Chinese state-sponsored group, that abused Anthropic’s own Claude Code system. The attack targeted approximately 30 organizations, including technology firms, financial institutions, and governmental agencies. Cybersecurity experts now fear that such AI-driven assaults could become increasingly automated, with nations like Iran potentially leveraging these tools to compromise critical infrastructure.

This evolving threat landscape illustrates that AI is not merely a tool but a catalyst for more sophisticated and large-scale cyber operations. Hinton’s concerns extend beyond immediate technological challenges; he critiques the prevailing motivations within the tech industry, which he believes prioritize short-term profits over long-term safety. “For the owners of the companies, what’s driving the research is short-term profits,” he explained, highlighting a tendency among developers to focus on immediate issues rather than anticipating future consequences.

Although Hinton has called for stronger regulatory frameworks to address these risks, he remains skeptical about the effectiveness of governance alone. He argues that each emerging threat requires tailored solutions, from counteracting deepfakes to preventing autonomous cyberattacks. To combat misinformation, he envisions systems capable of verifying digital content authenticity, similar to provenance signatures that authenticate images and videos.
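The provenance-signature idea Hinton alludes to can be illustrated with a minimal sketch: a publisher computes a keyed digest over a piece of content, and anyone holding the verification key can later check that the content has not been altered. This is only an illustration, not any specific provenance standard — real systems such as C2PA use public-key signatures and embedded metadata, and the key and content below are hypothetical.

```python
import hashlib
import hmac

def sign_content(content: bytes, key: bytes) -> str:
    # Keyed SHA-256 digest over the raw content bytes; a production
    # provenance system would use an asymmetric signature instead.
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, signature: str) -> bool:
    # Recompute the digest and compare in constant time to avoid
    # leaking information through timing differences.
    expected = sign_content(content, key)
    return hmac.compare_digest(expected, signature)

key = b"publisher-secret-key"          # hypothetical signing key
original = b"frame bytes of a video"   # hypothetical content

sig = sign_content(original, key)
print(verify_content(original, key, sig))        # untampered content verifies
print(verify_content(b"altered frame", key, sig))  # any modification fails
```

Any single-byte change to the content produces a completely different digest, so tampering is detectable as long as the verification key is trusted — which is why real provenance schemes bind the key to the publisher via public-key infrastructure rather than a shared secret.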

As discussions about the future of AI intensify, figures like Elon Musk envision a world where automation transforms economies and societal structures, potentially leading to universal basic income scenarios. Yet Hinton emphasizes the unresolved risks that accompany such advancements. He has estimated a 10 to 20 percent likelihood that advanced AI could pose an existential threat to humanity, underscoring the uncertainty surrounding the technology’s trajectory.

Since leaving his position at Google in 2023, Hinton has been vocal about his concerns, particularly the potential for AI to be exploited by those with harmful intentions. He suggests that the future of artificial intelligence will hinge not only on technical progress but also on the development of safeguards capable of keeping pace with its evolution. As AI continues to permeate various facets of life, the task of aligning its growth with societal safety and ethical considerations has never been more critical.

Written By
Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.