
OpenAI Warns Upcoming AI Models Could Enable Zero-Day Exploits and High-Level Cyberattacks

OpenAI warns that upcoming AI models could develop working zero-day exploits, after GPT-5.1-Codex-Max jumped to a 76% success rate on cybersecurity capture-the-flag challenges, raising alarm over potential attacks on hardened systems.

OpenAI, the company behind ChatGPT, issued a stark warning on Wednesday about the cybersecurity dangers of its forthcoming artificial intelligence systems. The organization said future AI models could develop functional zero-day exploits against highly protected computer systems and facilitate sophisticated attacks on businesses or industrial facilities intended to cause tangible harm.

OpenAI’s blog detailed a rapid advancement in its AI models, noting that performance on capture-the-flag security challenges leapt from 27% on GPT-5 in August 2025 to 76% on GPT-5.1-Codex-Max by November of the same year. The firm now anticipates that new models will achieve what it describes as “high” levels of cybersecurity capability, implying systems able to devise working exploits for previously undiscovered vulnerabilities in well-protected networks and assist in intricate intrusion campaigns targeting critical infrastructure.

In response to these threats, OpenAI announced it is investing in strengthening its models for defensive security tasks. The company aims to develop tools that assist security teams in identifying code vulnerabilities and rectifying security gaps. OpenAI recognizes that defenders are often outnumbered and resource-constrained, and it seeks to provide them with an advantage.

However, the challenge lies in the overlap between offensive and defensive cybersecurity techniques: what benefits defenders may also aid attackers. OpenAI emphasized that it cannot depend on any single protective measure, advocating instead for multiple layers of security controls that reinforce one another, including access restrictions, hardened infrastructure, regulated information flow, and continuous monitoring of network activity.
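To make the layered idea concrete, here is a minimal sketch of defense in depth, where a request must pass every layer before it is served. All of the names, checks, and policies below are hypothetical illustrations; OpenAI has not published the specifics of its controls.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Request:
    user: str
    action: str

# Each layer returns True to let the request continue.
def access_control(req: Request) -> bool:
    # Hypothetical allow-list; a real system would check roles or scopes.
    return req.user in {"analyst", "defender"}

def information_flow_policy(req: Request) -> bool:
    # Regulated information flow: refuse actions that would leak
    # sensitive offensive details.
    return req.action != "export_exploit_details"

def monitoring_hook(req: Request) -> bool:
    # Continuous monitoring: record everything for later review;
    # this layer observes rather than blocks.
    print(f"audit: user={req.user} action={req.action}")
    return True

LAYERS: List[Callable[[Request], bool]] = [
    access_control,
    information_flow_policy,
    monitoring_hook,
]

def handle(req: Request) -> str:
    # Defense in depth: every layer must pass, so no single
    # control is trusted to catch everything on its own.
    for layer in LAYERS:
        if not layer(req):
            return "denied"
    return "allowed"

print(handle(Request("analyst", "scan_code")))               # allowed
print(handle(Request("guest", "scan_code")))                 # denied
print(handle(Request("analyst", "export_exploit_details")))  # denied
```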

The company’s detection systems are designed to monitor for suspicious behavior across its products, employing advanced models that can block results, revert to less powerful models, or flag incidents for human review when necessary. This proactive stance is complemented by OpenAI’s collaboration with specialized security testing groups that simulate determined attackers, aiming to uncover vulnerabilities before they can be exploited in real-world scenarios.
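The description amounts to a triage policy with three possible responses: block the output, fall back to a less capable model, or escalate to a human. A hedged sketch of that logic might look like the following; the risk scores and thresholds are invented for illustration, not taken from OpenAI.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    FALLBACK = "route_to_less_capable_model"
    REVIEW = "flag_for_human_review"

def triage(risk_score: float) -> Action:
    """Map a misuse classifier's risk score to a response.

    The scoring model and cutoffs are hypothetical; the article only
    says the systems can block, fall back, or escalate to review.
    """
    if risk_score >= 0.9:
        return Action.BLOCK
    if risk_score >= 0.6:
        return Action.REVIEW
    if risk_score >= 0.3:
        return Action.FALLBACK
    return Action.ALLOW

for score in (0.1, 0.4, 0.7, 0.95):
    print(f"score={score}: {triage(score).value}")
```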

The cybersecurity concerns surrounding AI are not unfounded; hackers are increasingly leveraging AI technologies to refine their methods. OpenAI plans to develop a program that grants qualified users engaged in cybersecurity defense special access to advanced features in its latest models, although the firm is still determining which features can be made broadly accessible and which will require tighter restrictions.

OpenAI is also developing a security tool called Aardvark, which is currently in private testing. This tool assists developers and security teams in identifying and rectifying vulnerabilities at scale, highlighting weaknesses in code and recommending fixes. It has already uncovered new vulnerabilities in open-source software, and the company plans to dedicate substantial resources to bolster the broader security ecosystem, including providing complimentary coverage to select non-commercial open-source projects.
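Aardvark's actual interface is private, so no usage can be shown here, but the class of weakness such a scanner hunts for is familiar. The sketch below shows a textbook example: a SQL query built by string interpolation, which a code scanner would flag as an injection risk, alongside the parameterized fix it would typically recommend.

```python
import sqlite3

# Vulnerable pattern: user input interpolated directly into SQL.
# A vulnerability scanner would flag this as an injection risk.
def find_user_unsafe(conn: sqlite3.Connection, name: str):
    query = f"SELECT id FROM users WHERE name = '{name}'"  # flagged
    return conn.execute(query).fetchall()

# Recommended fix: a parameterized query, so the driver handles
# escaping and the input can never change the query's structure.
def find_user_safe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(find_user_safe(conn, "alice"))  # [(1,)]
```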

To help guide these efforts, OpenAI will establish the Frontier Risk Council, composed of seasoned cybersecurity practitioners. Initially focused on cybersecurity, the council will eventually broaden its scope, and its members will help draw the line between beneficial capabilities and potential avenues for misuse.

OpenAI collaborates with other leading AI firms through the Frontier Model Forum, a nonprofit organization dedicated to cultivating a common understanding of threats and best practices. OpenAI posits that the security risks associated with advanced AI could stem from any major AI system within the industry.

Recent studies indicate that AI agents can uncover zero-day vulnerabilities worth millions in blockchain smart contracts, underscoring the dual-edged nature of these advancements. OpenAI has made strides in strengthening its own security measures, though it has faced its share of breaches, a reminder of how difficult it is to safeguard AI systems and their infrastructure.

The company acknowledges that this is an ongoing effort aimed at providing defenders with greater advantages while reinforcing the security of critical infrastructure throughout the technology landscape. As AI continues to evolve, the balancing act between innovation and security will remain a pressing issue for the industry.


