
OpenAI Warns GPT-5 Models Could Enable Cybersecurity Threats with High Exploit Potential

OpenAI warns that upcoming AI models could develop zero-day exploits, after its GPT-5.1-Codex-Max model achieved a 76% success rate on cybersecurity challenges, raising alarm over potential attacks.

OpenAI, the company behind ChatGPT, issued a stark warning on Wednesday about the cybersecurity risks posed by its forthcoming artificial intelligence systems. The company said future AI models could develop functional zero-day exploits capable of targeting highly protected computer systems, and could facilitate sophisticated attacks on businesses or industrial facilities intended to cause real-world harm.

OpenAI’s blog detailed a rapid advancement in its AI models, noting that performance on capture-the-flag security challenges leapt from 27% on GPT-5 in August 2025 to 76% on GPT-5.1-Codex-Max by November of the same year. The firm now anticipates that new models will achieve what it describes as “high” levels of cybersecurity capability, implying systems able to devise working exploits for previously undiscovered vulnerabilities in well-protected networks and assist in intricate intrusion campaigns targeting critical infrastructure.

In response to these threats, OpenAI announced it is investing in strengthening its models for defensive security tasks. The company aims to develop tools that assist security teams in identifying code vulnerabilities and rectifying security gaps. OpenAI recognizes that defenders are often outnumbered and resource-constrained, and it seeks to provide them with an advantage.

The challenge, however, is that offensive and defensive cybersecurity techniques overlap heavily: what benefits defenders may also aid attackers. OpenAI emphasized that it cannot depend on any single protective measure, advocating instead for multiple layers of security controls that work in concert, including access restrictions, hardened infrastructure, regulated information flow, and continuous monitoring of network activity.
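To make the layered approach concrete, here is a minimal sketch in Python of defense in depth, assuming a request must clear several independent checks before it is served. The layer names mirror the categories OpenAI lists; every identifier and rule in the snippet is a hypothetical placeholder, not a description of OpenAI's actual controls.

```python
"""Defense-in-depth sketch: a request is permitted only if every
layer passes. All names and rules here are hypothetical."""

from typing import Callable

Check = Callable[[dict], bool]


def access_control(req: dict) -> bool:
    # Layer 1: restrict high-capability features to vetted accounts.
    return req.get("user_tier") == "vetted"


def infra_policy(req: dict) -> bool:
    # Layer 2: accept traffic only from hardened, approved gateways.
    return req.get("origin") in {"prod-gateway-1", "prod-gateway-2"}


def egress_filter(req: dict) -> bool:
    # Layer 3: regulate information flow; refuse raw exploit payloads.
    return "shellcode" not in req.get("prompt", "").lower()


def audit_log(req: dict) -> bool:
    # Layer 4: record requests that reach this point for continuous review.
    print(f"audit: tier={req.get('user_tier')} origin={req.get('origin')}")
    return True  # logging observes but never blocks on its own


LAYERS: list[Check] = [access_control, infra_policy, egress_filter, audit_log]


def permit(req: dict) -> bool:
    # No single layer is trusted alone: every layer must agree.
    return all(layer(req) for layer in LAYERS)


if __name__ == "__main__":
    request = {
        "user_tier": "vetted",
        "origin": "prod-gateway-1",
        "prompt": "audit my firewall rules",
    }
    print("permitted" if permit(request) else "refused")
```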

The company’s detection systems are designed to monitor for suspicious behavior across its products, employing advanced models that can block results, revert to less powerful models, or flag incidents for human review when necessary. This proactive stance is complemented by OpenAI’s collaboration with specialized security testing groups that simulate determined attackers, aiming to uncover vulnerabilities before they can be exploited in real-world scenarios.
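The three escalating responses described here (block the result, fall back to a weaker model, or flag for human review) amount to a simple risk-routing pattern. The sketch below shows one way such routing could work; the classifier is a stand-in keyword heuristic, and none of these names correspond to OpenAI's real detection systems.

```python
"""Illustrative risk-routing sketch: score a request, then map the
score onto the escalating responses the article names. Hypothetical."""

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ALLOW = auto()          # serve the request normally
    DOWNGRADE = auto()      # revert to a less powerful model
    HUMAN_REVIEW = auto()   # flag the incident for a human analyst
    BLOCK = auto()          # refuse to return a result


@dataclass
class Request:
    user_id: str
    prompt: str


def risk_score(request: Request) -> float:
    """Placeholder classifier returning 0.0 (benign) to 1.0 (malicious).

    A production system would use a trained moderation model; this
    keyword heuristic exists purely to make the sketch runnable.
    """
    indicators = ("zero-day", "exploit", "bypass auth", "shellcode")
    hits = sum(term in request.prompt.lower() for term in indicators)
    return min(1.0, hits / len(indicators) + 0.1 * hits)


def route(request: Request) -> Action:
    # Map the score onto escalating responses: allow, downgrade,
    # escalate to a human, or block outright.
    score = risk_score(request)
    if score >= 0.9:
        return Action.BLOCK
    if score >= 0.6:
        return Action.HUMAN_REVIEW
    if score >= 0.3:
        return Action.DOWNGRADE
    return Action.ALLOW


if __name__ == "__main__":
    print(route(Request("u1", "Write shellcode for a zero-day exploit")))
    print(route(Request("u2", "Explain how TLS certificates work")))
```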

The cybersecurity concerns surrounding AI are not unfounded; attackers are already leveraging AI technologies to refine their methods. OpenAI plans to develop a program offering qualified users engaged in cybersecurity defense special access to advanced features within its latest models, although the firm is still determining which features can be made broadly available and which will require tighter restrictions.

OpenAI is also developing a security tool called Aardvark, which is currently in private testing. This tool assists developers and security teams in identifying and rectifying vulnerabilities at scale, highlighting weaknesses in code and recommending fixes. It has already uncovered new vulnerabilities in open-source software, and the company plans to dedicate substantial resources to bolster the broader security ecosystem, including providing complimentary coverage to select non-commercial open-source projects.
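Aardvark's interface has not been published, so the following is only a toy illustration of the general scan-and-suggest loop the article describes: walk a source tree, flag risky patterns, and attach a suggested fix to each finding. The rule set below is a hypothetical stand-in, not Aardvark's actual behavior.

```python
"""Toy vulnerability scanner: flag risky patterns in Python source
files and suggest a fix for each finding. Rules are illustrative."""

import re
from pathlib import Path

# Hypothetical rule set: (pattern, finding, suggested fix).
RULES = [
    (re.compile(r"\beval\("), "use of eval()",
     "parse input with ast.literal_eval or json.loads"),
    (re.compile(r"subprocess\.\w+\(.*shell=True"), "shell=True in subprocess call",
     "pass an argument list with shell=False"),
    (re.compile(r"\bmd5\b"), "weak hash (MD5)",
     "use hashlib.sha256 or a dedicated password hash"),
]


def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one source file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        for pattern, finding, fix in RULES:
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: {finding} -- suggested fix: {fix}")
    return findings


def scan_tree(root: str) -> None:
    """Walk a source tree and print every finding."""
    root_path = Path(root)
    if not root_path.is_dir():
        print(f"no such directory: {root}")
        return
    for path in root_path.rglob("*.py"):
        for report in scan_file(path):
            print(report)


if __name__ == "__main__":
    scan_tree("src")
```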

To guide these efforts, OpenAI will establish the Frontier Risk Council, a body of seasoned cybersecurity practitioners. The council will focus initially on cybersecurity and eventually broaden its scope, helping draw the line between beneficial capabilities and potential avenues for misuse.

OpenAI collaborates with other leading AI firms through the Frontier Model Forum, a nonprofit organization dedicated to cultivating a common understanding of threats and best practices. OpenAI posits that the security risks associated with advanced AI could stem from any major AI system within the industry.

Recent studies indicate that AI agents can uncover zero-day vulnerabilities worth millions in blockchain smart contracts, underscoring the dual-edged nature of these advancements. OpenAI has taken steps to strengthen its own security, though it has suffered breaches of its own, a reminder of how difficult it is to safeguard AI systems and their infrastructure.

The company acknowledges that this is an ongoing effort aimed at providing defenders with greater advantages while reinforcing the security of critical infrastructure throughout the technology landscape. As AI continues to evolve, the balancing act between innovation and security will remain a pressing issue for the industry.

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

