OpenAI has unveiled its new AI model, GPT-5.2-Codex, designed to function as an autonomous software agent capable of managing complex tasks. The company is launching an exclusive access program for verified experts, allowing them to use a version of the model with relaxed safety filters, which is particularly useful for identifying software vulnerabilities.
The development of GPT-5.2-Codex builds on advanced context compression techniques, enabling the model to efficiently handle extensive conversation histories and complex code analyses. This enhancement allows the system to maintain clarity and coherence even during intricate project evaluations, extending the capabilities of its predecessor, GPT-5.1-Codex-Max, which was already adept at tasks lasting longer than a day.
Moreover, OpenAI has improved the model's image processing capabilities, enabling GPT-5.2-Codex to interpret technical diagrams and screenshots of user interfaces more accurately. The company claims that this version is also more reliable at controlling native Windows environments than earlier iterations.
Despite these advancements, benchmarks indicate only modest improvements. In the standardized SWE-Bench Pro test, which simulates real-world problem-solving in GitHub repositories, GPT-5.2-Codex achieved a solution rate of 56.4 percent, a slight increase over the 55.6 percent recorded by the standard GPT-5.2. In another evaluation, Terminal-Bench 2.0, the model fared somewhat better, reaching an accuracy of 64 percent compared to 62.2 percent for GPT-5.2 and 58.1 percent for its predecessor.
Cybersecurity remains a central focus of the GPT-5.2-Codex release. The model's enhanced code analysis capabilities can serve dual purposes, supporting both defensive measures and potential attacks. OpenAI highlighted a recent case in which security researcher Andrew MacPherson used an earlier version of the model to uncover vulnerabilities in the React framework. The investigation revealed unexpected behaviors that led to the identification of three previously unknown vulnerabilities capable of compromising services and exposing source code. OpenAI asserts that such discoveries illustrate how autonomous AI systems can accelerate the work of security researchers.
However, these capabilities also pose risks. OpenAI has classified the model as nearing a “high” rating within its Preparedness Framework for cybersecurity. In response to these concerns, the company is initiating a trusted access program targeted at certified security experts and organizations. This program will provide participants with access to less restrictive models, enabling them to probe for security vulnerabilities without being hindered by the standard protective filters of the AI.
GPT-5.2-Codex is currently available to paying ChatGPT users. Integration is offered through command-line interfaces, development environments, and cloud services, with an interface for third-party providers expected to launch soon. As the AI landscape continues to evolve, the implications of such advancements for cybersecurity and software development practices remain profound.
See also
Google Launches Gemini 3 Flash, Delivering 3x Speed Boost Over Gemini 2.5
Large Language Models Show 90% Vulnerability to Prompt Injection in Medical Advice Tests
OpenAI Launches GPT Image 1.5: 4x Faster Creation & 20% Cost Reduction for Users
Receptor.AI Integrates LLMs to Accelerate Protein Binding Pocket Identification
Skyra Launches Groundbreaking ViF-CoT-4K Dataset for Enhanced AI Video Detection and Explainability