
OpenAI’s GPT-5 Conducts 36,000 Experiments, Highlighting AI’s Risks in Biology

OpenAI’s GPT-5 autonomously conducts 36,000 biological experiments, cutting protein production costs by 40% while raising biosecurity concerns.

Artificial intelligence (AI) is rapidly advancing the capabilities of biological research, enabling systems to autonomously design and execute experiments. A recent collaboration between AI company OpenAI and biotech firm Ginkgo Bioworks exemplifies this shift: the two announced in February 2026 that OpenAI's flagship model, GPT-5, had autonomously designed and executed 36,000 biological experiments through a robotic cloud laboratory. The effort cut the cost of producing specific proteins by 40%, illustrating how programmable biology can streamline the design and testing of biological components. In this framework, the AI closes the loop: it generates experimental designs, robots execute them, and the results inform the next round of designs, leaving human researchers to set goals while machines handle the bulk of the experimentation.
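The closed-loop pattern described above, in which a human sets the objective, the model proposes designs, a robotic lab runs them, and the results steer the next round, can be sketched as a simple search loop. Everything below is an illustrative assumption: the function names, the one-dimensional "design" space, and the toy objective stand in for OpenAI's and Ginkgo's actual systems, which are not public.

```python
def propose_designs(history, batch_size, width):
    # Stand-in for the AI model: propose a batch of candidate designs.
    # First round: an even grid over the search space; later rounds:
    # a finer grid centered on the best-scoring design seen so far.
    if not history:
        center, span = 5.0, 5.0
    else:
        center = max(history, key=lambda r: r["score"])["design"]
        span = width
    step = 2 * span / (batch_size - 1)
    return [center - span + i * step for i in range(batch_size)]

def run_experiments(designs, target=6.8):
    # Stand-in for the robotic cloud lab: each "experiment" returns a
    # measured score, here a toy objective peaked at an unknown target.
    return [{"design": d, "score": -abs(d - target)} for d in designs]

def closed_loop(rounds=8, batch_size=21):
    # The human sets the goal (the objective being maximized); the loop
    # alternates AI-proposed designs with lab-produced measurements.
    history, width = [], 5.0
    for _ in range(rounds):
        designs = propose_designs(history, batch_size, width)
        history.extend(run_experiments(designs))
        width /= 2  # zoom in around the current best each round
    return max(history, key=lambda r: r["score"])

best = closed_loop()
```

Each round narrows the search around the best measurement so far, which is why such loops can explore a large design space with comparatively few physical experiments.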

Traditionally, biology has progressed from observation toward deeper understanding, marked by milestones such as genome sequencing and the development of gene-editing tools like CRISPR. AI now enables a third phase, turning biology into a more engineering-like discipline characterized by rapid iteration and parallel exploration of design variations. Where a traditional experiment tests a single hypothesis, AI-driven approaches explore thousands of designs simultaneously, refining them the way an engineer refines a prototype.

A prominent application of AI in this realm is AI-accelerated protein design. Proteins, essential molecular machines in living cells, have historically required extensive trial and error for design, as even minor modifications can lead to unpredictable results. AI systems trained on millions of natural protein sequences can now predict how changes will affect a protein’s function, expediting drug design and vaccine development. Coupled with automated laboratories, these AI models reduce the time needed for testing variations from months or years to just days.
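The variant-screening idea behind AI-accelerated protein design can be illustrated with a toy sketch: enumerate every single-point mutant of a short sequence and rank them with a scoring function. The scorer here is a deliberately crude stand-in (similarity to a made-up "optimal" sequence); a real system would replace `predict_fitness` with a model trained on millions of natural protein sequences that estimates properties such as stability or binding.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def predict_fitness(seq: str) -> float:
    # Stand-in for a learned model: fraction of positions matching a
    # hypothetical optimal sequence. Purely illustrative.
    target = "MKVLAA"
    return sum(a == b for a, b in zip(seq, target)) / len(target)

def single_mutants(seq: str):
    # Yield every sequence differing from `seq` at exactly one position.
    for i in range(len(seq)):
        for aa in AMINO_ACIDS:
            if aa != seq[i]:
                yield seq[:i] + aa + seq[i + 1:]

def top_variants(seq: str, k: int = 5):
    # Rank all single-point mutants by predicted fitness, best first.
    ranked = sorted(single_mutants(seq), key=predict_fitness, reverse=True)
    return ranked[:k]

best = top_variants("MKVLAG")
```

The point of the sketch is the shape of the workflow: in-silico enumeration and ranking replace months of wet-lab trial and error, and only the top-ranked candidates go on to physical testing.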

Despite these advancements, the same tools that enhance biological research raise concerns about misuse, commonly referred to as the dual-use problem. Researchers have warned that AI models integrated with automated labs could let users without specialized training optimize a virus's ability to spread. A risk-scoring tool has been developed to assess how much AI could enhance a virus's transmissibility or help it evade immune responses. Current AI models can already guide users through the technical steps of recovering live viruses from synthetic DNA, raising the risk of bioweapon development, a gap that existing oversight measures do not adequately address.

Studies have yielded mixed results on whether AI assistance enables inexperienced individuals to carry out advanced biological work. Research from Scale AI and the biosecurity nonprofit SecureBio found that AI-assisted novices completed biosecurity-relevant tasks with four times greater accuracy, often outpacing trained experts. Conversely, a separate study from Active Site found that while AI assistance improved novices' success rates on certain tasks, it did not translate into meaningful gains on the complex workflows required to produce viruses.

As AI systems become capable of running experiments autonomously, the regulatory landscape remains ill-equipped to keep pace. Existing rules governing biological research do not account for AI-driven automation, while AI regulations lack specificity about biological applications. The Biden administration's 2023 executive order on AI security included biosecurity provisions, but its revocation by the Trump administration has left gaps in oversight. A bipartisan bill proposed in 2026 would mandate DNA screening but does not address AI-designed sequences that could evade current detection methods.

The situation is compounded by the inadequacies of international treaties such as the Biological Weapons Convention, which lacks provisions for AI technologies. Both the U.K. AI Security Institute and the U.S. National Security Commission on Emerging Biotechnology have called for coordinated government actions to address these emerging risks. Current safety evaluations conducted by AI labs are often opaque and fail to capture the real-world risks posed by these technologies. Researchers estimate that even modest advancements in AI’s ability to assist in pathogen-related experiments could lead to thousands of additional fatalities from bioterrorism annually.

Proposals for improved governance are emerging, with suggestions including a managed access framework for biological AI tools that aligns user access with the risk level of the model. The RAND Center on AI, Security and Technology has outlined necessary steps to enhance biosecurity, such as improved DNA synthesis screening and better evaluations of AI models before their release. Some AI companies are beginning to implement their own safety measures, including Anthropic, which activated its highest safety tier upon releasing its advanced model in mid-2025. OpenAI also updated its Preparedness Framework to revise risk thresholds for biological applications.
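A managed-access framework of the kind proposed can be thought of as a policy table that maps a model's assessed risk tier to the minimum user verification required to use it. The tiers, verification levels, and thresholds below are invented for illustration; they do not reproduce RAND's or any company's actual scheme.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    LOW = 1        # general-purpose models with no meaningful uplift
    MODERATE = 2   # models that materially assist wet-lab work
    HIGH = 3       # models that could aid pathogen enhancement

class Verification(IntEnum):
    NONE = 0           # open access
    IDENTITY = 1       # verified individual identity
    INSTITUTION = 2    # vetted institutional affiliation and use case

# Policy table: minimum verification required per model risk tier.
ACCESS_POLICY = {
    RiskTier.LOW: Verification.NONE,
    RiskTier.MODERATE: Verification.IDENTITY,
    RiskTier.HIGH: Verification.INSTITUTION,
}

def may_access(model_tier: RiskTier, user_level: Verification) -> bool:
    """Grant access only if the user's verification level meets the
    minimum required for the model's risk tier."""
    return user_level >= ACCESS_POLICY[model_tier]
```

The design choice worth noting is that access scales with the model, not just the user: the same researcher might freely query a low-risk model while needing institutional vetting for a high-risk one.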

As AI continues to reshape biological research, questions about its use outside controlled environments remain unanswered. Striking a balance between ensuring safety and fostering innovation is critical; overregulation could stifle talent and investment, while underregulation may expose society to significant risks. Ultimately, how effectively policymakers respond to these challenges will influence the future landscape of AI in biology and its implications for global security.

Staff
Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.