AI Integration in Peer Review Surges to 53% Amid Calls for Responsible Governance

AI tools now assist 53% of peer reviewers, highlighting opportunities for enhanced rigor and transparency in research publishing, according to Frontiers’ new whitepaper.

A new whitepaper from Frontiers indicates that artificial intelligence (AI) has become increasingly integrated into the peer review process, with 53% of reviewers reporting the use of AI tools. The study, titled “Unlocking AI’s untapped potential: responsible innovation in research and publishing,” highlights a critical juncture for research publishing as it captures insights from 1,645 active researchers worldwide.

The findings reveal a global community eager to embrace AI confidently and responsibly. While many reviewers currently use AI primarily to draft reports or summarize findings, the report underscores significant untapped potential for AI to strengthen rigor and reproducibility and to support deeper methodological scrutiny in scientific research.

The survey, conducted between May and June 2025, represents the first large-scale examination of AI adoption, trust, training, and governance within authoring, reviewing, and editorial workflows. According to Kamila Markram, Chief Executive Officer and Co-founder of Frontiers, “AI is transforming how science is written and reviewed, opening new possibilities for quality, collaboration, and global participation.” She emphasizes that the whitepaper serves as a call to action for the entire research ecosystem to harness this potential, advocating for aligned policies and responsible governance that could reinforce scientific integrity and accelerate discovery.

Despite the promise of AI in peer review, there are notable limitations. Most reviewers currently use AI for surface-level tasks, such as drafting reports and improving their clarity, with only about 19% employing it to assess methodology, statistical validity, or experimental design—areas traditionally seen as the intellectual core of peer review.

The research reflects a strong enthusiasm for more effective AI utilization, particularly among early-career researchers, with 87% reporting usage. In rapidly growing research regions such as China and Africa, usage rates stand at 77% and 66%, respectively. Elena Vicario, Director of Research Integrity at Frontiers, noted that while AI is enhancing efficiency and clarity in peer reviews, its most significant contributions lie ahead, contingent upon proper governance, transparency, and training.

The study also reveals a “trust paradox.” While a majority of scientists believe AI can enhance manuscript quality, 57% express discomfort with the idea of a reviewer using AI to write peer review reports on their manuscripts. This concern diminishes to 42% when AI is perceived merely as a tool to augment existing reports.

Furthermore, a significant 72% of respondents feel they could accurately identify an AI-generated peer review report on a manuscript they authored, though research suggests this confidence may be unwarranted. Junior researchers tend to hold a more favorable view of AI’s impact on peer review compared to their senior colleagues, with 48% of junior researchers anticipating a positive impact versus 34% of more senior researchers.

In the foreword of the paper, Markram points out that AI’s current applications in peer review focus largely on surface-level tasks—polishing language and handling administrative duties—rather than the deeper analytical and methodological work that could substantially enhance scientific rigor and reproducibility.

The report advocates for coordinated action across the research ecosystem, urging publishers to incorporate transparency, disclosure, and human oversight into editorial workflows. It encourages universities and research institutions to integrate AI literacy into formal training programs, while calling on funders and policymakers to harmonize international standards.

Frontiers posits that clearly defined boundaries, human accountability, and well-governed, secure tools will be more effective than blanket prohibitions in safeguarding research integrity. The company cautions that unregulated, opaque, or undisclosed AI usage poses a greater risk to the quality of peer review, a reality that is already unfolding across the research landscape.

The ongoing transformation within peer review is poised to reshape the evaluation of scientific research, paper by paper and reviewer by reviewer. The long-term impact on scientific integrity and public trust will hinge on the global research community’s ability to govern AI with the same rigor it demands of evidence itself.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.