
AI Integration in Peer Review Surges to 53% Amid Calls for Responsible Governance

AI tools now assist 53% of peer reviewers, highlighting opportunities for enhanced rigor and transparency in research publishing, according to Frontiers’ new whitepaper.

A new whitepaper from Frontiers indicates that artificial intelligence (AI) has become increasingly integrated into the peer review process, with 53% of reviewers reporting the use of AI tools. The study, titled “Unlocking AI’s untapped potential: responsible innovation in research and publishing,” identifies a critical juncture for research publishing, drawing on insights from 1,645 active researchers worldwide.

The findings reveal a global community that is eager to embrace AI confidently and responsibly. While many reviewers currently utilize AI primarily for drafting reports or summarizing findings, the report underscores the significant untapped potential for AI to enhance rigor, reproducibility, and deeper methodological insights in scientific research.

The survey, conducted between May and June 2025, represents the first large-scale examination of AI adoption, trust, training, and governance within authoring, reviewing, and editorial workflows. According to Kamila Markram, Chief Executive Officer and Co-founder of Frontiers, “AI is transforming how science is written and reviewed, opening new possibilities for quality, collaboration, and global participation.” She emphasizes that the whitepaper serves as a call to action for the entire research ecosystem to harness this potential, advocating for aligned policies and responsible governance that could reinforce scientific integrity and accelerate discovery.

Despite the promise of AI in peer review, there are notable limitations. Most reviewers currently use AI for superficial tasks, like drafting and improving clarity in reports, with only about 19% employing it to assess methodology, statistical validity, or experimental design—areas traditionally seen as the intellectual core of peer review.

The research reflects strong enthusiasm for more effective AI use, particularly among early-career researchers, 87% of whom report using AI tools. In rapidly growing research regions such as China and Africa, usage rates stand at 77% and 66%, respectively. Elena Vicario, Director of Research Integrity at Frontiers, noted that while AI is enhancing efficiency and clarity in peer reviews, its most significant contributions lie ahead, contingent upon proper governance, transparency, and training.

The study also reveals a “trust paradox.” While a majority of scientists believe AI can enhance manuscript quality, 57% express discomfort with the idea of a reviewer utilizing AI to write peer review reports for their manuscripts. This concern diminishes to 42% when AI is perceived merely as a tool to augment existing reports.

Furthermore, a significant 72% of respondents feel they could accurately identify an AI-generated peer review report on a manuscript they authored, though research suggests this confidence may be unwarranted. Junior researchers tend to hold a more favorable view of AI’s impact on peer review compared to their senior colleagues, with 48% of junior researchers anticipating a positive impact versus 34% of more senior researchers.

In the foreword of the paper, Markram points out that AI’s current applications in peer review focus largely on surface-level tasks—polishing language and handling administrative duties—rather than the deeper analytical and methodological work that could substantially enhance scientific rigor and reproducibility.

The report advocates for coordinated action across the research ecosystem, urging publishers to incorporate transparency, disclosure, and human oversight into editorial workflows. It encourages universities and research institutions to integrate AI literacy into formal training programs, while calling on funders and policymakers to harmonize international standards.

Frontiers posits that clearly defined boundaries, human accountability, and well-governed, secure tools will be more effective than blanket prohibitions in safeguarding research integrity. The company cautions that unregulated, opaque, or undisclosed AI usage poses a greater risk to the quality of peer review than transparent, governed use, a reality that is already unfolding across the research landscape.

The ongoing transformation within peer review is poised to reshape the evaluation of scientific research, paper by paper and reviewer by reviewer. The long-term impact on scientific integrity and public trust will hinge on the global research community’s ability to govern AI with the same rigor it demands of evidence itself.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.