A new whitepaper from Frontiers indicates that artificial intelligence (AI) has become increasingly integrated into the peer review process, with 53% of reviewers reporting the use of AI tools. The study, titled “Unlocking AI’s untapped potential: responsible innovation in research and publishing,” draws on insights from 1,645 active researchers worldwide and highlights a critical juncture for research publishing.
The findings reveal a global community that is eager to embrace AI confidently and responsibly. While many reviewers currently use AI primarily for drafting reports or summarizing findings, the report underscores significant untapped potential for AI to enhance rigor, reproducibility, and methodological depth in scientific research.
The survey, conducted between May and June 2025, represents the first large-scale examination of AI adoption, trust, training, and governance within authoring, reviewing, and editorial workflows. According to Kamila Markram, Chief Executive Officer and Co-founder of Frontiers, “AI is transforming how science is written and reviewed, opening new possibilities for quality, collaboration, and global participation.” She emphasizes that the whitepaper serves as a call to action for the entire research ecosystem to harness this potential, advocating for aligned policies and responsible governance that could reinforce scientific integrity and accelerate discovery.
Despite the promise of AI in peer review, there are notable limitations. Most reviewers currently use AI for superficial tasks, like drafting and improving clarity in reports, with only about 19% employing it to assess methodology, statistical validity, or experimental design—areas traditionally seen as the intellectual core of peer review.
The research reflects a strong enthusiasm for more effective AI utilization, particularly among early-career researchers, with 87% reporting usage. In rapidly growing research regions such as China and Africa, usage rates stand at 77% and 66%, respectively. Elena Vicario, Director of Research Integrity at Frontiers, noted that while AI is enhancing efficiency and clarity in peer reviews, its most significant contributions lie ahead, contingent upon proper governance, transparency, and training.
The study also reveals a “trust paradox.” While a majority of scientists believe AI can enhance manuscript quality, 57% express discomfort with the idea of a reviewer using AI to write peer review reports on their manuscripts. This concern drops to 42% when AI is framed merely as a tool to augment an existing report.
Furthermore, a significant 72% of respondents feel they could accurately identify an AI-generated peer review report on a manuscript they authored, though research suggests this confidence may be unwarranted. Junior researchers tend to hold a more favorable view of AI’s impact on peer review compared to their senior colleagues, with 48% of junior researchers anticipating a positive impact versus 34% of more senior researchers.
In the foreword of the paper, Markram points out that AI’s current applications in peer review focus largely on surface-level tasks—polishing language and handling administrative duties—rather than the deeper analytical and methodological work that could substantially enhance scientific rigor and reproducibility.
The report advocates for coordinated action across the research ecosystem, urging publishers to incorporate transparency, disclosure, and human oversight into editorial workflows. It encourages universities and research institutions to integrate AI literacy into formal training programs, while calling on funders and policymakers to harmonize international standards.
Frontiers posits that clearly defined boundaries, human accountability, and well-governed, secure tools will be more effective than blanket prohibitions in safeguarding research integrity. The company cautions that unregulated, opaque, or undisclosed AI usage poses a greater risk to the quality of peer review, a reality that is already unfolding across the research landscape.
The ongoing transformation within peer review is poised to reshape the evaluation of scientific research, paper by paper and reviewer by reviewer. The long-term impact on scientific integrity and public trust will hinge on the global research community’s ability to govern AI with the same rigor it demands of evidence itself.




















































