Researchers are increasingly leveraging generative AI models, such as ChatGPT, to assist in various stages of the academic publishing process, including manuscript preparation. While most journals prohibit AI from being listed as authors, many allow certain applications of AI in writing and editing, provided that these uses are disclosed. A new study led by Yi Bu, an information scientist at Peking University, and co-authored by Yongyuan He, a computer scientist at the same institution, investigates the impact of these policies on AI utilization in academic publishing.
Published in the Proceedings of the National Academy of Sciences, their study examined AI usage in published articles across journals with and without established AI policies. The researchers found that a significant majority of articles that exhibited evidence of AI involvement did not disclose this information, irrespective of the journal’s stance on AI use. “The fact that AI usage has surged increasingly regardless of policy tells us that researchers are voting with their keyboards,” Bu stated. “They find value in these tools for overcoming language barriers, and current policies from journals are not a strong enough guide.”
To assess AI usage, Bu and He utilized a large language model to categorize over 5,000 journals sourced from the Journal Citation Reports, published by Clarivate Analytics. They found that most journals permitted AI for writing and editing support, with more than 60 percent allowing its use for language and grammar assistance. Other applications, such as “reference and citation support,” “content creation and generation,” and “translation services,” were mentioned less frequently in journal guidelines.
The researchers analyzed over one million full-text papers to estimate the likelihood of AI-generated text, discovering a comparable proportion of papers with suspected AI content across journals, regardless of their policies. Notably, the incidence of AI content has risen over the past three years. Papers authored by researchers from non-English-speaking countries exhibited higher levels of probable AI content than those from English-speaking countries, suggesting a role for AI in reducing translation barriers. However, Bu and He acknowledged that their model faced limitations in differentiating between AI usage for language polishing and complete text generation.
Although most journals allowed specific AI applications, more than 3,500 required authors to disclose their AI usage. The study revealed that the majority of articles with suspected AI involvement did not disclose this information, regardless of the journal’s policy. Alex Glynn, a data scientist at the University of Louisville specializing in research literacy, noted the complexity of analyzing AI use because “there isn’t an objectively correct way to perform these analyses.” He acknowledged the researchers’ multipronged approach to verify their findings but raised concerns regarding the categorization of AI applications due to potential overlaps.
While the disclosure rate in publications remains low, it has risen from approximately 0.1 percent in 2023 to 0.43 percent in early 2025. Bu and He interpreted this increase as a positive sign that policies may be fostering greater transparency around AI use. However, they also noted a lingering reluctance among researchers to disclose AI involvement, driven by fears about how such disclosure might affect the perception of their work. “Publishers should shift to promoting responsible integration with better infrastructure to detect and validate ethical AI use,” Bu urged, emphasizing the need for enhanced education and training for researchers.
Glynn commended the study for providing a quantifiable estimate of undisclosed AI usage, suggesting that the actual figures might be even higher than reported. However, he took issue with the authors’ framing of researchers’ reluctance to disclose AI involvement, comparing nondisclosure to failing to declare a potential conflict of interest. “I wouldn’t describe failure to disclose as a ‘cautious approach.’ I would describe it as a lie,” he stated. Glynn also pushed back on the notion that AI use in research is inevitable, advocating instead for clearer guidelines from publishers to eliminate ambiguity, particularly around the disclosure of “light editing,” a term that lacks a precise definition.
The challenge of addressing undisclosed AI use persists, but Glynn emphasized that publishers could take proactive measures to retract papers demonstrating improper AI utilization in a timely manner. As the landscape of academic publishing continues to evolve, the integration of AI technologies remains a critical area for ongoing research and policy development.