Cornell Study Reveals AI Boosts Paper Output by 50% but Undermines Quality

Cornell researchers found that large language models like ChatGPT boost scientific paper submissions by over 50% but raise concerns about quality and originality.

Researchers at Cornell University have found that large language models (LLMs) like ChatGPT are significantly boosting productivity in scientific research, particularly for non-native English speakers. Their study documents a trend dating to ChatGPT's release in late 2022: a marked rise in the number of manuscripts submitted to preprint platforms. That uptick in productivity, however, is raising concerns about the quality and originality of scientific work, as many journal editors report a growing share of submissions that appear well written but lack substantial scientific value.

The findings, published in a paper titled “Scientific Production in the Era of Large Language Models” in *Science* on December 18, reveal a notable shift in the ecosystem of scientific publishing. Yian Yin, assistant professor of information science at Cornell, pointed out that this trend spans multiple disciplines, including physical, computer, biological, and social sciences. “There’s a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund,” he stated.

To gauge the impact of LLMs on scientific output, Yin’s team examined over 2 million papers posted from January 2018 to June 2024 across three prominent preprint platforms: arXiv, bioRxiv, and the Social Science Research Network (SSRN). By comparing papers believed to be written by humans with those likely generated using LLM assistance, the researchers developed a model to flag AI-influenced texts. This model enabled them to estimate how many papers authors submitted before and after adopting LLMs and whether these papers were subsequently accepted by journals.

The results indicated a clear productivity boost linked to LLM usage. On arXiv, researchers identified as using LLMs submitted approximately one-third more papers than their peers who did not. On bioRxiv and SSRN, the increase was even more pronounced, exceeding 50%. Notably, the most significant gains were observed among scientists who write in English as a second language. Researchers affiliated with Asian institutions, for instance, increased their output by between 43% and 89% after adopting LLMs, depending on the platform. Yin anticipates this technological advantage may eventually alter global scientific productivity patterns, particularly benefiting regions previously hampered by language barriers.

The study also uncovered potential advantages concerning literature searches and citation practices. Bing Chat, regarded as the first widely used AI-powered search tool, outperformed traditional search engines in identifying newer and relevant papers. Traditional tools often favored older, more frequently cited sources. “People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas,” noted first author Keigo Kusumegi, a doctoral student in information science, who plans to explore the relationship between AI usage and the emergence of innovative, interdisciplinary science.

Despite the productivity benefits, the findings raise critical concerns about the peer review process. Historically, complex writing styles characterized by longer sentences and advanced vocabulary have served as rough indicators of high-quality research. The study revealed, however, that papers likely written with LLM assistance did not follow this pattern: even when these AI-assisted papers scored highly on writing complexity, they faced lower acceptance rates from journals. Yin interprets this as a sign that writing quality alone may no longer be a reliable measure of scientific merit, complicating the decisions of editors and reviewers tasked with identifying valuable contributions to the field.

This disconnect could have far-reaching implications, suggesting that institutions and funding agencies may need to reassess their reliance on raw publication counts as a gauge of scientific significance. Moving forward, the researchers aim to probe the causal effects of LLM use on scientific writing through controlled experiments. Yin is also organizing a symposium, set for March 3-5, 2026, at Cornell's Ithaca campus, focused on the implications of generative AI in research and what policymakers should consider as these tools become increasingly integrated into the scientific process.

Yin concludes that as AI tools become more ubiquitous in writing, coding, and idea generation, their influence will only grow. The pressing question for researchers and institutions is shifting from whether AI is used to how it is being utilized effectively. “Already now, the question is not, have you used AI? The question is, how exactly have you used AI and whether it’s helpful or not,” he stated.



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.