A study from Tsinghua University has revealed a paradox in AI-assisted scientific research: while artificial intelligence significantly boosts the productivity of individual scientists, it simultaneously narrows research focus across the scientific community. The research, led by Professor Li Yong of the university’s Department of Electronic Engineering, was published in Nature and covered by Science.
The study aimed to investigate the puzzling contrast between the rapid emergence of high-profile AI-driven breakthroughs and a documented decline in disruptive scientific discoveries across various disciplines. “We observed an intuitive contradiction between micro-level efficiency gains and macro-level feelings of convergence,” Li stated in an interview with China Daily.
To substantiate their observations, the research team created a large-scale knowledge map, analyzing 41.3 million academic papers published over nearly 50 years. They employed a novel methodology combining expert annotation with large language model reasoning, achieving an identification accuracy of 0.875, with 1 representing perfect accuracy.
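The accuracy figure is a plain agreement rate between the model’s labels and the experts’ labels. As a minimal illustrative sketch (the study’s actual data and annotation pipeline are not described here, so the labels below are hypothetical and chosen so the toy result matches 0.875):

```python
# Hypothetical validation of LLM labels against expert annotation.
# 1 = paper identified as AI-assisted, 0 = not AI-assisted.
expert_labels = [1, 0, 1, 1, 0, 1, 0, 0]  # gold labels from human experts
llm_labels    = [1, 0, 1, 0, 0, 1, 0, 0]  # labels from the LLM reasoner

def accuracy(gold, predicted):
    """Fraction of papers on which the model agrees with the experts."""
    assert len(gold) == len(predicted)
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)

# 7 of 8 labels agree, so this toy example yields 0.875 by construction.
print(f"identification accuracy: {accuracy(expert_labels, llm_labels):.3f}")
```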
The findings indicated that individual scientists using AI publish 3.02 times more papers, receive 4.84 times more citations, and take on project leadership roles 1.37 years earlier than peers who do not use the technology. However, this “individual acceleration” comes at a significant collective cost: research incorporating AI shows a 4.63 percent decline in knowledge breadth and a 22 percent decrease in cross-disciplinary collaboration. Citation patterns in AI-driven studies exhibit a “star-shaped structure,” heavily centered on a small number of foundational AI papers, suggesting a trend toward homogenization.
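A “star-shaped” citation network is one in which most references converge on a few hub papers. One simple way to quantify that concentration (an assumed metric chosen for illustration, not the study’s stated method) is the share of all citations captured by the top-cited papers:

```python
from collections import Counter

# Hypothetical citation edges: (citing_paper, cited_paper).
# In a star-shaped pattern, most edges point at a few foundational hubs.
citations = [
    ("p1", "transformer"), ("p2", "transformer"), ("p3", "transformer"),
    ("p4", "transformer"), ("p5", "transformer"), ("p6", "transformer"),
    ("p7", "alphafold"), ("p8", "alphafold"),
    ("p9", "gnn_survey"), ("p10", "geo_model"),
]

def top_k_citation_share(edges, k=2):
    """Fraction of all citations that go to the k most-cited papers."""
    counts = Counter(cited for _, cited in edges)
    return sum(n for _, n in counts.most_common(k)) / len(edges)

# A value near 1.0 indicates a hub-dominated, homogenized citation pattern.
print(f"top-2 citation share: {top_k_citation_share(citations):.2f}")  # 0.80
```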
Li likened the phenomenon to “collective mountain-climbing”: influenced by current tools and prevailing trends, most researchers gravitate toward a few popular, data-rich “known peaks” while largely overlooking the “unknown peaks.” Because current AI models depend heavily on extensive datasets, they function as powerful “climbing accelerators” on established research paths. This creates a form of “scientific gravity” that pulls the research community toward areas where AI excels, systematically sidelining data-scarce but potentially transformative fields.
The core issue identified by the study is a fundamental “lack of generality” in existing AI-for-science models. This systemic challenge encompasses data availability, algorithm design, and entrenched research incentives. Li pointed out that AI’s strengths in learning and prediction are most evident in data-rich fields, while its effectiveness declines sharply in frontier areas characterized by limited or absent data.
Personal incentives to use AI for rapid publication further exacerbate this trend, prompting researchers to prioritize problems that AI can readily address over those that are more original or scientifically vital.

In response, the research team has developed OmniScientist, a system envisioned as a collaborative “AI scientist.” Its core design principle is to evolve AI from a mere efficiency tool into a full participant in the human scientific ecosystem.
This system can autonomously navigate knowledge networks, propose innovative hypotheses, and design experiments, especially in cross-disciplinary and data-sparse domains. Li emphasized the system’s potential to broaden scientific exploration rather than merely accelerate existing research trajectories.
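The article does not describe OmniScientist’s internals, but the kind of gap-finding it gestures at can be illustrated with a toy knowledge map: tag each paper with the fields it spans, then look for field pairs that almost no papers bridge. Everything below (field names, paper tags) is hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical knowledge map: each paper is tagged with the fields it spans.
papers = [
    {"ml", "biology"}, {"ml", "biology"}, {"ml", "biology"},
    {"ml", "chemistry"}, {"ml"}, {"biology"}, {"chemistry", "geology"},
]

# Count how many papers bridge each pair of fields.
pair_counts = Counter()
for fields in papers:
    pair_counts.update(combinations(sorted(fields), 2))

# Rank all field pairs from least- to most-bridged; sparse pairs are
# candidate "unknown peaks" for cross-disciplinary exploration.
all_fields = sorted(set().union(*papers))
for pair in sorted(combinations(all_fields, 2),
                   key=lambda p: pair_counts.get(p, 0)):
    print(pair, pair_counts.get(pair, 0))
```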
For practicing scientists, Li advocates a mindset of “conscious, active steering of AI.” Researchers should let fundamental scientific questions, rather than AI’s existing capabilities, guide their work. He encourages deliberately allocating resources to areas where AI struggles and urges that AI be used to bolster, not undermine, interdisciplinary collaboration.
Li also highlighted the role of educational institutions in fostering critical thinking about AI’s limitations alongside technical training, and called for reforms in journal and academic evaluation systems to better reward research diversity, originality, and long-term exploratory work. For frontier research in particular, longer evaluation periods and a greater tolerance for failure are critical to giving institutional support to scientists willing to venture into the “unknown mountains.”