Since the launch of ChatGPT in late 2022, millions have flocked to large language models (LLMs) for information. The convenience of asking a question and receiving a polished response is undeniably appealing. However, a recent study co-authored by Jin Ho Yun and a fellow marketing professor provides compelling evidence that this simplicity may come at a cost: people who rely on LLMs develop a shallower understanding of topics than those who use traditional web searches.
The findings, detailed in a paper that analyzed data from seven studies involving over 10,000 participants, indicate a consistent trend. Participants were tasked with learning about various subjects—such as how to cultivate a vegetable garden—and were randomly assigned to conduct their research using either an LLM such as ChatGPT or a conventional search engine such as Google. Notably, participants faced no restrictions; they could interact freely with the tools, continuing to ask questions or browse links as they wished.
Upon completing their research, participants were asked to draft advice based on their newly acquired knowledge. The results were revealing: individuals who had used an LLM felt they had learned less, put less effort into crafting their advice, and ultimately produced responses that were shorter, less factual, and more generic. When this advice was evaluated by an independent cohort, it was deemed less informative and less likely to be adopted, regardless of which tool had been used to gather the information.
Understanding the Learning Gap
One of the pivotal reasons for the observed decline in knowledge retention seems to be the nature of engagement with the material. Through traditional Google searches, users encounter a greater level of “friction”—they sift through diverse web links, read from multiple sources, and engage in the active process of interpreting and synthesizing information. This active engagement fosters a deeper, more nuanced understanding of the subject matter.
Conversely, LLMs streamline the process by providing synthesized answers, effectively making learning a passive endeavor. As researchers further tested this theory, they conducted experiments in which participants were presented with identical sets of facts, either through Google search results or via an LLM’s response. The results remained consistent: participants who received synthesized LLM responses retained shallower knowledge than those who navigated traditional search engines.
Strategic Use of LLMs
Yun and his co-author do not advocate for the complete avoidance of LLMs, acknowledging the significant benefits they bring in specific contexts. Instead, they suggest that users should adopt a more strategic approach when engaging with these tools. For quick, factual inquiries, LLMs can serve as an effective solution. However, for those aiming to achieve a deeper, more comprehensive understanding of a subject, relying solely on LLM-generated content is less beneficial.
As part of his ongoing research into the psychology of technology, Yun is exploring ways to make the learning process with LLMs more interactive. One approach involved using a specialized GPT model that provided real-time web links alongside its synthesized responses. Yet even in this scenario, participants tended to lean on the summary provided by the LLM rather than delve into the original sources, resulting in shallow learning outcomes.
Moving forward, Yun’s research aims to identify generative AI tools that can introduce constructive challenges or “frictions” into the learning process. Such features could be especially crucial in secondary education, where the challenge lies in equipping students with fundamental skills in reading, writing, and mathematics while preparing them for a world increasingly integrated with LLMs.
In summary, while large language models offer remarkable conveniences, their usage requires careful consideration to avoid compromising knowledge depth. Understanding when to utilize these tools effectively can empower users to enhance their learning experiences without sacrificing the richness of understanding.