
UK Universities Dismantling Humanities Programs Threaten Critical AI Research Engagement

UK universities are closing humanities programs, endangering vital AI user trust research as PhD students like Chris Tessone face an uncertain future.

Chris Tessone, a PhD student at the University of Staffordshire, is navigating a challenging path in his research on users’ trust in AI, particularly in large language models such as ChatGPT and Claude. His studies come at a time when the philosophy department in which he enrolled is set to close, although the university has committed to supporting him through to the completion of his doctorate. His situation reflects a broader trend in UK higher education: the humanities face deep cuts, creating “cold spots” in regions where the tools of critical thinking become increasingly the preserve of the elite.

The ongoing dismantling of humanities departments occurs as society grapples with the complexities introduced by artificial intelligence. As generative AI models proliferate, largely without regulation, there is an urgent need for systematic research that the humanities are uniquely poised to provide. These disciplines can help unpack why users might confide in a chatbot or how the persuasive fluency of AI can blur the lines between human and machine interaction.

Agnieszka Piotrowska, an academic and filmmaker, is also exploring these issues in her forthcoming book on AI-human relationships. Through a blend of autoethnography and Lacanian theory, she introduces the concept of “techno-transference,” explaining how users transfer relational expectations onto generative systems. Without the insights of the humanities, she warns, society risks navigating an unregulated AI landscape devoid of critical interpretation.

Piotrowska points out that the industry’s metrics of success often diverge from users’ experiences. OpenAI recently released a new model that processed one trillion tokens within 24 hours of launch. That figure measures the volume of text handled, but says nothing about meaningful improvements in user satisfaction. Feedback on platforms such as Reddit reflected a sense of loss rather than advancement, with many users reporting disrupted interactions and a feeling that continuity had been undermined.

This discrepancy between quantitative success and qualitative user experience raises critical questions that warrant rigorous academic inquiry. While discussions around AI often focus on issues like plagiarism and bias, there are deeper questions about how these systems evolve over time and what their behaviors reveal about human responses. Engineers alone cannot tackle these complexities.

Qualitative research methods, which the humanities excel in, face increasing skepticism from funders and university administrators. Many humanities departments are being dismantled or forced to narrow their focus, sidelining vital exploratory work into human-machine interactions. Eoin Fullam, a PhD student researching mental health chatbots, acknowledged that his project would not have received funding had it been framed solely as a theoretical exploration. His critique highlighted that many chatbots lack the advanced capabilities users might expect, yet he had to present his findings as immediately useful to secure support.

This practical framing often overshadows the more profound philosophical implications of AI interactions. The prevailing narrative suggests that large language models are merely statistical tools, dismissing the need for deeper academic engagement. Those who do address the experiential effects of AI face skepticism, often branded as naive or unstable.

Murray Shanahan, an emeritus professor of artificial intelligence at Imperial College London, emphasizes that the most thought-provoking capabilities of AI often surface during extended user interactions. He argues that a sustained dialogue with chatbots can reveal valuable insights, regardless of whether these systems possess consciousness. Such interactions should be recognized as legitimate methods of inquiry, but institutional pressures increasingly discourage this type of research.

This issue is not confined to the UK. Luciano Floridi, a philosopher now at Yale University, has noted a shift in AI ethics toward design-oriented solutions, potentially sidelining the exploration of high-level ethical principles. Although this approach has led to advancements in bias mitigation and transparency, it remains incomplete, as deeper structural and systemic issues are often overlooked.

Researchers Hong Wang and Vincent Blok have highlighted that while observable biases in AI arise from technical conditions, public discourse increasingly fixates on surface effects while neglecting their root causes. The polarized nature of AI debate has created blind spots in areas that urgently require scrutiny, and if observable phenomena can be discussed only insofar as they fit official narratives, the foundation of empirical inquiry is undermined.

The integrity of AI research hinges on studying what these systems truly do rather than relying solely on proclaimed capabilities. For academia to influence the future trajectory of AI, it must reclaim the right to engage deeply with uncomfortable questions and the realities of these technologies. As AI continues to evolve, the need for critical examination and dialogue becomes ever more pressing.

Written by AiPressa Staff
The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.