UK Universities Dismantling Humanities Programs Threaten Critical AI Research Engagement

UK universities are closing humanities programs, endangering vital AI user trust research as PhD students like Chris Tessone face an uncertain future.

Chris Tessone, a PhD student at the University of Staffordshire, researches users' trust in AI, particularly in large language models such as ChatGPT and Claude. He works at a difficult moment: the philosophy department in which he enrolled is set to close, although the university has committed to supporting him through the completion of his doctorate. His situation reflects a broader trend in UK higher education, where deep cuts to the humanities are creating "cold spots", regions in which the tools of critical thinking become increasingly the preserve of an elite.

The ongoing dismantling of humanities departments occurs as society grapples with the complexities introduced by artificial intelligence. As generative AI models proliferate, largely without regulation, there is an urgent need for systematic research that the humanities are uniquely poised to provide. These disciplines can help unpack why users might confide in a chatbot or how the persuasive fluency of AI can blur the lines between human and machine interaction.

Agnieszka Piotrowska, an academic and filmmaker, is also exploring these issues in her forthcoming book on AI-human relationships. Through a blend of autoethnography and Lacanian theory, she introduces the concept of “techno-transference,” explaining how users transfer relational expectations onto generative systems. Without the insights of the humanities, she warns, society risks navigating an unregulated AI landscape devoid of critical interpretation.

Piotrowska points out that the industry's metrics of success often diverge from user experience. OpenAI recently released a new model that processed one trillion tokens within 24 hours of launch. That figure measures the volume of text handled; it says nothing about whether users were better served. Feedback on platforms like Reddit reflected a sense of loss rather than advancement, with many users reporting disrupted interactions and a feeling that continuity had been undermined.

This discrepancy between quantitative success and qualitative user experience raises critical questions that warrant rigorous academic inquiry. While discussions around AI often focus on issues like plagiarism and bias, there are deeper questions about how these systems evolve over time and what their behaviors reveal about human responses. Engineers alone cannot tackle these complexities.

Qualitative research methods, which the humanities excel in, face increasing skepticism from funders and university administrators. Many humanities departments are being dismantled or forced to narrow their focus, sidelining vital exploratory work into human-machine interactions. Eoin Fullam, a PhD student researching mental health chatbots, acknowledged that his project would not have received funding had it been framed solely as a theoretical exploration. His critique highlighted that many chatbots lack the advanced capabilities users might expect, yet he had to present his findings as immediately useful to secure support.

This practical framing often overshadows the more profound philosophical implications of AI interactions. The prevailing narrative suggests that large language models are merely statistical tools, dismissing the need for deeper academic engagement. Those who do address the experiential effects of AI face skepticism, often branded as naive or unstable.

Murray Shanahan, an emeritus professor of artificial intelligence at Imperial College London, emphasizes that the most thought-provoking capabilities of AI often surface during extended user interactions. He argues that a sustained dialogue with chatbots can reveal valuable insights, regardless of whether these systems possess consciousness. Such interactions should be recognized as legitimate methods of inquiry, but institutional pressures increasingly discourage this type of research.

This issue is not confined to the UK. Luciano Floridi, a philosopher now at Yale University, has noted a shift in AI ethics toward design-oriented solutions, potentially sidelining the exploration of high-level ethical principles. Although this approach has led to advancements in bias mitigation and transparency, it remains incomplete, as deeper structural and systemic issues are often overlooked.

Researchers Hong Wang and Vincent Blok have argued that while observable biases in AI arise from technical conditions, public discourse increasingly fixates on surface effects and neglects root causes. The polarized nature of AI debate has created a blind spot in areas that urgently require scrutiny. If observable phenomena cannot be discussed because they sit outside official narratives, the foundation of empirical inquiry is undermined.

The integrity of AI research hinges on studying what these systems truly do rather than relying solely on proclaimed capabilities. For academia to influence the future trajectory of AI, it must reclaim the right to engage deeply with uncomfortable questions and the realities of these technologies. As AI continues to evolve, the need for critical examination and dialogue becomes ever more pressing.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.