The Unstoppable Rise of GenAI in Higher Education: A Call for Critical Engagement and Inclusion

On his final album, Leonard Cohen’s gravelly voice croaks one last warning: “As for the fall, it began long ago. Can’t stop the rain. Can’t stop the snow.” These poignant words resonate not only with Cohen’s metaphysical musings but also reflect the relentless advance of Generative AI (GenAI) in higher education. As educators grapple with this paradigm shift, we’re faced with a fundamental question: how do we adapt to a technological landscape that seems unstoppable?

The Rising Tide of GenAI Usage

A recent Savanta survey revealed a staggering increase in GenAI adoption among undergraduates, with 88% reporting its use for assessments in 2025, up from 53% the previous year. Furthermore, a study from the University of Reading found that 94% of AI-generated coursework went undetected, often earning grades half a classification higher than the average.

These statistics fuel anxieties surrounding cheating, skill degradation, and concerns over the quality of student work. Scholars have voiced fears that tools like ChatGPT merely “produce bullshit,” a term popularized by philosopher Harry Frankfurt, arguing that these chatbots are indifferent to truth. Here lies the crux of a misunderstanding about both the philosophy at play and the capabilities of GenAI.

Understanding Bullshit in the Age of AI

Frankfurt’s concern was moral: he emphasized that indifference to truth corrodes the health of societal institutions. Applying his term to AI, however, implies that chatbots are inherently deceitful. While GenAI lacks the human capacity for understanding, it does not operate with ulterior motives that undermine truth. A chatbot, like a compass, may not grasp truth, yet it can still point us in the right direction; depending on the quality of its training data, its outputs can track truth indirectly.

More importantly, categorizing AI outputs as “bullshit” is not just misleading; it’s detrimental to educational discourse. Such language feeds into a narrative that seeks to prohibit AI use or force it underground rather than engage with it critically.

The Educational Backlash

In response to these fears, two major camps have emerged among educators: the prohibitionists and the diversionists. Prohibitionists advocate for banning AI in assessments, relying on detection tools and strict measures to catch offenders. However, this approach is fundamentally flawed, as tech companies continue to embed GenAI into everyday tools and students develop cunning techniques to camouflage their AI-assisted work.

On the other hand, diversionists propose assessments that make AI use practically impossible, often advocating a return to supervised exams. Yet, given the rapid evolution of GenAI, this approach seems antiquated, risking a regression in pedagogy. Both stances underline a critical issue: they are reactive rather than proactive, operating out of fear rather than understanding.

A Path Forward: Critical Inclusionism

A more constructive stance is one of critical inclusionism. This approach integrates GenAI into teaching, focusing on developing students’ critical thinking skills to navigate its complexities. Rather than shunning AI, educators can help students harness it for productive, educational purposes.

By acknowledging the epistemic risks of bias and factual inaccuracies, we can train students to interrogate AI outputs. GenAI can serve as a valuable resource for personalized study plans, tailored reading lists, and even simulate real-world scenarios in a low-stakes environment.

Unpacking Student Dependency on AI

It’s crucial to understand why students often prefer using AI to create their own work. High-stakes, one-shot assessments associated with standardized testing lead to an impersonal learning experience, where students feel more like numbers than individuals. This disconnect, intensified by administrative bureaucracy, fosters an environment ripe for unhealthy reliance on AI.

James Warren aptly highlights this issue: “For a generation, we have been training our undergraduates to be nothing more than AI bots themselves.” The focus must shift towards cultivating intrinsic motivation through active, collaborative learning experiences.

Bridging the Gap with Language and Concepts

To foster a constructive dialogue around GenAI, we need to develop memorable, impactful concepts. Instead of blanket condemnations, we can introduce terms like “botsplaining” to label confident but baseless explanations, or “botlicking” to call out uncritical acceptance of AI output. Such terms promote critical engagement without instigating moral panic.

Conclusion: Acknowledging the Fall

Indiscriminate pejorative language and a moralistic stance hinder productive discussions about the underpinnings of students’ dependence on AI. Understanding that “the fall began long ago” prompts us to look critically at our educational environments. It’s time to acknowledge the roots of these changes and pivot towards inclusive, proactive strategies in the face of an unstoppable tide.

Andrew J. Routledge
Lecturer in Political Theory, University of Liverpool


As we forge ahead, let us embrace the changes that GenAI brings, shaping its integration within our educational frameworks rather than ignoring its existence or attempting to suppress it. The dialogue must continue, focusing on the potential of AI while being vigilant about its pitfalls.
