Across university campuses, professors are wrestling with a new kind of plagiarism panic: the fear that students are letting ChatGPT and other generative AI tools do the thinking for them.
But one education researcher said the real crisis isn't cheating: higher education keeps testing the very skills AI performs best while neglecting the ones it can't.
In an essay for The Conversation published on Sunday, Anitia Lubbe, an associate professor at North-West University in South Africa, said universities are “focusing only on policing” AI use instead of asking a more fundamental question: whether students are really learning.
Most assessments, she wrote, still reward memorization and rote learning — “exactly the tasks that AI performs best.”
Lubbe warned that unless universities rethink how they teach and assess students, they risk producing graduates who can use AI but not critique its output.
“This should include the ability to evaluate and analyse AI-created text,” she wrote. “That’s a skill which is essential for critical thinking.”
Instead of banning AI, Lubbe said, universities should use it to teach what machines can’t do — reflection, judgment, and ethical reasoning.
She proposed five ways educators can respond:
1. Teach students to evaluate AI output as a skill
She said professors should make students interrogate generative AI tools' output, asking them to identify where an AI-generated answer is inaccurate, biased, or shallow before they can use it in their own work.
That, she said, is how students learn to think critically about information rather than just consume it.
2. Scaffold assignments across multiple levels of thinking
Rather than letting AI handle every stage of a project, she urged teachers to design tasks that guide students through progressively deeper levels of thinking — moving from basic comprehension to analysis and ultimately to original creation — so they can’t simply delegate the entire process to a machine.
3. Promote ethical and transparent use of AI
Students, she said, must understand that responsible use begins with disclosure — explaining when, how, and why they’ve used tools like ChatGPT.
She said that openness not only builds integrity but also helps recast AI as a learning partner rather than a secret weapon.
4. Encourage peer review of AI-assisted work
When students critique each other’s AI-generated drafts, she said, they learn to evaluate both the technology and the human thinking behind it.
That process, in her view, restores a sense of dialogue and collaboration that pure automation erases.
5. Reward reflection, not just results
She said grades should factor in how students used AI — whether they documented their process, justified their choices, or demonstrated learning through comparison with the machine’s reasoning.
“But focusing only on policing misses a bigger issue: whether students are really learning,” Lubbe wrote.
A wider academic alarm
Lubbe’s warning echoes a broader unease among educators that students are quietly outsourcing thinking to AI.
Last week, Kimberley Hardcastle, a business professor at Northumbria University, wrote that AI allows students to “produce sophisticated outputs without the cognitive journey traditionally required to create them,” calling it an “intellectual revolution” that risks handing control of knowledge to Big Tech.
While Hardcastle fears AI is hollowing out critical thought, former venture capitalist turned educator Ted Dintersmith warned that schools are already training students to think like machines — a mistake he says will leave them unprepared for a job market where “two or three people who are good at AI will replace 20 or 30 who aren’t.”
Last week, he told BI that schools are already “training kids to follow distantly in the footsteps of AI,” churning out “flawed, expensive versions of ChatGPT” instead of teaching creativity, curiosity, and collaboration — the very skills machines can’t replicate.