‘Tell me what to do’: University staff grapple with AI policy

Charles Miller
4 min read · Jun 9, 2023

If there’s one thing academics don’t like, it’s looking out of touch. When it comes to AI, there is an urgent need to understand what it can do, whether students are using it to cheat, and whether “cheating” in an AI world still means what it once did.

At a Roehampton University seminar for staff, almost all hands went up when the group was asked whether they thought they had been presented with AI-generated work in recent student assessments.

The speaker wasn’t surprised: “it’s already out there and they probably think we don’t know much about it.”

One teacher told the group what a student had confessed to him: “we know ChatGPT makes up references, but we assume staff are too busy to check”. And that’s why it took him 15 days to mark 100 papers, he explained, “because I went through the references to see if they were relevant”.

Indeed, one of the things I learnt during this enlightening day was that ChatGPT doesn’t just suggest irrelevant references but also invents scholarly works that might exist, but happen not to.¹
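For anyone facing a pile of papers like that teacher’s, part of the checking can be automated. Here is a minimal sketch, not anything presented at the seminar, that queries the public Crossref API to see whether a cited title matches any real published work; the find_reference helper and the sample query are my own illustrations.

```python
import json
import urllib.parse
import urllib.request

CROSSREF = "https://api.crossref.org/works"

def find_reference(title: str, rows: int = 3) -> list:
    """Ask Crossref for works whose bibliographic data matches `title`.

    An empty result, or matches that look nothing like the citation,
    is a hint that the reference may be invented.
    """
    query = urllib.parse.urlencode({"query.bibliographic": title, "rows": rows})
    with urllib.request.urlopen(f"{CROSSREF}?{query}", timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# Try one of the titles ChatGPT suggested (see the footnote below) and
# eyeball whether anything plausible comes back.
for match in find_reference(
    "Clockwork Worlds: Mechanized Environments in Nineteenth-Century Literature"
):
    print(match)
```

It only flags candidates for a human to look at, of course: a real but obscure work may not surface cleanly, and a fabricated one may sit next to genuine titles by the same author.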

The same teacher believed that 80 per cent of the papers he was marking had used ChatGPT. The giveaway was that the answers were too general. The university has yet to come up with guidance about the use of AI in essays. “I want to be told what to do in September,” he said, ending with a note of desperation, “I am so glad I am reaching the end of my career — I can’t handle it.”

In another session, we were presented with two texts on the same subject, one generated by AI and one written by the speaker when she had been an undergraduate. The group was invited to say which was hers. Hands went up: roughly half for each text. Before the answer was revealed, people were asked to explain their choices. Confident, detailed arguments were put forward — in favour of the wrong text.

One sign of AI output is the lack of grammatical mistakes or typos. But you can understand a teacher’s bamboozlement when asked to be especially suspicious of students who present careful, conscientious work.

A more definite giveaway is text presented on the very light grey background that ChatGPT answers appear on. A copy and paste of a ChatGPT answer can carry that background formatting along with it.
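Why does the colour survive the journey? Copying from the browser puts HTML on the clipboard as well as plain text, and pasting into a rich-text editor such as Word can preserve inline styles, including background-color. As a purely illustrative sketch (the hex values are my guesses at the shades involved, not anything ChatGPT publishes), a marker could scan a submission saved as HTML for suspicious inline backgrounds:

```python
import re
import sys

# Inline background colours that might survive a rich-text paste; these
# specific values are illustrative guesses, not an official palette.
SUSPECT_BACKGROUNDS = re.compile(
    r'background(?:-color)?\s*:\s*(#f7f7f8|#ececec|rgb\(247,\s*247,\s*248\))',
    re.IGNORECASE,
)

def flag_pasted_backgrounds(html: str) -> list:
    """Return any suspect colour values found in inline styles."""
    return SUSPECT_BACKGROUNDS.findall(html)

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        hits = flag_pasted_backgrounds(f.read())
    print(f"{len(hits)} suspicious background style(s) found")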

The standard academic plagiarism checker, Turnitin, already offers AI detection, but there is an arms race between it and software built to evade detection. Some YouTubers advise students to run their AI-generated text through a paraphrasing tool like Quillbot to mangle the pure AI prose, but even this is now apparently old hat and detectable.

At the seminar, there was discussion of being alert to students turning in work that is dramatically different from their previous offerings. But then, someone said, we’re supposed to be marking the essay on its own merits; we’re not marking the student. And indeed blind marking is intended to be fairer by ensuring teachers don’t know whose work they’re looking at.

So what should university policy be? There is, as one speaker put it, a limited number of options for responding to AI. You can fight it — by reverting to written, supervised exams. You can try to “outrun” it — by coming up with questions which AI can’t answer, such as accounts of personal experience or descriptions of events or discussions which took place in the classroom.

Or you can “embrace and adapt” — accept AI as being just the latest addition to a technological continuum that includes Google, Grammarly, automatic translators and other applications that have already made dramatic changes to study over the past decades.

That sounds sensible, if a little vague. But it doesn’t answer the practical question of how much students should be allowed to use AI or, on a more personal note for this group, what’s left for teachers to do as AI improves.

Someone confidently quoted Arthur C. Clarke: “any teacher who can be replaced by a computer deserves to be”. It didn’t sound as reassuring as it should have.

¹ After the session, I asked ChatGPT for references for my own subject (mid-Victorian history). It suggested a couple of interesting-sounding books: Clockwork Worlds: Mechanized Environments in Nineteenth-Century Literature by David Trotter (2018) and Becoming Modern: The Nineteenth Century in Literature and Culture edited by Linda M. Shires (2009). It turns out that the author names are real but the books are imagined, their titles beautifully accurate parodies of academic publishing.


Charles Miller

Writer and producer, CoinGeek. Former BBC documentary producer. PhD student in History, University of Roehampton @chblm