A chart is making the rounds purporting to represent looming advances in the large language model behind ChatGPT, showing where we are now (version 3) versus where we will be soon (version 4) in the automated generation of human-like communication. ChatGPT 3 alarmed and overwhelmed me enough, once I learned students could use the tool to create plausible written documents, that I asked both TMU’s AIO and Turnitin.com about ways to safeguard academic integrity. What I learned was not reassuring. This post records what I’ve discovered so far. Undoubtedly a number of you have much more to say on this subject.
My impression of Turnitin.com’s response to AI-generated writing is that they’re nowhere near integrating academic-integrity safeguards we can rely on. TMU’s Office of Academic Integrity currently (February 2023) admits that it’s not yet possible to detect AI-generated content, nor has Policy 60 been updated to address the issue.
We did have enough notice in September to add a note to our syllabi; however, without a reliable tool to combat this kind of plagiarism, we really can’t do much, save having private, probing conversations with students, which is not a pleasant option. A professor in Innis College’s Writing & Rhetoric program at U of T has told me she may start assigning in-class essays. We could always return to the hand-written in-class writing assignments of yesteryear, but major grumbling from students, not to mention the headache of marking cursive, makes that an unattractive prospect.
TMU’s Office of Academic Integrity has created a community of practice and offers regular workshops for instructors grappling with this issue. You can ask to be included in the community of practice; you’ll then get a D2L organization on your Brightspace page, where you can see what other members are thinking and find leads on ways to manage the challenges. Allyson Miller (allyson.miller@torontomu.ca) appears to be the AIO’s point person for AI. Topics for the latest workshop’s roundtable discussions reflect the most pressing questions that have emerged at other universities: “how to design assessments that either 1) thwart student use of AI, 2) leverage AI, or 3) challenge and/or critique AI.”
To be honest, I was surprised at how quickly surrender was promoted. Back in December 2022, an article in The Chronicle of Higher Education took this view. Our own AIO recommended a tip sheet for integrating ChatGPT into the classroom, despite its incoherence and its indirect invitation for us to become a commodity audience. Further afield, Manchester Met has sponsored an open slideshare inviting academics to post their ideas, and so far contributors seem more excited than fearful. A common refrain is that we should surrender to the ghost in the machine and try to learn from its bloodless shuffling of coherent-sounding sentences.
It’s understandable, and probably realistic, to contemplate how we’ll adapt, given the inevitable expansion of this tool’s capabilities. Nonetheless, perhaps because of my early training and temperament, I prefer to begin with resistance and with questions like those posed by Stanford University’s Institute for Human-Centered AI. Though as CUPE lecturers we’re mostly teaching “practical” writing skills, I’d argue we’re also instilling agency and allowing individual students to discover what they’re capable of as communicators. Do we really want to farm out the responsibility for a very human exchange to a non-human voice? I fear that in our effort to solve simple challenges we’re reaching for the stars but finding only the green light on our computer monitors, or the mirror image of a blinking cursor on a faceless stranger’s blank page.