Me, a year ago:

Of course universities are going to outsource commentary on essays to AI — just as students will outsource the writing of essays to AI. And maybe that’s a good thing! Let the AI do the bullshit work and we students and teachers can get about the business of learning. It’ll be like that moment in The Wrong Trousers when Wallace ties Gromit’s leash to the Technotrousers, to automate Gromit’s daily walk. Gromit merely removes his collar and leash, attaches them to a toy dog on a wheeled cart, and plays in the playground while the Technotrousers march about. 

And lo, this from Cameron Blevins (via Jason Heppler): 

There is no question that a Custom GPT can “automate the boring” when it comes to grading. It takes me about 15-20 minutes to grade one student essay (leaving comments in the margins, assigning rubric scores, and writing a two-paragraph summary of my feedback). Using a Custom GPT could cut this down to 2-3 minutes per essay (stripping out identifying information, double-checking its output, etc.). With 20 students in a class, that would save me something like 5-6 hours of tedious work. Multiply this across several assignments per semester, and it quickly adds up.

In an ideal world, this kind of tool would free up teachers to spend their time on more meaningful pedagogical work. But we don’t live in an ideal world. Instead, I worry that widespread adoption would only accelerate the devaluing of academic labor. Administrators could easily use it as justification to hire fewer instructors while loading up existing ones with more classes, larger sections, and fewer teaching assistants. 

Alas, I must agree. “Now that we’ve automated grading, we can hire fewer instructors and give them more students!” But then (thinks the same administrator) “Why not train bots on all those lectures posted on YouTube, create professorial avatars — maybe allow students to customize their virtual professors to make them the preferred gender and the desired degree of hotness — and dismiss the instructors also? That’ll free up money to hire more administrators.”  

That will surely be the deanly response. But there’s another way to think of all this, one I suggested in my post of last year. Think about the salespeople who use chatbots to write letters to prospective clients, or prepare reports for their bosses. People instinctively turn to the chatbots when they see a way to escape bullshit jobs, or the bullshitty elements of jobs that have some more human aspects as well. For most students, writing papers is a bullshit job; for most professors, grading papers is a bullshit job. (Graeber, p. 10: “I define a bullshit job as one that the worker considers to be pointless, unnecessary, or pernicious — but I also suggest that the worker is correct.”)

What if we all just admitted that and deleted the bullshit? What if we used the advent of chatbots as an opportunity to rethink the purposes of higher education and the means by which we might pursue those purposes? 

But I suspect that what universities will do instead is keep the bullshit and get rid of the humans.