A university professor friend of mine recently confessed that everyone in their department now outsources their letter-of-reference writing to ChatGPT. They feed the chatbot a few bullet points and it spits out a letter, which they lightly edit and sign.
Naturally enough, this is slowly but surely leading to a rise in the number of reference letters they’re asked to write. When a student knows that writing the letter is the work of a few seconds, they’re more willing to ask, and a prof is more willing to say yes (or rather, they feel more awkward about saying no).
The next step is obvious: as letters of reference proliferate, people who receive these letters will ask a chatbot to summarize them in a few bullet points, creating a lossy process where a few bullet points are inflated into pages of puffery by a bot, then deflated back to bullet points by another one.
But whatever signal survives that process, it will lack one element: the signal that the letter was costly to produce, and therefore worth taking into consideration on that basis alone.
See this post by me.
I must admit that I hadn’t thought of this particular use of AI, but it raises an interesting question: when do we turn to AI for help with writing — especially those of us who are competent writers? We don’t do it for every writing task, only for some. But which ones?
Here’s my hypothesis: Competent writers seek help from AI when they’re faced with
- an obligation to write, in situations in which
  - certain phrasal formulas are expected, and
  - any stylistic vividness is useless or even unwelcome.
Why write as a human being when humanity is a barrier to data processing?