Leif Weatherby:

Remainder humanism is the term I use for the way we paint ourselves into a corner theoretically. The operation is simple: we say, “machines can do x, but we can do it better or more truly.” This sets up a kind of John-Henry-versus-machine competition that guides the analysis. With ChatGPT’s release, that kind of lazy thinking, which had prevailed since the early days of AI critique, especially as motivated by the influential phenomenological work of Hubert Dreyfus, hit a dead end. If machines can produce smooth, fluent, and chatty language, everyone with a stake in linguistic humanism freaks out. Bender retreats into the position that the essence of language is “speaker’s intent”; Chomsky claims that language is actually cognition, not words (he’s been doing this since 1957; his NYT op-ed from early 2023 uses examples from Syntactic Structures without adjustment).

But the other side is made up of remainder humanists, too. These are the boosters, the doomers, and the real hype people — various brands of “rationalism,” as the internet movement around Eliezer Yudkowsky is unfortunately known. They basically accept the premise that machines are better than humans at everything, but then they ask, “What shall we do with our tiny patch of remaining earth, our little corner where we dominate?” They try to figure out how we can survive an event that is not occurring: the emergence of superintelligence. Their thinking aims to solve a very weak science fiction scenario justified by utterly incorrect mathematics. This is what causes them to devalue current human life, as has been widely noted.

Me, a year ago

I doubt I will be safe for much longer. I can easily find myself in a position like that of the theologian who worships — this is a famous phrase from one of Dietrich Bonhoeffer’s prison letters — “the God of the gaps,” a deity who only has a place where our knowledge fails, and whose relevance therefore grows less and less as human knowledge increases. If I can only pursue a “pedagogy of the gaps,” assignments that happen to coincide with the current limitations of the chatbots, then what has become of me as a teacher, and of my classroom as a place of learning? At least I can still assign my explications — a pathetic kind of gratitude, that.

No; there’s no refuge there. I must instead begin with the confident expectation that chatbots will be able to do any assignment they are confronted with. What follows from that expectation?