Recently, Baylor’s excellent Provost, Nancy Brickhouse, wrote to faculty with a twofold message. The first part:

How do we help our students work with artificial intelligence in ways that are both powerful and ethical? After all, I believe that the future our students face is likely to be profoundly shaped by artificial intelligence, and we have a responsibility to prepare our students for this future.

ChatGPT can be used as a research partner to help retrieve information and generate ideas, allowing students to delve deeply into a topic. It can be a good writing partner, helping students with grammar, vocabulary, and even style.

Faculty may find ChatGPT as a useful tool for lesson planning ideas. For those utilizing a flipped classroom approach, AI tools may be used to generate ideas and information outside the classroom for collaborative work inside the classroom.

Finally, and most importantly, faculty have the opportunity to engage students in critical ethical conversations about the uses of AI. They need to learn how to assess and use the information ethically.

And the second part:

I am interested in how YOU are already using artificial intelligence. I am thinking now about how we might collectively address the opportunities afforded by AI. I would appreciate hearing from you.

So I’ve been thinking about what to say — though of course I’ve already said some relevant things: see this post on “technoteachers” and my new post over at the Hog Blog on the Elon Effect. But let me try to be more straightforward.

Imagine a culinary school that teaches its students how to use HelloFresh: “Sure, we could teach you how to cook from scratch the way we used to — how to shop for ingredients, how to combine them, how to prepare them, how to present them — but let’s be serious, resources like HelloFresh aren’t going away, so you just need to learn to use them properly.” The proper response from students would be: “Why should we pay you for that? We can do that on our own.”

If I decided to teach my students how to use ChatGPT appropriately, and one of them asked me why they should pay me for that, I don’t think I would have a good answer. But if they asked me why I insist that they not use ChatGPT in reading and writing for me, I do have a response: I want you to learn how to read carefully, to sift and consider what you’ve read, to formulate and then give structure to your ideas, to discern whom to think with, and finally to present your thoughts in a clear and cogent way. And I want you to learn to do all these things because they make you more free — the arts we study are liberal, that is to say liberating, arts.

If you take up this challenge you will learn not to “think for yourself” but to think in the company of well-chosen companions, and not to have your thoughts dictated to you by the transnational entity some call surveillance capitalism, which sees you as a resource to exploit and couldn’t care less if your personal integrity and independence are destroyed. The technocratic world to which I would be handing you over, if I were to encourage the use of ChatGPT, is driven by the “occupational psychosis” of sociopathy. And I don’t want you to be owned and operated by those Powers.

The Powers don’t care about the true, the good, and the beautiful — they don’t know what any of those are. As Benedict Evans writes in a useful essay, the lawyers who ask ChatGPT for legal precedents in relation to a particular case don’t realize that what ChatGPT actually searches for is something that looks like a precedent. What it returns is often what it simply invents, because it’s a pattern-matching device, not a database. It is not possible for lawyers — or people in many other fields — to use ChatGPT to do their research for them: after all, if you have to do your own research to check the validity of ChatGPT’s “research,” then what’s the point of using ChatGPT?

Think back to that culinary school where you only learn how to use HelloFresh. That might work out just fine for a while; it might not occur to you that you have no idea how to create your own recipes, or even adapt those of other cooks — at least, not until HelloFresh doubles its prices and you discover that you can’t afford the increase. That’s the moment when you see that HelloFresh wasn’t liberating you from drudgery but rather was enslaving you to its recipes and techniques. At that point you begin to scramble to figure out how to shop for your own food, how to select and prepare and combine — basically, all the things you should have learned in culinary school but didn’t.

Likewise, I don’t want you to look back on your time studying with me and wonder why I didn’t at least try to provide you with the resources to navigate your intellectual world without the assistance of an LLM. After all, somewhere down the line ChatGPT might demand more from you than just money. And you – perhaps faced with personal conditions in which you simply don’t have time to pursue genuine learning, the kind of time you had but were not encouraged to use when you were in college — may find that you have no choice but to acquiesce. As Evgeny Morozov points out, “After so many Uber- and Theranos-like traumas, we already know what to expect of an A.G.I. [Artificial General Intelligence] rollout. It will consist of two phases. First, the charm offensive of heavily subsidized services. Then the ugly retrenchment, with the overdependent users and agencies shouldering the costs of making them profitable.” Welcome to the Machine.  

That’s the story I would tell, the case I would make. 

I guess I’m saying this: I don’t agree that we have a responsibility to teach our students how to use ChatGPT and other AI tools. Rather, we have a responsibility to teach them how to thrive without such tools, how to avoid being sacrificed to Technopoly’s idols. And this is especially true right now, when even people closely connected with the AI industry, like Alex Karp of Palantir, are pleading with our feckless government to impose legal guardrails, while thousands of others are pleading with the industry itself to hit the pause button.

For what it’s worth, I don’t think that AI will destroy the world. As Freddie deBoer has noted, the epideictic language on this topic has been extraordinarily extreme:

Talk of AI has developed in two superficially opposed but deeply complementary directions: utopianism and apocalypticism. AI will speed us to a world without hunger, want, and loneliness; AI will take control of the machines and (for some reason) order them to massacre its creators. Here I can trot out the old cliché that love and hate are not opposites but kissing cousins, that the true opposite of each is indifference. So too with AI debates: the war is not between those predicting deliverance and those predicting doom, but between both of those and the rest of us who would like to see developments in predictive text and image generation as interesting and powerful but ultimately ordinary technologies. Not ordinary as in unimportant or incapable of prompting serious economic change. But ordinary as in remaining within the category of human tool, like the smartphone, like the washing machine, like the broom. Not a technology that transcends other technology and declares definitively that now is over.

So, no to both sides: AI will neither save nor destroy the world. I just think that for teachers like me it’s a distraction from what I’m called to do.

If I were to speak in this way to my students — and when classes resume I just might do so — would they listen? Some will; most won’t. After all, the university leadership is telling a very different story than the one I tell, and doing things my way would involve harder work, which is never an easy sell. As I said in my “Elon Effect” essay, the really central questions here involve not technological choices but rather communal integrity and wholeness. Will my students trust me when I tell them that they are faced with the choice of moving towards liberation or towards enslavement? In most cases, no. Should they trust me? That’s not for me to decide. But it’s the key question, and one that should be on the mind of every single student: Where, in this community of learning, are the most trustworthy guides?