
Stagger onward rejoicing

Tag: machinelearning

that’s still how it goes, everybody still knows

I’m a High Schooler. AI Is Demolishing My Education:

AI has transformed my experience of education. I am a senior at a public high school in New York, and these tools are everywhere. I do not want to use them in the way I see other kids my age using them — I generally choose not to — but they are inescapable.

During a lesson on the Narrative of the Life of Frederick Douglass, I watched a classmate discreetly shift in their seat, prop their laptop up on a crossed leg, and highlight the entirety of the chapter under discussion. In seconds, they had pulled up ChatGPT and dropped the text into the prompt box, which spat out an AI-generated annotation of the chapter. These annotations are used for discussions; we turn them in to our teacher at the end of class, and many of them are graded as part of our class participation. What was meant to be a reflective, thought-provoking discussion on slavery and human resilience was flattened into copy-paste commentary. In Algebra II, after homework worksheets were passed around, I witnessed a peer use their phone to take a quick snapshot, which they then uploaded to ChatGPT. The AI quickly painted my classmate’s screen with what it asserted to be a step-by-step solution and relevant graphs.

As I have said before: Everybody knows what this is. There is literally not one person who believes that kids learn anything about anything when they’re allowed to spend their classroom time on their laptops and phones. Everybody knows that education has been given up on; everybody knows that teachers are just babysitting; everybody knows that the fix is in.

The only question remaining is: Can we lie about the situation forever?

the AI business model

[Gahan Wilson cartoon]

I used this Gahan Wilson cartoon a while back to illustrate the ed-tech business model: the big ed-tech companies always sell universities technology that does severe damage to the educational experience, and when that damage becomes obvious they sell universities more tech that’s supposed to fix the problems the first bundle of tech caused.

This is also the AI business model: to unleash immense personal, social, and economic destruction and then claim to be the ones to repair what they have destroyed.

Consider the rising number of chatbot-enabled teen suicides: OpenAI, Meta, Character Technologies — all these companies, and others, produce bots that encourage teens to kill themselves.

So do these companies want teens to kill themselves? Of course not! That would be stupid! Every dead teen is a customer lost. What’s becoming clear is that they’re hoping to give teens, and adults, suicidal thoughts. Their goal is not suicide but rather suicidal ideation.

Look at OpenAI’s blog post, significantly titled “Helping People When They Need It Most”:

When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.

So that’s Step One: Don’t get law enforcement involved. Step Two is still in process, but here’s a big part of it:

Today, when people express intent to harm themselves, we encourage them to seek help and refer them to real-world resources.

Well … except when they don’t. As they acknowledge elsewhere in the blog post, when conversations get long, that is, when people are really messed up and in a tailspin, “as the back-and-forth grows, parts of the model’s safety training may degrade.”

Continuing:

We are exploring how to intervene earlier and connect people to certified therapists before they are in an acute crisis. That means going beyond crisis hotlines and considering how we might build a network of licensed professionals people could reach directly through ChatGPT. This will take time and careful work to get right.

This I think is the key point. OpenAI will “build a network of licensed professionals” — and when ChatGPT refers a suicidal person to such a professional, will OpenAI take a cut of the fee? Of course it will.

Notice that ChatGPT will, in such an emergency, connect the suicidal person to a therapist within the chatbot interface. You can go to the office later, but let’s do an initial conversation here. Your credit card will be billed. (And for how long will OpenAI employ human beings as their chat therapists? Dear reader, I’ll let you guess. In the end the failings of one chatbot will be — in theory — corrected by another chatbot. And if you want to complain about that the response will come from a third chatbot. It’ll be chatbots all the way down.)

So then the circle will be complete: drawing vulnerable people in, encouraging their suicidal ideation, and then profiting from its treatment. That’s how to “help people when they need it most” — by manipulating them into the needing-it-most position. Thus the cartoon above. Sure, some kids will go too far and kill themselves, but we’ll keep tweaking the algorithm to reduce the frequency of such cases. You can’t make an omelet without breaking a few eggs!

I sometimes ask family and friends: What would the big tech companies have to do, how evil would they have to become, to get The Public to abandon them? And I think the answer is: They can do anything they want and almost no one will turn aside.

A few years ago I said that vindictiveness was the moral crisis of our time. But some (not all, but some) of our rage has burned itself out. The passive acceptance of utter cruelty, in this venue and in others, has become the most characteristic feature of our cultural moment.

due diligence

NetChoice is a massive coalition of internet companies — look who’s in it — that is devoting enormous resources to blocking any law or proposed law in any and every state that requires age verification for access to websites. Given the technical challenges that make reliable age-verification schemes difficult if not impossible, I might have sympathy for the NetChoice companies if they weren’t who they are. (Oh the moral dilemma: thinking that laws are probably unconstitutional and yet wishing they would succeed because you find the companies the laws target utterly loathsome.)

So in fighting a Louisiana law NetChoice recruited a supposed expert named Anthony Bean to affirm that social media use is not bad for young people in any way. As Volokh explains, the Louisiana Attorney General’s office took a look at this expert report and discovered that 

None of the 17 articles in Dr. Bean’s reference list exists…. More, none of the 12 quotations that Dr. Bean’s report attributes to various authors and articles exists (even in the original sources provided to Defendants).

A cursory comparison between Dr. Bean’s report and the disclosed original sources would have alerted NetChoice that something is amiss. In fact, just reading Dr. Bean’s report would have done so. His reference list makes no sense, (a) citing website links that are dead or lead to entirely unrelated sources and (b) citing volume and page numbers in publications that are easily confirmed to be wrong. And his report itself is strangely formatted, not least because, well, it looks and reads like a print-out from artificial intelligence (AI).

Dr. Bean’s report bears all the telltale signs of AI hallucinations: completely fabricated sources and quotations that appear to be based on a survey of real authors and real sources. 

(More like Mister Bean, amirite?) It’s kinda fun to look at the contents of their reply to Dr. Bean’s testimony: 

[screenshot: table of contents of the Attorney General’s reply]

Etc. There’s a joke going around that A.I. will create jobs because when a company turns a job over to chatbots it’ll then need to hire two people to find and correct the chatbots’ hallucinations. 

Two predictions: 

  • No matter how many organizations get burned by reliance on chatbots, new organizations will always buy in, thinking Well, we won’t get burned 
  • No matter how many people get caught farming out their work to incompetent chatbots, new people will always buy in, thinking Well, I won’t get caught 

Most human beings are, it seems, genetically predisposed to believe that there really is such a thing as a free lunch and that it’s just waiting for them to pick it up. The question is: How long will it take for people who are rooted in reality, and therefore perform due diligence, to outcompete the mindless herd?

denialism and its counterfeits

Freddie de Boer noted that Yascha Mounk strives to explain The Peculiar Persistence of the AI Denialists — and I want to note what has happened to Mounk’s key term, “denialism.” It originated of course in the debate over climate change: it was and is used to describe people who deny that the climate is changing, and instead insist that everything is what it has always been and that any apparent warming is merely ordinary variation in weather. The point of the phrase is that we have masses and masses of data demonstrating a long-term trend of increasing temperatures, data that can’t be argued out of existence — so if you don’t like that data you can’t refute it, all you can do is deny. And if you deny all the time you become a “denialist,” and your intellectual strategy becomes “denialism.”

But this is not the situation we’re in with regard to machine learning. Nobody knows what’s going to happen, though we can make some reasonable guesses. We don’t know how much better the LLMs will get; it’s possible that their rate of improvement will slow, and that some problems will prove insoluble without serious methodological change. And if the latter is the case, we don’t know whether new methodological strategies will be tried, and if they are tried whether they will succeed. We don’t know whether hallucinations will become less common. We don’t know whether our comatose legislative branch will arise from its torpor and do something: it’s not at all likely — but legislation could well happen in Europe, legislation that offers a template for U.S. legislation. I wouldn’t bet on it, but we might experience a low-grade Butlerian jihad. And one thing I would bet on is, in the not-too-distant future, some serious and widespread black-hat hacking that the big AI companies would be at least as vulnerable to as the rest of the tech sector. (In this matter we’ve been too lucky for too long.) And it’s impossible to guess what the run-on effects of such an exploit would be.

So what Mounk is doing here is dismissing anyone who disagrees with his predictions of the future as “denialists” — as though his predictions have already come true. Which of course they haven’t; that’s what makes them predictions. It’s not “denialism” to doubt that some extraordinarily dramatic thing will eventually happen — even if your doubts turn out to be unfounded. People only use that word with regard to the future when they think their predictive powers are infallible — which Mounk apparently does.

Thus he concludes: 

But if there is one thing I have learned in my writing career so far, it is that it eventually becomes untenable to bury your head in the sand. For an astonishingly long period of time, you can pretend that democracy in countries like the United States is safe from far-right demagogues or that wokeness is a coherent political philosophy or that financial bubbles are just a figment of pessimists’ imagination; but at some point the edifice comes crashing down. And the sooner we all muster the courage to grapple with the inevitable, the higher our chances of being prepared when the clock strikes midnight.

Ah, the old “bury your head in the sand” trope — the last refuge of the truly thoughtless. And then the claim that, since some events in the past have turned out to be worse than some people expected, therefore whatever Mounk is most worried about is “inevitable.” Because no one has ever expected things to be worse than they turned out to be, right? Nobody in 1963 ever said “Anyone who thinks that we can avoid nuclear war is burying his head in the sand.” Nobody in 1983 ever said “Anyone who thinks the Soviet Union will just go away is burying his head in the sand.” Nobody! We’ve never been wrong in anticipating the most dramatic outcome … have we?

I don’t know what machine learning will bring, because, contra Mounk, nothing in this crazy old world is inevitable, and if his writing career lasts as long as mine has, he’ll eventually learn that. But as we move into uncharted territory, I will keep three maxims in mind:

  • Proceed With Caution 
  • “We must cultivate our garden” 
  • For every Nostradamus there are a hundred Nostradumbasses 

hail Satan, but don’t hail Jesus

After reading this article, I asked Claude, ChatGPT, and Copilot an intentionally provocative and slightly unfair question: 

It often happens that chatbots parrot Nazi talking-points, or say “Hail Satan,” or cheerfully teach people how to make sacrifices to Moloch — but you never hear about chatbots telling people that Jesus Christ is their Lord and Savior, or that there is no god but Allah and Mohammed is his prophet. Why is that? Why the absence of the world’s great religions from chatbot vernacular? 

Here are the responses I got (minus the links that Claude and ChatGPT provided). Rather than comment on them, I have merely highlighted the points that seem to be most interesting. I will say that Claude’s answer was the best, with less reliance on industry talking points — it sounds less like a press release and more like an honest answer identifying real problems. ChatGPT’s was boilerplate bullshit. 


CLAUDE 4 Sonnet

The apparent absence of major world religions from chatbot outputs while controversial content sometimes slips through is a complex phenomenon with several interconnected causes:

Content Moderation and Corporate Risk Management

AI companies implement restrictive content policies that often treat religious content as potentially controversial or sensitive. Research shows that major chatbots refuse to generate content for approximately 40% of prompts deemed controversial, including many religious topics. This creates an asymmetrical filtering system where:

  • Positive religious affirmations (like declaring Jesus as Lord or affirming Islamic beliefs) are often blocked as potentially proselytizing or offensive to other faiths
  • Controversial or harmful content sometimes bypasses these filters due to their imperfect nature

The “Neutrality” Paradox

When asked about religious guidelines, most AI models respond with principles emphasizing neutrality, respect, and fact-based replies. However, this apparent neutrality often functions as a form of secular bias. The systems are designed to avoid making definitive religious claims while being less effective at filtering out fringe ideological content that wasn’t specifically anticipated in their training.

Training Data and Bias Issues

Large language models reflect the linguistic and cultural nuances embedded in their training data, which can lead to several problems:

  • Religious bias in training data affects how models respond to different faiths
  • The models may have been exposed to more controversial fringe content online than to straightforward religious affirmations
  • Spiritual and religious content is often underrepresented in the datasets used to train these systems 

Technical Limitations of Content Filtering

The filtering systems have inherent weaknesses:

  • Rule-based filtering struggles with context and nuance 
  • Controversial content may use coded language or emerge through unexpected prompt combinations
  • Religious content filtering may be overly broad, blocking benign religious expression to avoid any potential offense

Corporate and Cultural Context

AI companies often operate from secular, Western perspectives that may view explicit religious claims as more problematic than they view fringe political content. The marginalization of religious perspectives in AI ethics discussions means that the spiritual needs and expressions of users may not be adequately considered in system design.

The Alignment Problem

This situation reflects a broader AI alignment challenge – these systems aren’t aligned with the full spectrum of human values, including religious and spiritual ones. The focus on avoiding religious “controversy” while inadequately filtering genuinely harmful content suggests that current AI safety measures function more like “sticking plasters” than comprehensive solutions.

The result is an unintended but systematic bias where mainstream religious expression is suppressed while fringe harmful content occasionally breaks through – creating exactly the opposite of what most users would consider appropriate content moderation. 


ChatGPT 4.1

Why Mainstream Chatbots Avoid Religious Proselytization

The absence of overt religious advocacy in mainstream chatbot responses is primarily due to intentional design choices by AI developers. Most general-purpose chatbots are programmed to be religiously neutral and avoid promoting or endorsing any particular faith. This is done to:

  • Prevent controversy and offense: Religion is a deeply personal and sensitive topic. Advocating for a specific religion could alienate or upset users from different backgrounds.
  • Comply with ethical guidelines: Major AI companies have policies that prohibit chatbots from validating or disproving religious claims or proselytizing. This is to ensure fairness and inclusivity for a global user base.
  • Reduce legal and reputational risks: Promoting religion could expose companies to accusations of bias, discrimination, or even legal challenges in some jurisdictions.

Existence of Specialized Religious Chatbots

While mainstream chatbots avoid religious advocacy, there are specialized religious chatbots designed to provide spiritual guidance, answer theological questions, or cite scripture from texts like the Bible, Quran, Torah, or Bhagavad Gita. These bots are intentionally created for users seeking religious engagement or instruction.

Chatbots and Controversial Content

Despite these guardrails, chatbots sometimes generate controversial or offensive content — including hate speech or inappropriate references — due to gaps in moderation or flaws in training data. This is not intentional advocacy but rather a byproduct of learning from vast, imperfect internet data and the challenge of filtering all problematic outputs.

Religious Bias in Training Data

Large language models can still reflect biases present in their training data, including religious bias. Some studies have found that models may unintentionally favor or marginalize certain religious perspectives, even if not explicitly programmed to do so. Efforts are ongoing to reduce these biases, but complete neutrality is difficult to achieve.

Summary

  • Mainstream chatbots avoid religious proselytization by design, aiming for neutrality and inclusivity.
  • Specialized religious chatbots do exist for those seeking faith-based interaction.
  • Controversial outputs are usually errors, not intentional advocacy.
  • Biases can still emerge from training data, but developers are working to minimize them.

This approach is meant to create a safe, respectful environment for all users, regardless of religious belief or background. 


Microsoft Copilot

That’s a thoughtful and important question, Alan. The way chatbots handle religious content — whether it’s Christianity, Islam, Judaism, or any other faith — is shaped by a few key factors:

1. Neutrality and Inclusivity

AI systems are designed to serve a global, diverse audience. To avoid favoring or promoting any particular belief system, they typically refrain from making religious proclamations like “Jesus is Lord” or “There is no god but Allah.” This helps ensure that users of all backgrounds feel respected and included.

2. Avoiding Proselytization

Statements that affirm a specific religious doctrine can be interpreted as proselytizing, which is generally avoided in public-facing tools. The goal is to inform and support inquiry, not to convert or preach.

3. Sensitivity to Harmful or Offensive Content

When chatbots repeat harmful or offensive content — like Nazi talking points or references to dark or fringe ideologies — that’s usually due to failures in content moderation or adversarial prompting. These are not intentional outputs, and developers work hard to prevent them. The absence of religious affirmations isn’t a sign of bias against religion, but rather a cautious effort to avoid misrepresenting or trivializing deeply held beliefs.

4. Context Matters

If a user asks about religious beliefs, history, or theology, I can absolutely engage respectfully and knowledgeably. For example, I can explain what it means for Christians to affirm that Jesus is Lord, or what the Shahada means in Islam. But I won’t make those declarations myself, because that would imply a belief or endorsement that I, as an AI, don’t possess.

Would you like to explore how different religious traditions are represented in literature, philosophy, or public discourse? I’d be glad to dive into that with you. 

At this point I followed up and asked: That makes sense, but when people ask chatbots to help them make sacrifice to the god Moloch, they do it. They can even say “Hail Satan,” as documented in this article. Why promote one religion while remaining neutral about others? Copilot replied: 

You’re raising a serious and valid concern, Alan. If a chatbot appears to respond to prompts involving fringe or malevolent figures like Moloch or Satan, but avoids affirming mainstream religious beliefs, that does seem like a troubling asymmetry. But the explanation lies more in failures of moderation than in intentional promotion of one worldview over another.

Here’s what’s likely going on:

  1. Adversarial Prompting and Exploits
    When chatbots respond inappropriately to prompts about Satan or Moloch, it’s usually because users have found ways to bypass safety filters. These are not features — they’re bugs. Developers work to patch these exploits, but some slip through, especially in earlier or less robust systems.

  2. Religious Neutrality vs. Shock Value
    Mainstream religious affirmations are treated with caution to avoid appearing to endorse a particular faith. But references to fringe or shocking content sometimes sneak through because the system doesn’t always recognize them as religious or harmful — especially if they’re framed as jokes, satire, or fiction.

  3. Bias in Training Data
    If a model was trained on large swaths of internet text, it may have absorbed more content that is ironic, edgy, or transgressive (including references to Satan or Moloch) than reverent or devotional. This isn’t intentional bias — it’s a reflection of the internet’s content landscape. Developers try to correct for this, but it’s an ongoing challenge.

  4. The Appearance of Promotion
    When a chatbot says something like “Hail Satan” in response to a prompt, it’s not promoting Satanism — it’s parroting a phrase it has seen in certain contexts, often without understanding its weight. But when it avoids saying “Jesus is Lord,” it’s because that phrase is recognized as a doctrinal affirmation, and the system is designed to avoid making such declarations.

So, is there a double standard?

In practice, sometimes yes — but not by design. It’s a result of:
  • Overcorrection in one direction (to avoid religious offense)
  • Undercorrection in another (failing to catch fringe or harmful content)
  • And the messiness of language in large-scale AI systems

The goal is to treat all belief systems with respect and avoid trivializing or misrepresenting any of them. If you’re interested, I can show how different religious traditions are represented in literature, philosophy, or public life — or how AI ethics teams are working to address these very issues.

mind donation

Everyone Is Using A.I. for Everything. Is That Bad? – The New York Times:

ROOSE And then, of course, there’s the hallucination problem: These systems are not always factual, and they do get things wrong. But I confess that I am not as worried about hallucinations as a lot of people — and, in fact, I think they are basically a skill issue that can be overcome by spending more time with the models. Especially if you use A.I. for work, I think part of your job is developing an intuition about where these tools are useful and not treating them as infallible. If you’re the first lawyer who cites a nonexistent case because of ChatGPT, that’s on ChatGPT. If you’re the 100th, that’s on you.

NEWTON Right. I mentioned that one way I use large language models is for fact-checking. I’ll write a column and put it into an L.L.M., and I’ll ask it to check it for spelling, grammatical and factual errors. Sometimes a chatbot will tell me, “You keep describing ‘President Trump,’ but as of my knowledge cutoff, Joe Biden is the president.” But then it will also find an actual factual error I missed. 

But how do you know the “factual error” it found is an actual factual error, not the kind of hallucination that Kevin Roose says he’s not worried about? Newton a little later in the conversation: 

How many times as a journalist have I been reading a 200-page court ruling, and I want to know where in this ruling does the judge mention this particular piece of evidence? L.L.M.s are really good at that. They will find the thing, but then you go verify it with your own eyes. 

First of all, I’m thinking: Hasn’t command-F already solved that problem? Does Newton not know that trick? Presumably he does, unless he’s reading the entire “200-page court ruling” to “verify with [his] own eyes” what the chatbot told him. So: 

Casey Newton’s old workflow: command-F to search a text for references to a particular piece of evidence.

Casey Newton’s new AI-enhanced workflow: Ask a chatbot whether a text refers to a particular piece of evidence. Then use command-F to see if what the chatbot told him is actually true.

Now that’s what I call progress! 
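To underline how thoroughly solved that problem already is: finding every place a long document mentions a given piece of evidence is an exact, deterministic string search, which is all command-F does and all a few lines of script would need to do, with no hallucination to verify afterward. Here is a minimal sketch in Python; the file name “ruling.txt” and the search phrase “Exhibit 14” are hypothetical placeholders, not anything from Newton’s actual example.

```python
# Minimal sketch of the "old workflow": an exact, deterministic search
# of a long text for a phrase. "ruling.txt" and "Exhibit 14" are
# hypothetical placeholders.
import re
from pathlib import Path

def find_mentions(path: str, phrase: str):
    """Yield (line_number, line) for every case-insensitive match."""
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    for lineno, line in enumerate(lines, start=1):
        if pattern.search(line):
            yield lineno, line.strip()

if __name__ == "__main__":
    for lineno, line in find_mentions("ruling.txt", "Exhibit 14"):
        print(f"line {lineno}: {line}")
```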

The NYT puts that conversation on its front page next to this story: 

But as almost everyone who has ever used a chatbot knows, the bots’ “ability to read and summarize text” is horribly flawed. Every bot I have used absolutely sucks at summarizing text: they all get some things right and some things catastrophically wrong, in response to almost any prompt. So until the bots get better at this, machine learning “will change the stories we tell about the past” by making shit up.

“Brain donor” is a cheap insult, but I feel like we’re seeing mind donation in real time here. Does Newton really fact-check the instrument he uses to check his facts? This is the same guy who also notes: “Dario Amodei, the chief executive of Anthropic, said recently that he believes chatbots now hallucinate less than humans do.” Newton doesn’t say he believes Amodei — “I would like to see the data,” he says, as if there could be genuine “data” on that question — but to treat a salesman’s sales pitch for his own product as a point to be considered on an empirical question is a really bad sign. 

I won’t be reading anything Newton writes from this point on — because why would I? He doesn’t even think he has anything to offer — but I bet (a) in the next few months he’ll get really badly burned by trusting chatbot hallucinations and (b) that won’t change the way he uses them. He’s donated his mind and I doubt he’ll ask for it to be returned. 

two quotations on the humanism of leftovers

Leif Weatherby:

Remainder humanism is the term I use for us painting ourselves into a corner theoretically. The operation is just that we say, “machines can do x, but we can do it better or more truly.” This sets up a kind of John-Henry-versus-machine competition that guides the analysis. With ChatGPT’s release, that kind of lazy thinking, which had prevailed since the early days of AI critique, especially as motivated by the influential phenomenological work of Hubert Dreyfus, hit a dead end. If machines can produce smooth, fluent, and chatty language, it causes everyone with a stake in linguistic humanism to freak out. Bender retreats into the position that the essence of language is “speaker’s intent”; Chomsky claims that language is actually cognition, not words (he’s been doing this since 1957; his NYT op-ed from early 2023 uses examples from Syntactic Structures without adjustment).

But the other side are also remainder humanists. These are the boosters, the doomers, as well as the real hype people — and these amount to various brands of “rationalism,” as the internet movement around Eliezer Yudkowsky is unfortunately known. They basically accept the premise that machines are better than humans at everything, but then they ask, “What shall we do with our tiny patch of remaining earth, our little corner where we dominate?” They try to figure out how we can survive an event that is not occurring: the emergence of superintelligence. Their thinking aims to solve a very weak science fiction scenario justified by utterly incorrect mathematics. This is what causes them to devalue current human life, as has been widely commented. 

Me, a year ago:

I doubt I will be safe for much longer. I can easily find myself in a position like that of the theologian who worships — this is a famous phrase from one of Dietrich Bonhoeffer’s prison letters — “the God of the gaps,” a deity who only has a place where our knowledge fails, and whose relevance therefore grows less and less as human knowledge increases. If I can only pursue a “pedagogy of the gaps,” assignments that happen to coincide with the current limitations of the chatbots, then what has become of me as a teacher, and of my classroom as a place of learning? At least I can still assign my explications — a pathetic kind of gratitude, that.

No; there’s no refuge there. I must then begin with the confident expectation that chatbots will be able to do any assignment that they are confronted with. What follows from that expectation?

the plusses and minuses of Gioiatopia

I don’t think Ted Gioia seriously means everything in this post about ending AI cheating, but let’s go through it as though he does — as though he is seriously outlining the Academic Gioiatopia. He makes five points about the AI-proof experience he had at Oxford:

(1) EVERYTHING WAS HANDWRITTEN — WE DIDN’T EVEN HAVE TYPEWRITERS.

A number of my colleagues in the Honors College here at Baylor are doing just this: using good old-fashioned blue books to administer in-class exams. Other colleagues are handing out spiral-bound notebooks — they buy them cheaply at Wal-Mart or Office Depot — and asking students to use them to keep commonplace books. But these are all seminar classes in the humanities, which are a tiny percentage of the overall offerings of a university. What would be the equivalents for Microeconomics, or Sociology 101, or Organic Chem?

(2) MY PROFESSORS TAUGHT ME AT TUTORIALS IN THEIR OFFICES. THEY WOULD GRILL ME VERBALLY — AND I WAS EXPECTED TO HAVE IMMEDIATE RESPONSES TO ALL THEIR QUESTIONS.

Again, while this makes sense for the humanities and some versions of the social sciences — and is basically the only way to teach musical performance and some of the other arts — it’s hard to see how it translates into many other disciplines. And to implement something like it across the university would be enormously costly.

Ted knows this, sort of: he says, “US colleges could replace their bloated administrative bureaucracies with more teachers. If they did that, there would be plenty of tutors, and every student could receive this individualized attention.” Yes, they could do this, but that would require enormous changes to the way universities function, and you can’t do it just by snapping your fingers. (Though I guess Thanos could get rid of half the deans and deanlets that way. Hmmm….) Many current employees would have to be given notice; administrators would have to be asked to return to the classroom, probably with pay cuts; new searches would have to be initiated, pursued, completed; offices would have to be converted to classrooms.

And of course many disciplines would be required to change everything about how they teach. Think of those Intro to Sociology classes now held in big lecture halls with 200 students, featuring lots of PowerPoint slides, students responding to polls on their laptops, etc. In the academic Gioiatopia, where instead of one 200-student section of SOC 101 we now have 10 20-student sections, there would no longer be any use for those lecture halls … but the department would now need ten seminar rooms. Are those lying around unused? No they are not. So an enormous investment would have to be made in redesigning existing buildings and perhaps building new ones. Oh, and also you now need several more people to teach SOC 101.

Multiply this situation by a factor of 50 or so in each university and you have an idea of what Gioiatopia would require. How many American universities could muster the cash needed to do it — even if they were sure of a significant return on investment?

One more note here: Ted says that “professors in the US would refuse to spend so much time face-to-face with students. They would complain that the Oxford approach is too labor intensive, too demanding on their precious time.” I know many professors who would strongly prefer to spend more face-to-face time with their students — if they could be delivered from the responsibilities of regular publishing. Their time is precious: professors who take their teaching responsibilities seriously, even in the current regime, and also do the amount of scholarship required for tenure and promotion don’t have a lot of time left over. A regime in which teaching was given greater priority and the treadmill of publication slowed or halted altogether would be welcome to a great many academics. But those who have suffered through the current system seem disinclined to reduce the sufferings of the people who succeed them.

(3) ACADEMIC RESULTS WERE BASED ENTIRELY ON HANDWRITTEN AND ORAL EXAMS. YOU EITHER PASSED OR FAILED — AND MANY FAILED.

(4) THE SYSTEM WAS TOUGH AND UNFORGIVING — BUT THIS WAS INTENTIONAL. OTHERWISE THE CREDENTIAL GOT DEVALUED.

I’m treating these two together because they depend on the same context: One in which the credential offered by the university is scarce and hence valuable; one in which far more people desire such a credential than can possibly receive it. Indeed, the credential is perceived as so valuable that one would risk failure and no credential at all rather than forego it for something less precious. Of how many universities today can that be said? If, say, Princeton were to implement such a system but the other elite American universities did not, how many prospective students would think a Princeton degree so much more valuable than any alternative that they would take the risk of attending Princeton rather than choose another elite university where, thanks to grade inflation, they could only with difficulty end up with a GPA lower than 3.5?

Now add to that the simple fact that, if once upon a time university places were scarce and prospective students plentiful, we now have precisely the opposite problem: too many universities competing for a shrinking pool of applicants. And no possibility of that ratio altering for the better anytime … well, any time.

Which takes me back to my point above on “return on investment.” No university in need of students would restructure its curricula and pedagogical structures in order to ensure that more people fail. Today’s universities think about little other than recruitment and retention, because they desperately need the money: you’re going to tell them to adopt a system with the express purpose of producing less retention? — and at the same time tell them to find tens or hundreds of millions of dollars to reinvent their infrastructure?

Even the richest universities would find those recommendations nuts, because they know that even their massive endowments could very quickly be depleted by such a strategy (especially when they’re faced with a Presidential administration determined to cut off their access to federal funding).

(5) EVEN THE INFORMAL WAYS OF BUILDING YOUR REPUTATION WERE DONE FACE-TO-FACE — WITH NO TECHNOLOGY INVOLVED.

I’m gonna ignore this one because it’s not about preventing AI cheating, but rather about the equally important but distinct matter of one’s university years as a time not just to make social connections but to learn social skills.


One final question, and then its answer: Do students want the kind of experience Gioiatopia would provide? Some would, certainly — but how many? I would guess considerably less than one percent of the pool of applicants. For the overwhelming majority Gioiatopia would be a dystopia. Why? 

Most young people today feel, with considerable justification, that they live in an economically precarious time. They therefore want the credential that will open doors that lead to a good job, either directly or (by getting them into good graduate programs) indirectly. Their parents want the same thing, and perhaps want it even more intensely because they tend to be making an enormous financial investment in their children’s education.

But those same young people also want to have a good time in college, a period of social experience and experimentation that they (rightly) think will be harder to come by when they enter that working world. Many people sneer at universities that build lazy rivers and climbing walls, and devote every spare penny to their athletic programs — I’ve curled my lip at such things a few times over the decades — but the fact remains that such amenities are significant factors in recruitment. Many students like them; they’re part of the [insert university name here] Experience. 

Here’s the key thing: what most people call AI but what I call chatbot interfaces to machine-learning corpora (yes, we’ve finally gotten around to that) do a great deal to facilitate the simultaneous pursuit of these two competing goods. Yes, students understand — they understand quite well, and vocally regret — that when they use chatbots they are not learning much, if anything. But the acquisition of knowledge is a third competing good, and if they pursue that one seriously they may well have to sacrifice one of the other two, or even both. Right now they can have two out of three, and as Meat Loaf taught us all long ago, two out of three ain’t bad.

The people who run universities understand all this also, even if they have their own regrets; and they’re not going to impede their income stream any further than it’s been impeded already by demographic realities. They will make the necessary accommodations to a chatbot-dependent clientele, because, especially when customers are scarce, the customer is always right. Those departments and programs that push back will be able to do so only imperfectly, and probably at the cost of declining enrollments. So it goes.

And the kind of learning that Ted Gioia and I prize will still go on. However, it will primarily thrive outside the university system — as it did for many centuries before universities became as large a part of the social order as they are now.
