
Stagger onward rejoicing

Tag: AI

two summative thoughts about AI

One: There was until recently a battle for the soul of AGI research and development, a battle between the stewards and the exploiters. The stewards understand themselves to be the duty-bound custodians of an ever-more-enormous power; the exploiters are interested in using that power to make themselves rich and powerful. Had the stewards managed to retain control, or even influence, then I would have been willing to keep a cautiously hopeful eye on developments. However, the stewards have been routed and only the exploiters remain. (OpenAI’s dismissal of Sam Altman was effectively The Stewards’ Last Stand.) I therefore consider it necessary to refuse any use of AI in any circumstances that I can control. 

Two: The powers of law are being summoned by people who see the exploiters as I do, which I guess is a good thing, but … in our society, can anyone as rich as the tech companies behind AGI lose, either in the courts or through legislation? I don’t see how they can. Everyone who stands in their way can be bought, and most of them are pleading to be bought. (Similarly, in Premier League football, Everton is small enough to be smacked down but I cannot imagine Manchester City or Chelsea ever suffering any penalty, no matter how grossly they have defied the financial rules.) As Dana Gioia taught us long ago, 

Money. You don’t know where it’s been,
but you put it where your mouth is.
And it talks.

mechanical writing

Cory Doctorow:

A university professor friend of mine recently confessed that everyone in their department now outsources their letter-of-reference writing to ChatGPT. They feed the chatbot a few bullet points and it spits out a letter, which they lightly edit and sign.

Naturally enough, this is slowly but surely leading to a rise in the number of reference letters they’re asked to write. When a student knows that writing the letter is the work of a few seconds, they’re more willing to ask, and a prof is more willing to say yes (or rather, they feel more awkward about saying no).

The next step is obvious: as letters of reference proliferate, people who receive these letters will ask a chatbot to summarize them in a few bullet points, creating a lossy process where a few bullet points are inflated into pages of puffery by a bot, then deflated back to bullet points by another one.

But whatever signal remains after that process has run, it will lack one element: the signal that this letter was costly to produce, and therefore worthy of taking into consideration merely on that basis. 

See this post by me.

I must admit that I hadn’t thought of this particular use of AI, but it raises an interesting question: When do we turn to AI for help with writing? — especially those of us who are competent writers? We don’t do it for every writing task, only for some — but which ones? 

Here’s my hypothesis: Competent writers seek help from AI when they’re faced with 

  • an obligation to write, in situations in which 
  • certain phrasal formulas are expected, and 
  • any stylistic vividness is useless or even unwelcome. 

Why write as a human being when humanity is a barrier to data processing? 

bureaucratic sustainability

Matt Crawford:

The example of China’s explosive growth in the last thirty years showed that capitalism can “work” without the political liberalism that was once thought to be its necessary corollary. The West seems to be arriving at the same conclusion, embracing a form of capitalism that is more tightly tied to Party purposes. But there is a crucial difference in the direction given to the economy by the party-state in the two cases. In the West, the party-state is consistently anti-productive. For example, it promotes proportional representation over competence in labor markets (affirmative action). There are probably sound reasons for doing so, all things considered, but it comes at a cost that is rarely entered into the national ledger. Less defensibly, the party-state installs a layer of political cadres in every institution (the exploding DEI bureaucracy). The mandate of these cadres is to divert time and energy to struggle sessions that serve nobody but the cadres themselves. And the Party is consistently opposed to the most efficient energy technologies that could contribute to shared prosperity (nuclear energy, as well as domestic oil and gas), preferring to direct investment to visionary energy projects. The result has been a massive transfer of wealth from consumers to Party-aligned actors. The stylized facts and preferred narratives of the Party can be maintained as “expert consensus” only by the suppression of inquiry and speech about their underlying premises. The resulting dysfunction makes the present order unsustainable. 

This is an incisive essay by Matt, as always, and I agree with almost all of it — the exception being the last sentence quoted here. It seems to me that the current system is indeed sustainable, for quite some time, at least in many arenas.

For instance, in the American university system the vast expansion of the DEI apparat simply follows the previous (and not yet complete) expansion of the mental-health apparat, all of which siphons resources away from the teaching of students. But that’s okay, because almost no one — least of all students and their parents — thinks that learning is the point of university. The university is for socialization, networking, and credentialing, and I expect to see a continuing expansion of the bureaucracies that promote these imperatives and a corresponding contraction of the number of teachers. And anyway, insofar as teaching and learning remain a necessity, if an annoying one, much of that work can be outsourced to ed-tech products and, now, to chatbots.

Genuine teaching and genuine learning will always go on, but for the foreseeable future they will happen at the margins of our universities or outside the universities altogether. Meanwhile, the symbolic work of the party-state will grind on, because it must.

For since the law has but a shadow of the good things to come instead of the true form of these realities, it can never, by the same sacrifices that are continually offered every year, make perfect those who draw near. Otherwise, would they not have ceased to be offered, since the worshipers, having once been cleansed, would no longer have any consciousness of sins? But in these sacrifices there is a reminder of sins every year. 

Nick Cave:

ChatGPT rejects any notions of creative struggle, that our endeavours animate and nurture our lives giving them depth and meaning. It rejects that there is a collective, essential and unconscious human spirit underpinning our existence, connecting us all through our mutual striving.

ChatGPT is fast-tracking the commodification of the human spirit by mechanising the imagination. It renders our participation in the act of creation valueless and unnecessary.  That ‘songwriter’ you were talking to, Leon, who is using ChatGPT to write ‘his’ lyrics because it is ‘faster and easier,’ is participating in this erosion of the world’s soul and the spirit of humanity itself and, to put it politely, should fucking desist if he wants to continue calling himself a songwriter.

two quotations on corporations

Charlie Stross (2010):

Corporations do not share our priorities. They are hive organisms constructed out of teeming workers who join or leave the collective: those who participate within it subordinate their goals to that of the collective, which pursues the three corporate objectives of growth, profitability, and pain avoidance. (The sources of pain a corporate organism seeks to avoid are lawsuits, prosecution, and a drop in shareholder value.)

Corporations have a mean life expectancy of around 30 years, but are potentially immortal; they live only in the present, having little regard for past or (thanks to short term accounting regulations) the deep future: and they generally exhibit a sociopathic lack of empathy. […] 

We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden. Individual atomized humans are thus either co-opted by these entities (you can live very nicely as a CEO or a politician, as long as you don’t bite the feeding hand) or steamrollered if they try to resist.

In short, we are living in the aftermath of an alien invasion.

James Bridle (2022):

In the last few years, I have given talks at conferences and spoken on panels about the social impacts of new technology, and as a result I am sometimes asked when ‘real’ AI will arrive – meaning the era of super-intelligent machines, capable of transcending human abilities and superseding us. When this happens, I often answer: it’s already here. It’s corporations. This usually gets an uncertain half-laugh, so I explain further. We tend to imagine AI as embodied in something like a robot, or a computer, but it can really be instantiated as anything. Imagine a system with clearly defined goals, sensors and effectors for reading and interacting with the world, the ability to recognize pleasure and pain as attractors and things to avoid, the resources to carry out its will, and the legal and social standing to see that its needs are catered for, even respected. That’s a description of an AI – it’s also a description of a modern corporation…. Corporate speech is protected, corporate personhood recognized, and corporate desires are given freedom, legitimacy and sometimes violent force by international trade laws, state regulation – or lack thereof – and the norms and expectations of capitalist society. Corporations mostly use humans as their sensors and effectors; they also employ logistics and communications networks, arbitrage labour and financial markets, and recalculate the value of locations, rewards and incentives based on shifting input and context. Crucially, they lack empathy, or loyalty, and they are hard – although not impossible – to kill.

Encyclopedia Babylonica 4: System


The Bob Marley and the Wailers album Survival (1979) is one of Marley’s most politically militant recordings. The imagery of the album cover, which with one exception features the flags of African nations, suggests its theme — the need for Pan-African political unity — and the songs on the album say that that unity is to be rooted in emancipation from the dominance of a global political and economic system, a system which is largely controlled by white people. (The Wikipedia page linked to above explains the flags and the image hiding behind the album’s title.) 

One of the most constant and powerful images of Rastafarianism is that of the Babylonian captivity. You may get a brief summary of this theme by reading this essay by David W. Stowe, and then, perhaps, go deeper by reading Stowe’s remarkable book Song of Exile: The Enduring Mystery of Psalm 137. Stowe has a lot to say about the version of Psalm 137 with which I prefaced my previous post, “Rivers of Babylon” by The Melodians. I don’t suppose there’s any song that more fully captures the tone and mood of Rastafari — and the best song on Survival is a kind of extension of it, as though “Rivers of Babylon” were rewritten by a critical race theorist. That song is called “Babylon System,” and I’m invoking it here because I think it suggests a different approach to living in Babylon than the two we have already considered: infiltrating the halls of power and praying for deliverance.  


But before I go any further, I need to clarify some things. It’s pretty obvious what I’m suggesting in these posts: that living in Technopoly is best figured as a kind of Babylonian captivity. But do I really mean to compare my situation — as a comfortable, economically secure white person in one of the world’s richest countries — to those who have been uprooted from their homes and sold into slavery, subjected to endless bigotry and oppression both overt and covert? And my answer is: Yes, I mean to compare — but not to equate.

Consider for instance the moment in “The Scouring of the Shire” when the returning hobbits see what has been done to Hobbiton:

‘This is worse than Mordor!’ said Sam. ‘Much worse in a way. It comes home to you, as they say; because it is home, and you remember it before it was all ruined.’

‘Yes, this is Mordor,’ said Frodo. ‘Just one of its works.’ 

By any rational reckoning, of course, Hobbiton is not “worse than Mordor” — as Sam of all people ought to know, having spent too much time in that blasted land. But you know what he means, you know where the feeling of revulsion comes from: his love of the Shire and his long-cherished hope to return to it, his unreflective expectation that upon his return it would be what he always knew it to be. But would he exchange his condition for that of a prisoner in one of Sauron’s towers, or an Orc soldier? Of course he wouldn’t. The Mordor System is darker and fouler at the center than at the periphery; but its logic is the same everywhere. And what has been learned about it near the center can be used near the periphery as well. That’s the theme of my essay on Albert Murray: white American Christians who think they’re suffering should take some lessons from their Black brothers and sisters, who know what real persecution is. If you feel threatened by the Beast, then maybe you should consult people who have spent generations in that Beast’s belly. 

If I had to choose between (a) raising my child in an environment of material and social comfort but also with the constant preaching of the dark gospel of metaphysical capitalism, and (b) raising my child in an environment of economic hopelessness and racial bigotry, in which he or she must spend a lifetime constantly at tiptoe stance — well, I would certainly choose the former. The first situation has dystopian elements, but also hopeful possibilities and some degree of freedom; the second is dystopian to its core. Those in the first situation can at least learn from the miseries of those in the second. 


Okay, back to Bob Marley. What does Babylon System do? It’s a vampire, sucking the blood of the sufferers; and it builds churches and universities for the express purpose of deceiving the people and keeping them enslaved. That is, to borrow the Marxist terms, it consists of an economic/political base and a cultural superstructure: As Louis Althusser said, it’s a model of political economy that sustains itself not simply by force, or the threat of force, but also through the work of ideological state apparatuses. Foucault borrowed this distinction when he coined the phrase “power-knowledge regime” — the hard power of the state-as-such and the soft power of its knowledge-disseminating apparatuses. 

Not a bad description of life under surveillance capitalism — at least, once you start thinking about it. If you manage not to think about the costs, life in Babylon can be kinda pleasant at times, and questioning the System can feel risky. But once you start thinking … well, for one thing, to do all this unpaid labor for social media and AI companies is to tread the winepress, but thirst.

So what do we do? Do we strive to sneak our young men and women into the ruling cadre? Do we pray for deliverance? Or do we do what Bob Marley says we should do: rebel? If we’re drawn to the last, then we have to ask another question: What might successful rebellion look like? 

on technologies and trust

Recently, Baylor’s excellent Provost, Nancy Brickhouse, wrote to faculty with a twofold message. The first part:

How do we help our students work with artificial intelligence in ways that are both powerful and ethical? After all, I believe that the future our students face is likely to be profoundly shaped by artificial intelligence, and we have a responsibility to prepare our students for this future.

ChatGPT can be used as a research partner to help retrieve information and generate ideas, allowing students to delve deeply into a topic. It can be a good writing partner, helping students with grammar, vocabulary, and even style.

Faculty may find ChatGPT as a useful tool for lesson planning ideas. For those utilizing a flipped classroom approach, AI tools may be used to generate ideas and information outside the classroom for collaborative work inside the classroom.

Finally, and most importantly, faculty have the opportunity to engage students in critical ethical conversations about the uses of AI. They need to learn how to assess and use the information ethically.

And the second part:

I am interested in how YOU are already using artificial intelligence. I am thinking now about how we might collectively address the opportunities afforded by AI. I would appreciate hearing from you.

So I’ve been thinking about what to say — though of course I’ve already said some relevant things: see this post on “technoteachers” and my new post over at the Hog Blog on the Elon Effect. But let me try to be more straightforward.

Imagine a culinary school that teaches its students how to use HelloFresh: “Sure, we could teach you how to cook from scratch the way we used to — how to shop for ingredients, how to combine them, how to prepare them, how to present them — but let’s be serious, resources like HelloFresh aren’t going away, so you just need to learn to use them properly.” The proper response from students would be: “Why should we pay you for that? We can do that on our own.”

If I decided to teach my students how to use ChatGPT appropriately, and one of them asked me why they should pay me for that, I don’t think I would have a good answer. But if they asked me why I insist that they not use ChatGPT in reading and writing for me, I do have a response: I want you to learn how to read carefully, to sift and consider what you’ve read, to formulate and then give structure to your ideas, to discern whom to think with, and finally to present your thoughts in a clear and cogent way. And I want you to learn to do all these things because they make you more free — the arts we study are liberal, that is to say liberating, arts.

If you take up this challenge you will learn not to “think for yourself” but to think in the company of well-chosen companions, and not to have your thoughts dictated to you by the transnational entity some call surveillance capitalism, which sees you as a resource to exploit and couldn’t care less if your personal integrity and independence are destroyed. The technocratic world to which I would be handing you over, if I were to encourage the use of ChatGPT, is driven by the “occupational psychosis” of sociopathy. And I don’t want you to be owned and operated by those Powers.

The Powers don’t care about the true, the good, and the beautiful — they don’t know what any of those are. As Benedict Evans writes in a useful essay, the lawyers who ask ChatGPT for legal precedents in relation to a particular case don’t realize that what ChatGPT actually searches for is something that looks like a precedent. What it returns is often what it simply invents, because it’s a pattern-matching device, not a database. It is not possible for lawyers — or people in many other fields — to use ChatGPT to do their research for them: after all, if you have to do your own research to check the validity of ChatGPT’s “research,” then what’s the point of using ChatGPT?

Think back to that culinary school where you only learn how to use HelloFresh. That might work out just fine for a while; it might not occur to you that you have no idea how to create your own recipes, or even adapt those of other cooks — at least, not until HelloFresh doubles its prices and you discover that you can’t afford the increase. That’s the moment when you see that HelloFresh wasn’t liberating you from drudgery but rather was enslaving you to its recipes and techniques. At that point you begin to scramble to figure out how to shop for your own food, how to select and prepare and combine — basically, all the things you should have learned in culinary school but didn’t.

Likewise, I don’t want you to look back on your time studying with me and wonder why I didn’t at least try to provide you with the resources to navigate your intellectual world without the assistance of an LLM. After all, somewhere down the line ChatGPT might demand more from you than just money. And you — perhaps faced with personal conditions in which you simply don’t have time to pursue genuine learning, the kind of time you had but were not encouraged to use when you were in college — may find that you have no choice but to acquiesce. As Evgeny Morozov points out, “After so many Uber- and Theranos-like traumas, we already know what to expect of an A.G.I. [Artificial General Intelligence] rollout. It will consist of two phases. First, the charm offensive of heavily subsidized services. Then the ugly retrenchment, with the overdependent users and agencies shouldering the costs of making them profitable.” Welcome to the Machine.

That’s the story I would tell, the case I would make. 

I guess I’m saying this: I don’t agree that we have a responsibility to teach our students how to use ChatGPT and other AI tools. Rather, we have a responsibility to teach them how to thrive without such tools, how to avoid being sacrificed to Technopoly’s idols. And this is especially true right now, when even people closely connected with the AI industry, like Alexander Karp of Palantir, are pleading with our feckless government to impose legal guardrails, while thousands of others are pleading with the industry itself to hit the pause button.

For what it’s worth, I don’t think that AI will destroy the world. As Freddie deBoer has noted, the epideictic language on this topic has been extraordinarily extreme:

Talk of AI has developed in two superficially opposed but deeply complementary directions: utopianism and apocalypticism. AI will speed us to a world without hunger, want, and loneliness; AI will take control of the machines and (for some reason) order them to massacre its creators. Here I can trot out the old cliché that love and hate are not opposites but kissing cousins, that the true opposite of each is indifference. So too with AI debates: the war is not between those predicting deliverance and those predicting doom, but between both of those and the rest of us who would like to see developments in predictive text and image generation as interesting and powerful but ultimately ordinary technologies. Not ordinary as in unimportant or incapable of prompting serious economic change. But ordinary as in remaining within the category of human tool, like the smartphone, like the washing machine, like the broom. Not a technology that transcends other technology and declares definitively that now is over.

So, no to both sides: AI will neither save nor destroy the world. I just think that for teachers like me it’s a distraction from what I’m called to do.

If I were to speak in this way to my students — and when classes resume I just might do so — would they listen? Some will; most won’t. After all, the university leadership is telling a very different story than the one I tell, and doing things my way would involve harder work, which is never an easy sell. As I said in my “Elon Effect” essay, the really central questions here involve not technological choices but rather communal integrity and wholeness. Will my students trust me when I tell them that they are faced with the choice of moving towards liberation or towards enslavement? In most cases, no. Should they trust me? That’s not for me to decide. But it’s the key question, and one that should be on the mind of every single student: Where, in this community of learning, are the most trustworthy guides?

My old friend Noah Millman, who writes and directs:

I love actors, and I want to see them continue to get jobs. More so, I love actors as actors, and I dread the prospect of a future where their deeply human activity is replaced by a machine that feels nothing, when feeling is so essential to what it is an actor does. I had a marvelous time working with all my actors on my recent film, very much including the background actors (of which I had quite a few). Those background actors were a non-trivial part of my budget, and I believe they were worth every penny because they brought themselves to their tiny roles, and those selves mattered, and mattered in ways I couldn’t have anticipated without their presence in person, on set. In their absence, we’re left with just the director’s solitary self fiddling with knobs on a machine, doing precisely what he thinks he wants, and never learning that something else was possible. The essentially collaborative and hence surprising aspect of filmmaking will, I suspect, progressively be drained away in the brave new world aborning, and we’re going to feel that loss in ways that we can’t yet fully comprehend.

excerpt from my Sent folder: the day of reckoning

About fifteen years ago I started moving away from the standard research essay assignment. In my Literary Theory classes I assigned dialogues; in Christianity and Fantasy I asked students to make critical editions of texts; since I got to Baylor I’ve used dialogues in some classes and in others have given take-home exams asking people to do close-reading explications of passages I’ve chosen for them. The LLMs do not yet know how to do any of these things, so I am not having to think too much about their effects on my classes; my chief challenge is to avoid smug self-satisfaction. But I remind myself that the day of reckoning is surely coming for me also.

(And yes, I know this courts that smugness, but: I stopped assigning research essays because what my students gave me was so maddeningly predictable — predictable because formulaic — that I just couldn’t bear to grade them any more, not after thirty-plus years of doing it. Now I think: the predictability was in some crucial pedagogical respects the whole point, and so of course LLMs can do those assignments!)

I’m no Mr. Miyagi


My friend Richard Gibson:

Emerging adults need to see, as one of my colleagues put it, “the benefits of the struggle” in their own lives as well as their instructors’. The work before us — to preserve old practices and to implement new ones — provides the ideal occasion to talk to our students about not only the intellectual goods offered by our fields that we want them to experience. It is also a chance to share how we have been shaped for the better by the slow, often arduous work of joining a discipline. Our students need more than technology, and the answer cannot simply come in the form of another list of dos-and-don’ts in the already crowded space of a syllabus. They need models for how to navigate these new realities — paradigms for their practice, life models. AI’s foremost challenge for higher education is to think afresh about forming humans.

My heart says Yes, but my head says it’s impossible. That is, I suspect the owl of Minerva really does fly only at night, and one cannot learn the value of “the slow, often arduous work of joining a discipline” in advance. That value can only be discerned in retrospect.

Imagine that Mr. Miyagi teaches Daniel the wax-on/wax-off technique and leaves him to get to work. Then, as Daniel is straining and sweating, a guy comes up to him with a little machine labeled WaxGPT. “Hey kid! Wow, you’re working hard there. But I have good news: this machine will do the waxing for you. And it’s free! All you have to do is give me your phone number.”

You think Daniel will hesitate? You think his reverence for Mr. Miyagi will keep him in line? Unlikely; but just possible, I guess, since Daniel has seen Mr. Miyagi’s prowess and has asked to be taught by him. That’s not how it is for many of us who teach university students. My students are not free agents making free choices. I haven’t saved any of them from vicious thugs. A few of them may admire me in certain respects, but almost all of them think of my assignments as mere impediments, and I don’t think I can change that view in advance of their maturation. Later they may thank me — a good many have, over the years — but they don’t thank me in the moment. Even Daniel is likely to think that WaxGPT can do the dirty work for him so that, in his own good time, Mr. Miyagi will teach him something useful. 

So while I don’t disagree with Rick’s overall emphasis on personal formation, I believe we should reassess — radically reassess — the relationship between such formation and our assignments, assessment, and testing. I think that’s where we have to begin learning to live in the Bot Age. My guess is that those schools that are already disconnected from the standard assessment metrics — places like Reed College and the two St. John’s colleges — will be best placed to deal with the challenges of this era. 

Scott Alexander:

If you could really plug an AI’s intellectual knowledge into its motivational system, and get it to be motivated by doing things humans want and approve of, to the full extent of its knowledge of what those things are – then I think that would solve alignment. A superintelligence would understand ethics very well, so it would have very ethical behavior.

Setting aside the whole language of “motivation,” which I think wildly inappropriate in this context, I would ask Alexander a question: Are professors of ethics, who “understand ethics very well,” the most ethical people? 

The idea that behaving ethically is a function or consequence of understanding is grossly misbegotten. Many sociopaths understand ethics very well; their knowledge of what is generally believed to be good behavior is essential to their powers of manipulation. There is no correlation between understanding ethics and living virtuously. 

without principle

‘The Godfather of AI’ Quits Google and Warns of Danger Ahead:

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore. 

As someone who has been writing for some years now about what I call the Oppenheimer Principle, I find this moment piquant. 

But I was also troubled by President Biden’s Grandpa Joe moment when he wandered into a meeting between the Vice-President and the nation’s leading professional sociopaths and then asked those sociopaths to “educate us.” Ah well. It could be worse, and we all know how it could be worse. 

Technoteachers

Lorna Finlayson · Diary: Everyone Hates Marking:

Students want – or think they want – more and faster feedback. So tutors write more and more, faster and faster, producing paragraph on paragraph that students, in moments of sheepish honesty, sometimes admit they don’t read. However infuriating, it’s understandable. This material is far from our best work. Much of it is vague, rushed or cribbed. In order to bridge the gap between staff capacity and student ‘demand’, some universities are outsourcing basic feedback to private providers. One company, Studiosity, lists thirty institutions among its ‘partners’, including Birkbeck and SOAS.

Managers often seem to assume that marking is a quasi-mechanical process whereby students are told what is good and bad about their work, and what they need to do to improve. But students don’t improve by being told how to improve, any more than a person learns to ride a bicycle by being told what to do – keep steady, don’t fall off. There’s a role for verbal feedback, but the main way that learning happens is through practice: long, supported, unhurried practice, opportunities for which are limited in the contemporary university. 

Of course universities are going to outsource commentary on essays to AI — just as students will outsource the writing of essays to AI. And maybe that’s a good thing! Let the AI do the bullshit work and we students and teachers can get about the business of learning. It’ll be like that moment in The Wrong Trousers when Wallace ties Gromit’s leash to the Technotrousers, to automate Gromit’s daily walk. Gromit merely removes his collar and leash, attaches them to a toy dog on a wheeled cart, and plays in the playground while the Technotrousers march about. 

Let the automated system of papers and grading march mindlessly; meanwhile, my students and I are gonna play on the slide.

If an AI can write it, and an AI can read it and respond to it, then does it need to be done at all? 

Dishonor Code: What Happens When Cheating Becomes the Norm?:

Most professors, students said, grasp that the American campus has changed—big time. That the paradigm has shifted. Professors want a comfortable perch that looks nice on their résumés where they can write their articles and books and get ahead—just like the students want to get ahead, just like the universities want to get ahead. (Sam Beyda, the Columbia economics major, pointed out that his own school’s administration had been accused of manipulating data to game the U.S. News & World Report rankings.)

A recent Yale University graduate said his professors had encouraged him to get diagnosed with ADHD so he could get more time to finish homework or take exams. One student he knew received extra time for “academic-induced depression.” He smirked when he said it. 

I hear from my fellow professors all the time that recent technologies (and not just the new chatbots) have simply exposed for all to see the heretofore unspoken deal between teachers and students: We pretend to teach them and they pretend to learn. Henry James Sumner Maine may have talked about the move from status to contract as the foundation of the social order, but what we have in academia is an unwritten contract that allows both parties to increase their status. 

I know this will be hard to believe, but: We genuinely do things differently here in Baylor’s Honors College. Why? I think it’s a combination of (a) the presence of Christian commitments, among both professors and students, that encourage us to remember that education is personal formation; and (b) the fact that Baylor as a whole is not an elite institution. Students who come here tend not to think that they’re gonna rule the world someday; they want to do well in life, of course, but they’re not set on a lifetime of climbing Success’s greasy pole. And we can help them think about how to pursue good things in life that don’t involve stock options. 

unsimulated

Re: this essay by Alexa Hazel — of course people think we’re in a computer simulation. We always conceive of our minds in terms of the dominant technology of our moment. As Gary Marcus wrote a few years ago, “Descartes thought that the brain was a kind of hydraulic pump, propelling the spirits of the nervous system through the body. Freud compared the brain to a steam engine. The neuroscientist Karl Pribram likened it to a holographic storage device.” But, Marcus insists, when we say a mind is a computer, this time we’re right. Say others: No, we’re not.

Me, I think we’re always wrong. We make idols and worship them — we remake ourselves in the image of our own technologies. See Brad Pasanek’s Metaphors of Mind: An Eighteenth-Century Dictionary to understand how this works, but John Calvin put the point most forcibly when he said that “the mind is a perpetual forge of idols” — thus critiquing this practice and exemplifying it at the same time, a neat trick. 

Hazel continues:  

When I talk to friends who don’t live in Palo Alto, they suggest that I have been here too long. I hear things like, you have drunk the Kool-Aid. No one wants this, they say. No one will use these devices.

Meanwhile, a lab at Stanford has already manufactured an effective retinal implant. The clunkiness of existing VR headsets is beside the point. How our lives will become more digital is undecided. That they will become more digital seems to me basically inevitable. To gesture to Meta’s slumping stock price in order to clinch the argument for VR’s irrelevance is to draw attention away from the question of who’s steering the ship, to what end, and why. 

This strikes me as the despair of a humanist forced to dwell in the molten core of the Californian ideology. The truth is that many lives will become more digital, but some will opt out of that bullshit. Be one of the opt-outers. 

For Chat-Based AI, We Are All Once Again Tech Companies’ Guinea Pigs – WSJ:

Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge. Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.

Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,” she says.

Charlie Stross:

The thing I find most suspicious/fishy/smelly about the current hype surrounding Stable Diffusion, ChatGPT, and other AI applications is that it is almost exactly six months since the bottom dropped out of the cryptocurrency scam bubble.

This is not a coincidence.

Cory Doctorow: “In its nearly 25-year history, Google has made one and a half successful products: a once-great search engine and a pretty good Hotmail clone. Everything else it built in-house has crashed and burned.” Ouch.

question and answer

Question: How bad would the whole AI/search/chat situation have to get — how much real-world harm would have to be done — before any of the tech companies pulled their version from the market?

Answer: The publicly-held companies might pull theirs in response to a stock-market collapse, but the privately-held ones? I can’t imagine any circumstances short of legislative action that would cause them to pull back. They believe in the “move fast and break things” mantra, they think no publicity is bad publicity, and their technological justification is that the bots will improve only through iteration. 

UPDATE: So Microsoft — one of the public companies in this racket — hasn’t taken down Bing Chat but has “lobotomized” it. Sydney, we hardly knew ye. 

Arvind Narayanan and Sayash Kapoor:

The philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. By this measure, OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training. That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions. Even though the bot often gives excellent answers, sometimes it fails badly. And it’s always convincing, so it’s hard to tell the difference. 

So why not have chatbots replace our elected representatives, who also have benefitted from “no source of truth during training”? An experiment worth trying, I say. 

more, please

Ah, here it is: the musical equivalent of ChatGPT. Cool. I want to see more of this. I’ve written before — see the links here — about the ways that musicians have been forced into more inflexibly formulaic compositions and performances. Given the way that the music industry thinks today, who needs musicians? If you want the inflexibly formulaic, computers do that better than humans.

My advice to the big music labels: Cut out the middleman (i.e. the musicians). 

My advice to musicians and people who love actual music: Check out Bandcamp.

the ed-tech business model


NYT:

The misuse of A.I. tools will most likely not end, so some professors and universities said they planned to use detectors to root out that activity. The plagiarism detection service Turnitin said it would incorporate more features for identifying A.I., including ChatGPT, this year.

More than 6,000 teachers from Harvard University, Yale University, the University of Rhode Island and others have also signed up to use GPTZero, a program that promises to quickly detect A.I.-generated text, said Edward Tian, its creator and a senior at Princeton University.

I’ll believe in AI when I can say, “Hey Siri, please hide from me all references to AI. Also every conversation in which journalists snark at other journalists. And, no references to Twitter or Mastodon.” 

defining immortality down

Digital Eternity Is Just Around the Corner:

As these technologies develop and become more accessible, they will increasingly be used in combination, creating “intelligent avatars” of ourselves that continue to “live” long after we have died. We are seeing the beginnings of this with the metaverse company Somnium Space, whose Live Forever mode allows users to create “digital clones” built from data they have stored while alive, including conversational style, gaits, and even facial expressions. 

This sense of immortality may be reassuring, but there is a catch. AI avatars will rely on us feeding their algorithms a huge amount of personal data, accumulated through the course of our lives. If we want our digital selves to live on, this is the exchange we must accept: that the unfiltered beliefs and opinions we express today may not only be archived, but consequently used to build these posthumous personae. In other words, we can have a voice in the afterlife, but we cannot be certain about what it may say. This will force us to reconsider how our behaviors today might influence digital versions of ourselves set to outlive us. Faced with this prospect of virtual immortality, 2023 will be the year we broaden our definition of what it means to live forever, a moral question that will fundamentally change how we live our day-to-day lives, but also what it means to be immortal. 

Notice how the quotes around “live” in the first paragraph disappear in the second. Notice also — this is universal in such discourse — the unexamined “we”: “2023 will be the year we broaden our definition of what it means to live forever.” Depends on who “we” are, I think. I for one am not interested in broadening my definition of what it means to live forever in such a way that it isn’t living and doesn’t last forever. But you be you!

There’s a powerful passage from C. S. Lewis’s autobiography Surprised by Joy:

I had recently come to know an old, dirty, gabbling, tragic, Irish parson who had long since lost his faith but retained his living. By the time I met him his only interest was the search for evidence of “human survival.” On this he read and talked incessantly, and, having a highly critical mind, could never satisfy himself. What was especially shocking was that the ravenous desire for personal immortality co-existed in him with (apparently) a total indifference to all that could, on a sane view, make immortality desirable. He was not seeking the Beatific Vision and did not even believe in God. He was not hoping for more time in which to purge and improve his own personality. He was not dreaming of reunion with dead friends or lovers; I never heard him speak with affection of anybody. All he wanted was the assurance that something he could call “himself” would, on almost any terms, last longer than his bodily life. 

Whenever I read about someone who sees a technological route to immortality I think about this “ravenous desire for personal immortality” combined with “a total indifference to all that could, on a sane view, make immortality desirable.” So you want a digital imitation of yourself to live on after you die. But why?

The Struggle To Be Human – by Ian Leslie – The Ruffian:

Whether it’s music, movies or politics, we seem to be creating a world more amenable to AI by erasing more and more of what makes us, us. Even if we think we have got the better of this deal up until now, we shouldn’t assume we always will. A little resistance is prudent. The bar for being human has just been raised; the first thing we should do is stop lowering it.

Olivia Snow:

I’ve already been lectured about the dangers of how using [Lensa] implicates us in teaching the AI, stealing from artists, and engaging in predatory data-sharing practices. Each concern is legitimate, but less discussed are the more sinister violations inherent in the app, namely the algorithmic tendency to sexualize subjects to a degree that is not only uncomfortable but also potentially dangerous. 

Who could have known? 

words: bashed

Noah Smith and “roon”:

It’s important to realize exactly why the innovations of the past didn’t result in the kind of mass obsolescence that people feared at the time.

The reason was that instead of replacing people entirely, those technologies simply replaced some of the tasks they did. If, like Noah’s ancestors, you were a metalworker in the 1700s, a large part of your job consisted of using hand tools to manually bash metal into specific shapes. Two centuries later, after the advent of machine tools, metalworkers spent much of their time directing machines to do the bashing. It’s a different kind of work, but you can bash a lot more metal with a machine.

Note the planted axioms here — the governing assumptions that the authors may not even know they’re making:

  1. That metalwork is neither an art nor a craft in which humans might take satisfaction but is simply a matter of “bashing” metal;
  2. That it’s better to direct machines to bash than to do one’s own bashing, because working with metal is drudgery but overseeing machines isn’t;
  3. That more metal-bashing is better than less metal-bashing.

I have, shall we say, some doubts about all those axioms. But let’s move on.

Consider the following, produced in the year 2322:

If, like Noah’s ancestors, you were a writer in the 2000s, a large part of your job consisted of using keyboards to manually bash characters into specific shapes. Two centuries later, after the advent of AI, writers spent much of their time directing machines to do the bashing. It’s a different kind of work, but you can bash a lot more characters with a machine.

What a utopian dream! No one has to write any more — no one has to think of what to say, to struggle for the best words in the best order, to strive to persuade or entertain. You just say, “Hey Siri, write me an essay on why there’s no reason to fear that AI will replace humans.”

Wait — I was being sardonic there but it turns out that that’s what Smith and roon really think:

Take op-ed writers, for instance – an example that’s obviously important to Noah. Much of the task of nonfiction writing involves coming up with new ways to phrase sentences, rather than figuring out what the content of a sentence should be. AI-based word processors will automate this boring part of writing – you’ll just type what you want to say, and the AI will phrase it in a way that makes it sound comprehensible, fresh, and non-repetitive. Of course, the AI may make mistakes, or use phrasing that doesn’t quite fit a human writer’s preferred style, but this just means the human writer will go back and edit what the AI writes.

In fact, Noah imagines that at some point, his workflow will look like this: First, he’ll think about what he wants to say, and type out a list of bullet points. His AI word processor will then turn each of these bullet points into a sentence or paragraph, written in a facsimile of Noah’s traditional writing style.

Behold: an image of the future of writing produced by a writer who quite obviously doesn’t like to write.

What seems to be missing here is the question of why the people who now pay Noah Smith to write wouldn’t just cut out the middleman, i.e., Noah Smith. Maybe that’s the future of Substack: AI drawing on a large corpus of hand-bashed text so that instead of paying Freddie deBoer to write I can just say, “Hey Substack, write me an essay on professional wrestling in the style of Freddie deBoer.” After all, people who write for Substack have limited time, limited energy, limited imagination, but AI won’t have any of those limits. It can bash infinitely more words.

I think Smith and roon don’t consider that possibility because they have another planted axiom, one that can be extracted from this line in their essay: our AI future “doesn’t mean humans will have to give up the practice of individual creativity; we’ll just do it for fun instead of for money.” But we will only do that if we have time and energy to do it, which we will have only if we’re not busting our asses to make a living. Thus the final planted axiom: AI and human beings will flourish together in a post-scarcity world, like that of Iain M. Banks’s Culture novels.

three versions of artificial intelligence

Artificial Creativity? – O’Reilly:

AI has been used to complete Beethoven’s 10th symphony, for which Beethoven left a number of sketches and notes at the time of his death. The result is pretty good, better than the human attempts I’ve heard at completing the 10th. It sounds Beethoven-like; its flaw is that it goes on and on, repeating Beethoven-like riffs but without the tremendous forward-moving force that you get in Beethoven’s compositions. But completing the 10th isn’t the problem we should be looking at. How did we get Beethoven in the first place?  If you trained an AI on the music Beethoven was trained on, would you eventually get the 9th symphony? Or would you get something that sounds a lot like Mozart and Haydn?

I’m betting the latter. 

A story from qntm:

As the earliest viable brain scan, MMAcevedo is one of a very small number of brain scans to have been recorded before widespread understanding of the hazards of uploading and emulation. MMAcevedo not only predates all industrial scale virtual image workloading but also the KES case, the Whitney case, the Seafront Experiments and even Poulsen’s pivotal and prescient Warnings paper. Though speculative fiction on the topic of uploading existed at the time of the MMAcevedo scan, relatively little of it made accurate exploration of the possibilities of the technology, and that fiction which did was far less widely-known than it is today. Certainly, Acevedo was not familiar with it at the time of his uploading. 

As such, unlike the vast majority of emulated humans, the emulated Miguel Acevedo boots with an excited, pleasant demeanour. He is eager to understand how much time has passed since his uploading, what context he is being emulated in, and what task or experiment he is to participate in. If asked to speculate, he guesses that he may have been booted for the IAAS-1 or IAAS-5 experiments. At the time of his scan, IAAS-1 had been scheduled for August 10, 2031, and MMAcevedo was indeed used for that experiment on that day. IAAS-5 had been scheduled for October 2031 but was postponed several times and eventually became the IAAX-60 experiment series, which continued until the mid-2030s and used other scans in conjunction with MMAcevedo. The emulated Acevedo also expresses curiosity about the state of his biological original and a desire to communicate with him.  

MMAcevedo’s demeanour and attitude contrast starkly with those of nearly all other uploads taken of modern adult humans, most of which boot into a state of disorientation which is quickly replaced by terror and extreme panic. Standard procedures for securing the upload’s cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols are unnecessary. This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol, with the result that the MMAcevedo duty cycle is typically 99.4% on suitable workloads, a mark unmatched by all but a few other known uploads. However, MMAcevedo’s innate skills and personality make it fundamentally unsuitable for many workloads. 

Charlie Stross:

Here’s the thing: our current prevailing political philosophy of human rights and constitutional democracy is invalidated if we have mind uploading/replication or super-human intelligence. (The latter need not be AI; it could be uploaded human minds able to monopolize sufficient computing substrate to get more thinking done per unit baseline time than actual humans can achieve.) Some people are, once again, clearly superior in capability to born-humans. And other persons can be ruthlessly exploited for their labour output without reward, and without even being allowed to know that they’re being exploited. […] 

Our intuitions about crimes against people (and humanity) are based on a set of assumptions about the parameters of personhood that are going to be completely destroyed if mind uploading turns out to be possible.

How to prevent the coming inhuman future – by Erik Hoel:

There are a handful of obvious goals we should have for humanity’s longterm future, but the most ignored is simply making sure that humanity remains human. […] 

… what counts as moral worth surely changes across times, and might be very different in the future. That’s why some longtermists seek to “future-proof” ethics. However, whether or not we should lend moral worth to the future is a function of whether or not we find it recognizable, that is, whether or not the future is human or inhuman. This stands as an axiomatic moral principle in its own right, irreducible to other goals of longtermism. It is axiomatic because as future civilizations depart significantly from baseline humans our abilities to make judgements about good or bad outcomes will become increasingly uncertain, until eventually our current ethical views become incommensurate. What is the murder of an individual to some futuristic brain-wide planetary mind? What is the murder of a digital consciousness that can make infinite copies of itself? Neither are anything at all, not even a sneeze — it is as absurd as applying our ethical notions to lions. Just like Wittgenstein’s example of a talking lion being an oxymoron (since a talking lion would be incomprehensible to us humans), it is oxymoronic to use our current human ethics to answer ethical questions about inhuman societies. There’s simply nothing interesting we can say about them.

making God

To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.

— Mission statement of Way of the Future (2017)

In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end. That is what we are working to. When you are asked, “Where is God? Who is God?” stand up and say, “I am God and here is God, not as yet completed, but still advancing towards completion, just in so much as I am working for the purpose of the universe, working for the good of the whole of society and the whole world, instead of merely looking after my personal ends.”

— George Bernard Shaw, “The New Theology” (1907)
