Eight years ago, I wrote about a dominant and pernicious ideology that features two components: 

Component one: that we are living in an administrative regime built on technocratic rationality whose Prime Directive is, unlike the one in the Star Trek universe, one of empowerment rather than restraint. I call it the Oppenheimer Principle, because when the physicist Robert Oppenheimer was having his security clearance re-examined during the McCarthy era, he commented, in response to a question about his motives, “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success. That is the way it was with the atomic bomb.”

The topic of that essay was the prosthetic reconstruction of bodies and certain incoherent justifications thereof, so I went on: “We change bodies and restructure child-rearing practices not because all such phenomena are socially constructed but because we can — because it’s ‘technically sweet.’” Then:

My use of the word “we” in that last sentence leads to component two of the ideology under scrutiny here: Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have … or fondly imagine they have.

In light of current debates about the development of AI – debates that have become more heated in the wake of an open letter pleading with AI researchers to pause their experiments and take some time to think about the implications – the power of the Oppenheimer Principle has become more evident than ever. And it’s important, I think, to understand what, in this context, is making it so powerful.

Before I go any further, let me note that the term Artificial Intelligence covers a very broad range of endeavors. Here I am discussing one recently emergent wing of the overall AI enterprise: the wing devoted to imitating or counterfeiting activities that most human beings think of as distinctively human – conversation, image-making (through drawing, painting, or photography), and music-making.

I think what’s happening in the development of these counterfeits – and in the resistance to asking hard questions about them – is the Silicon Valley version of what the great economist Thorstein Veblen called “trained incapacity”: a state in which one’s very training produces blind spots, so that one’s abilities function as inadequacies. As Robert K. Merton explains in a famous essay on “Bureaucratic Structure and Personality,” Veblen’s phrase describes a phenomenon also identified by John Dewey – who called it “occupational psychosis” – and by Daniel Warnotte – who called it “déformation professionnelle.” That the same phenomenon gets named again and again by our major social scientists suggests that it is powerful and widespread indeed.

Peggy Noonan recently wrote in the Wall Street Journal of the leaders of the major Silicon Valley companies,

I am sure that as individuals they have their own private ethical commitments, their own faiths perhaps. Surely as human beings they have consciences, but consciences have to be formed by something, shaped and made mature. It’s never been clear to me from their actions what shaped theirs. I have come to see them the past 40 years as, speaking generally, morally and ethically shallow—uniquely self-seeking and not at all preoccupied with potential harms done to others through their decisions. Also some are sociopaths.

I want to make a stronger argument: that the distinctive “occupational psychosis” of Silicon Valley is sociopathy – the kind of sociopathy embedded in the Oppenheimer Principle. The people in charge at Google and Meta and (outside Silicon Valley) Microsoft, and at the less well-known companies the mega-companies rely on, have been deformed by their profession in ways that prevent them from perceiving, acknowledging, and acting responsibly upon the consequences of their research. They have a trained incapacity to think morally. By virtue of their narrowly technical education and the strong incentives of their profession, they are moral idiots.

The ignorance of the technocratic moral idiot is exemplified by Sam Altman of OpenAI – an increasingly typical Silicon Valley type, with a thin veneer of moral self-congratulation imperfectly obscuring a thick layer of obedience to perverse incentives. “If you’re making AI,” he says, “it is potentially very good, potentially very terrible,” but “the way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe.” He can’t even imagine that “the way to get it right” might be not to do it at all. (See Scott Alexander on the Safe Uncertainty Fallacy: We have absolutely no idea what will result from this technological development; therefore everything will be fine.) The Oppenheimer Principle trumps all.

These people aren’t going to fix themselves. As Jonathan Haidt (among others) has often pointed out – e.g. here – the big social media companies know just how much damage their platforms are doing, especially to teenage girls, but they do not care. As Justin E. H. Smith has noted, social media platforms are “inhuman by design,” and some of the big companies are tearing off the fig leaf by dissolving their ethics teams. Deepfakes featuring Donald Trump or the Pope are totally cool, but Chairman Xi gets a free pass, because … well, just follow the money.

Decisions about these matters have to be taken out of the hands of avaricious, professionally deformed sociopaths. And that’s why lawsuits like this one matter.