
Stagger onward rejoicing


the irrelevance of thinking

Mary Harrington:

As I argued here, it would be more accurate (if less snappy) to describe AI as “powerful modelling and prediction tools based on pattern recognition across very large datasets”. It is, in other words, not a type of cognition in its own right, but – to borrow a term from Marshall McLuhan – one of the “extensions of man”: specifically a means of extending cognition itself.

I don’t think this is correct; what LLMs do is not the extension of cognition but rather the simulation and commodification of the palpable products of cognition.

The people who make LLMs have little discernible interest in cognition itself. Some of them may believe that they’re interested in cognition, but what they’re really focused on is product — that is, output, what gets spat out in words or images or sounds at the conclusion of an episode of thinking.

Seeing those products, they want to simulate them so that they can commodify them: package them and serve them up in exchange for money.

This doesn’t mean that LLMs are evil, or that it’s wrong to sell products for money; only that thinking itself is irrelevant to the whole business. 

UPDATE: From a fascinating essay by Dario Amodei, the CEO of Anthropic:  

Modern generative AI systems are opaque in a way that fundamentally differs from traditional software.  If an ordinary software program does something—for example, a character in a video game says a line of dialogue, or my food delivery app allows me to tip my driver—it does those things because a human specifically programmed them in.  Generative AI is not like that at all.  When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does—why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate.  As my friend and co-founder Chris Olah is fond of saying, generative AI systems are grown more than they are built—their internal mechanisms are “emergent” rather than directly designed.  It’s a bit like growing a plant or a bacterial colony: we set the high-level conditions that direct and shape growth, but the exact structure which emerges is unpredictable and difficult to understand or explain.  Looking inside these systems, what we see are vast matrices of billions of numbers.  These are somehow computing important cognitive tasks, but exactly how they do so isn’t obvious. 

UPDATE 2: Essays by Melanie Mitchell of the Santa Fe Institute — one and two — on what LLMs do instead of thinking. The “bag of heuristics” idea is a vivid one. 

true believers

David Brooks:

In days gone by, parties were political organizations designed to win elections and gain power. Party leaders would expand their coalitions toward that end. Today, on the other hand, in an increasingly secular age, political parties are better seen as religious organizations that exist to provide believers with meaning, membership and moral sanctification. If that’s your purpose, of course you have to stick to the existing gospel. You have to focus your attention on affirming the creed of the current true believers. You get so buried within the walls of your own catechism, you can’t even imagine what it would be like to think outside it.

When parties were primarily political organizations, they were led by elected officials and party bosses. Now that parties are more like quasi-religions, power lies with priesthood — the dispersed array of media figures, podcast hosts and activists who run the conversation, define party orthodoxy and determine the boundaries of acceptable belief.

This is brilliant by Brooks, so read the whole thing. But then I would think so, wouldn’t I, because this converges with points I have been making for years. When the Repugnant Cultural Other becomes the Repugnant Religious Other — when the Other is a heretic out to destroy your very soul — then being “buried within the walls of your own catechism” is the Prime Directive. (“For the love of God, Montresor, don’t tear down this wall.”) 

Wow, that’s three allusions in, like, ten words. I should be on BookTok or something.

Anyway, this analysis helps to explain one of Brooks’s key points, which is that none of the priests who lead these two competing religions seem interested in making converts, only in dissing the other side. As I wrote in another post:

Recently I was reading Minds Wide Shut by Gary Saul Morson and Morton Schapiro, and while I venerate GSM just this side of idolatry, I don’t think the book quite works as intended. At the risk of oversimplification, I’ll say that its core argument is (a) that our culture is dominated by a set of fundamentalisms — “At the heart of any fundamentalism, as we define it, is a disdain for learning from evidence. Truth is already known, given, and clear” — and (b) that the fundamentalist mindset is incapable of persuasion, of bringing skeptics over to its side. 

All of which is true, but (and this is a major theme of my How to Think) what if people don’t want to persuade others? What if they don’t just hate their Repugnant Cultural Other but need him or her in order to define themselves and their Inner Ring? 

If I may cite myself one more time: Hatred alone is immortal. This is our problem in a nutshell. 

Jacobs’s First Law

Here it is:

People who know nothing about a subject are radically vulnerable to those who know less than nothing. 

 — those knowing less than nothing being people who have strongly held but baseless opinions. This law of the universe raises its head almost every time some American journalist or tweeter or YouTuber makes an argument based on a claim about our history. Some fatuous statement becomes a mantra for people who lack (a) the knowledge to test the claim and (b) the initiative to acquire the knowledge. 

clichés, yes or no

Amanda Montell:

Since the moment I learned about the concept of the “thought-terminating cliche” I’ve been seeing them everywhere I look: in televised political debates, in flouncily stencilled motivational posters, in the hashtag wisdom that clogs my social media feeds. Coined in 1961 by psychiatrist Robert Jay Lifton, the phrase describes a catchy platitude aimed at shutting down or bypassing independent thinking and questioning. I first heard about the tactic while researching a book about the language of cult leaders, but these sayings also pervade our everyday conversations: expressions such as “It is what it is”, “Boys will be boys”, “Everything happens for a reason” and “Don’t overthink it” are familiar examples.

From populist politicians to holistic wellness influencers, anyone interested in power is able to weaponise thought-terminating cliches to dismiss followers’ dissent or rationalise flawed arguments. 

This seems exactly right to me. But perhaps it’s worth noting here that two years ago the Chronicle of Higher Education, of all journals, published an essay by Julie Stone Peters, a Professor of English and comparative literature at Columbia, arguing that thought-terminating clichés are super-cool because they are politically effective and because students aren’t smart enough to do any better. No, seriously:

Not all of our students will be original thinkers, nor should they all be. A world of original thinkers, all thinking wholly inimitable thoughts, could never get anything done. For that we need unoriginal thinkers, hordes of them, cloning ideas by the score and broadcasting them to every corner of our virtual world. What better device for idea-cloning than the cliché? 

Note here a doozy of a false dichotomy: either applaud clichés or have a world of people with “wholly inimitable thoughts.” Sure, Peters concedes, sometimes academic clichés “may go rogue” and “might explode on you.” But that’s the chance she is willing to take. The alternative — expecting students to think and trying to help them do that better — is so much more unpleasant. I have to give Peters credit for being willing to say the quiet part out loud, because this really is how a lot of professors think. 

This might be a good time to remind y’all that, as William Deresiewicz writes, some interesting groups of people are abandoning universities not because they disdain the humanities and the liberal arts, but because they love them. 

Gilead revisited

The way we speak and think of the Puritans seems to me a serviceable model for important aspects of the phenomenon we call Puritanism. Very simply, it is a great example of our collective eagerness to disparage without knowledge or information about the thing disparaged, when the reward is the pleasure of sharing an attitude one knows is socially approved. And it demonstrates how effectively such consensus can close off a subject from inquiry. I know from experience that if one says the Puritans were a more impressive and ingratiating culture than they are assumed to have been, one will be heard to say that one finds repressiveness and intolerance ingratiating. Unauthorized views are in effect punished by incomprehension, not intentionally and not to anyone’s benefit, but simply as a consequence of a hypertrophic instinct for consensus. This instinct is so powerful that I would suspect it had survival value, if history or current events gave me the least encouragement to believe we are equipped to survive.

– Marilynne Robinson, “Puritans and Prigs” (1996)


I’m re-reading Gilead now, in preparation for teaching it, and I am struck all over again by what an extraordinary book it is, what a gift it has been to so many readers — millions of them, maybe. (Promotional material for the book has long shouted A MILLION COPIES SOLD, but the count might be two million by now, and of course many thousands of people have read used and library copies.) Really, it’s some kind of miracle. The novels that have followed it are excellent novels indeed, but they aren’t miraculous. Gilead certainly is. 

But today, twenty years later, would Gilead even be published by a big trade house? As long as the author could say that she teaches at the Iowa Writers’ Workshop, probably. Would it be widely read and celebrated? Almost certainly not. The self-appointed cultural gatekeepers would denounce it as a project of white cis-het imperialism, and trepidatious reviewers would either ignore it or offer, at best, muted praise. And if it were a first novel, it might not get published at all — though perhaps an outfit like Belt Publishing would take it on.

As I read Gilead today it still feels like a great gift, but also an artifact of a lost era. 

Ceci N’est Pas une Current-Events Post

No no no, this is not at all about a current controversy. Hang in there, you’ll see what I mean. 

Recently some people — including grifters, but also a few people who want to have a reputation for responsible thinking and writing — have been promoting a re-interpretation of the death of George Floyd, an alternative account in which Derek Chauvin is not guilty of murder. So Radley Balko looked into the matter, and … well, as far as I can tell, after Radley has done his thing there’s not much left of the revisionist case. 

Let me correct that: there’s nothing left of the revisionist case. 

But I’m not writing here to refute that case, or rejoice in its refutation. I’m writing because if you read Balko’s piece you’ll see what it takes to do something like this the right way. It requires persistence, patience, extreme attentiveness, and the willingness to turn over every stone. Read that piece and you’ll see that Balko has studied the materials that the revisionists have never bothered to look at: he’s read police-procedure manuals — not just current ones, but also older ones, and has noted the changes from one to another; he’s watched police training videos; he’s surveyed court documents, and shared illustrations that were provided in court testimony, as well as the associated verbal testimony; he’s looked into the history of Minneapolis police actions against black members of the community; he’s watched with minute scrutiny the documentary that has made the revisionist claim popular, and has found the hidden seams in the presentation. Basically, he has done it all. 

It’s hard to find journalists as thorough as Balko has been here — and in many other writings over the years — because journalists know that almost no one cares. Well more than 99% of readers/viewers/listeners have one question about a work of journalism: Does it or does it not confirm the views I already have about this case? That is all they know on earth, and all they (think they) need know. But if you’re one of the <1% who care about the truth, a journalist like Radley Balko is an invaluable resource. 

And not just because he’ll help you find out what really happened — no, there’s another benefit to reading pieces like this one. It will help you to a better understanding of where, when, and how other journalists (or “journalists”) cut corners. You’ll see the very particular consequences of motivated reasoning: selective attention, question-begging, concealment of evidence, faulty logic of every variety. And that’s an education in itself, whether you care about the particular case at hand or not. 

Silence, Violence, and the Human Condition

I don’t believe that “silence is violence,” ever. And I doubt that anyone else would either, if they were to spend a bit of time thinking about it. People remain silent when they see violence (either threatened or performed) for a wide variety of reasons: sometimes they are indifferent to the sufferings of others, sometimes they enjoy the sufferings of others, but sometimes they have quite legitimate fears that any protest will lead to violence being inflicted upon them without anyone else being saved. Protest is not inevitably successful, and truthful accusation does not inevitably lead to arrest and conviction. Moreover, there are ways other than speech of responding to, or striving to prevent, impermissible violence.

Even when one’s silence does make it more likely that someone will be hurt, we do not benefit from erasing the distinction between sins of omission and sins of commission. Indifference to the suffering of others is a grave sin, but there are sins still graver. And different. As Auden wrote in his poem “The More Loving One,”

Looking up at the stars, I know quite well
That for all they care, I can go to hell,
But on earth indifference is the least
We have to dread from man or beast.

So, no: it is untrue that silence is violence. 

Shall we say, then, that silence is complicit in violence? It’s obvious why that is a more defensible argument, but it is not as dispositive as people who use it believe. I recently wrote something about Israel and Gaza, but I didn’t do it because people told me that otherwise I would have been complicit in the violence done there – though indeed people did tell me that. I wrote it for my own reasons, not because I felt that I was complicit in anything.

There are more evil things going on in the world than any one person can respond to. You could spend all day every day on social media just declaring that you denounce X or Y or Z and never get to the end of what deserves to be denounced. If my silence about Gaza is complicit in the violence being done there, what about my silence regarding the Chinese government’s persecution of the Uighurs? Or the government of Myanmar’s persecution of the Rohingya? Or what Boko Haram has done in Nigeria? Or what multinational corporations do to destroy our environment? Or dogfighting rings? Or racism in the workplace? Or sexism in the workplace?

There are two possible responses to this problem. One is to say that I am inevitably complicit in every act of violence I do not denounce, even if it would be impossible for me to denounce all such acts. But that position leads to a despairing quietism: Why should I denounce anything if in so doing I remain guilty for leaving millions of violent acts undenounced?

The second way is better: pick your spots and pick them unapologetically. It’s perfectly fine for people to have their own causes, the causes that for whatever reason touch their hearts. We all have them, we are all moved more by some injustices than by others; not one of us is consistently concerned with all injustices, all acts of violence, nor do we have a clear system of weighting the various sufferings of the world on a scale and portioning out our attention and concern in accordance with a utilitarian calculus.

Some effective altruists, especially the so-called longtermists, try to do this, but their endeavor is full of errors. One is longtermism’s inevitably speculative character, its belief that future dangers to humanity can be predicted with sufficient reliability to guide our actions. A greater error inheres in the great unstated axiom of effective altruism: Money is the only currency of compassion. (As the Archbishop of Canterbury says in Charles Williams’s poem Taliessin through Logres, “Money is a medium of exchange.”)

The silence-is-violence crowd, to their credit, don’t think that money is the only commodity we have to spend: they think we can and must spend our words also. And they always believe they know what, in a given moment, we must spend our words on. What they never seem to realize, though, is that some words are a debased currency. As the Lord says to Job, “Who is this that darkeneth counsel by words without knowledge?” To speak “words without knowledge” is to “darken counsel,” that is, confuse the issue, mislead or confuse one’s hearers. The purpose of counsel is to illuminate a situation; one does not illuminate anything by speaking out of ignorance or mere rage. 

Above all we need to acknowledge that no one — no one — operates with consistency in these matters. As David Edmonds writes in his recent biography of the philosopher Derek Parfit, Parfit refused to meet with a dying friend, Susan Hurley — a fellow philosopher who was the first woman to be elected a Fellow of All Souls College, Oxford — because he considered it more important to work on his philosophical writing. Yet Edmonds reports on several acts of generosity by Parfit, acts which also deprived him of work time. Similarly, as Julian Baggini writes in a review of Edmonds’s book,

He objected to the effective altruism movement’s Giving What We Can pledge to donate at least 10 per cent of signees’ incomes to relieve poverty, because he thought it was obvious that people could donate more. He also objected to the word “giving” for implying that this was optional, when he thought we were not morally entitled to our wealth. Yet in the years when he pursued photography as a serious hobby, he would spend thousands of pounds on a single print. Obsessed with typesetting, he offered to reimburse his publisher Oxford University Press for the extra costs of following his strict instructions, on one occasion paying £3,000 for wet proofs to check how the pages would actually come out from the plates. He also overpaid for a house by £50,000 just because he fell in love with it.

I am sure that Parfit thought of himself as a principled actor, but he certainly wasn’t: like almost all of us, he acted according to his own preferences. I’m sure that when he was kind and generous it was because that felt good to him, and I’m sure that when he declined to meet with a dying friend he declined not for philosophically defensible reasons but because he found such a meeting unpleasant.

Now, I am not suggesting that Derek Parfit should be a role model for anyone. To judge from Edmonds’s biography, he was an exceptionally unpleasant man, though Edmonds treats him not as wicked but rather as profoundly strange. I am merely pointing out that, for all his fierce labor to identify and describe the objective roots of morality, Parfit’s own behavior was as inconsistent and unprincipled as yours and mine, or that of the Effective Altruist next door.

I think what we should learn from all this is simply that one should have principles — ideally better ones than Derek Parfit had — but we should not be ashamed of the subjectivity inherent in them. I know people who care for abandoned dogs, and whose attention to those abandoned dogs makes them effectively, if not theoretically, indifferent to matters that many people believe to be of much greater concern: what’s happening in Gaza, who the next President of the United States will be, global climate change, etc. I think that’s just fine. The world has so much more suffering than any of us could possibly address that any remediation, any limiting of harm and pain and suffering, is a good thing. And we are not wired in such a way that we can maintain our commitment to undoing or preventing harm that (for whatever reason) doesn’t really touch our hearts. We should not feel guilty for failing to think about — still less for failing to speak about — climate change when there is something else, some other suffering or violence right before us that we can to some degree ameliorate. That’s the human condition and we ought to embrace it. It enables us to leave the world in at least a slightly better condition than we found it. 

Legal Sauce for the Legal Goose

From an interview with Jill Lepore:

I’m working on a long book about the history of attempts to amend the Constitution. And on the one hand, we have a Constitution that has a provision that allows for generativity and invention and adjustment and improvement and alteration and remedy and making amends, and all of these wonderful, beautiful ideas that we associate with the idea of the future. And yet, we live in a world where we can’t actually use that provision because our politics are so overridden with the idea of the past. Consider the Supreme Court’s history-and-tradition test, under which we can’t do anything that doesn’t derive from the past. The week that we’re speaking, the Supreme Court is hearing oral arguments on the question of whether people who have restraining orders against them due to domestic abuse can be prohibited from buying or owning weapons, and the test that the Supreme Court uses is to ask: “Was there an analogous law like that in 1787?” That is plainly nuts. In that sense, we are held hostage by the dead. 

This is a mess. Lepore means not “history and tradition,” but “text, history, and tradition” (THT for short) — see e.g. this article. (THT may or may not be a coherent model of interpretation. One complication, raised by some legal hermeneuts, is whether “history” and “tradition” are always compatible: sometimes legal tradition can be shown to be indifferent to or ignorant of the relevant history. But we won’t get into that here.) In a far more important error, she confuses the THT standard of interpretation with a different one, originalism: “Was there an analogous law like that in 1787?” is not a THT question but a question about original meaning. Originalists don’t care much about how their judicial predecessors have interpreted the law. They care primarily about what the text’s original public meaning was. They think that that should be the essential interpretative canon. (Originalism can sometimes be in tension with textualism, but that’s another matter to ignore for now.) 

So Lepore is writing “a long book about the history of attempts to amend the Constitution” but doesn’t have even the most elementary knowledge of the rival schools of legal interpretation. We just have to hope that she learns as she goes along. 

But let’s continue by posing a hypothetical. Suppose Donald Trump becomes President again; suppose also that he has a majority in the House and Senate. In light of what he says is an unprecedented influx of dangerous illegal immigrants, Trump declares a State of Emergency and invokes the Alien Enemies Act. (That might have its own interesting legal consequences, but let’s set those aside for now.) Then Congress, with the President’s support, passes a law deeming criticism of the President’s policies in this time of Emergency a form of sedition, to be punished appropriately. The law is challenged and the Supreme Court rules that the law violates the First Amendment’s protections of freedom of the press. Some of the justices employ THT principles to articulate their case, and some of them use originalist canons, but they agree on the decision. 

“That is plainly nuts,” Trump then says. “We’re being held hostage by the past. We’re looking to achieve generativity and invention and adjustment and improvement and alteration and remedy and making amends, and the Courts are getting in our way!”

And Jill Lepore would have to agree, wouldn’t she? 

The answer is: No, of course she wouldn’t agree. Because, we would learn, many of the legal protections that Lepore admires, reveres, and relies on were also made in the past. Indeed, any existing law is by definition the product neither of the future nor the present but the past. If existing laws prevented her from being arrested and tried for sedition with (say) her New Yorker articles used as evidence against her, she would not feel that anyone was being “held hostage by the past,” but rather that the Founders, in making the Bill of Rights, had shown remarkable foresight, wisdom, and commitment to freedom. 

So Lepore’s actual position is not “We should not be held hostage by the past,” because that would be to say that we should not have any laws. What she means is something more like this: “It should be easier for us to change the laws to get what we want.” But — the eternal question returns — who are “we”? And it’s obvious that by “we” Lepore means “people who share my politics.” Which would be fine if people who share Lepore’s politics were the only people who will ever be elected to political office in this country. But they aren’t. If we ever get a MAGA President and a MAGA Congress, and they set out to implement their vision of “generativity and invention and adjustment and improvement and alteration and remedy and making amends” — another word for “making amends” is “retribution” — then you can bet that Lepore would be one of the first people insisting that those political ambitions be forcibly restrained by law, i.e., that they be “held hostage by the past.” 

I suspect that Lepore was formed in an environment in which leftish people like her wanted change, which is good, while people on the right wanted sameness, which is bad. She hasn’t adjusted her thinking to the rise of MAGA populism, which wants change as much as she does, and feels the restraint of existing law and legal interpretation as much as she does. They just want different changes than the ones she prefers. MAGAworld ain’t conservative.  

Basically what I’m saying is: Jill Lepore hasn’t thought this through. She hasn’t thought it through because — here again she is like her MAGA counterparts — she lives in an intellectual monoculture. And one bad consequence of living in an intellectual monoculture is that it makes you incurious. THT, originalism, whatever, it’s all the same to people who want the same changes and don’t like having their desires thwarted. 

People whose political desires are thwarted by judges are always quick to declare the legal system illegitimate. Today it’s leftists who think the Supreme Court lacks legitimacy, but in the Clinton era it was the right that felt that way — and in both cases the feeling arose directly and uncomplicatedly from disliking judicial outcomes. But there’s a lot more to the evaluation of the judiciary than looking at outcomes. It would be nice if a distinguished historian writing a book about attempts to amend the Constitution knew that.  


P.S. The domestic-abuser-weapon-ownership case that Lepore mentions is United States v. Rahimi. I think that this situation should be and will be decided in the way that Lepore prefers, but if you read some of the material I’ve linked to you’ll discover why the question has made it all the way to SCOTUS. “Why is this even a thing?” is usually an exclamation rather than a question, but if you really ask you can learn a bit. 

words, words, words

Many of our arguments are fruitless because we don’t know the meaning of the words we use. And we don’t know the meaning of the words we use because meaning is not a property of language that our culture thinks important. In common usage, especially on social media, words are passwords, shibboleths — they are not employed to convey any substantive meaning but to mark identity. You use the words that people you want to associate yourself with use; it doesn’t go any further than that. If they call Israel an example of “colonialism,” then you will too, regardless of the appropriateness of the word. 

For this reason, my frequent inquiries into the words and phrases people rely on as identity markers are probably the most useless things I write. But I keep writing them in the hope that at least a few readers will realize that they don’t have to accept the language that is most widely used, that they are free to use other words, or to ask other people what, specifically, they mean by the words they rely on. 

In How to Think I conducted such an inquiry into the phrase “think for yourself.” 

A while back on this blog, I tried to understand what people mean when they denounce “critical theory” — and “critical race theory” as well. 

In a recent essay in Comment I ask whether people know what they mean when they use the word “gender.” 

And today I’ve posted a short essay at the Hog Blog in which I suggest that the term “self-censorship” is incoherent and inappropriate. 

Those are just a few examples; I could cite a hundred. I keep doing this kind of thing, fruitless as it sometimes feels, because if even a few people disrupt the thoughtless recycling of automatic phrases, some of our shouting contests could become actual arguments. And that would be a win. 

John Stuart Mill:

So long as an opinion is strongly rooted in the feelings, it gains rather than loses in stability by having a preponderating weight of argument against it. For if it were accepted as a result of argument, the refutation of the argument might shake the solidity of the conviction; but when it rests solely on feeling, the worse it fares in argumentative contest, the more persuaded its adherents are that their feeling must have some deeper ground, which the arguments do not reach; and while the feeling remains, it is always throwing up fresh intrenchments of argument to repair any breach made in the old. 

what everything costs

As long as resources are finite, any political or social policy helps some people at the expense of others. Any serious thinker will admit this and will be quite clear about who gets hurt. For instance, Marx & Engels in the Communist Manifesto say to the bourgeoisie, “In one word, you reproach us with intending to do away with your property. Precisely so; that is just what we intend.” And, they explain, we’re doing it because (a) you have immiserated the proletariat and therefore deserve to have everything taken away from you, and (b) revolution against your class is a necessary step towards social utopia. Couldn’t be more straightforward! (That said, M & E are not always so straightforward, as I will explain in a later post.)

I speak of political thinkers here because I take it as axiomatic that no politician will ever acknowledge the costs of his or her preferred policies.

Especially in time of war, few political commentators take even the first step towards this vital honesty, which is to admit that someone will be hurt. Significantly fewer still take the next step, which is to acknowledge the extent of such pain — they will make their calculations based on the best-case scenario, or indeed something rather better than that.

Commentators who frankly and openly acknowledge the real likely costs of their preferred policies are to be prized above rubies. But there will never be many of them, because — again, especially in time of war — almost every policy has higher costs than its supporters want to admit, and if readers see the probable consequences, they may well decide that the game isn’t worth the candle. And indeed, partisans and advocates are always (usually unconsciously) preventing themselves from thinking through what will happen if they get their way, because if they look clear-sightedly at reality they might lose their nerve. This is why George Orwell, in the essay in which he says, “In our time, political speech and writing are largely the defence of the indefensible,” also says that people employ vacuous clichés because “at need they will perform the important service of partially concealing your meaning even from yourself.”

I just finished teaching Middlemarch, that incomparable book, and there’s an immensely touching moment near the end when Dorothea is preparing to embrace a more financially constrained life, and in her ardent way talks of how she will change her habits. She ends by saying, almost sobbing, “And I will learn what everything costs.” Socrates, what is best for men? Maybe it’s to learn what everything costs.

ignorance, vincible and invincible

‘Childhood has been rewired’:

[Jonathan Haidt:] ‘TikTok and Twitter are incredibly dangerous for our democracy. I’d say they’re incompatible with the kind of liberal democracy that we’ve developed over the last few hundred years.’ He’s quite emphatic about all of this, almost evangelical. Which makes me think of his 2012 book, The Righteous Mind, in which he argued about the danger of getting too caught up in your own bubble, believing your own spin. Might he be guilty of that here? Might it just be the case, I ask, that there’s less of a stigma around mental health now, so teenagers are far more likely to admit that they have problems?

‘But why is it, then, that right around 2013 all these girls suddenly start checking into psychiatric inpatient units? Or suicide – they’re making many more suicide attempts. The level of self-harm goes up by 200 or 300 per cent, especially for the younger girls aged ten to 14. So no, the idea that it’s just a change in self-report doesn’t hold any water because we see very much the same curves, at the same time, for behaviour. Suicide, certainly, is not a self-report variable. This is real. This is the biggest mental health crisis in all of known history for kids.’ 

People are absolutely desperate to believe that this isn’t true, but as Jean M. Twenge shows, the alternative explanations are getting less defensible by the day. 

One oddity of this: People used to worry desperately about boys being immersed in gaming, but it turns out that gaming is not as bad for young minds as social media, and therefore boys are not being as thoroughly traumatized as girls. The smartphone era is bad for boys, but it’s nightmarish for girls. 

My guess is that parents who continue to provide smartphones for their kids are, epistemically speaking, indistinguishable from those who declare that the 2020 Presidential election was stolen from Donald Trump. They cannot now back down; they have made themselves invincibly ignorant. Their sunk costs are just too great for them to consider evidence. They’ll keep doing what they’re doing, no matter the suffering their children undergo. 

Now, these people are not invincibly ignorant in the proper sense of that term: The truly invincibly ignorant are not culpable because they cannot remedy their ignorance. I am using, and perhaps abusing, the term by employing it to describe parents for whom the admission of tragic error is psychologically impossible.

It’s noteworthy, I think, that in his current and forthcoming work Haidt links the smartphone plague with helicopter parenting: the very same parents who fret ceaselessly about their children’s safety, and prevent them from achieving independence, also put those kids in the way of certain dangers by tethering them to social media. Worse and worse!

But: Lenore Skenazy, of Free Range Kids fame/notoriety/infamy, writes on Haidt’s Substack about a new study demonstrating … well, you can put it two different ways. You can say that while parents accept that their kids need to be more independent, their actions don’t reflect that acceptance: they just keep on helicoptering and snowplowing. But today I choose to put the point more hopefully: Though most of them cannot yet break themselves of what they know to be very bad habits — they can’t summon the courage to take away their kids’ smartphones or let them walk to the local library by themselves — at least they know these habits are bad. Which is the necessary first step, after all. Maybe if I meditate on that I’ll become less despairing. 


P.S. On the other hand, I’m reading stories about how A.I. + social media = guys using their phones to make deepfake porn videos featuring their female classmates, so maybe parents who don’t take their kids’ smartphones and smash them to pieces should be sent to prison for, like, fifty years. 

the wayfaring mind

What follows is a talk I gave some years ago at Smith College. It weaves together some common threads in my work, and draws on some previously published stuff — for instance, the Introduction to my book of essays Wayfaring. I was chatting with a friend the other day when I remembered something I wrote in this piece, and did a quick internet search because I couldn’t remember where I had published it. But nothing turned up. I am now thinking that I never did publish it — perhaps because I thought that it had too much recycled material? I dunno. So I’m posting it here. 


The essay is a representational genre, and what it represents is the movement of the mind across ideas, experiences, and sensations. The mind broods over these waters, restlessly, unpredictably, with a kind of Brownian motion. E. B. White once wrote, “The essayist arises in the morning and, if he has work to do, selects his garb from an unusually extensive wardrobe: he can pull on any sort of shirt, be any sort of person — philosopher, scold, jester, raconteur, confidant, pundit, devil’s advocate, enthusiast.” This description encourages and comforts, but it’s wrong. It would be more truthful if White had added that the essayist experiences during the course of the day unplanned and unwanted changes of clothing: you look down to smooth your philosopher’s robes and find that you’re now wearing the jester’s motley. The bells of your cap tinkle as you lower your head.

William Hazlitt’s essay “On the Pleasure of Hating” begins with a semi-comical account of the writer’s encounter with a spider which disgusts him but which he magnanimously refuses to kill. “We do not tread upon the poor little animal in question (that seems barbarous and pitiful!) but we regard it with a sort of mystic horror and superstitious loathing. It will ask another hundred years of fine writing and hard thinking to cure us of the prejudice and make us feel towards this ill-omened tribe with something of ‘the milk of human kindness,’ instead of their own shyness and venom.” But by the end of the essay this is what Hazlitt’s eccentric meditation on his — our — foibles has come to:

Seeing all this as I do, and unravelling the web of human life into its various threads of meanness, spite, cowardice, want of feeling, and want of understanding, of indifference towards others, and ignorance of ourselves, — seeing custom prevail over all excellence, itself giving way to infamy — mistaken as I have been in my public and private hopes, calculating others from myself, and calculating wrong; always disappointed where I placed most reliance; the dupe of friendship, and the fool of love; — have I not reason to hate and to despise myself? Indeed I do; and chiefly for not having hated and despised the world enough.

There would have been no way to see this coming from the opening anecdote; perhaps Hazlitt, when he sat down to write, hadn’t seen it coming either. (Yet note how the bitter comment on “unravelling the web of human life” evokes the spider with which the essay began.) We can after a fashion tell our minds how to think, and what to think about, but they seldom obey, and the essay as a genre forms a great long chronicle of their habitual disobedience. 

But in any given essay we can only bear so much of this — just as the early listeners of Wagner were often agitated by his propensity to leave certain chords unresolved and keys ambiguous, readers want their prose pieces to add up to something, to have a clear point or purpose, and ideally an identifiable story. This is the primary reason why true essays are rarely long. The longest one I know, and perhaps the finest in the English language, is Virginia Woolf’s A Room of One’s Own. To speak of beginnings again, I’ll note that she opens with an account of her puzzlement when asked to speak on “women and fiction”:

I sat down on the banks of a river and began to wonder what the words meant. They might mean simply a few remarks about Fanny Burney; a few more about Jane Austen; a tribute to the Brontës and a sketch of Haworth Parsonage under snow; some witticisms if possible about Miss Mitford; a respectful allusion to George Eliot; a reference to Mrs Gaskell and one would have done. But at second sight the words seemed not so simple. The title women and fiction might mean, and you may have meant it to mean, women and what they are like, or it might mean women and the fiction that they write; or it might mean women and the fiction that is written about them, or it might mean that somehow all three are inextricably mixed together and you want me to consider them in that light. But when I began to consider the subject in this last way, which seemed the most interesting, I soon saw that it had one fatal drawback. I should never be able to come to a conclusion. I should never be able to fulfil what is, I understand, the first duty of a lecturer to hand you after an hour’s discourse a nugget of pure truth to wrap up between the pages of your notebooks and keep on the mantelpiece for ever.

Woolf tells her kind inviters that she cannot achieve the task they set her: her thoughts could not be shaped into a lecture, with its “nugget of pure truth” to keep on the mantelpiece, but could only be an essay — a curiously misshapen thing, continually altering and re-forming according to … well, according to what? According to the conditions of the moment, both internal and external: an excited or downcast mind, a lively or enervated body — how well did you sleep last night? — surroundings that are peaceful and quiet or throbbing with positive energy or afflicted by multiple tensions. Thought is always moved by these chemical and muscular and environmental forces, and therefore so is the essay itself.

In addition to being perhaps the finest English essay, A Room of One’s Own offers the best demonstration I know of how essayistic thinking actually happens. Having received her assignment, Woolf, as she has already told us, “sat down on the banks of a river and began to wonder.” A little later, she writes,

Thought — to call it by a prouder name than it deserved — had let its line down into the stream. It swayed, minute after minute, hither and thither among the reflections and the weeds, letting the water lift it and sink it until — you know the little tug — the sudden conglomeration of an idea at the end of one’s line: and then the cautious hauling of it in, and the careful laying of it out? Alas, laid on the grass how small, how insignificant this thought of mine looked; the sort of fish that a good fisherman puts back into the water so that it may grow fatter and be one day worth cooking and eating.

But the thought was fish enough to excite Woolf, to set her “walking with extreme rapidity across a grass plot.” But “Instantly a man’s figure rose to intercept me…. he was a Beadle; I was a woman. This was the turf; there was the path. Only the Fellows and Scholars are allowed here; the gravel is the place for me.” Quick to obey, Woolf resumed the path — but the interruption “had sent my little fish into hiding. What idea it had been that had sent me so audaciously trespassing I could not now remember.”

One could accurately enough describe A Room of One’s Own as an attempt to recover that fish so frustratingly driven away by the abrupt appearance of the monitory Beadle. That small event is, of course, deeply and intrinsically connected to the things she will say about women and fiction. For the Beadles of the world have a great deal to say about whether women write fiction at all — they exert force (even if only rhetorically) on women’s bodies and therefore on women’s thoughts. “The book has somehow to be adapted to the body, and at a venture one would say that women’s books should be shorter, more concentrated, than those of men, and framed so that they do not need long hours of steady and uninterrupted work. For interruptions there will always be.”

Interruptions there will always be — Beadles, literal or metaphorical, there will always be. So To the Lighthouse (200 pages or so) rather than The Brothers Karamazov; Pride and Prejudice (300 pages) rather than War and Peace. An intriguing argument, though perhaps Woolf concedes too much. Half a century before she wrote those words, Middlemarch and Daniel Deronda had somehow emerged, after all — though here, as elsewhere in her work, Woolf is strangely reluctant to acknowledge George Eliot’s achievement. And perhaps she should be more hopeful about a future when women, women who have rooms of their own, will be interrupted no more often than men. A decade after Woolf’s speculations on the future of women’s writing, a woman named Rebecca West would produce one of the very greatest books of the twentieth century, Black Lamb and Grey Falcon, which is even longer than War and Peace.

But still: surely it’s true that “the book must be adapted to the body”; that interruptions happen to bodies, or to people with bodies; that what we write is intertwined with the frequency and nature of our interruptions. Perhaps we might also infer that people who suffer many interruptions ought to be writing essays: for essays can (as A Room of One’s Own does) record those interruptions and make them into art. So Kafka’s famous parable: “Leopards break into the temple and drink to the dregs what was in the sacrificial pitchers; this is repeated over and over again; finally it can be calculated in advance, and it becomes part of the ceremony.” A person whose writing can usefully incorporate the interrupting leopards is a natural essayist.

A few years ago the science fiction writer Cory Doctorow commented, “The biggest impediment to concentration is your computer’s ecosystem of interruption technologies.” We all know what he means — and, if we are honest, we know that we choose to enable those technologies. We are our own Beadles. We invite all available leopards to drink from our chalices. We’re like the women Woolf talks about who needed desperately to acquire rooms of their own, but as soon as we get such rooms we contrive ways for them to be invaded. To us, essays are starting to feel as expansive as three-decker novels: we default to tweeting, and when we can manage long periods of sustained concentration — say, ten minutes or so — we craft blog posts.

I exaggerate, a bit. Novels are still being written, some of them quite large. But I can’t help thinking that the essay, with its brevity, its tolerance of shifting patterns and values, ought to be the genre of our time. We should see it as a reservoir capable of holding a world of images and ideas.

But this is unlikely. The essay has always been the neglected stepchild of the literary genres. It is what writers settle for when nothing more attractive presents itself. In a letter Saul Bellow wrote in his old age, to his fellow novelist Cynthia Ozick, he lamented that he had “become such a solitary, and not in the Aristotelian sense: not a beast, not a god. Rather, a loner troubled by longings, incapable of finding a suitable language and despairing at the impossibility of composing messages in a playable key.” Near the end of his life, he thought, “I have only the cranky idiom of my books — the letters-in-general of an occult personality, a desperately odd somebody who has, as a last resort, invented a technique of self-representation.” This description of novels as “the letters-in-general of an occult personality” makes me wonder whether Bellow is one of those figures — there are many in the history of literary writing — who was really not suited to writing novels but wrote novels because novels were the thing to write. Consider, for instance, the famous letters Moses Herzog writes to dead philosophers. They are just brilliant little self-contained torpedoes; they don’t need Moses Herzog. Bellow thought he had to come up with Herzog, and make things happen to him, in order to provide a plausible context for these letters, but he was wrong. They would have been just fine, better maybe, as “letters-in-general,” or (it amounts to the same thing) essays.

Similarly, Walker Percy’s best book by a long shot is Lost in the Cosmos, a comic Kierkegaardian satire with a forty-page excursus on semiotics (a lucid one, too) stuck in the middle. The book is basically a series of wonderfully crazy essays that spin off fictional cadenzas from time to time — in just the way that at a certain point in A Room of One’s Own Virginia Woolf improvises a moving speculation about Shakespeare’s imaginary sister. I wish Bellow had written a book like Lost in the Cosmos, on a grander scale, maybe: a big letter-in-general that dispensed with the trappings of fiction and just let that style, that extraordinary Bellovian voice, carry itself and us along. Why didn’t he? Because he lived in a time when novels were what mattered: to be an essayist would be to abandon any aspirations to recognizable greatness. It’s impossible to imagine an essayist winning the Nobel Prize for Literature.

So essays hide in novels. There must be twenty fine essays in Thomas Mann’s The Magic Mountain, all disguised as speeches by various characters. And Robert Musil’s unfinished masterpiece The Man without Qualities might best be seen as a vast ramifying network of essays and commentary on essayistic thought — and this is indeed the precise reason why Musil did not finish, could not have finished, the book. To finish it would have been to pretend that it was a novel. But Musil makes clear, within The Man without Qualities itself, that his deepest concern is to produce an exposition and defense of a certain philosophy — or a certain way of life — that he calls “essayism.” Just as one might write an essay about a work of fiction, Musil writes a work of fiction about the essay, or the act of essaying.

“The accepted translation of ‘essay’ as ‘attempt,’” Musil writes,

contains only vaguely the essential allusion to the literary model, for an essay is not a provisional or incidental expression of a conviction capable of being exposed as an error (the only ones of that kind are those articles or treatises, chips from the scholar’s workbench, with which the learned entertain their special public); an essay is rather the unique and unalterable form assumed by a man’s inner life in a decisive thought… . There have been more than a few such essayists, masters of the inner hovering life, but there would be no point in naming them. Their domain lies between religion and knowledge, between example and doctrine, between amor intellectualis and poetry; they are saints with and without religion, and sometimes they are also simply men on an adventure who have gone astray.

I want to place special emphasis on this one idea: an essay is rather the unique and unalterable form assumed by a man’s inner life in a decisive thought. As an example, consider Virginia Woolf’s decision to write a book in order to explain how she came to the “decisive thought” that a woman who wants to write fiction needs £500 a year and a room of her own. It is intrinsic to Woolf’s authorial modesty to say, or imply, that she took this path because it was easier than coming up with a clean, pure nugget of truth that we might set upon our mantelpiece; but in fact what she chose to do was almost infinitely more difficult. “Unfortunately,” Musil writes, “nothing is so hard to achieve as a literary representation of a man thinking.”

Therefore, Musil concludes, a serious attempt at this task is a very rigorous enterprise, though it is rarely recognized as such. He once asked, “Is the essay something left over in an area where one can work precisely?” For this is what many people think: that one could be precise about one’s subject if one had the disciplined intelligence, but one chooses instead to draft a frivolous little bagatelle. But what if, Musil continues, the essay is rather “the strictest form attainable in an area where one cannot work precisely”?

Let me return to my opening sentence: “The essay is a representational genre, and what it represents is the movement of the mind across ideas, experiences, and sensations.” This thinking about thinking is enormously difficult, because its target is always moving. I’m reminded here of Kierkegaard’s complaint about retrospection: “It is quite true what philosophy says: that life must be understood backwards. But then one forgets the other principle: that it must be lived forwards. Which principle, the more one thinks it through, ends exactly with the thought that temporal life can never properly be understood because I can at no instant find complete rest in which to adopt the position: backwards.”

To be in motion oneself, trying to fix one’s attention on something also in motion, something that’s a part of you (or that you’re a part of), twisting your head to peer behind even as your feet propel you forward … surely this is a recipe for confusion. But confusion is not always to be shunned. As Iris Murdoch once wrote, “Coherence is not necessarily good, and one must question its cost. Better sometimes to remain confused.” To be in the midst of one’s own mind, to be making one’s way by dead reckoning, to be looking around always for orientation or distraction, does not lead to coherence, but it does tend to produce surprise. And often that surprise is productive.

One might think that “wayfaring” is rather too dignified and directional a word for such ceaseless mental movement. Earlier I compared it to Brownian motion, which is the textbook example of physical randomness. And after all, Montaigne wrote when he was virtually inventing this genre, “I cannot keep my subject still. It goes along befuddled and staggering, with a natural drunkenness. I take it in this condition, just as it is at the moment I give my attention to it. I do not portray being; I portray passing.”

And yet, stagger drunkenly though he may, Montaigne discovers over time that his travels take him in a generally discernible direction. He reports, in this same essay, that he “rarely repents” — an admission not as damning as it sounds, because he also explains that he has learned how rare genuine repentance is, as opposed to those changes that are brought on by poor health and old age. “God must touch our hearts,” he writes. “Our conscience must reform by itself through the strengthening of our reason, not through the weakening of our appetites… . What I owe to the favor of my colic, is neither chastity nor temperance.” Later, in the final essay, “Of experience,” he eagerly and affectionately describes the kind of life he has come over the years most to admire: “The most beautiful lives, to my mind, are those that conform to the common human pattern, with order, but without miracle and without eccentricity.” He does not awake in the morning with a completely different set of preferences and beliefs than he had possessed when going to bed the night before. He has gotten somewhere. He has made his way.

This was not the achievement of a single snapshot, one instant captured, even if captured perfectly: like any good navigator, Montaigne took his bearings frequently and recorded them with care. Eventually the points on the graph stopped looking random and began to assume a shape. That shape got clearer over time. It wasn’t Brownian motion after all. This is why we need essayists, not just essays. Truth told about a life, all things considered, matters more than truth told about a moment. Honest self-description today is good; honest self-description over a lifetime is invaluable — and revelatory. If we could see the direction in which we are trending, if we could see that we are indeed inscribing something like a path through life, and not just moving randomly like motes of pollen in a glass of water, we might not be consoled. But better to discern what is the case, to be what Musil calls “a master of the inner hovering life.” “It is an absolute perfection and virtually divine,” Montaigne wrote in one of his last sentences, “to know how to enjoy our being rightfully.” This is just what the wayfarer seeks to learn.

periodicity

This piece from the Dispatch (possibly paywalled) on how The New York Times misled its readers with an overly “Hamas-friendly” headline makes a valid point, I guess — but I think much of the problem here is baked into minute-by-minute journalism. You don’t have to be a hard-core opponent of Israel to get a headline like that wrong — in the heat of the moment even a slight lean towards the people living in Gaza might be enough to influence your headline. If you have to post something on your website, and post it right now, you’ll not be consistently judicious and fair-minded.

[UPDATE: The Times has published an apology.] 

I didn’t know that the Times had perpetrated this headline because any political crisis strengthens me in the habits I have been trying to cultivate for some years now: to watch no TV news at all — that part’s easy, I haven’t seen TV news in the past thirty years, except when I’m in an airport — and to read news on a once-a-week rather than a several-times-a-day basis. My primary way to get political news, national and international, is to read the Economist when it shows up at my house, which it does on Saturday or Monday. (I don’t keep the Economist app on my phone.) I have eliminated political sites from my RSS feed, and only happened upon the Dispatch report when I was looking for something else at the site. 

The more unstable a situation is, the more rapidly it changes, the less valuable minute-by-minute reporting is. I don’t know what happened to the hospital in Gaza, but if I wait until the next issue of the Economist shows up I will be better informed about it than people who have been rage-refreshing their browser windows for the past several days, and I will have suffered considerably less emotional stress. 

It’s important to remember this: businesses that rely on constant online or televisual engagement — social media platforms, TV news channels, news websites — make bank from our rage. They have every incentive, whether they are aware of it or not, to inflame our passions. (This is why pundits who are always wrong can keep their jobs: they don’t have to be right, they just have to be skilled at stimulating the collective amygdala.) As the intervals of production increase — from hourly to daily to weekly to monthly to annually — the incentives shift away from being merely provocative and towards being more informative. Rage-baiting never disappears altogether, but books aren’t well-suited to it: even the angriest book has to have passages of relative calm, which allows the reader to stop and think — a terrible consequence for the dedicated rage-baiter. 

“We have a responsibility to be informed!” people shout. Well, maybe, though I have in the past made the case for idiocy. But let me waive the point, and say: If you’re reading the news several times a day, you’re not being informed, you’re being stimulated. Try giving yourself a break from it. Look at this stuff at wider intervals, and in between sessions, give yourself time to think and assess.


UPDATE 2023–10–23: One tiny result of the Israel/Gaza nightmare, for me, is that it has revealed to me those among the writers I follow via RSS who are prone to making uninformed, dimwitted political pronouncements. Those feeds I have deleted without hesitation. 

diseases of the intellect

Twenty years ago, I had an exceptionally intelligent student who was a passionate defender of and advocate for Saddam Hussein. She wanted me to denounce the American invasion of Iraq, which I was willing to do — though not in precisely the terms that she demanded, because she wanted me to do so on the ground that Saddam Hussein was a generous and beneficent ruler of his people. That is, her denunciation of America as the Bad Guy was inextricably connected with her belief that there simply had to be on the other side a Good Guy. The notion that the American invasion was wrong but also that Saddam Hussein’s tyrannical rule was indefensible — that pair of concepts she could not simultaneously entertain. Because there can’t be any stories with no Good Guys … can there? 

This student was not a bad person — she was, indeed, a highly compassionate person, and deeply committed to justice. She was not morally corrupt. But she was, I think, suffering from a disease of the intellect.

What do I mean by that? Everyone’s habitus includes, as part of its basic equipment, a general conceptual frame, a mental model of the world that serves to organize our experience. Within this model we all have what Kenneth Burke called terministic screens, but also conceptual screens which allow us to employ key terms in some contexts while making them unavailable in others. We will not be forbidden to use a word like “compassion” in responding to our Friends, but it will not occur to us to use it when responding to our Enemies. (Paging Carl Schmitt.) 

My student’s conceptual screens made certain moral descriptions — for instance, saying that a particular politician or action is “cruel” or “tyrannical” — necessary when describing President Bush but unavailable when describing Saddam Hussein. But I seriously doubt that this distinction ever presented itself to her conscious mind. It worked in the background to determine which thoughts were allowed to rise to conscious awareness and therefore become a matter for debate. To return to a distinction that, drawing on Leszek Kołakowski, I have made before, the elements of our conceptual screens that can rise to consciousness belong to the “technological core” of human experience, while those that remain invisible (repressed, a Freudian would say) belong to the “mythical core.” 

I could see these patterns of screening in my student; I cannot see them in myself, even though I know that everything I have said applies to me just as completely as it applies to her, if not more so. 

Certain writers are highly concerned with these mental states, and the genre in which they tend to describe them is called the Menippean satire. (That link is to a post of mine on C. S. Lewis as a notable writer in this genre, though this has rarely been recognized.) In his Anatomy of Criticism, Northrop Frye wrote, 

The Menippean satire deals less with people as such than with mental attitudes. Pedants, bigots, cranks, parvenus, virtuosi, enthusiasts, rapacious and incompetent professional men of all kinds, are handled in terms of their occupational approach to life as distinct from their social behavior. The Menippean satire thus resembles the confession in its ability to handle abstract ideas and theories, and differs from the novel in its characterization, which is stylized rather than naturalistic, and presents people as mouthpieces of the ideas they represent…. The novelist sees evil and folly as social diseases, but the Menippean satirist sees them as diseases of the intellect. [p. 309] 

Thus the title of my post. 

I think much of our current political discourse is generated and sustained by such screening, screening that an age of social media makes at once more necessary and more pathological. Also more universally “occupational,” because in some arenas of our society — journalism and the academy especially — the deployment of the correct conceptual screens becomes one’s occupational duty, and any failure to maintain them can result in an ostracism that is both social and professional. And that’s how people, and not just fictional characters, become “mouthpieces of the ideas they represent.”

None of this is hard to see in some general and abstract sense, but it’s hard to see clearly. What Lewis calls the “Inner Ring” is largely concerned to enforce the correct conceptual screens, and because those screens don’t rise to conscious awareness, much less open statement, the work of enforcement tends to be indirect and subtle, and perhaps for that very reason irresistible. It’s like being subject to gravity. 

In certain cases the stress of maintaining such conceptual screens grows to be too much for a person; the strain of cognitive dissonance becomes disabling. Crises in one’s conceptual screening, as Mikhail Bakhtin wrote in Problems of Dostoevsky’s Poetics, were of particular interest to Dostoevsky:

In the menippea there appears for the first time what might be called moral-psychological experimentation: a representation of the unusual, abnormal moral and psychic states of man — insanity of all sorts (the theme of the maniac), split personality, unrestrained daydreaming, unusual dreams, passions bordering on madness, suicides, and so forth. These phenomena do not function narrowly in the menippea as mere themes, but have a formal generic significance. Dreams, daydreams, insanity destroy the epic and tragic wholeness of a person and his fate: the possibilities of another person and another life are revealed in him, he loses his finalized quality and ceases to mean only one thing; he ceases to coincide with himself. [pp. 116-19]

This deserves at least a post of its own. But in general it’s surprising how powerful people’s conceptual screens are, how impervious to attack. But maybe it shouldn’t be surprising, since those screens are the primary tools that enable us to “mean only one thing” to ourselves; they allow us to coincide with ourselves in ways that soothe and satisfy. The functions of the conceptual screens are at once social and personal. 

All this helps to explain why the whole of our public discourse on Israel and Palestine is so fraught: the people participating in it are drawing upon some of their most fundamental conceptual screens, whether those screens involve words like “colonialism” or words like “pogrom.” But this of course also makes rational conversation and debate nearly impossible. The one thing that might help our fraying social fabric is an understanding that, when people are wrong about such matters — and that includes you and me — the wrongness is typically not an indication of moral corruption but rather the product of a disease of the intellect.

And we all live in a social order whose leading institutions deliberately infect us with those diseases and work hard to create variants that are as infectious as possible. So my curse is straightforwardly upon them.

I don’t want to pretend that I am above the fray here. I have Opinions about the war, pretty strong ones at that, and I have sat on this post for a week or so, hemming and hawing about whether I have an obligation to state my position, given the sheer human gravity of the situation. But while I’m not wholly ignorant, I don’t think that my Opinions are especially well-informed, and if I put them before my readers — well, I feel that that would be presumptuous. (Even though I live in an era in which most people find it disturbing or even perverse if you hold views without proclaiming them.) There are thousands of writers you could read to find stronger and better-informed arguments than any I could make.

But I do think I can recognize and diagnose diseases of the intellect when I see them. That’s maybe the only contribution I can make to this horrifying mess of a situation, and I’m counting on its being more useful if it isn’t accompanied by a statement of position.   

I hope this won’t be taken as a plague-on-both-your-houses argument, though I’m sure it will. (I have made such arguments about some things in the past, but I am not making one here.) When you write, as I do above, about the problem with a conceptual screen that requires one purely innocent party and one purely guilty party, you will surely be accused of “false equivalency” or “blaming the victim.” But you don’t have to say that a person, or a nation, or a people is utterly spotless in order to see them as truly victimized. Sometimes a person or a nation or a people is, to borrow King Lear’s phrase, “more sinned against than sinning” without being sinless. And I think that applies no matter what role you assign to which party in the current disaster. 

With all that said, here are some concluding thoughts: 

  1. A monolithic focus on assigning blame to one party while completely exonerating the other party is a sign of a conceptual screen working at high intensity. 
  2. Such a monolithic focus on blame-assignation is also incapable of ameliorating suffering or preventing it in the future. (Note the use of the italicized adjective in these two points: the proper assessment of blame is not a useless thing, but it’s never the only thing, and it is rarely the most important thing, for observers to do.) 
  3. If you are consumed with rage at anyone who does not assign blame as you do, that indicates two things: (a) you have a mistaken belief that disagreement with you is a sign of moral corruption, and (b) your conceptual screen is under great stress and is consequently overheating. 
  4. It is more important, even if it’s infinitely harder, for you to discover and comprehend your own conceptual screens than for you to see the screens at work in another’s mind. And it is important not just because it’s good for you to have self-knowledge, but also because our competing conceptual screens are regularly interfering with our ability to develop practices and policies that ameliorate current suffering and prevent future suffering. 
  5. A possible strategy: When you’re talking with someone who says “Party X is wholly at fault here,” simply waive the point. Say: “Fine. I won’t argue. So what do we do now?” Then you might begin to get somewhere — though you’re more likely to discover that your interlocutor’s ideas begin and end with the assigning of blame. 

Justin Smith-Ruiu:

The risk of attempting such a thing is that one will appear unserious and will accordingly begin to lose the professional and social advantages that slowly began accumulating throughout all those years of pretending to be an adult. I don’t mean to overdo the curious parallels between art and faith, but it does seem to me that to be willing to take this risk is somewhat analogous to the choice Saint Paul said one must make to become a “fool for Christ”. The sixteenth-century Muscovite saint known as “Basil Fool for Christ”, for whom the world’s most iconic onion-dome church is named, was also known as “Basil the Blessed”: it comes out to the same thing.

I want, I mean, to spend the rest of my life, consciously and willfully, as a fool, at least in those domains that matter most to me. Of course I can still “clean up real nice” whenever the circumstances require, but I now see those circumstances more or less in the same way I see filing taxes or updating my passwords — just part of the general tedium of maintaining one’s place in a world that pretends to be built around the interests and expectations of sober-minded grown-ups, but that in the end is a never-ending parade of delirious grotesques.

I have in effect undergone a double conversion, then, to both faith and art, which in the end may be only a single but two-sided conversion, to a mode of existence characteristic of children and fools.

The Media Very Rarely Lies – by Scott Alexander:

Suppose Infowars claimed that police shootings in the US cannot be racially motivated, because police shoot slightly more white people each year than black people (this is true). This is missing important context: there are ~5x as many white people in the US as black people, so police shooting only slightly more white people suggests that police are shooting black people at ~5x higher rates. But I claim it’s also a failure of contextualization when NYT claims police shootings must be racially motivated because they happen to black people at a 5x higher rate, without adding the context that police are called to black neighborhoods at about a 5x higher rate and so have no more likelihood per encounter of shooting a black person than a white person. Perhaps the failure to add context is an honest mistake, perhaps a devious plot to manipulate the populace — but the two cases stand or fall together with each other, and with other failures of contextualization like Infowars’ vaccine adverse response data.

But lots of people seem to think that Infowars deserves to be censored for asserting lots of things like their context-sparse vaccine data claim, but NYT doesn’t deserve to be censored for asserting lots of things like their context-sparse police shooting claim. I don’t see a huge difference in the level of deceptiveness here. Maybe you disagree and do think that one is worse than the other. But I would argue this is honest disagreement — exactly the sort of disagreement that needs to be resolved by the marketplace of ideas, rather than by there being some easy objective definition of “enough context” which a censor can interpret mechanically in some fair, value-neutral way. 

I think the difference between Infowars and The New York Times is fairly clear. Because Infowars only covers issues that its editors and readers are exercised about, its stories are reliably dishonest. By contrast, the Times covers a much broader range of stories. When those stories don’t touch on the deep prejudices of the newspaper’s staff and readers, then they can usually be trusted; but on the hot-button issues, the Times is no more trustworthy than Infowars. 

movie cards


When preparing to shoot a scene, the film director Alejandro Iñárritu prepares note cards to help him organize his thoughts. “During the production of a film, it’s easy to lose track of and forget the original intentions and motives that you had three years ago in your head and that are always so important.” Moreover, “As Stanley Kubrick once said, making a movie is like trying to write poetry while you’re riding a roller coaster. When one is shooting a film, it feels like a roller coaster, and it’s much more difficult to have the focus and mental space that one has during the writing or preparation of a film.”

So to forestall these problems, Iñárritu grabs a blank note card and divides it into columns, usually featuring six categories:

  1. He begins with the facts. “I try to write down the facts in a neutral way. What is happening in this particular scene?”
  2. “The second part is that I try to imagine what happened immediately preceding the moment in question. Where have the characters come from?”
  3. “The third column details the ‘objective.’ What is the purpose of this scene? … Here I try to dissect the objective of the scene as a whole and what it’s about, as well as the objective of each of the characters who appear…. Each character has a need, and it’s important for me to know what it is.”
  4. “The fourth part of these cards — the ‘action verb’ column — is particularly difficult … and extremely helpful, too, in further clarifying a character’s objective so that the actor can execute the scene. If one character wants something from another character, one way of achieving the goal is by seducing the other character. Another way is by threatening him. Another way is by ignoring or provoking him. Within a scene, there can also be several action verbs — these kinds of transitions in tactics for obtaining the same objective are important because they bring a scene to life and add color to it.”
  5. “The subtext … is often almost more important than the text. The text can often be contrary to the subtext, and the subtext is what should be very clearly understood. In other words, if one person says to another, ‘Go away, I don’t want to see you again,’ it is very possible that what they really mean is ‘I need you now more than ever.’ The words we use can often oppose what we feel, and I believe that acknowledging this human contradiction can help give great weight to a performance.”
  6. Finally, the ‘as if’ column: “There are two ways I believe one can take on a performance — one is through the actor’s own personal and emotional experience, applied to a scene through an association with it, and the other is through imagination.… I have been in situations where an actor, at a given moment, lacks imagination for some personal reason or does not have emotional baggage that he can refer to. In such a case, I sometimes like to have an image that I can leave with an actor or actress. Our body is the master. Sometimes, with a physical or sensorial experience (a burn, cold, etc.), the actor or actress will know already how that feels, and that can help in channeling that feeling.”

I love stuff like this – the improvisations, the invented methods and practices, what Matt Crawford calls the jigs, that any creative person must develop in order to manage the complexities of a serious project.

Latewood | A Working Library:

In the spring, when the weather is (hopefully) warm and wet, a tree will grow rapidly, forming large, porous cells known as “earlywood.” Later, as the weather cools, it will grow in smaller, more tightly packed cells known as “latewood.” You can spot the difference when looking at a tree’s rings: earlywood appears as light-colored, usually thick, bands, while latewood shows up thinner and darker. What doesn’t show up in the rings is the dormant period — the winter season, when the tree doesn’t grow at all, but waits patiently for spring.

I think this is a useful metaphor for thinking about how we grow, too. There are times and seasons when the conditions are right for earlywood — for big, galloping growth, where you learn a lot in short order. This is often the case when you first step into a new role, or take on a new and challenging project, or start at a new organization. But those periods of rapid growth are often (and ideally) followed by periods when the growth is slower, more focused, moving in short and careful steps instead of giant leaps. These latewood periods are when the novelty of a new situation has worn off, and the time for reflection and deep-skill building arrives.

Space debris expert: Orbits will be lost—and people will die—later this decade | Ars Technica:

Ars: Given what has happened over the last few years and what is expected to come, do you think the activity we’re seeing in low-Earth orbit is sustainable?

Moriba Jah: My opinion is that the answer is no, it’s not sustainable. Many people don’t like this whole “tragedy of the commons” thing, but that’s exactly what I think we’re on a present course for. Near-Earth orbital space is finite. We should be treating it like a finite resource. We should be managing it holistically across countries, with coordination and planning and these sorts of things. But we don’t do that. I think it’s analogous to the early days of air traffic and even maritime and that sort of stuff. It’s like when you have a couple of boats that are coming into a place, it’s not a big deal. But when you have increased traffic, then that needs to get coordinated because everybody’s making decisions in the absence of knowing the decisions that others are making in that finite resource.

Ars: Is it possible to manage all of this traffic in low-Earth orbit?

Jah: Right now there is no coordination planning. Each country has plans in the absence of accounting for the other country’s plans. That’s part of the problem. So it doesn’t make sense. Like, if “Amberland” was the only country doing stuff in space, then maybe it’s fine. But that’s not the case. So you have more and more countries saying, “Hey, I have free and unhindered use of outer space. Nothing legally has me reporting to anybody because I’m a sovereign nation and I get to do whatever I want.” I mean, I think that’s stupid. 

It is stupid, but a familiar kind of stupid. I must have seen a dozen essays arguing that if you can find any examples of people collaborating with regard to shared goods then the tragedy of the commons argument is wrong. Which is also stupid! If we can sometimes resist the temptations to abuse any given commons, that’s not an argument that such abuse is unlikely to happen. Of course the abuse of common goods isn’t inevitable; but it is distressingly common and we should always be on the lookout for it. In space we aren’t paying sufficient attention. 

attention and reading

In response to my post on my readerly annotations, my friend Adam Roberts writes: 

I buy a lot of second-hand books, and previous owners’ annotations are almost always a mere irritation. But then I think of Coleridge. From before he settled at Highgate, but very much once he had settled there, his marginalia were specifically, particularly prized. People would lend him books, sometimes very rare and very valuable books, specifically in the hope that he would annotate them, which he did so far as I can see as automatically as a dog pees on a lamppost, simply because it was what he did. And then the people who had loaned him these books would retrieve them from the Gillmans’ house with glee. STC died 1834: the first publication of his marginalia was 1836, and magazines like Blackwood’s continued to print examples of them throughout the century. By far the most expensive-to-produce element in the Princeton/Bollingen Collected Coleridge set are the five volumes of Marginalia, partly because each is 1000 pages long, but also because the marginalia are printed in a different colour font to the text being annotated, and the volumes come with lavish photos of representative STC pages.

The thing that strikes me about this (speaking as someone who’s read a lot of it) is that the marginalia themselves are, probably, per proportion, something like 20:80, interesting/incisive:blather. So what was the appeal? Some must have been the same that inspired generations of autograph hunters … and I wonder if that’s a hobby that has died out in the digital age? But some of it must have been the way owning a book that Coleridge had annotated brought you closer to the idea of Coleridge himself reading a book. STC was a great writer (obviously you’d expect me to say so) but he’s perhaps an even greater reader, and there’s value, as you put it, in the sense that by reading STC’s marginalia you are as it were peering over STC’s shoulder as he reads. 

I think this is a fascinating comment, and, because I have work facing me that I want to put off, I shall now expand on it. 

I’ve just read David Marno’s fine book Death Be Not Proud: The Art of Holy Attention, which is largely about John Donne but also about what people in the seventeenth century thought attention is. Marno points out that philosophers like Descartes and Malebranche talk about attention a good deal, see it as essential to the task of philosophy, but also never define it. They don’t bother to do so, Marno claims, because everyone already understood attention through its religious contexts, its centrality to Christian prayer. Such philosophers thus secularized the act (or faculty) of attention; and as those religious contexts moved from the cultural center to its margins, attention eventually had to be defined, a project still ongoing today. 

Marno further argues — or rather, I guess, implies; I am somewhat overstating his case here — that when Donne’s poetry was rediscovered in the early twentieth century and became greatly celebrated (most famously by T. S. Eliot), the focus on “holy attention” in his sacred poems became a matter of scholarly interest. Critics like I. A. Richards continued the work of secularizing attention by seeing the challenge of attentiveness that Donne describes as a challenge for 20th-century readers also. As Donne strove to attend to Christ on the Cross, so Richards’s students strive to attend to Donne’s poetry. Marno notes that Richards isn’t at all interested in the Christian context of Donne’s meditations, and so, rather than suggesting that his students learn something about either Catholic or Protestant devotional endeavors, he points them towards Confucian practices. 

What I want to suggest here is that Coleridge is a kind of bridge spanning the 17th-century and the 20th-century accounts of attentiveness. He is, for his contemporaries, a kind of icon of holy attention — but the holiness resides not in the objects of his attention (which are typically poetic, historical, and philosophical) but rather in the particular character of his own mental dispositions and practices. Yes, Coleridge had deep theological interests, and those intensified as he grew older, but those who saw him as the ideal reader and wanted to collect the sacred relics of his reading didn’t necessarily share his interests or his beliefs. For them, the holiness was not in the text but in the reader. 

why liberals should read smart conservatives

Liberals should read smart conservatives not because they need to be convinced by conservative arguments — though let’s face it, sometimes they do — but rather because conservatives frame issues differently than liberals do. They describe the conditions of history, and the circumstances of our debates, in a language that’s strange to liberals. And dealing with these alternative framings can be very clarifying indeed. 

An example: in The Age of Entitlement: America Since the Sixties Christopher Caldwell argues that the grief over the assassination of President Kennedy led to more sweeping legislation than JFK himself would have dared to pursue: “A welfare state expanded by Medicare and Medicaid, the vast mobilization of young men to fight the Vietnam War, but, above all, the Civil Rights and Voting Rights acts — these were all memorials to a slain ruler, resolved in haste over a few months in 1964 and 1965 by a people undergoing a delirium of national grief.” And he then claims that this set in motion a dramatic transformation of the American legal and political order — a transformation that we have inherited: 

The changes of the 1960s, with civil rights at their core, were not just a major new element in the Constitution. They were a rival constitution, with which the original one was frequently incompatible — and the incompatibility would worsen as the civil rights regime was built out. Much of what we have called “polarization” or “incivility” in recent years is something more grave — it is the disagreement over which of the two constitutions shall prevail: the de jure constitution of 1788, with all the traditional forms of jurisprudential legitimacy and centuries of American culture behind it; or the de facto constitution of 1964, which lacks this traditional kind of legitimacy but commands the near-unanimous endorsement of judicial elites and civic educators and the passionate allegiance of those who received it as a liberation. The increasing necessity that citizens choose between these two orders, and the poisonous conflict into which it ultimately drove the country, is what this book describes.

Now, it is probably true that only someone who questions the wisdom of “the de facto constitution of 1964” would frame our recent history in this way; but it is certainly true that this framing is powerfully illuminating: it yields insight into both the nature and the intensity of our current political differences. You may not interpret or judge those differences as Caldwell does, but even so, he has presented their causes in ways that ought to earn your assent. 

Another example: Mary Harrington is not just a conservative, she is a self-described reactionary. But some of her recent work is, like that of Caldwell, extremely useful, especially her argument — in, for instance, this essay, which has many links to her earlier work — that what I have called Left Purity Culture (see the LPC tag at the bottom of this post) operates as a kind of de-personalized and even de-humanized swarm. And in certain recent controversies, especially the ones involving Twitter, that swarm is confronted by a version of what she calls Caesarism: 

The Biden administration is fond of talking about “democracy” versus “autocracy”, but it might be more accurate to talk about swarmism and Caesarism. Swarmism is a kind of post-democratic democracy: a mutant form of liberal proceduralism, characterised by collective decision-making in which no one is ever individually accountable. Instead, consequential decisions are as far as possible pushed out to supposedly neutral procedures or even machines. When NGO officials whom you can’t vote out of your political ecosystem talk about “our democracy”, they’re talking about swarmism.

Caesarism, on the other hand, looks substantially the same at lower levels. The main difference is that you get named humans in key decision-making roles — complete with human partiality, eccentricity, and occasional fallibility. Twitter was, until recently, a key vector of elite swarmism. And to swarmists, such rule by a named individual, rather than a collective and some committee-generated “guidelines”, is by definition morally wrong. This core assumption oozes, for example, from this report on the takeover, with its empathetic depiction of the anonymous, collegiate collective of sacked Trust and Safety workers sharply contrasted with the autocratic, erratic individual Elon Musk. 

This, like Caldwell’s framing of American history since the 1960s, is not just interesting but useful. It helps me to think about the structure, as it were, of the debates over Twitter. Now, I might prefer a swarm to a Caesar — and Harrington herself doesn’t see anyone to support here: “I’m not cheerleading for Musk as Caesar. Just because I dislike faceless proceduralism doesn’t mean I have much appetite to see political authority gathered into the mercurial hands of a transhumanist billionaire who wants to implant microchips in human brains.” But whether you take the swarm’s side or Caesar’s side or no side at all, this is a very helpful way of describing the conflict, and is a description that neither a swarmist nor a Caesarist would have been likely to discern. 

hiding your hand

I don’t know who Noah Kulwin is — someone, I don’t remember who, linked to this post, which among other things talks about the assassination of JFK. If this sounds like I’m picking on poor Mr. Kulwin, my apologies; I really don’t mean to. I just want to illustrate a point. 

In the post Kulwin quotes Don DeLillo’s comments on the Zapruder film. Here’s the key part: 

The Zapruder film is a home movie that runs about eighteen seconds and could probably fuel college courses in a dozen subjects from history to physics. And every new generation of technical experts gets to take a crack at the Zapruder film. The film represents all the hopefulness we invest in technology. A new enhancement technique or a new computer analysis — not only of Zapruder but of other key footage and still photographs — will finally tell us precisely what happened. 

For the rest of his post Kulwin speaks of “DeLillo’s faith in technology” and wants to argue with it. But of course — as anyone would know after reading even a smidgen of his fiction — DeLillo doesn’t have any faith in technology. Kulwin has misunderstood the last sentence of the quote above. He thinks DeLillo is making a claim, but in fact the novelist is narrating a perspective.

You can tell this is so by looking at the previous sentence, in which DeLillo speaks in broadly cultural terms of “all the hopefulness we invest in technology.” By “we” he doesn’t mean himself and his interviewer; he means Americans in general. And the sentence that follows — the one that Kulwin misunderstands — is not a statement of his own views, but a kind of expansion of or commentary on that hope. DeLillo is not saying that he believes that a new enhancement technique or a new computer analysis will finally tell us precisely what happened; he’s saying that the hope Americans invest in technology makes us — collectively, as a society — believe that a new enhancement technique or a new computer analysis will finally tell us precisely what happened. 

As I’ve said many times before — e.g. here — I don’t think my students today are any worse than my students from years or even decades ago. But I believe that students today need more explanation of how writers think, how fictional narrative works. They have grown up in a media environment in which, as far as I can see, language is almost exclusively used for three purposes: to praise cultural friends, to condemn or mock cultural enemies, and to declare the Truth. The idea that language might be used to explore a way of seeing the world without judging that way — without issuing a 👍 or a 👎 — is pretty foreign to most of them, especially since most of the literature they’ve been assigned in school is either intrinsically didactic or is taught to them didactically. 

This leads fairly regularly to misreadings of the type that Kulwin commits. Again, I know nothing about Kulwin, so I’m not trying to account for his error — only to note that it’s a very common kind of error these days. We live in a moment too polemical and defensive for undidactic art to flourish; most people, it seems, suspect any artwork that doesn’t declare its principles unambiguously. 

Some years ago Mandy Patinkin described the lunch meeting at which Rob Reiner recruited him to play in The Princess Bride. He recalled that 

He said to me, ‘The way I want everybody to play this is as though you have a hand of cards, and I want all of us to almost show the hand to the audience, but we never really show it. That’s how I want it to happen.’ So, he collected a bunch of people who would play cards that way. 

I think that’s actually a pretty good explanation for why The Princess Bride is such a brilliant movie, but whether it is or not, it’s a great way to describe what the greatest stories always do. You get a peek at the cards maybe, but not enough to be sure about the whole hand. You have to guess; you have to think; you have to ask difficult questions like “How might I behave if I had a great hand? A lousy one?” You have to imagine. All the great artists and writers do. 

ark head

Venkatesh Rao:

One mental model for this condition is what I call ark head, as in Noah’s Ark. We’ve given up on the prospect of actually solving or managing most of the snowballing global problems and crises we’re hurtling towards. Or even meaningfully comprehending the gestalt. We’ve accepted that some large fraction of those problems will go unsolved and unmanaged, and result in a drastic but unevenly distributed reduction in quality of life for most of humanity over the next few decades. We’ve concluded that the rational response is to restrict our concerns to a small subset of local reality — an ark — and compete for a shrinking set of resources with others doing the same. We’re content to find and inhabit just one zone of positivity, large enough for ourselves and some friends. We cross our fingers and hope our little ark is outside the fallout radius of the next unmanaged crisis, whether it is a nuclear attack, aliens landing, a big hurricane, or (here in California), a big wildfire or earthquake. […] 

… it’s gotten significantly harder to care about the state of the world at large. A decade of culture warring and developing a mild-to-medium hatred for at least 2/3 of humanity will do that to you. General misanthropy is not a state conducive to productive thinking about global problems. Why should you care about the state of the world beyond your ark? It’s mostly full of all those other assholes, who are the wrong kind of deranged and insane. At least you and I, in this ark, are the right kind of deranged and insane. It’s worth saving ourselves from the flood, but those other guys can look out for themselves.

I think this is largely true, but I think some other things as well — primarily that any such retreat-to-the-ark is an inevitable response to the inflexible limits of our Dunbar’s Number minds. 

“That’s perhaps the way out — keep trying to tell stories beyond ark-scale until one succeeds in expanding your horizons again.” Nope. We don’t need our horizons expanded, we need our attention narrowed and focused. 

Rules: A short study of what we live by by Lorraine Daston | Book review

All history is, it would seem, the history of regulative struggles. After surveying two thousand years of western civilization, and reconstructing battles between manic regulators and recalcitrant regulatees in fields ranging from monasticism through cookery to astronomy and military tactics, Daston is able to discern a few long-term trends. In the beginning, she finds, rules tended to be “thick”, in the sense of being replete with examples, observations and exceptions; but with the passage of time they have grown thinner and thinner and are now approaching the extreme etiolation of the absolute algorithm. At the same time rules that used to be flexible have become more and more rigid, and the specificity of old-world regulations has been replaced by universality, or rather – as Daston surmises – by the pretence or illusion of it. Behind all of these changes she notices a larger one, in which rules have followed a “rough historical arc” that leads from an ancient world of “high variability, instability and unpredictability” to a modern one in which we all tend to assume, without much justification, that “the future can be reliably extrapolated from the past, standardisation ensures uniformity, and averages can be trusted”.

This is fascinating. I don’t know what to do with it, except read the book, but it’s fascinating. 

stats

How to Lie with Statistics

Just a quick reminder that the use of statistics to mislead is a never-ending thing: The Guardian, in an attempt to cast a skeptical eye on Ron DeSantis, notes that Florida “had the third-highest death toll of any US state.” Now, I am no fan of Ron DeSantis, to say the least, but come on: Florida is the third most populous state, so it would be very surprising if it didn’t have one of the highest death tolls. Plus, it has a very high percentage of elderly residents, and as we all know, the elderly are significantly more endangered by Covid than any other age group.

The relevant statistic here — if you’re interested specifically in deaths — is number of deaths per 100,000 residents, and by that measure Florida is 12th. Nothing to boast about, certainly, but better than Michigan and New Jersey and only slightly worse than Pennsylvania and New York — again, despite having an older population than any of those states. It’s also 21st in percentage of residents vaccinated.
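To make the arithmetic concrete, here is a minimal sketch of the per-capita calculation — the numbers below are deliberately invented placeholders, not real Covid figures — showing how a raw death toll and a deaths-per-100,000 rate can point in opposite directions:

```python
# Purely illustrative, invented numbers — NOT real Covid statistics.
# The point: ranking by raw death toll and ranking by per-capita rate
# can disagree, and the per-capita rate is the relevant comparison.

states = {
    # state name: (total_deaths, population)
    "Big State":   (60_000, 21_000_000),
    "Small State": (12_000,  3_000_000),
}

for name, (deaths, population) in states.items():
    per_100k = deaths / population * 100_000  # deaths per 100,000 residents
    print(f"{name}: {deaths:,} deaths total, {per_100k:.0f} per 100,000")

# Big State has five times the raw toll, but Small State has the higher
# death rate (400 vs. roughly 286 per 100,000).
```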

I’m calling attention to this not because I want to defend DeSantis, but merely to note a reliable journalistic practice: If the relevant statistics don’t tell the story you want to peddle, then choose irrelevant statistics that do. Most readers won’t ask questions.

The actual story of Florida and Covid is extremely interesting, I think, precisely because the evidence doesn’t yield clear answers. Derek Thompson has a good piece on these complexities.

comparisons are odorous

Don’t Fear the Artwork of the Future – The Atlantic:

What is so tiresome about the fear of AI art is that all of this has been said before—about photography. It took decades for photography to be recognized as an art form. Charles Baudelaire famously called photography the “mortal enemy” of art. The Museum of Fine Arts in Boston, which was among the first American institutions to collect photographs, didn’t start doing so until 1924. The anxiety around the camera was nearly identical to our current fear of creative AI: Photography wasn’t art, but it was also going to replace art. It was “mere mechanism,” as one critic put it in 1865. I mean, it’s not art if you’re just pushing some buttons, right? 

This is one of the laziest tropes of pseudo-thinking, but also one of the most common. If you want to try it for yourself, follow these steps:  

  1. Note that people are afraid of something; 
  2. Find something in history that people were unnecessarily afraid of; 
  3. Conclude that if people were wrongly afraid of something in the past, then, logically, people who are afraid in the present must also be wrong. 

Indubitable! (Just make sure you don’t notice any situations in the past in which the people who were afraid were right. Nobody says, “Those who worry about appeasing Putin should remember that in the late 1930s a bunch of nervous Nellies worried about appeasing Hitler too.”)  

But often there’s another element of dumbness to this kind of take: not just the inability to reason sequentially, but the ignoring of inconvenient facts. For while photography didn’t “replace art,” it largely did replace certain kinds of art, and radically changed the cultural place of drawing and painting. 

For my part, I think some of these changes were good and some were not so good. When it became clear that to most people photographs looked more “real,” more precisely representational, than paintings, painters began exploring various alternatives to straightforward representation: first Impressionism, pointillism, and later on completely non-representational painting. (Nowadays “photorealistic painting” is merely a joke or meta-artistic game, as in the works of Chuck Close.) I think these were exciting and vital developments, and I wouldn’t want to see them undone. But, that said, when I think about how Picasso could draw at the age of eleven — 

[image: a drawing by the eleven-year-old Picasso]

— I do find myself wondering how he might have developed as an artist if he had been working in the days before photography. If much was gained when Picasso was liberated from straightforwardly representational art, we can’t know what was lost. But we lost something.

The rise of photography had a broader cultural consequence too. Before photography became commonplace, an ability to draw was almost a requirement for travelers. People needed to be able to make competent sketches of the exotic places they visited, because otherwise how would they be able to remember everything, or properly describe it to others? A world in which Ruskin had simply taken photographs in Venice rather than draw its monuments would be a diminished world. 

So, did photography kill art? By no means. Did it change it radically? It certainly did. And were all those changes positive ones? Nope.  

defeaters

Ukraine Can Win This War – by Liam Collins and John Spencer: Two or three times a day I see an article like this one: a confident prediction that Ukraine is in a winning position that never once considers the possibility that Russia will use nukes. I don’t see how this is anything other than the purest wishful thinking. The Ukrainian politician who says that Russia is like a monkey with a hand grenade is reckoning more seriously with the real conditions of this war. 

Here’s my thesis about our current political discourse: The more controversial the topic, the more likely that writers on it will simply ignore any perspective other than their own. They won’t consider it even to refute it; they’ll just pretend it doesn’t exist.

My secondary thesis is that such people try really hard not to think of alternatives because they know they can’t deal with the objections. A. E. Housman used to say that many textual critics simply ignore possibilities for establishing a reading of a text other than the one they prefer because they’re like a donkey poised equidistant between two bales of hay who thinks that if one of the bales of hay disappears he will cease to be a donkey. 

Epistemologists like to talk about defeaters of a particular proposition: it’s SOP for them, when making an argument, to ask: What eventuality would defeat my proposition? People writing enthusiastically that Ukraine will win this war never ask that question because the answer is both obvious and terrifying: Ukraine won’t win this war if Russia uses nuclear weapons against them. 

two versions of covid skepticism

From a long, intricate, subtle, and necessary essay by Ari Schulman:

The skeptical type I have targeted here is not the one who believes merely that prolonged school closures were a travesty (which is true), that natural immunity should have counted as equivalent to vaccination (true), that an egalitarian view of the virus meant that too little was done to protect people in nursing homes (true), that with different choices, restrictions could have ended far sooner than they did (true again).

No, he was the one who gave himself over wholly to Unmasking the Machine. Starting from entirely reasonable frustrations, the skeptical project took its followers to dark places. The unmasker insisted a million of his countrymen would not die and then when they did felt no reckoning. He at one moment cast himself as Churchill waiting to lead us out of our cowering fear of the Blitz (Death is a part of life) and in the next said that actually the Luftwaffe is a hoax (Those death certificates are fake anyway). He feels no reckoning because he has been taken in by a force as totalizing as the Technium’s; he is so given over to it that he too no longer accepts his own agency.

This skeptic is no aberration. An entire intellectual ecosystem is fueled by his takes. He owns, if not the whole movement of the Right, then certainly its vanguard.

Yet still, still one can hear the reply: Corrupt powers lied and demanded ritual pieties and put their boot on our necks and tore the country apart, and you want a reckoning from us?

An understandable reply — but the answer is, Yeah, we’d like a reckoning from you skeptics, because in a well-functioning society people don’t demand accountability and responsibility only from their political opponents. 

At the heart of Ari’s essay is a simple yet essential distinction between two phenomena: (a) skepticism about the competence and integrity of our technocratic public health regime and (b) skepticism about the seriousness of the coronavirus. Those who were right about the former all too often allowed themselves to be drawn into the latter. And very few of those who were most dismissive of the dangers posed by Covid have admitted their error — they’re too busy taking a “victory lap” because they think — with, as Ari shows, a good deal of justification — that they were right about the self-serving turf-protecting rigidity of the regime. 

(And if you don’t think National Review can be trusted with regard to the profound shortcomings of that regime, then by all means read Katherine Eban’s many illuminating and distressing reports in Vanity Fair.) 

As I look back on my own scattered writings on this topic, I think I often made the opposite error: because I rightly took seriously the dangers of the coronavirus, I was often too trusting of the regime. 

sequence

A: I don’t know, I think we need to get our own house in order before we launch into critiques of our enemies. 

B: There’s no time for that! The situation is too dire. 

A: But what if the situation is dire precisely because we never got our house in order, because we tolerated dishonesty, corruption, and short-term and shabby thinking for so long? 

B: Maybe that’s true, but we can’t worry about that now. Our enemies are taking over and we have to stop them at all costs! 

A: But isn’t that what you were saying years ago, in a situation that you now see as less dire than the current one? 

B: This is a do-or-die moment. You’re just denying reality if you don’t see that. 

A: You said that years ago also. If every moment is one of absolute crisis, then no moment is. I know a guy, a maker, a very successful small businessman, who is running way behind on his work. He’s facing a genuine crisis. And yet he just took six weeks off to completely reorganize his workflow and his workspace. He did so because he knows that (a) he should have done it long ago and (b) his problems won’t go away — unless people stop ordering his products because he doesn’t deliver. In that case his problems will have gone away because his business will have gone away. He’s taking a big hit to his business in the short term because that is the only way for him to be successful over the long haul. That’s precisely how we ought to be thinking. It is never the wrong time to get your house in order. And maybe the greater the crisis the more essential it is to take a good hard look at ourselves before flailing at our opponents.  

B: Stop blaming the victims! 

(P.S. I’m A. You wouldn’t have guessed.) 

Mary Leng:

Indeed, armed with a new toolbox of Latin names for fallacies, eager students all too often delight in spotting fallacies in the wild, shouting out their Latin names (ad hominem!; secundum quid!) as if they were magic spells. This is what Scott Aikin and John Casey, in their delightful book Straw Man Arguments, call the Harry Potter fallacy: the “troublesome practice of invoking fallacy names in place of substantive discussion”. However, they note another, less wholesome reason why some may be interested in fallacy theory. If one’s aim is not so much discovering the truth as winning an argument at all costs, fallacy theory can provide a training in the dark arts of closing down a discussion prematurely, leaving the impression that it has been won.

This, for Aikin and Casey, is the essence of what makes the straw man a fallacy: if we successfully “straw man” our opponent by knocking down a misstated version of their argument, we give the mistaken impression that the issue is closed.

Paradoxically, the straw man works particularly well on people well trained in the norms of good argument (the authors call this the “Owl of Minerva problem”: “we, in making our practices more self-reflective … create new opportunities for second-order pathologies that arise out of our corrective reflection”)…. Observers are generally more likely to be taken in by shoddy reasoning if they are already sympathetic to one side, and straw-manning contributes to the polarization of political debate. In today’s political environment it is not uncommon for partisans intuitively to see themselves as being on the right side of history, with their rivals adding nothing of value to the conversation and deserving of intellectual – or even moral – contempt. The prevalence of this fallacy in democratic political debate is thus a matter of significant concern: as Aikin and Casey write, it is “a threat to a properly functioning system of self-government”.

the two enemies

I have come to believe that almost all of our social pathologies stem from two deeply-ingrained tendencies: 

  1. People care more about belonging to the Inner Ring than about telling the truth. Indeed, in many cases lying for the Ingroup is the best means of demonstrating one’s commitment to it. 
  2. People are presentists not just in the sense I often talk about — attending only to the stimuli the dominant media offer now — but in the sense of doing whatever offers immediate gratification and leaving the future to care for itself.  

Anyone who cares about the flourishing of our social order and of the planet needs to think about how to chip away at these tendencies — and I think “chipping away” is the best we can do. A while back I was on a Zoom call with Jonathan Haidt, and I asked him whether he thought that seeing and acknowledging the truth about one’s condition might be an evolutionarily adaptive trait. He replied that beyond some fairly low-level lizard-brain things — like seeing that that really is a bear charging at you and you had therefore better take evasive action — he thought not. For human beings, the ability to belong is more adaptive than the ability to see what’s true. 

So: these are incredibly powerful forces; none of us will ever be completely free from them, and some of us will be prisoners of them all our lives long; but their hold can be weakened, and I think constantly about how to do that — for myself first, and for others later. 

habits of the American mind

The American Civil War was not that long ago. The last surviving Civil War veteran died two years before my birth. A conflict of that size and scope and horror leaves marks — marks on the land and marks on the national psyche — not readily erased.

I have come to believe that certain habits of mind arising directly from the Civil War still dominate the American consciousness today. I mean not specific beliefs but rather intellectual dispositions; and those dispositions account for the form that many of our conflicts take today. Three such habits are especially important.

  1. Among Southerners – and I am one – the primary habit is a reliance on consoling lies. In the aftermath of the Civil War Southerners told themselves that the Old South was a culture of nobility and dignity; that slaves were largely content with their lot and better off enslaved than free; that the war was not fought for slavery but in the cause of states’ rights; that Robert E. Lee was a noble and gentle man who disliked slavery; and so on. Such statements were repeated for generations by people who knew that they were evasive at best – the states’ right that the Confederacy was created to defend was the right to own human beings as chattel – and often simply false, and if the people making those statements didn’t consciously understand the falsehood, they kept such knowledge at bay through the ceaseless repetition of their mantras. (Ty Seidule’s book Robert E. Lee and Me is an illuminating account, from the inside, of how such deceptions and self-deceptions work.) And now we see precisely the same practice among the most vociferous supporters of Donald Trump: a determined repetition of assertions – especially that the 2020 Presidential election was stolen, but also concerning COVID–19 and many other matters – that wouldn’t stand up even to casual scrutiny, and therefore don’t receive that scrutiny. It’s easy to fall into a new set of lies when you have a history of embracing a previous set of lies. 
  2. Among Northerners, the corresponding habit is a confidence in one’s own moral superiority. Because the North was right and the South wrong about the institution of slavery, it was easy for the North then to dismiss any evidence of its own complicity in racism. Our cause is righteous – that is all we know on earth, and all we need know. (But if our cause is righteous, doesn’t that suggest that we are too?) And then, later, whenever there were political conflicts in which the majority of Northerners were on one side and the majority of Southerners on the other – about taxation, or religious liberty, or anything – the temptation was irresistible to explain the disagreement always by the same cause: the moral rectitude of the one side, the moral corruption of the other. The result (visible on almost every page of the New York Times, for instance) is a pervasive smugness that enrages many observers while remaining completely invisible to those who have fallen into it. 
  3. And among Black Americans, the relevant disposition is a settled suspicion of any declarations of achieved freedom. Emancipation, it turns out, is not achieved by proclamation; nor is it achieved by the purely legal elimination of slavery. Abolition did not end discrimination or violence; indeed, it ushered in a new era of danger for many (the era of lynching) and a new legal system (Jim Crow) that scarcely altered the economic conditions of the recently enslaved. After this happens two or three times you learn to be skeptical, and you teach your children to be skeptical. Brown v. Board of Education produces equal educational opportunity for blacks and whites? We have our doubts. The Civil Rights Act outlaws racial discrimination? We’ll see about that. I wonder how many times Black Americans have heard that racism is over. They don’t, as far as I can tell anyway, believe that things haven’t gotten better; but they believe that improvement has been slow and uneven, and that many injustices that Americans think have died are in fact alive and often enough thriving.   

I think these three persistent habits of mind explain many of the conflicts that beset Americans today. And if I were to rank them in order of justifiability, I would say: the first is tragically unjustifiable – and the chief reason why, to my lasting grief, we Southerners have so often allowed our vices to displace our virtues; the second is understandable but dangerously misleading; and the third … well, the third is pretty damn hard to disagree with.

As the man said: The past is not dead; it is not even past. 

two quotations on culture wars

Ian Leslie:

I have long thought it’s a bit odd quite how much people on the left love to bemoan culture war discourse. They talk about it all the time, despite or perhaps because of the fact the left has made a lot of progress on the cultural battles of recent years and met surprisingly little resistance. But it’s always the other side which makes war, never ‘us’. Meanwhile, to most voters, it’s probably the other way around. The left comes across as more culturally aggressive than the right, the more likely to ‘call out’ incorrect language or behaviour. I don’t think trying to make or police cultural change is necessarily a bad thing, by the way — the left has changed society for the better that way in the past. I just think it’s a bad thing not to be honest about it. […] 

I think we should stop using culture war as an insult. After all, culture is very important to society and worth arguing over. I’ve written a whole book about how conflict can be productive. But for conflict to be healthy it has to happen out in the open rather than under the table or behind closed doors. It shouldn’t disguise itself as something else. If you think ‘decolonisation’, for example, is a meaningful and necessary activity, then recognise it as a contentious political goal, argue for it on that basis, and welcome counter-arguments. Instead, it gets presented as a neutral, merely bureaucratic term, and the pearl-clutching epithet of ‘culture war’ is wheeled out when anyone questions it. All of the actual arguments are thereby avoided. 

 

Yuval Levin:

By allowing the chimeric ethos of the culture war to infiltrate every part of our lives, we have come to mistake the mores of cultural-political combat for all-purpose norms of social interaction. When we ignore them at work or in church, and just do our work without regard for party, we feel like we have made a sordid concession. If our entire common life is one big yes-or-no question, then we must always make sure to answer it correctly.

But that is not what our entire common life consists of, and acknowledging that fact need not mean cordoning off your conscience. There are core moral commitments that must apply to every part of our experience. We must always respect the equal dignity of others, and live by the truth and not by lies. We can never let economic imperatives or team spirit overwhelm our fundamental religious and ethical obligations. But such core matters of conscience leave a lot of room for legitimate differences and circumstantial norms. And they are broad enough to let us apply distinct standards to distinct circumstances. […] 

Such compartmentalization is not an alternative to an integrated moral framework for our lives but an embodiment of such a framework. Properly conceived, it is a grace given to our limited selves from beyond ourselves—a reminder that we are not fully merged with the world and defined by our society’s categories, but have our own dignity and agency, shaped and provoked by distinct invitations and circumstances. And it is a way to moderate our partisan passions and to recognize the multifaceted complexity of other human beings. No one is simply a partisan. Everyone has a more layered array of identities. That’s why we can respect people and engage with them in those domains that are not set out for cultural contention but for cooperation.

Sam Adler-Bell:

Of course, many good ideas, theories of change, and histories of oppression and struggle have been generated on campuses. The wider dissemination of such stories has been a salutary hallmark of our era. I, myself, am a beneficiary of a radical education. But I have had to unlearn many of the ways of speaking I cultivated as a student radical in order to be more convincing and compelling off campus. The obligation to speak to non-radicals, the unconverted, is the obligation of all radicals, and it’s a skill that is not only undervalued but perhaps hindered by a left-wing university education. Learning through participation in collective struggle how the language of socialism, feminism, and racial justice sound, how to speak them legibly to unlike audiences, and how others express their experiences of exploitation, oppression, and exclusion — that is our task. It is quite different from learning to talk about socialism in a community of graduate students and professors.

Reformation in the Church of Science — The New Atlantis:

Fake news is not a perversion of the information society but a logical outgrowth of it, a symptom of the decades-long devolution of the traditional authority for governing knowledge and communicating information. That authority has long been held by a small number of institutions. When that kind of monopoly is no longer possible, truth itself must become contested.

This is treacherous terrain. The urge to insist on the integrity of the old order is widespread: Truth is truth, lies are lies, and established authorities must see to it that nobody blurs the two. But we also know from history that what seemed to be stable regimes of truth may collapse, and be replaced. If that is what is happening now, then the challenge is to manage the transition, not to cling to the old order as it dissolves around us.

The authors don’t attempt to say how this transition should be managed, which is probably wise. Their point, and I fear that it’s quite correct, is that the transition is happening: What counts as scientific truth is now contested in many of the same ways that what counts as religious truth was contested in the Reformation period. 

normie wisdom: 1

First post in a series 


When Hugh Trevor-Roper was a young historian he became friends with the art connoisseur Bernard Berenson. Berenson was fifty years older than Trevor-Roper, and rarely left his home outside Florence, so Trevor-Roper enlivened his octogenarian friend’s dull days with maliciously gossipy letters, especially about his colleagues at Oxford. Here is what he had to say (on 18 January 1951) about C.S. Lewis:

Do you know C.S. Lewis? In case you don’t, let me offer a brief character-sketch. Envisage (if you can) a man who combines the face and figure of a hog-reever or earth-stopper with the mind and thought of a Desert Father of the fifth century, preoccupied with meditations of inelegant theological obscenity; a powerful mind warped by erudite philistinism, blackened by systematic bigotry, and directed by a positive detestation of such profane frivolities as art, literature, and, of course, poetry; a purple-faced bachelor and misogynist, living alone in rooms of inconceivable hideousness, secretly consuming vast quantities of his favorite dish, beefsteak-and-kidney pudding; periodically trembling at the mere apprehension of a feminine footfall; and all the while distilling his morbid and illiberal thoughts into volumes of best-selling prurient religiosity and such reactionary nihilism as is indicated by the gleeful title, The Abolition of Man.

The first thing to say about this is that it’s very funny. The second thing to say is that it makes no pretense to accuracy. I’m sure Trevor-Roper knew perfectly well that The Abolition of Man is not what Lewis hopes for but what he fears, and that he does not detest literature and poetry but rather adores them. Old Hughie’s having his bit of fun.

Still, there’s no doubt that the letter reflects Trevor-Roper’s actual attitude towards Lewis, and I want to zero in on the key phrase: “a powerful mind warped by erudite philistinism.” It’s a double judgment: he detests Lewis as a philistine – but he doesn’t hesitate to credit him with “a powerful mind.” I think that’s very important, not just for understanding how Trevor-Roper thought but for understanding how the intelligentsia, especially within the academy, has been orienting itself to the world for the past hundred years or so.

For Trevor-Roper, the problem with Lewis isn’t that he is stupid. Trevor-Roper is perfectly aware of Lewis’s exceptional intelligence, and if pressed he might even have acknowledged that Lewis was more intelligent than he himself – certainly more profoundly learned. What Trevor-Roper despises is Lewis’s aesthetic and emotional response to the world, his moral taste – in a word, his affections, in the Augustinian and Jonathan-Edwardsian sense. Trevor-Roper was appalled by Lewis because Lewis showed that a person could be prodigiously intelligent and nevertheless in other respects be – well, a normie.

I’m going to use that as a technical term here: a normie is someone whose responses to the world, whose affections, are close to those of the average person. This is not the only way it’s used, of course: in Angela Nagle’s 2017 book Kill All Normies normies are essentially political centrists, people who accept the status quo rather than embracing revolutionary change from the right or the left. But I think a more accurate sense of the word’s connotations is outlined in a post on the Merriam-Webster “Words We’re Watching” blog that wrestles with it: “The term normie has emerged as both a noun and an adjective referring to one whose tastes, lifestyle, habits, and attitude are mainstream and far from the cutting edge, or a person who is otherwise not notable or remarkable” – but then, at the end of the post, there’s an acknowledgment that “the word has lately flattened out and is now occasionally embraced as a term of ironic self-mockery. The emergence of the term normcore, which evokes a fashion style noted for being deliberately bland and unremarkable, might have helped to neutralize normie and bring the word back into the realm of cool — however that adds up.”

A rather hand-wavy conclusion. But in essence: “Normie” began as a term of disparagement but has been claimed by (a) the committed ironists and (b) the very people against whom it was originally deployed – a relatively common event in the history of disparagement, as illustrated by the history of such words as “Methodist” and “Quaker.”

But whether you use the word in a pejorative or a commendatory sense, it’s important to recognize that normieness is a matter of “tastes, lifestyle, habits, and attitude” – or, as I prefer, affections – rather than intelligence. It is hard for people who disparage normies to keep this in mind, and maybe even for the rest of us. To stick with the Inklings for a moment: early in his wonderful book about Tolkien, The Road to Middle-Earth, Tom Shippey makes the offhand comment that “Tolkien’s mind was one of unmatchable subtlety, not without a streak of deliberate guile.” When I first read those words I was somewhat taken aback, because I was not accustomed to thinking of Tolkien’s mind as a subtle one. But the more I reflected on it the more convinced I became that Shippey is correct: Tolkien’s mind is exceptionally subtle, though his tastes and affections are simple – hobbitlike, as he himself often said: he once wrote in a letter, “I am in fact a Hobbit (in all but size).”

My initial reflexive skepticism about Shippey’s claim suggests a deeply-buried sense that the normie is unreflective in comparison to the person who takes a more adversarial attitude towards the conventional; an unfortunate assumption for me to be making, since I am pretty much a normie myself – but perhaps an understandable one, since I am a scholar of modernism, and modernism is essentially, as Paul Fussell pointed out many years ago, adversarial to the norm. So all the more credit to Hugh Trevor-Roper for managing to despise Lewis as a normie while crediting him with a powerful mind. 

But the specific term that Trevor-Roper uses to describe Lewis’s orientation is not “normie” but rather “philistinism.” We’ll get into that in the next post in this series (whenever that may be). 

points that don’t need to be belabored

We know — we all know:

  • That people impose standards on their Outgroup that they never impose on their Ingroup.
  • That politicians do 180° turns on any and every issue, depending on which party controls Congress or the Presidency, or on what direction the Supreme Court seems to be leaning. (A quarter-century ago, when SCOTUS was left-dominated, the right complained about “the judicial usurpation of politics”; now that the composition of the court has changed, it’s the left making precisely the same complaint.) 
  • That people forgive politicians or actors or writers they approve of for sins that they vociferously condemn when committed by their cultural enemies.
  • That people denounce “cancel culture” when someone they approve of is being hounded but call precisely the same behavior “accountability culture” when they’re hounding someone they hate. 
  • That the most inconsistent people on earth will furiously denounce their political enemies for inconsistency. 

We know all that now; we don’t need to keep pointing it out. We also know that pointing it out won’t change anyone’s behavior. So what’s next? What’s the next step, socially and politically, if a common standard for all is no longer on the table? 

Watching Scott Alexander try unsuccessfully to grapple with Lacanian thought reminds me of the summer, many, many years ago, when I made the same effort. I really did give it my best shot, but in the end decided (a) this is probably total bullshit, and (b) even if it’s not total bullshit, there’s nothing here usable by me. I think that second realization was the really important one, and one that I’ve had often over the years: Sometimes it really doesn’t matter whether a body of thought is valid or not; what matters is whether I, being who I am, placed as I am, can make use of it. And there is literally not one idea in Lacan that I can use. Which is all I need to know. 

entanglements

Science gets entangled with politics; it always has and it always will. And every time it happens the reputation of science gets damaged. I am of course not a scientist and cannot speak authoritatively to these matters; but I can at least point to some intellectual problems that need to be addressed. 

Two recent examples: 

  1. Jesse Singal describes a study on gender-transitioning teens that discovered that “the kids in this study who accessed puberty blockers or hormones … had better mental health outcomes at the end of the study than they did at its beginning” — or did they? Apparently not. Singal can’t be certain about this because the researchers have hidden their data, but their claims look pretty darn fishy. Even so, no one in the media is going to ask hard questions, because the study says exactly what they want to hear. 
  2. In the New York Review of Books, a biologist and a historian review a new book on the genetics of intelligence. They don’t like the book because they think that the idea of “a biological hierarchy of intelligence” has been repeatedly “discredited” and is not supported by the arguments of the book under review. But here too a great many complex questions are oversimplified or simply ignored, as Razib Khan points out. 

The study on trans kids and the NYRB review share a core conviction: Scientific studies that could lead to unfavorable social outcomes by providing support for our political enemies must be denounced and their authors shunned. As Stuart Ritchie has commented on the latter contretemps,  

It’s dispiriting to think that our public debate on genetics is going to be forever hobbled by this kind of thing, and from ostensibly authoritative sources. Other fields like evolutionary biology, climate science, and immunology have their creationists, climate change deniers, and antivaxxers, but they’re not usually Stanford professors. In this area, the nuisance call — the nihilist call, encouraging us to cut off our entire scientific nose to spite our behaviour-genetic face — is coming from inside the house.

But again, this is an old story — especially in the NYRB. When E. O. Wilson’s Sociobiology came out in 1975, a group of scientists wrote to the NYRB to denounce it, and were quite explicit about why it had to be denounced: Theories like Wilson’s “consistently tend to provide a genetic justification of the status quo and of existing privileges for certain groups according to class, race or sex.” Wilson’s purpose, they said, was to “justify the present social order.” Wilson replied with a detailed refutation of their claims about him personally — “Every principal assertion made in the letter is either a false statement or a distortion. On the most crucial points raised by the signers I have said the opposite of what is claimed” — but this didn’t slow his opponents’ crusade against his ideas, because even if Wilson didn’t hold lamentable political views, others certainly do, and might well use sociobiological arguments to buttress their regressive policy ideas.

(One of Wilson’s collaborators, Bert Hölldobler, says that Richard Lewontin was quite explicit about this: “When I asked why he so blithely distorted some of Ed’s writings he responded: ‘Bert, you do not understand, it is a political battle in the United States. All means are justified to win this battle.’” That sounds suspiciously pat to me, but it certainly wouldn’t be surprising coming from Lewontin.) 

Thus, in 1978, when the NYRB published a strongly critical review by Stuart Hampshire of a later book by Wilson, the same people who had so strongly denounced Wilson three years earlier wrote to complain about Hampshire’s review. Hang on: they complained about a negative review of Wilson? Yes indeed: 

Hampshire neglected the social and political issues which are at the heart of the sociobiology controversy. Three years ago many of us wrote a letter in response to a review of E.O. Wilson’s earlier book, Sociobiology: The New Synthesis, in which we pointed out the political content of this new field. We expressed concern at the likelihood that pseudo-scientific ideas would be used once more in the public arena to justify social policy. The events of the intervening years have fully justified our initial fears. 

Again, I can’t adjudicate the scientific questions at issue here; I am making a different point, which is this: Wilson’s critics are quite explicit that what really matters to them is not the philosophical naïveté or scientific inaccuracy of Wilson’s work but its anticipated political consequences. So it seems quite likely that even if Wilson’s work had been philosophically sophisticated and scientifically unimpeachable, their critique would not have changed.

Of course, many similar stories can be told — and have been — about Covidtide, during which “trusting The Science” was an absolute imperative: as knowledge grew, what counted as The Science changed, but at any given moment you were compelled to insist that (a) The Science is infallible and (b) The Science supports people like me against our political enemies. 

There are many reasons why millions of Americans don’t trust The Science, including belligerence and ignorance, but if you ask me, I would say that the most important reason is illustrated by the stories above: Scientists are sometimes untrustworthy. If they want to rebuild our trust in them, then they should start with three steps: 

  1. Practice the self-critical introspection that would enable them to perceive that, because they are human beings, there are some things they very much want to believe and some things they very much want to disbelieve; 
  2. Acknowledge those preferences in public;  
  3. Show that they are taking concrete steps to guard themselves against motivated reasoning and confirmation bias. 

If scientists and science writers were to take such steps, that would be helpful for us as readers; and even better for them as thinkers and writers. 

Invasion of the Fact-Checkers – Tablet Magazine:

The pandemic would shine an especially harsh light on the role of fact-checkers as information cops for America’s power elite—and the dangers of that role. Far from identifying “dangerous misinformation,” fact checkers were instrumental in the multipronged effort to suppress inquiries into the origins of the global pandemic that has killed nearly 6 million people. In February 2020, The Washington Post chided Arkansas Sen. Tom Cotton for promoting a “debunked” “conspiracy theory” that COVID-19 had escaped from a lab. In May 2020, the Post‘s Glenn Kessler, who is a member of the IFCN advisory board, said it was “virtually impossible” for the virus to have come from a lab. Those were the facts … until a year later, when Kessler published a new article explaining how the “lab-leak theory suddenly became credible.”

How to understand the epistemological process that could lead a seasoned fact-checker to do a 180 on a matter of utmost public importance in less than a year? The simple answer, which has nothing to do with Kessler’s individual character or talents, is that when it really counts, the fact-checker’s role is not to investigate the truth but to uphold the credibility of official sources and their preferred narratives. Kessler’s mind changed at the very moment when the Democratic Party machinery began charting a new course on an issue that was hurting the party at the polls.

the power of priors

Leah Price:

Remembered today as an escape artist who was one of the first Hollywood stuntmen, Houdini was equally well known during his lifetime for his campaign against spiritualism. Houdini’s repertoire of magic tricks placed him in an ideal position to spot fraud: the ectoplasm often seen to extrude from mediums’ mouths, for example, bore a suspicious resemblance to the act in which he swallowed a needle and thread and showed spectators his empty mouth, only to pull threaded needles out from between his teeth a moment later. Conan Doyle returned the compliment, charging Houdini with surreptitiously drawing on psychic powers. Since there was no natural way for a straitjacketed man to escape from a locked trunk, the only possible explanation was that he possessed an uncanny ability to turn bones into ectoplasm at will.

aspiring to realism

Adam Tooze:

Adopting a realistic approach towards the world does not consist in always reaching for a well-worn toolkit of timeless verities, nor does it consist in affecting a hard-boiled attitude so as to inoculate oneself forever against liberal enthusiasm. Realism, taken seriously, entails a never-ending cognitive and emotional challenge. It involves a minute-by-minute struggle to understand a complex and constantly evolving world, in which we are ourselves immersed, a world that we can, to a degree, influence and change, but which constantly challenges our categories and the definitions of our interests. And in that struggle for realism – the never-ending task of sensibly defining interests and pursuing them as best we can – to resort to war, by any side, should be acknowledged for what it is. It should not be normalised as the logical and obvious reaction to given circumstances, but recognised as a radical and perilous act, fraught with moral consequences. Any thinker or politician too callous or shallow to face that stark reality, should be judged accordingly.

I very much like this idea of Realpolitik not as a position but rather, properly understood, as an aspiration. Often what people call their “realism” is simply their intellectual and moral laziness.

Perhaps related: I’ve always been slightly annoyed by the tagline of the New Criterion — “The New Criterion will always call things by their real names” — because it assumes that knowing the real names of things is easy. A better and more honest, if less resonant, tagline would be “We will always call things by their real names if we can figure out what they are.”

a useful distinction

C Thi Nguyen:

An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission. That omission might be purposeful: we might be selectively avoiding contact with contrary views because, say, they make us uncomfortable. As social scientists tell us, we like to engage in selective exposure, seeking out information that confirms our own worldview. But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests. When we take networks built for social reasons and start using them as our information feeds, we tend to miss out on contrary views and run into exaggerated degrees of agreement.

An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited. Where an epistemic bubble merely omits contrary views, an echo chamber brings its members to actively distrust outsiders. In their book Echo Chamber: Rush Limbaugh and the Conservative Media Establishment (2010), Kathleen Hall Jamieson and Frank Cappella offer a groundbreaking analysis of the phenomenon. For them, an echo chamber is something like a cult. A cult isolates its members by actively alienating them from any outside sources. Those outside are actively labelled as malignant and untrustworthy. A cult member’s trust is narrowed, aimed with laser-like focus on certain insider voices. In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined.

doing your own research

Leo Kim:

In 2016, director Dean Fleischer-Camp — known for his work on the viral hit, Marcel The Shell With Shoes On — released Fraud, a “meta-documentary” composed of a series of home videos. They follow a family with a mounting pile of debts participating in a crime spree to wipe the slate clean. Though the videos that make up the film are real, the story itself is entirely fabricated: Fleischer-Camp made the film after stumbling onto an account holding hundreds of hours of home videos. By methodically going through this vast archive, he was able to recombine these clips into a crime narrative. Suddenly, a clip of a woman spraying her carpet becomes a woman about to burn down a home.

In a Q&A following Fraud’s premiere at Toronto’s Hot Docs festival, Fleischer-Camp was called a “con artist” and “liar” by audience members who were disturbed by his working methods. These viewers took issue not with the film’s plot, but its construction. Unlike Loose Change, which purports to discover the truth in a lie, Fraud explicitly attempts the reverse, showing that one could convincingly tell any story with materials found on a site like YouTube. Ironically, it’s Fraud that, to its critics, defies the open, truth-leaning and “collaborative” ethos that people associate with the platform. 

Fleischer-Camp’s unforgivable sin is showing people who pride themselves on “doing their own research” that they don’t have the first idea what research is or how to do it, and that, as a direct consequence, they are easy marks for crooks who know how to play to their intellectual self-assessment. As Kim says in his concluding paragraph: 

YouTube is a space where the successors of Burroughs, Gysin, and Bachman meet, where author and viewer at least appear to collapse, and the possibilities of the webpage open up. The ability to engage as a viewer has never been so substantive. But this sense of engagement is double-edged: it can allow for new innovations in narrative, but also in persuasion and propaganda. The feeling of engagement is not the same thing as actually engaging. “Do your own research” is less an imperative than a strategy of flattering one’s audience: the implication is that you have done your own research simply by watching the film. Despite controlling the playback, the user is just as passive as ever.
 

Matt Yglesias:

A normal person can tell you lots of factual information about his life, his work, his neighborhood, and his hobbies but very little about the FDA clinical trial process or the moon landing. But do you know who knows a ton about the moon landing? Crazy people who think it’s fake. They don’t have crank opinions because they are misinformed, they have tons and tons of moon-related factual information because they’re cranks. If you can remember the number of the Kennedy administration executive order about reducing troop levels in Vietnam, then you’re probably a crank — that EO plays a big role in Kennedy-related conspiracy theories, so it’s conspiracy theorists who know all the details.

More generally, I think a lot of excessive worry about “misinformation” is driven by the erroneous belief that more factual information would resolve political disputes. Both David Neumark and Arin Dube know far more than you or I do about the empirical literature on minimum wage increases. Nonetheless, they disagree. It is simply a heavily contested question. Relative to Neumark, the typical progressive is wildly misinformed about this subject; relative to Dube, the typical conservative is wildly misinformed. And lots of political disputes have this quality — most people don’t know that much about it, but you can find super-informed people on both sides of the question. That’s why it’s a live debate.

useful thinkers in three kinds

Useful thinkers come in three varieties. The Explainer knows stuff I don’t know and can present it clearly and vividly. This does not require great creativity or originality, though Explainers of the highest order will possess those traits too. The Illuminator is definitionally original: someone who shines a clear strong light on some element of history or human experience that I never knew existed. (Though sometimes after reading something by an Illuminator I will think, Why didn’t I realize that before?) The Provoker is original perhaps to a fault: Ambitious, wide-ranging, risk-taking, Provokers claim to know a lot more than they actually do but can be exceptionally useful in forcing readers to think about new things or think in new ways.

Some 20th-century thinkers who have been vital for me over the years:  

  • Explainers: Charles Taylor, Mary Midgley, Freeman Dyson  
  • Illuminators: Mikhail Bakhtin, Iris Murdoch, Michael Oakeshott 
  • Provokers: Gregory Bateson, Kenneth Burke, Simone Weil 

It’s especially important not to allow the Provokers to convince you that they’re Explainers or Illuminators. This is I think the great error of Girardians: If you take Girard as an Explainer, as they do, then his influence is likely to be disastrous; but if you were to see him as a Provoker, then he could be quite helpful to you. 

Kierkegaard once wrote in his journal, “If Hegel had written the whole of his logic and then said, in the preface or some other place, that it was merely an experiment in thought in which he had even begged the question in many places, then he would certainly have been the greatest thinker who had ever lived. As it is, he is merely comic.” That is, Hegel could have been the greatest of the Provokers, but, alas, he thought he was an Illuminator. (One of the ironies here is that Kierkegaard himself sought largely to be a Provoker in his pseudonymous writings, but that work has consistently been taken as illumination. The works Kierkegaard signed with his own name, the ones in which he genuinely tried to illuminate, have been largely ignored.) Much the same could be said of Rousseau: marvelous and wonderful as a Provoker, but God help the reader who takes his purported illuminations seriously. 

Obviously one shouldn’t be too legalistic here: Some great thinkers might fulfill one role in one book, a different role in another book (Francis Bacon is the first example that comes to mind). And some of the greatest books serve to explain, illuminate, and provoke all at once — though this is quite rare, and probably no book has an equal distribution of the three virtues. (I may say more about this later, in a post on humanistic scholarship.) But I find these categories useful in helping me to know what I can reasonably expect to get out of the books I read. 

Possible topic for another post: Whether novelists and poets can be fit into these categories as well. 

A wise and useful reflection by Gabrielle Bauer on the “precautionary principle”:

Two years into this pandemic, it is high time we learn from our mistakes. We must ask ourselves, at every step, whether our response matches or exceeds the threat. We must have full permission to discuss costs and benefits out loud, without fear of censure. When we invoke the precautionary principle, it must be with discretion and deliberation — with great caution, as it were. 

An intractable problem whose intractability needs to be faced: the taking of extreme precautions in one sphere (e.g. infectious disease) will sometimes require a manifest failure of precaution in other spheres (e.g. mental health). The immediate prospect of physical danger generates intellectual myopia. 
