
Stagger onward rejoicing


colonialist owls

This is a fascinating report: “Very soon, the federal government may authorize the killing of nearly a half-million barred owls in the Pacific Northwest in a desperate bid to save the northern spotted owl.” The argument appended to the report is that this proposal is unwise. 

The key passage, I think, is this: 

Many philosophers, conservation biologists and ecologists are skeptical of the idea that we should restore current environments to so-called historical base lines, as this plan tries to do. In North America, the preferred base line for conservation is usually just before the arrival of Europeans. (In Western forests, this is often pegged to 1850, when significant logging began.) But life has existed on Earth for 3.7 billion years. Any point we choose as the “correct” base line will either be arbitrary or in need of a strong defense. 

The authors don’t say this explicitly, but it seems clear that the federal campaign against the barred owl depends on a reading of human political history. The movement of the barred owl westward is analogized to the movement of Europeans into the North American continent and across it.

Without that history in mind, the increasing dominance of the barred owl over the spotted owl would be just One of Those Things that happens in nature. But by using human political history to interpret such events, the government teaches itself to see barred owls as “invasive” — like they’re on the Oregon Trail or something.

It’s silly, but it’s also one of the subtler forms that the politicization of science takes. 

the integrity of science

I haven’t forgotten about middlebrow matters, but right now my mind is on something else. Something related, though. 

Readers of Gaudy Night (1935) will recall — stop reading if you haven’t read Gaudy Night and don’t want any spoilers — that the plot hinges on an event that occurred some years before the book’s present-day: a (male) historian fudged some evidence and a (female) historian caught him at it and reported the malfeasance, which led to his losing his job. Late in the book, but before the full relevance of this event to the plot has been revealed, there’s a conversation about scholarly integrity, which I will now drop into the middle of: 

“So long,” said Wimsey, “as it doesn’t falsify the facts. But it might be a different kind of thing. To take a concrete instance — somebody wrote a novel called The Search — ”

“C. P. Snow,” said Miss Burrows. “It’s funny you should mention that. It was the book that the — ”

“I know,” said Peter. “That’s possibly why it was in my mind.” 

A person has been vandalizing Shrewsbury College and a copy of that novel, with certain pages torn out, has been found. The novel, by the way, appeared in 1934, around the time that Sayers began writing Gaudy Night. It would be interesting to know whether it was the direct inspiration for her story, or whether she read it after some elements were already in place. I hope to find out more about that.

And by the way, I am going to be spoiling that novel far more thoroughly than I will spoil Gaudy Night — but it’s not one that many people read, these days. 

 

“I never read the book,” said the Warden.

“Oh, I did,” said the Dean. “It’s about a man who starts out to be a scientist and gets on very well till, just as he’s going to be appointed to an important executive post, he finds he’s made a careless error in a scientific paper. He didn’t check his assistant’s results, or something. Somebody finds out, and he doesn’t get the job. So he decides he doesn’t really care about science after all.”  

“Obviously not,” said Miss Edwards. “He only cared about the post.”

Neither the Dean, who has read the book, nor Miss Edwards, who hasn’t, is quite accurate. The scholar, whose name is Arthur Miles, probably would have gotten the post even without the paper; but it’s perfectly possible that he rushed the paper, failed to be appropriately self-critical, because he knew that the vote for the Director of a new scientific institute would be coming soon. Miles doesn’t know; he can’t be sure; maybe he would’ve made the mistake anyway. But in any case, as soon as he is told that there’s a problem with his paper, he runs the numbers again, sees the error, and immediately admits that he was wrong. 


Let me pause for two digressions: 

  1. Sayers specifies what pages were torn from the book — but I don’t have access to the edition that Sayers read, which I assume was the first hardcover edition, so I can’t say exactly what was excised; I suspect it was the part where Miles admits his mistake. (The whole business is a flaw in Sayers’s plot, because it’s impossible to imagine the Responsible Party having read Snow’s book and known which pages to tear out; but DLS clearly was determined to get a discussion of The Search into her own novel, so she found a way.)   
  2. As it happens, this is Snow’s most autobiographical novel: what happened to Miles also happened to him. He began his career as a chemist, and wrote a paper (published in Nature) which was then discovered to contain an embarrassing mistake — upon which he abandoned his work as a scientist and became a novelist and bureaucrat.    

Now, back to Gaudy Night: 

“The point about it,” said Wimsey, “is what an elderly scientist says to him. He tells him: ‘The only ethical principle which has made science possible is that the truth shall be told all the time. If we do not penalize false statements made in error, we open up the way for false statements by intention. And a false statement of fact, made deliberately, is the most serious crime a scientist can commit.’ Words to that effect. I may not be quoting quite correctly.”

Wimsey’s summary is a good one. This is indeed what the “elderly scientist,” a man named Hulme, says to him. And Miles does not disagree. What’s more on his mind, though, is the picture of his future laid out for him by another senior scientist: 

“You’ve got to work absolutely steadily, without another suspicion of a mistake. You’ve got to let yourself be patronised and regretted over. You’ve got to get out of the limelight. Then in three or four years, you’ll be back where you were; though it will be held up against you, one way and another, for longer than that. It will delay your getting into the Royal [Society], of course. That can’t be helped. You’ll have a lean time for a while; but you’re young enough to get over it.” 

Faced with this prospect, Miles realizes that he could only manage all this (“Watching the dullards gloat. Working under Tremlin. Having every day a reminder of the old dreams”) if he had a genuine devotion to science. But: “It occurred to me I had no devotion to science.”

N.B.: the point is not that the event has taken away his devotion to science, but rather, “I am not devoted to science, I thought. And I have not been for years, and I have kept it from myself till now.” The revelation of his error leads to a revelation of what had been true about him all along: “There were so many signs going back so far, if I had let myself see, if it had been convenient to see.” Indeed, it now becomes clear to him that he desired to become the director of a scientific institute — an administrative position, not one that would involve him directly in research — precisely because on some unconscious level he didn’t want to be a scientist any more: “I had thrown myself into human beings — to escape the chill when my scientific devotion ended.” 

It should be clear, then, that “he decides he doesn’t really care about science after all” is not an adequate explanation of what happens. 

But there’s also a twist in the tail of this story, which in Gaudy Night Sayers calls attention to: 

“In the same novel,” said the Dean, “somebody deliberately falsifies a result — later on, I mean — in order to get a job. And the man who made the original mistake finds it out. But he says nothing, because the other man is very badly off and has a wife and family to keep.”

“These wives and families!” said Peter.

“Does the author approve?” inquired the Warden.

“Well,” said the Dean, “the book ends there, so I suppose he does.” 

Or does he? And is that an accurate description of the case? Several facts here are relevant:

  • The man who has falsified the data, Sheriff, is one of Miles’s oldest friends.  
  • Miles got Sheriff his current job and has been guiding his research, trying to keep him on the straight and narrow — he’s a feckless fellow, and a habitual liar, but Miles had hoped that he was ready to reform.   
  • Sheriff had promised Miles, and also his own wife, that he was working on a safe project when he was in fact working on a high-risk, high-reward one — one he thought likely to lead to a prestigious position that, now that the paper has been published, he is indeed about to be offered.   
  • Miles has a sense of responsibility for Sheriff because he had hoped to hire him for a position at the aforementioned Institute, but gave up on the idea when he realized that his own position was compromised. He thinks perhaps he should have pushed harder for Sheriff anyway. 
  • Early in his career Miles had had the opportunity to consciously fudge data himself, and seriously considered it — he thought that he might eventually be found out, but only after achieving a brilliant career from which summit he could just say “Whoops, I made a mistake” — but instead abandoned the research project. He thought, though, that in the future he would have compassion for any scientist who succumbed to a similar temptation.  
  • And most important of all, Sheriff is married to Audrey, Miles’s former lover, for whom, though he himself is now happily married, he cherishes a strong and lasting tendresse — despite the fact that Sheriff basically stole her affections while Miles was abroad.  

The Search is not a great novel, but this is perhaps its best element: the faithful portrayal of Miles’s complex and ever-shifting and deeply human responses to Sheriff’s lying. (It reminds me a bit of the greatest scene of this kind I know, the moment in Middlemarch when Lydgate has to decide how to vote for the chaplaincy of a new hospital. I wrote about that thirty years ago [!!] near the end of this essay.)

On the one hand, he knows exactly what Sheriff did and why:  

I had no doubts at all. It was a deliberate mistake. He had committed the major scientific crime (I could still hear Hulme’s voice trickling gently, firmly on).

Sheriff had given some false facts, suppressed some true ones. When I realised it, I was not particularly surprised. I could imagine his quick, ingenious, harassed mind thinking it over. For various reasons, he had chosen this problem; it would not take so much work, it would be more exciting, it might secure his niche straight away. … But I must not know, half because he was a little ashamed, half because I might interfere. So [his research assistant] and Audrey must, for safety’s sake, also be deceived.

All this he would do quite cheerfully. The problem began well. … Then he came to that stage where every result seemed to contradict the last, where there was no clear road ahead, where there seemed no road ahead at all. There he must have hesitated. On the one hand he had lost months, there would be no position for years, he would have to come to me and confess; on the other his mind flitted round the chance of a fraud.

There was a risk, but he might secure all the success still. I scarcely think the ethics of scientific deceit troubled him; but the risk must have done. For if he were found out, he was ruined. He might keep on as a minor lecturer, but there would be nothing ahead. 

Miles does not excuse Sheriff at any point; he knows that the man’s dishonesty is habitual, perhaps pathological. But he also knows that Sheriff and Audrey have reached a certain accommodation in their marriage, that Audrey understands who her husband is but loves him and needs him anyway. Miles writes a letter that would expose and ruin Sheriff, and then, realizing that it would also ruin Audrey, … 

I shall not send the letter, I was thinking. Let him win his gamble. Let him cheat his way to the respectable success he wants. He will delight in it, and become a figure in the scientific world; and give broadcast talks and views on immortality; all of which he will love. And Audrey will be there, amused but rather proud. Oh, let him have it.

For me, if I do not send the letter, what then? There was only one answer; I was breaking irrevocably from science. This was the end, for me. Ever since I left professionally, I had been keeping a retreat open in my mind; supervising Sheriff had meant to myself that I could go back at any time. If I did not write I should be depriving myself of the loophole. I should have proved, once for all, how little science mattered to me.

There were no ways between. I could have held my hand until he was elected, and then threatened that either he must correct the mistake, or I would; but that was a compromise in action and not in mind. No, he should have his triumph to the full. Audrey should not know, she had seen so many disillusions, I would spare her this.

The human wins out over the scientific. Maybe, Arthur thinks, it always does. But Gaudy Night shows that sometimes the scientific — in the sense of a strict commitment to the sacredness of honest research — can have its own victories. And Gaudy Night also suggests that the choices might not be as stark as Snow’s story suggests. More on that in another post. 

I’m with “the bloggers”

Noam Scheiber’s report on the controversies surrounding the work of Francesca Gino is … well, it’s terrible. Let me count (some of) the ways. 

Let’s start with the title: “The Harvard Professor and the Bloggers.” Now, journalists typically don’t title their own pieces, but throughout the report Scheiber refers to the people who run Data Colada as “the bloggers.” The point seems to be to contrast a Figure of Recognized Authority (“the Harvard professor”) with her online critics (“the bloggers”) — a tactic reminiscent of the days when journalists sneered at people who sit around in their pajamas typing on their laptops. But these critics are also professors, at ESADE Business School in Barcelona, the Wharton School at Penn, and the Haas School of Business at UC-Berkeley. It’s only late in the report, after an extensive and fawning portrait of the suffering Professor Gino, that Scheiber acknowledges the academic credentials of those who have called attention to apparent anomalies in Gino’s research. But he still calls them “the bloggers.” 

Second: Scheiber writes, “Even the bloggers, who published a four-part series laying out their case in June and a follow-up this month, have acknowledged that there is no smoking gun proving it was Dr. Gino herself who falsified data.” What does “even the bloggers” mean? There’s nothing unusual or noteworthy about “the bloggers” not directly accusing Gino of dishonesty, because that’s not what they do. They point to apparent anomalies — often, inconsistencies between (a) the conclusions drawn by scholars and (b) the data they claim to be drawing on — in research papers; it is not their job to figure out how the anomalies got there. They aren’t looking for a “smoking gun” in the hands of Professor Gino. 

In general, Scheiber seems to have seen it as his job to take up Gino’s sense of outrage. He says very strange things, like “She did not present as a fraud.” Well, of course. One cannot succeed in deceiving people if one presents as a fraud. The statement is an irrelevance. Similarly, Scheiber says that Gino often provided “a plausible answer” when he questioned her. But what his questions were, what her answers were, why he found them plausible, and how all that relates to the evidence provided at Data Colada — we’re not told any of that. 

Finally: Scheiber seems not to have asked what, to me, would be the single most obvious question: Why is she suing “the bloggers”? Apparently the cause is “defamation,” but how does she think they have defamed her simply by pointing to anomalies in her published research papers? The closest Scheiber comes to approaching the issue is in this passage: 

… the bloggers publicly revealed their evidence: In the sign-at-the-top paper, a digital record in an Excel file posted by Dr. Gino indicated that data points were moved from one row to another in a way that reinforced the study’s result.

Dr. Gino now saw the blog in more sinister terms. She has cited examples of how Excel’s digital record is not a reliable guide to how data may have been moved.

“What I’ve learned is that it’s super risky to jump to conclusions without the complete evidence,” she told me. 

Nothing about this makes sense. First of all, what is “sinister” about noting a manipulation of data in an Excel sheet? If that’s wrong, what’s wrong about it? What “conclusions” did the Data Colada investigation “jump to”? And above all, even if all of her criticisms are correct, why not offer a rational refutation rather than file a lawsuit? Suing her employer, Harvard, makes obvious sense, since Harvard has suspended her from her job without pay and is seeking to revoke her tenure. Faced with similar circumstances I might also sue. But suing people for writing that the data meant to support certain conclusions seems to have been manipulated by person or persons unknown? That requires some explanation. 

Scheiber doesn’t ask any of these questions. He’s not interested in anything except a profile of a wounded person. But I agree with the lawyer for “the bloggers” who says that such a lawsuit is “a direct attack on academic inquiry.” What Gino is doing certainly looks like a straightforward attempt to intimidate into silence anyone who might ask hard questions about her research. I came away from Scheiber’s pseudo-inquiry thinking that I need to contribute to Data Colada’s legal defense fund. I don’t believe that’s what Scheiber intended. 

Space debris expert: Orbits will be lost—and people will die—later this decade | Ars Technica:

Ars: Given what has happened over the last few years and what is expected to come, do you think the activity we’re seeing in low-Earth orbit is sustainable?

Moriba Jah: My opinion is that the answer is no, it’s not sustainable. Many people don’t like this whole “tragedy of the commons” thing, but that’s exactly what I think we’re on a present course for. Near-Earth orbital space is finite. We should be treating it like a finite resource. We should be managing it holistically across countries, with coordination and planning and these sorts of things. But we don’t do that. I think it’s analogous to the early days of air traffic and even maritime and that sort of stuff. It’s like when you have a couple of boats that are coming into a place, it’s not a big deal. But when you have increased traffic, then that needs to get coordinated because everybody’s making decisions in the absence of knowing the decisions that others are making in that finite resource.

Ars: Is it possible to manage all of this traffic in low-Earth orbit?

Jah: Right now there is no coordination planning. Each country has plans in the absence of accounting for the other country’s plans. That’s part of the problem. So it doesn’t make sense. Like, if “Amberland” was the only country doing stuff in space, then maybe it’s fine. But that’s not the case. So you have more and more countries saying, “Hey, I have free and unhindered use of outer space. Nothing legally has me reporting to anybody because I’m a sovereign nation and I get to do whatever I want.” I mean, I think that’s stupid. 

It is stupid, but a familiar kind of stupid. I must have seen a dozen essays arguing that if you can find any examples of people collaborating with regard to shared goods then the tragedy of the commons argument is wrong. Which is also stupid! If we can sometimes resist the temptations to abuse any given commons, that’s not an argument that such abuse is unlikely to happen. Of course the abuse of common goods isn’t inevitable; but it is distressingly common and we should always be on the lookout for it. In space we aren’t paying sufficient attention. 

two quotations on memory holes, present and future

Peering down the Memory Hole: Censorship, Digitization, and the Fragility of Our Knowledge Base | The American Historical Review:

Abstract

Technological and economic forces are radically restructuring our ecosystem of knowledge, and opening our information space increasingly to forms of digital disruption and manipulation that are scalable, difficult to detect, and corrosive of the trust upon which vigorous scholarship and liberal democratic practice depend. Using an illustrative case from China, this article shows how a determined actor can exploit those vulnerabilities to tamper dynamically with the historical record. Briefly, Chinese knowledge platforms comparable to JSTOR are stealthily redacting their holdings, and globalizing historical narratives that have been sanitized to serve present political purposes. Using qualitative and computational methods, this article documents a sample of that censorship, reverse-engineers the logic behind it, and analyzes its discursive impact. Finally, the article demonstrates that machine learning models can now accurately reproduce the choices made by human censors, and warns that we are on the cusp of a new, algorithmic paradigm of information control and censorship that poses an existential threat to the foundations of all empirically grounded disciplines. At a time of ascendant illiberalism around the world, robust, collective safeguards are urgently required to defend the integrity of our source base, and the knowledge we derive from it. 

Science must respect the dignity and rights of all humans — Nature

Advancing knowledge and understanding is a public good and, as such, a key benefit of research, even when the research in question does not have an obvious, immediate, or direct application. Although the pursuit of knowledge is a fundamental public good, considerations of harm can occasionally supersede the goal of seeking or sharing new knowledge, and a decision not to undertake or not to publish a project may be warranted.

Consideration of risks and benefits (above and beyond any institutional ethics review) underlies the editorial process of all forms of scholarly communication in our publications. Editors consider harms that might result from the publication of a piece of scholarly communication, may seek external guidance on such potential risks of harm as part of the editorial process, and in cases of substantial risk of harm that outweighs any potential benefits, may decline publication (or correct, retract, remove or otherwise amend already published content).

(N.B.: Nature’s policy does not address misinformation: the journal does not propose to be vigilant against falsehood, but rather to be vigilant against actual knowledge that risks harm to … well, to whatever groups Nature prefers to see unharmed.) 

incentives

Consider this an addendum to my recent post on an influential study of Alzheimer’s that looks to have featured manipulated data. Retraction Watch has been in business for quite some time now, and is likely to get busier because of the extra opportunities for dishonesty available through machine learning. This situation will continue to get worse until science — and academia more generally — begins to get serious about correcting its perverse incentives. Every scientist knows that certain kinds of results get (a) attention and (b) citations, resulting in (c) prestige for the researchers’ institutions and (d) promotions and raises and maybe better jobs elsewhere for the researchers. 

Again, this is a problem for all of academia: as I have written elsewhere, “the academic enterprise is not a Weberian ‘iron cage,’ it’s a cage made from a bundle of thin sticks of perverse incentives held together with a putty of bullshit.” But when the bullshit takes over the sciences, especially the health sciences, people die. The incentive structure has to change. 

Critical research on the causes of Alzheimer’s may have been falsified — and as a result, researchers may have concentrated too much on a hypothesis that was not as well-supported as they thought. Further investigation will tell whether that’s true, but in any event the story is a useful reminder that we shouldn’t assume that the fudging or outright falsification of scientific data is a victimless crime. In some cases — in this case, if falsified data inappropriately changed the direction of Alzheimer’s research — there could be very many victims indeed before the course of research gets corrected. 

UPDATE: David Robert Grimes: “Science may be self-correcting, but only in the long term.” 

Brain Sciences | Is Reduced Visual Processing the Price of Language?:

Abstract

We suggest a later timeline for full language capabilities in Homo sapiens, placing the emergence of language over 200,000 years after the emergence of our species. The late Paleolithic period saw several significant changes. Homo sapiens became more gracile and gradually lost significant brain volumes. Detailed realistic cave paintings disappeared completely, and iconic/symbolic ones appeared at other sites. This may indicate a shift in perceptual abilities, away from an accurate perception of the present…. Studies show that artistic abilities may improve when language-related brain areas are damaged or temporarily knocked out. Language relies on many pre-existing non-linguistic functions. We suggest that an overwhelming flow of perceptual information, vision, in particular, was an obstacle to language, as is sometimes implied in autism with relative language impairment. We systematically review the recent research literature investigating the relationship between language and perception. We see homologues of language-relevant brain functions predating language. Recent findings show brain lateralization for communicative gestures in other primates without language, supporting the idea that a language-ready brain may be overwhelmed by raw perception, thus blocking overt language from evolving. We find support in converging evidence for a change in neural organization away from raw perception, thus pushing the emergence of language closer in time. A recent origin of language makes it possible to investigate the genetic origins of language.

Reformation in the Church of Science — The New Atlantis:

Fake news is not a perversion of the information society but a logical outgrowth of it, a symptom of the decades-long devolution of the traditional authority for governing knowledge and communicating information. That authority has long been held by a small number of institutions. When that kind of monopoly is no longer possible, truth itself must become contested.

This is treacherous terrain. The urge to insist on the integrity of the old order is widespread: Truth is truth, lies are lies, and established authorities must see to it that nobody blurs the two. But we also know from history that what seemed to be stable regimes of truth may collapse, and be replaced. If that is what is happening now, then the challenge is to manage the transition, not to cling to the old order as it dissolves around us.

The authors don’t attempt to say how this transition should be managed, which is probably wise. Their point, and I fear that it’s quite correct, is that the transition is happening: What counts as scientific truth is now contested in many of the same ways that what counts as religious truth was contested in the Reformation period. 

“Another Green World,” by Jessica Camille Aguirre:

NASA has also dabbled in space agriculture. In the late Nineties, it conducted experiments at the Johnson Space Center in Houston called the Early Human Testing Initiative, enclosing volunteers in sealed chambers for up to three months at a time. In one experiment, the oxygen for a single crew member was supplied by 22,000 wheat plants. A more ambitious project to enclose four people, named BIO-plex, was planned for the early Aughts, but was ultimately shelved because of budget concerns. Still, NASA researchers have continued work on space agriculture, albeit on a more modest scale. A few years ago, astronauts succeeded in growing lettuce aboard the International Space Station in a miniature garden called Veggie.

Most recently, the China National Space Administration has collaborated with Beihang University to build Yuegong-1, or Lunar Palace 1, a sealed structure with small apartments and two growing chambers for plants. Beginning in 2018, eight student volunteers lived in the capsule, rotating in groups of four for over a year. Their diet consisted of crops they grew, including strawberries, along with packets of mealworms fed with biological waste. Like the ESA’s loop, carbon dioxide was cycled through plants, which were enriched with nitrogen from processed urine. Yet even Lunar Palace 1 fell short of being a truly closed system. While it managed to recycle 100 percent of its water and oxygen, it managed to do so for only 80 percent of its food supply.

A fascinating story about biospheres and other strategies for living in places other than the Earth.

Alexander Stern on “The Technocrat’s Dilemma”:

The technocratic response to misinformation and conspiracy theory only exacerbates the problem and further validates the most extreme reactions. Instead of responding with humility and transparency, technocrats and their media partners attempt to reassert epistemic control. They refuse to admit mistakes, they appeal to authority and credentials instead of evidence, and they attempt to shut down dissenting voices instead of taking up their challenges. They lump legitimate critiques together with the most outrageous disinformation, with the implicit message that more deference is needed, rather than more debate. As a result, their crusade for truth begins to look more and more like censorship and scapegoating from an establishment doing everything in its power to deflect responsibility for the cascading crises.

When technocrats construe misinformation as a problem of “information literacy” that must be solved by experts, they don’t just misdiagnose the ailment; they express a worldview that generates much of our information dysfunction to begin with. It is a view of misinformation that excuses cases where elites themselves have misinformed or lied. It papers over the ways technocrats have earned mistrust. It ignores the obvious problems with conceiving of truth as the remit of a special class. And it considers the public’s suspicion of technocrats not as an occasion for self-reflection, but only as another public policy problem to “solve.” 

Cf. my recent post on related issues. 

entanglements

Science gets entangled with politics; it always has and it always will. And every time it happens the reputation of science gets damaged. I am of course not a scientist and cannot speak authoritatively to these matters; but I can at least point to some intellectual problems that need to be addressed. 

Two recent examples: 

  1. Jesse Singal describes a study on gender-transitioning teens that discovered that “the kids in this study who accessed puberty blockers or hormones … had better mental health outcomes at the end of the study than they did at its beginning” — or did they? Apparently not. Singal can’t be certain about this because the researchers have hidden their data, but their claims look pretty darn fishy. Even so, no one in the media is going to ask hard questions, because the study says exactly what they want to hear. 
  2. In the New York Review of Books, a biologist and a historian review a new book on the genetics of intelligence. They don’t like the book because they think that the idea of “a biological hierarchy of intelligence” has been repeatedly “discredited” and is not supported by the arguments of the book under review. But here too a great many complex questions are oversimplified or simply ignored, as Razib Khan points out. 

The study on trans kids and the NYRB review share a core conviction: Scientific studies that could lead to unfavorable social outcomes by providing support for our political enemies must be denounced and their authors shunned. As Stuart Ritchie has commented on the latter contretemps,  

It’s dispiriting to think that our public debate on genetics is going to be forever hobbled by this kind of thing, and from ostensibly authoritative sources. Other fields like evolutionary biology, climate science, and immunology have their creationists, climate change deniers, and antivaxxers, but they’re not usually Stanford professors. In this area, the nuisance call — the nihilist call, encouraging us to cut off our entire scientific nose to spite our behaviour-genetic face — is coming from inside the house.

But again, this is an old story — especially in the NYRB. When E. O. Wilson’s Sociobiology came out in 1975, a group of scientists wrote to the NYRB to denounce it, and were quite explicit about why it had to be denounced: Theories like Wilson’s “consistently tend to provide a genetic justification of the status quo and of existing privileges for certain groups according to class, race or sex.” Wilson’s purpose, they said, was to “justify the present social order.” Wilson replied with a detailed refutation of their claims about him personally — “Every principal assertion made in the letter is either a false statement or a distortion. On the most crucial points raised by the signers I have said the opposite of what is claimed” — but this didn’t slow his opponents’ crusade against his ideas, because even if Wilson didn’t hold lamentable political views, others certainly do, and might well use sociobiological arguments to buttress their regressive policy ideas.

(One of Wilson’s collaborators, Bert Hölldobler, says that Richard Lewontin was quite explicit about this: “When I asked why he so blithely distorted some of Ed’s writings he responded: ‘Bert, you do not understand, it is a political battle in the United States. All means are justified to win this battle.’” That sounds suspiciously pat to me, but it certainly wouldn’t be surprising coming from Lewontin.) 

Thus, in 1979, when the NYRB published a strongly critical review by Stuart Hampshire of a later book by Wilson, the same people who had so strongly denounced Wilson four years earlier wrote to complain about Hampshire’s review. Hang on: they complained about a negative review of Wilson? Yes indeed: 

Hampshire neglected the social and political issues which are at the heart of the sociobiology controversy. Three years ago many of us wrote a letter in response to a review of E.O. Wilson’s earlier book, Sociobiology: The New Synthesis, in which we pointed out the political content of this new field. We expressed concern at the likelihood that pseudo-scientific ideas would be used once more in the public arena to justify social policy. The events of the intervening years have fully justified our initial fears. 

Again, I can’t adjudicate the scientific questions at issue here; I am making a different point, which is this: Wilson’s critics are quite explicit that what really matters to them is not the philosophical naïveté or scientific inaccuracy of Wilson’s work but its anticipated political consequences. So it seems quite likely that if Wilson’s work had been philosophically sophisticated and scientifically unimpeachable, their critique would not have changed. 

Of course, many similar stories can be told — and have been — about Covidtide, during which “trusting The Science” was an absolute imperative: as knowledge grew what counted as The Science changed, but at any given moment you were compelled to insist that (a) The Science is infallible and (b) The Science supports people like me against our political enemies. 

There are many reasons why millions of Americans don’t trust The Science, including belligerence and ignorance, but if you ask me, I would say that the most important reason is illustrated by the stories above: Scientists are sometimes untrustworthy. If they want to rebuild our trust in them, then they should start with three steps: 

  1. Practice the self-critical introspection that would enable them to perceive that, because they are human beings, there are some things they very much want to believe and some things they very much want to disbelieve; 
  2. Acknowledge those preferences in public;  
  3. Show that they are taking concrete steps to guard themselves against motivated reasoning and confirmation bias. 

If scientists and science writers were to take such steps, that would be helpful for us as readers; and even better for them as thinkers and writers. 

corruption

From a brilliant essay by Matt Crawford:

One of the most striking features of the present, for anyone alert to politics, is that we are increasingly governed through the device of panics that give every appearance of being contrived to generate acquiescence in a public that has grown skeptical of institutions built on claims of expertise. And this is happening across many domains. Policy challenges from outsiders presented through fact and argument, offering some picture of what is going on in the world that is rival to the prevailing one, are not answered in kind, but are met rather with denunciation. In this way, epistemic threats to institutional authority are resolved into moral conflicts between good people and bad people.

John Tooby:

To earn membership in a group you must send signals that clearly indicate that you differentially support it, compared to rival groups. Hence, optimal weighting of beliefs and communications in the individual mind will make it feel good to think and express content conforming to and flattering to one’s group’s shared beliefs and to attack and misrepresent rival groups. The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. […] 

This raises a problem for scientists: Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when — as is generally the case — new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member — at risk of losing job offers, one’s friends, and one’s cherished group identity. This freezes belief revision.  

Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member. Once scientific propositions are moralized, the scientific process is wounded, often fatally.  No one is behaving either ethically or scientifically who does not make the best case possible for rival theories with which one disagrees.

Elon’s plan

Charlie Stross:

Musk owns Tesla Energy. And I think he’s going to turn a profit on Starship by using it to launch Space based solar power satellites. By my back of the envelope calculation, a Starship can put roughly 5-10MW of space-rated photovoltaic cells into orbit in one shot. ROSA—Roll Out Solar Arrays now installed on the ISS are ridiculously light by historic standards, and flexible: they can be rolled up for launch, then unrolled on orbit. Current ROSA panels have a mass of 325kg and three pairs provide 120kW of power to the ISS: 2 tonnes for 120kW suggests that a 100 tonne Starship payload could produce 6MW using current generation panels, and I suspect a lot of that weight is structural overhead. The PV material used in ROSA reportedly weighs a mere 50 grams per square metre, comparable to lightweight laser printer paper, so a payload of pure PV material could have an area of up to 2 million square metres. At 100 watts of usable sunlight per square metre at Earth’s orbit, that translates to 200MW. So Starship is definitely getting into the payload ball-park we’d need to make orbital SBSP stations practical. 1970s proposals foundered on the costs of the Space Shuttle, which was billed as offering $300/lb launch costs (a sad and pathetic joke), but Musk is selling Starship as a $2M/launch system, which works out at $20/kg.

So: disruptive launch system meets disruptive power technology, and if Tesla Energy isn’t currently brainstorming how to build lightweight space-rated PV sheeting in gigawatt-up quantities I’ll eat my hat.
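Stross’s back-of-envelope arithmetic is easy to redo. Here is a quick check of my own, using only the figures given in the passage above (the ROSA mass-to-power ratio, the 50 g/m² PV sheet, 100 W/m² of usable sunlight, and a $2M launch of a 100-tonne payload):

```python
# Sanity-checking the quoted Starship/SBSP figures.
# All inputs come from the quoted passage; this is just arithmetic.

PAYLOAD_KG = 100_000           # one Starship payload, ~100 tonnes
ROSA_KG, ROSA_KW = 2_000, 120  # ~2 t of ROSA panels yields 120 kW on the ISS
PV_G_PER_M2 = 50               # quoted areal density of bare PV sheet
USABLE_W_PER_M2 = 100          # quoted usable solar flux per square metre
LAUNCH_COST_USD = 2_000_000    # quoted Starship launch price

# ROSA-style panels, structure included:
rosa_mw = PAYLOAD_KG * ROSA_KW / ROSA_KG / 1_000
# Bare PV material only:
area_m2 = PAYLOAD_KG * 1_000 / PV_G_PER_M2
bare_pv_mw = area_m2 * USABLE_W_PER_M2 / 1_000_000
cost_per_kg = LAUNCH_COST_USD / PAYLOAD_KG

print(f"ROSA-style payload: {rosa_mw:.0f} MW")   # 6 MW
print(f"Bare PV area: {area_m2:,.0f} m^2")       # 2,000,000 m^2
print(f"Bare PV power: {bare_pv_mw:.0f} MW")     # 200 MW
print(f"Launch cost: ${cost_per_kg:.0f}/kg")     # $20/kg
```

The numbers hang together: a 100-tonne payload of current-generation panels gives single-digit megawatts, bare PV sheet an order of magnitude or two more, at a launch cost three to four orders of magnitude below the Shuttle’s.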

more on geoengineering

As a follow-up to my recent post on climate change and the various means of addressing it, see this from the Economist:

Some form of geoengineering technology, therefore, would seem inevitable if the world has any hope of meeting the Paris targets.

Despite the uptick in interest, the technologies themselves are nowhere near ready. Resistance from some scientists and environmentalists has made research in the field very difficult. In March, for example, a project in Sweden that would have tested scientific equipment to be used in future experiments to release particles into the atmosphere had to be cancelled after protests from local environmental groups. The locals argued — as have others who oppose geoengineering — that the technology being tested would distract from the more important task of reducing carbon emissions.

That is a worthwhile argument. But preventing research on geoengineering has risks too. There are plenty of technical, ethical and environmental questions to answer about these technologies: do they work as intended at scale? If they do work, who should control them? What are the unintended side-effects of all this climate-tampering? If the technologies are not properly scrutinised and governments don’t agree on rules for their proper use, what’s to prevent a rogue actor (whether a country or a billionaire) from going it alone and doing something dangerous?

Geoengineering cannot (and should not) displace the urgent need to cut global carbon emissions. But in the long-term struggle against climate change, the world will need the best information and every useful tool it can invent.

The argument that the exploration and testing of geoengineering technologies should be stopped is not “a worthwhile argument.” It’s a dumb argument. We cannot afford to put all our eggs in the emissions-reduction basket, for the simple reason that there is no good reason to believe that the world’s governments will impose the necessary constraints. We have to have a Plan B, and also Plans C, D, E, and so on. It is tragically wrong for activists to allow their desire to punish us all for our bad choices, to force us all to confront and suffer for our reckless behavior, to overwhelm the need to stop warming by whatever means available. As the article rightly says. 

Climate activists often say — Kim Stanley Robinson effectively says this in The Ministry for the Future — that the struggle against climate change is a struggle against climate injustice, and you can’t disentangle those: fighting against climate change necessarily entails dismantling the system that produced it. There are many things one might say in response to this claim, but the most obvious and to my mind irrefutable one is simply this: When faced with an enormous problem you don’t know how to solve, it’s not a good move to chain it to another enormous problem you also don’t know how to solve.  

As someone with a great sympathy for anarchism — and indeed for the Mondragon-style anarcho-syndicalism that KSR often commends — I would certainly like to see transnational capitalism dismantled. It is my fervent hope that that happens, ideally through a peaceful process of subsidiarist devolution. But we don’t know how to do that, and even if we ever figure out how to do it, the process of devolution will be very long. (The idea that you can simply sow chaos and expect something better to emerge, somehow, from that is just childish.) In the meantime, if it’s possible for the current global capitalist order to develop technologies that will ameliorate climate change, then I think that would be a very good thing indeed. 

UPDATE: This additional piece from the Economist usefully suggests the difference between, on the one hand, reckless and overbold forms of geoengineering and, on the other, more limited and responsible forms. 

UPDATE 2: One more along these lines, from Todd Myers: “On The Dispatch Podcast last week, Sarah Isgur and Jonah Goldberg expressed the hope that a future Norman Borlaug would do for reducing CO2 emissions what the original Borlaug did to feed the world. There may be a climate Borlaug out there, but it is far more likely that climate change will be solved by a million Borlaugs — small innovators whose efforts add up to big changes. The cost of innovation has declined radically since the Green Revolution of the 1960s, and we are already seeing an explosion of new carbon-reducing technologies.” 

pure speculation

Kim Stanley Robinson’s most recent novel, The Ministry for the Future, begins with a long and horrific set-piece about a massive heatwave in India, in the year 2025, that leaves perhaps twenty million people dead in a single week. The almost unimaginable death toll kick-starts a serious worldwide determination to deal with climate change; one consequence of this determination is the multinational organization that gives the book its title.

But the Ministry is only one such endeavor. Robinson devotes a lot of time — too much time, for this sometimes glassy-eyed reader — to the description of committee meetings and other workings of a vast bureaucracy, because he thinks that, boring or not, such patient work will make a difference to our climate future, if any difference is to be made. However, as he repeatedly makes clear, bureaucratic action is not the only kind of action there is — systemic inertia and global capitalism being what they are:

The disaster had happened in India, in a part of India where few foreigners ever went, a place said to be very hot, very crowded, very poor. Probably more such events in the future would mostly happen in those nations located between the Tropics of Capricorn and Cancer, and the latitudes just to the north and south of these lines. Between thirty north and thirty south: meaning the poorest parts of the world. North and south of these latitudes, fatal heat waves might occur from time to time, but not so frequently, and not so fatally. So this was in some senses a regional problem. And every place had its regional problems. So when the funerals and the gestures of deep sympathy were done with, many people around the world, and their governments, went back to business as usual. And all around the world, the CO2 emissions continued.

A new government in India, perhaps the first truly representative government in the country’s history, knows that that’s how it goes. So it begins a seven-month campaign of sending cargo planes as high aloft as they will go to release aerosol particulates meant to reflect sunlight away from the earth. Some nations protest; India doesn’t care. India sends the planes because Indians have seen up close what happens when a heat wave occurs that simply overwhelms the resources of the unprotected human body.

Does the seeding help? Probably; a little. It was, one of the pilots thinks, worth a shot no matter what.

Later in the book Robinson describes a more complex technological effort: a massive project of drilling in the Arctic and Antarctic meant to allow meltwater to escape, which in turn slows the shifting and calving of the glaciers. In a related effort, the Russians assume responsibility for dyeing the Arctic Ocean yellow to reflect heat and keep it relatively cool.

I have read many accounts today of the bluntly alarming new report by the Intergovernmental Panel on Climate Change, and only one of them (in the Economist) has mentioned technological approaches to addressing the climate crisis — approaches other than those related to the reduction of carbon emissions, which is anyway typically portrayed as a behavioral rather than a technological matter — even though a good deal of research in this field is being done. I’m sure some accounts say more, but certainly the overwhelming message from the media is simple and straightforward: We must reduce carbon emissions, and reduce them by a degree hitherto unthinkable. (And, they add, even that won’t stop significant temperature increases.)

Why so little attention to technological helps unrelated to the reduction of emissions? Well, for one thing, that reflects the emphasis of the report itself; also, that makes for a simple story, and writers of press releases and journalists alike prefer simple stories. But — and this is the meaning of my title — I speculate, I suspect, that something else is at work. Something maybe more important than simplicity.

There’s no doubt in my mind — not one iota of doubt — that we are headed for a global nightmare because of our own greed and self-indulgence. And if technological solutions emerge that slow or stop global warming, then that will mean that we get away with it. We get away with our greed and self-indulgence; we don’t pay the piper, what goes around does not, after all, come around. And that is — for those of us with a strong sense of justice, and I count myself among that number — a bitter pill to swallow. It’s precisely the same impulse that made so many of us choke on the Wall Street bailout a decade ago. They got away with it, the bastards.

Of course, just as I accepted a Wall Street bailout because I believed that it would result in less destruction and suffering than allowing the system to burn down, I would also accept — eagerly! — technological solutions that left us as greedy and self-indulgent and regardless of the future as ever but averted the loss of countless lives (human and non-human) and the destruction of countless square miles of habitat. Surely this is also true of the journalists and climate activists remaining silent about possibly ameliorative technological endeavors.

But here’s the thing: How much hope does any of us really have that the world’s governments will do the right thing? Oh, they may very neatly re-arrange the deck-chairs on the Titanic — but more than that? There ain’t a snowball’s chance in Waco circa August 2075. In his novel, Robinson imagines the emergence of a kind of chastened and de-centralized capitalism — and I want to come back to that in another post, if I have time — but, like Bill McKibben, I fear that “Robinson underestimates not just the staying power of the status quo but also the odds that when things get really bad, we will react really badly.” (KSR may be a better prophet in his anticipation, in the Mars books, of “transnational” capitalism.)

McKibben suggests that such bad reactions could include the emergence of more authoritarian strongmen, and one would have to be naïve to discount the possibility of that, but I think it’s more likely that elected politicians will just find ways to kick the can a little further down the road, again and again and again. Politicians in a democratic order only think as far as the next election — those who win such elections do, anyway — and unelected ones only think of bread, circuses, and mechanisms of intimidation. Long-term thinking about the common good is simply not a political virtue, insofar as “politics” means “gaining and keeping power.” And that is, after all, what politics means.

I, therefore, have nearly zero confidence in political solutions to our changing climate, which means that I am all the more interested in the likely emerging technologies. I wish it was easier to find out about what people are experimenting on, what they are planning. My primary fear for the medium-term future is that, in a time of particular pain, something like Robinson’s picture of a desperate Indian government acting precipitously will come about — and that the consequences will be much worse than those were. This is why I would like to hear less about the reduction of carbon emissions and more about what the scientists and engineers are planning against the Dies Irae.

orbital obliquity

SciTech Daily:

Planets which are tilted on their axis, like Earth, are more capable of evolving complex life. This finding will help scientists refine the search for more advanced life on exoplanets. […]  

“The most interesting result came when we modeled ‘orbital obliquity’ — in other words how the planet tilts as it circles around its star,” explained Megan Barnett, a University of Chicago graduate student involved with the study. She continued, “Greater tilting increased photosynthetic oxygen production in the ocean in our model, in part by increasing the efficiency with which biological ingredients are recycled. The effect was similar to doubling the amount of nutrients that sustain life.” 

“Orbital obliquity” is one of those scientific terms — like “persistence of vision” and “angle of repose” — that just cries out for metaphorical application.

All of the writers and thinkers I trust most are characterized by orbital obliquity. They are never quite perpendicular; they approach the world at a slight angle. As a result their minds evolve complex life. 


P.S. Another of those metaphor-generating terms: “impact gardening.” 

Automata, Animal-Machines, and Us

What follows is a review of The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick, by Jessica Riskin (University of Chicago Press, 2016). The review appeared in the sadly short-lived online journal Education and Culture, and has disappeared from the web without having been saved by the Wayback Machine. So I’m reposting it here. My thanks to that paragon of editors John Wilson for having commissioned it. 

1.

The last few decades have seen, in a wide range of disciplines, a strenuous rethinking of what the material world is all about, and especially what within it has agency. For proponents of the “Gaia hypothesis,” the whole world is a single self-regulating system, a sort of mega-organism with its own distinctive character. By contrast, Richard Dawkins dramatically shifted the landscape of evolutionary biology in his 1976 book The Selfish Gene (and later in The Extended Phenotype of 1982) by arguing that agency happens not at the level of the organism but at the level of the gene: an organism is a device by means of which genes replicate themselves.

Meanwhile, in other intellectual arenas, proponents of the interdisciplinary movement known as “object-oriented ontology” (OOO) and the movement typically linked with it or seen as its predecessor, “actor-network theory” (ANT), want to reconsider a contrast that underlies most of our thinking about the world we live in. That contrast is between humans, who act, and the rest of the known cosmos, which either behaves or is merely passive, acted upon. Proponents of OOO and ANT tend to doubt whether humans really are unique “actors,” but they don’t spend a lot of time trying to refute the assumption. Instead, they try to see what the world looks like if we just don’t make it.

In very general terms we may say that ANT wants to see everything as an actor and OOO wants to see everything as having agency. (The terms are related but, I think, not identical.) So when Bruno Latour, the leading figure in ANT, describes a seventeenth-century scene in which Robert Hooke demonstrates the working of a vacuum pump before the gathered worthies of the Royal Society, he sees Hooke as an actor within a network of power and knowledge. But so is the King, who granted to the Society a royal charter. And so is the laboratory, a particularly complex creation composed of persons and things, that generates certain types of behavior and discourages or wholly prevents others. So even is the vacuum itself — indeed it is the status of the vacuum as actor that the whole scene revolves around.

For the object-oriented ontologist, similarly, the old line that “to a man with a hammer, everything looks like a nail” is true, but not primarily because of certain human traits, but rather because the hammer wants to pound nails. For the proponent of OOO, there are no good reasons for believing that the statement “This man wants to use his hammer to pound nails” makes any more sense than “This hammer wants to pound nails.”

Some thinkers are completely convinced by this account of the agency of things; others believe it’s nonsense. But very few on either side know that the very debates they’re conducting have been at the heart of Western thought for five hundred years. Indeed, much of the intellectual energy of the modern era has been devoted to figuring out whether the non-human world is alive or dead, active or passive, full of agency or wholly without it. Jessica Riskin’s extraordinary book The Restless Clock tells that vital and almost wholly neglected story.

It is a wildly ambitious book, and even its 500 pages are probably half as many as a thorough telling of its story would require. (The second half of the book, covering events since the Romantic era, seems particularly rushed.) But a much longer book would have struggled to find a readership, and this is a book that needs to be read by people with a wide range of interests and disciplinary allegiances. Riskin and her editors made the right choice to condense the exceptionally complex story, which I will not even try to summarize here; the task would be impossible. I can do little more than point to some of the book’s highlights and suggest some of its argument’s many implications.

2.

Riskin’s story focuses wholly on philosophers — including thinkers that today we would call “scientists” but earlier were known as “natural philosophers” — but the issues she explores have been perceived, and their implications considered, throughout society. For this reason a wonderful companion to The Restless Clock is the 1983 book by Keith Thomas, Man and the Natural World: A History of the Modern Sensibility 1500–1800, one of the most illuminating works of social history I have ever read. In that book, Thomas cites a passage from Fabulous Histories Designed for the Instruction of Children (1788), an enormously popular treatise by the English educational reformer Sarah Trimmer:

‘I have,’ said a lady who was present, ‘been for a long time accustomed to consider animals as mere machines, actuated by the unerring hand of Providence, to do those things which are necessary for the preservation of themselves and their offspring; but the sight of the Learned Pig, which has lately been shewn in London, has deranged these ideas and I know not what to think.’

This lady was not the only one so accustomed, or so perplexed; and much of the story Riskin has to tell is summarized, in an odd way, in this statement. First of all, there is the prevalence in the early modern and Enlightenment eras of automata: complex machines designed to counterfeit biological organisms that then provided many people the vocabulary they felt they needed to explain those organisms. Thus the widespread belief that a dog makes a noise when you kick it in precisely the same way and for precisely the same reasons that a horn makes a noise when you blow into it. These are “mere machines.” (And yes, it occurred to more than a few people that one could extend the logic to humans, who also make noises when kicked. The belief in animals as automata was widespread but by no means universal.)

The second point to be extracted from Trimmer’s anecdote is the lady’s belief that these natural automata are “actuated by the unerring hand of Providence” — that their efficient working is a testimony to the Intelligent Design of the world.

And the third point is that phenomena may be brought to public attention — animals trained to do what we had thought possible only by humans, or automata whose workings are especially inscrutable, like the famous chess-playing Mechanical Turk — that call into question the basic assumptions that separate the world into useful categories: the human and the non-human, the animal and all that lies “below” the animal, the animate and the inanimate. Maybe none of these categories are stable after all.

These three points generate puzzlement not only for society ladies, but also (and perhaps even more) for philosophers. This is what The Restless Clock is about.

3.

One way to think of this book — Riskin herself does not make so strong a claim, but I think it warranted — is as a re-narration of the philosophical history of modernity as a series of attempts to reckon with the increasing sophistication of machines. Philosophy on this account becomes an accidental by-product of engineering. Consider, for instance, an argument made in Diderot’s philosophical dialogue D’Alembert’s Dream, as summarized by Riskin:

During the conversation, “Diderot” introduces “d’Alembert” to such principles as the idea that there is no essential difference between a canary and a bird-automaton, other than complexity of organization and degree of sensitivity. Indeed, the nerves of a human being, even those of a philosopher, are but “sensitive vibrating strings,” so that the difference between “the philosopher-instrument and the clavichord-instrument” is just the greater sensitivity of the philosopher-instrument and its ability to play itself. A philosopher is essentially a keenly sensitive, self-playing clavichord.

But why does Diderot even begin to think in such terms? Largely because of the enormous notoriety of Jacques de Vaucanson, the brilliant designer and builder of various automata. Vaucanson is best known today for his wondrous mechanical duck, which quacked, snuffled its snout in water, flapped its wings, ate food placed before it, and — most astonishing of all to the general populace — defecated what it had consumed. (Though in fact Vaucanson had, ahead of any given exhibition of the duck’s prowess, placed some appropriately textured material in the cabinet on which the duck stood so the automaton could be made to defecate on command.) Voltaire famously wrote that without “Vaucanson’s duck, you would have nothing to remind you of the glory of France,” but he genuinely admired the engineer, as did almost everyone who encountered his projects, the most impressive of which, aside perhaps from the duck, were human-shaped automata — androids, as they were called, thanks to a coinage by the seventeenth-century librarian Gabriel Naudé — one of which played a flute, the other a pipe and drum. These devices, and the awe they produced upon observers, led quite directly to Diderot’s speculations about the philosopher as a less harmonious variety of clavichord.

And if machines could prompt philosophy, do they not have theological implications as well? Indeed they do, though, if certain stories are to be believed, the machine/theology relationship can be rather tense. In the process of coining the term “android,” Naudé relates (with commendable skepticism) the claim of a 15th-century writer that Albertus Magnus, the great medieval bishop and theologian, had built a metal man. This automaton answered any question put to it and even, some said, dictated to Albertus hundreds of pages of theology he later claimed as his own. But the mechanical theologian met a sad end when one of Albertus’s students grew exasperated by “its great babbling and chattering” and smashed it to pieces. This student’s name was Thomas Aquinas.

The story is far too good to be true, though its potential uses are so many and varied that I am going to try to believe it. The image of Thomas, the apostle of human thought and of the limits of human thought, who wrote the greatest body of theology ever composed and then at the end of his life dismissed it all as “straw,” smashing this simulacrum of philosophy, this Meccano idol — this is too perfect an exemplum not to reward our contemplation. By ending the android’s “babbling and chattering” and replacing it with patient, careful, and rigorous dialectical disputation, Thomas restored human beings to their rightful place atop the visible part of the Great Chain of Being, and refuted, before they even arose, Diderot’s claims that humans are just immensely sophisticated machines.

Yet one of the more fascinating elements of Riskin’s narrative is the revelation that the fully-worked-out idea of the human being as a kind of machine was introduced, and became commonplace, late in the 17th century by thinkers who employed it as an aid to Christian apologetics — as a way of proving that we and all of Creation are, as Sarah Trimmer’s puzzled lady put it, “actuated by the unerring hand of Providence.” Thus Riskin:

“Man is a meer piece of Mechanism,” wrote the English doctor and polemicist William Coward in 1702, “a curious Frame of Clock-Work, and only a Reasoning Engine.” To any potential critic of such a view, Coward added, “I must remind my adversary, that Man is such a curious piece of Mechanism, as shews only an Almighty Power could be the first and sole Artificer, viz., to make a Reasoning Engine out of dead matter, a Lump of Insensible Earth to live, to be able to Discourse, to pry and search into the very nature of Heaven and Earth.”

Since Max Weber in the early 20th century it has been a commonplace that Protestant theology “disenchants” the world, purging the animistic cosmos of medieval Catholicism of its panoply of energetic spirits, good, evil, and ambiguous. But Riskin demonstrates convincingly that this purging was done in the name of a sovereign God in whom all spiritual power was believed to dwell. This is not simply a story of the rise of a materialist science that triumphed over and marginalized religion; rather, the world seen as a passive mechanical artifact “relied upon a supernatural, divine intelligence. It was inseparably and in equal parts a scientific and a theological model.” So when Richard Dawkins wrote, in 2006,

People who think that robots are by definition more ‘deterministic’ than human beings are muddled (unless they are religious, in which case they might consistently hold that humans have some divine gift of free will denied to mere machines). If, like most of the critics of my ‘lumbering robot’ passage, you are not religious, then face up to the following question. What on earth do you think you are, if not a robot, albeit a very complicated one?

— he had no idea that he was echoing the argument of an early-18th-century apologist for Protestant Christianity.

Again, the mechanist position was by no means universally held, and Riskin gives attention throughout to figures who either rejected this model or modified it in interesting ways: here the German philosopher Leibniz (1646–1716) is a particularly fascinating figure, in that he admired the desire to preserve and celebrate the sovereignty of God even as he doubted that a mechanistic model of the cosmos was the way to do it. But the mechanistic model which drained agency from the world, except (perhaps) for human beings, eventually carried the day.

One of the most important sections of The Restless Clock comes near the end, where Riskin demonstrates (and laments) the consequences of modern scientists’ ignorance of the history she relates. For instance, almost all evolutionary theorists today deride Jean-Baptiste Lamarck (1744–1829), even though Lamarck was one of the first major figures to argue that evolutionary change happens throughout the world of living things, because he believed in a causal mechanism that Darwin would later reject: the ability of creatures to transmit acquired traits to their offspring. Richard Dawkins calls Lamarck a “miracle-monger,” but Lamarck didn’t believe he was doing any such thing: rather, by attributing to living things a vital power to alter themselves from within, he was actually making it possible to explain evolutionary change without (as the astronomer Laplace put it in a different context) having recourse to the hypothesis of God. The real challenge for Lamarck and other thinkers of the Romantic era, Riskin argues, was this:

How to revoke the monopoly on agency that mechanist science had assigned to God? How to bring the inanimate, clockwork cosmos of classical mechanist science back to life while remaining as faithful as possible to the core principles of the scientific tradition? A whole movement of poets, physiologists, novelists, chemists, philosophers and experimental physicists — roles often combined in the same person — struggled with this question. Their struggles brought the natural machinery of contemporary science from inanimate to dead to alive once more. The dead matter of the Romantics became animate, not at the hands of an external Designer, but through the action of a vital agency, an organic power, an all-embracing energy intrinsic to nature’s machinery.

This “vitalist” tradition was one with which Charles Darwin struggled, in ways that are often puzzling to his strongest proponents today because they do not know this tradition. They think that Darwin was exhibiting some kind of post-religious hangover when he adopted, or considered adopting, Lamarckian ideas, but, Riskin convincingly demonstrates, “insofar as Darwin adopted Lamarck’s forces of change, he did so not out of a failure of nerve or an inability to carry his own revolution all the way, but on the contrary because he too sought a rigorously naturalist theory and was determined to avoid the mechanist solution of externalizing purpose and agency to a supernatural god.”

Similarly, certain 20th-century intellectual trends, most notably the cybernetics movement (associated primarily with Norbert Wiener) and the behaviorism of B. F. Skinner, developed ideas about behavior and agency that their proponents believed no one had dared think before, when in fact those very ideas had been widely debated for the previous four hundred years. Such thinkers were either, like Skinner, proudly ignorant of what had gone before them or, like Wiener and in a different way Richard Dawkins, reliant on a history that is effectively, as Riskin puts it, “upside-down.” (Riskin has a fascinating section on the concept of “robot” that sheds much light on the Dawkins claim about human robots quoted above.)

4.

At both the beginning and end of her book, Riskin mentions a conversation with a friend of hers, a biologist, who agreed “that biologists continually attribute agency — intentions, desires, will — to the objects they study (for example cells, molecules),” but denied that this kind of language signifies anything in particular: it “was only a manner of speaking, a kind of placeholder that biologists use to stand in for explanations they can’t yet give.” When I read this anecdote, I was immediately reminded that Richard Dawkins says the same in a footnote to the 30th anniversary edition of The Selfish Gene:

This strategic way of talking about an animal or plant, or a gene, as if it were consciously working out how best to increase its success … has become commonplace among working biologists. It is a language of convenience which is harmless unless it happens to fall into the hands of those ill-equipped to understand it.

Riskin, who misses very little that is relevant to her vast argument, cites this very passage. The question which Dawkins waves aside so contemptuously, but which Riskin rightly believes vital, is this: Why do biologists find that particular language so convenient? Why is it so easy for them to fall into the “merely linguistic” attribution of agency even when they so strenuously refuse agency to the biological world? We might also return to the movements I mention at the outset of this review and ask why the ideas of OOO and ANT seem so evidently right to many people, and so evidently wrong to others.

The body of powerful, influential, but now utterly neglected ideas explored in The Restless Clock might illuminate for many scientists and many philosophers a few of their pervasive blind spots. “By recognizing the historical roots of the almost unanimous conviction (in principle if not in practice) that science must not attribute any kind of agency to natural phenomena, and recuperating the historical presence of a competing tradition, we can measure the limits of current scientific discussion, and possibly, think beyond them.”

climate hope

At the end of this interview, the environmental historian Jason Moore says, “Capitalism … had its social legitimacy because in one way or another it could promise development. And I don’t think anyone takes that idea seriously anymore.” Which is a very strange thing to say indeed, because economic development is the one promise that capitalism has delivered on, and massively. (This is the chief burden of the books by Deirdre McCloskey that I wrote about here and here.) In fact, and quite obviously, economic development around the world is the chief reason we have a climate crisis, because that development has ravaged our environment — and the global nature of modern capitalism means that that ravaging has been dispersed over the entire globe.

Moore agrees with my friend Wen Stephenson that nothing serious can be done to avert the oncoming climate catastrophe except a world-wide political/economic revolution. Stephenson:

The sheer depth, scale, and speed of the changes required at this point are beyond anything a mere climate movement can possibly accomplish, because such a movement is inherently unsuited to the nature of the task we face: radically transforming the political-economic system that is driving us toward climate breakdown. Given the sclerotic system in which the Green New Deal — the only proposal ever put before Congress that confronts the true scale and urgency of the climate catastrophe — is dead on arrival, mocked even by the Democratic Speaker of the House, the pretense that anything less than revolutionary change is now required amounts to a form of denial.

I am skeptical about this proposal for two reasons:

  1. The revolution would have to be global, because if it happens only in Europe or North America, or both, then global capital will simply shift its attentions and energies to other parts of the world, East and South (which is already where most of the depredations of the environment are happening). But a single, ideologically unified, worldwide political revolution is simply unimaginable.
  2. I see absolutely no reason to believe that any socialist government, local or global, will implement the changes needed to slow climate change. Socialism has a uniformly terrible record in these matters, from the Soviet Union to Chavez’s Venezuela — totally dependent for its social stability on global petrocapitalism — to this little country you may have heard of called China. I strongly suspect that that pattern will continue: when socialist policies throw a spoke into the engine of commerce, and the economy starts to collapse so that there’s less and less wealth to distribute, then socialist governments, like all others, will not hesitate to exploit the environment to become more productive. (Or will become state-capitalists like the Chinese Communist Party.)

Where does that leave us? Well, you can offer a counsel of despair, as Jonathan Franzen does. Now, he says he doesn’t despair:

If your hope for the future depends on a wildly optimistic scenario, what will you do ten years from now, when the scenario becomes unworkable even in theory? Give up on the planet entirely? To borrow from the advice of financial planners, I might suggest a more balanced portfolio of hopes, some of them longer-term, most of them shorter. It’s fine to struggle against the constraints of human nature, hoping to mitigate the worst of what’s to come, but it’s just as important to fight smaller, more local battles that you have some realistic hope of winning. Keep doing the right thing for the planet, yes, but also keep trying to save what you love specifically — a community, an institution, a wild place, a species that’s in trouble — and take heart in your small successes. Any good thing you do now is arguably a hedge against the hotter future, but the really meaningful thing is that it’s good today. As long as you have something to love, you have something to hope for.

But this is frankly to admit that all the victories are short-term and small-scale. Franzen tries not to think about what’s happening in the longer term and on the global scale.

Does anything remain? Possibly: technological fixes. Any potential fixes are fraught with uncertainty and danger, but more and more scientists are quietly hinting that they just may be our last resort. But why are those scientists being so quiet in their hinting? Largely because almost every climate activist I know of is absolutely and unremittingly hostile to any such proposals. Like my suspicions about global socialist revolution, their suspicions about technological fixes come in two varieties. The first is straightforward and reasonable: Why would we trust the very technocracy that got us into this mess to get us out?

The second one, though, is a little more complicated. I think that many climate activists hate the very idea of technological fixes because if they should happen to work that would mean that the bastards got away with it. That is, if the global capitalist elite that has so cheerfully and brazenly and heedlessly destroyed the natural world should, at the last moment, pull a technological rabbit out of their technocratic hat that stops the worst from occurring, that would feel like the biggest miscarriage of justice ever, because a group of people who have a very strong claim to the title of Greatest Criminals in History would walk away scot-free and indeed might even be thought of as heroes. It offends one’s sense of justice so profoundly that it’s hard to root for such technological fixes to work, even if they could indeed avert the worst consequences of capitalist exploitation of the planet.

But a planet saved is better than a planet ruined. Even if in the saving the Greatest Criminals walk free.

So I am thinking a lot about the various technological means of addressing climate change. I’m looking for actions less dangerous than the great big global fixes that some of the more imaginative technocrats propose, but that also would have, at least potentially, far greater effects than the strictly local actions that Franzen recommends. Ideas in this post seem to come in twos, so here are two very promising ideas:

The first involves making plants a little better at holding carbon dioxide:

Chory believes the key to fixing that imbalance is training plants to suck up just a little more CO2, and to keep it longer. She is working on engineering the world’s crop plants to have bigger, deeper roots made of a natural waxy substance called suberin — found in cork and cantaloupe rinds — which is an incredible carbon-capturer and is resistant to decomposition. By encouraging plants to have bigger, deeper, more suberin-rich roots, Chory can trick them into fighting climate change as they grow. The roots will store CO2, and when farmers harvest their crops in the fall, those deep-buried roots will stay in the soil and keep their carbon sequestered in the dirt, potentially for hundreds of years.

The second would turn air conditioners into carbon-capture machines:

A paper published Tuesday in Nature Communications proposes a partial remedy: Heating, ventilation and air conditioning (or HVAC) systems move a lot of air. They can replace the entire air volume in an office building five or 10 times an hour. Machines that capture carbon dioxide from the atmosphere — a developing fix for climate change — also depend on moving large volumes of air. So why not save energy by tacking the carbon capture machine onto the air conditioner?

Let a thousand such flowers bloom — a thousand ways to address our changing environment that are technologically feasible and highly scalable but do not require the complete transformation of the whole human order. Keep those ideas coming, scientist friends. We desperately need them.

Science and the Good

Denunciations of “scientism” are a dime a dozen these days, but this isn’t that kind of book. (It doesn’t even use the term “scientism.”) Rather, it’s a patient and careful exploration of the question, “Can science demonstrate what morality is and how we should live?” — and more than that, a genealogy of the question. Hunter and Nedelisky (H&N) are especially skillful in exploring why and how so many people came to believe that science is the only plausible “rational arbiter” of our disputes about how best to live, individually and collectively.

The core finding of the book is that the “new science of morality” is able to use science to establish our moral path only by defining morality down — by simply ignoring ancient questions about how human beings should live in favor of a kind of stripped-down utilitarianism, a philosophically shallow criterion of “usefulness.”

Here’s a key passage from early in the book:

While the new science of morality presses onward, the idea of morality — as a mind-independent reality — has lost plausibility for the new moral scientists. They no longer believe such a thing exists. Thus, when they say they are investigating morality scientifically, they now mean something different by “morality” from what most people in the past have meant by it and what most people today still mean by it. In place of moral goodness, they substitute the merely useful, which is something science can discover. Despite using the language of morality, they embrace a view that, in its net effect, amounts to moral nihilism.

H&N are clear that very few of these “new moral scientists” believe themselves to be nihilists, or want to be. They have fallen into it inevitably because they are not equipped to think seriously about morals. Much later in the book, H&N comment that

What is confusing [about this situation] is that the language of morality is preserved. The new moral science still speaks of what we “should” or “should not” do — but the meaning has been changed. Normative guidance is now about achieving certain practical ends. Given that we want this or that, what should we do in order to obtain it?

The quest, then, has been fundamentally redirected. The science of morality is no longer about discovering how we are to live — though it is still often presented as such. Rather, it is now concerned with exploiting scientific and technological know-how in order to achieve practical goals grounded in whatever social consensus can justify.

For this reason, “in their various policy or behavioral proposals, the new moral scientists never fail to recommend what is sanctioned by safe, liberal, humanitarian values.” Science done in WEIRD societies, then, faithfully reflects WEIRD values — a phenomenon that, H&N point out, proponents of the science of morality seem constitutionally unable to reflect on. They think they are just doing science, but is it really likely that the assured results of unbiased scientific exploration would so consistently support the policy preferences of the educated elite of one of the world’s many cultures?

It’s not likely that this book will convince many proponents of the new science of morality that their quest is quixotic; such people are deeply invested in believing themselves Objective, able to achieve the view from nowhere; moreover, the depth of our social fissures really does call out for some arbiter, and the idea that science, or SCIENCE, might be that arbiter is both attractive and superficially plausible. But upon reflection — the kind of reflection offered by this book — the claims of these new scientists become harder to sustain.

One can only hope that Science and the Good will prompt such reflection. For if we must have a morality guided by science, we need that science to be better than it currently is — and that will only happen if the moral thought prompting it is better. At the very least, we need some New Scientists of Morality to turn on their own assumptions and preferences some of the skepticism they so eagerly devote to all previous moral traditions.

the imminent collapse of an empire

Writers generally don’t get to choose the titles of their pieces, but the confusion in the title and subtitle of this report by Alexandra Kralick — Are we talking about sex or gender? I mean, it’s not like bones could tell you anything about gender — is reflected in the report itself. Sometimes it’s about “the nature of biological sex”; at other times it’s about the false assumptions that arise from gender stereotypes. Kralick weaves back and forth between the two in unhelpful ways.

On the specific question of whether sex is binary, and the contexts in which that matters, if you want clarity you’d do well to read this essay. But for the moment I’m interested in something else.

There’s a passing comment in Kralick’s essay that caught my attention: “The perception of a hard-and-fast separation between the sexes started to disintegrate during the second wave of feminism in the 1970s and ’80s.” The phrase “second-wave feminism” has been used in various and inconsistent ways, but it is typically associated with “difference feminism,” an emphasis on “women’s ways of knowing” being different from those of men. And in that sense it’s better to say that “the perception of a hard-and-fast separation between the sexes started to disintegrate” as a result of the critique of second-wave feminism as being too “essentialist” in its modeling of sexuality and gender. The most influential figure in that critique was Judith Butler, whose book Gender Trouble set in motion the discourse about gender as choice, gender as performance, gender as fluid and malleable, that we see embodied in Kralick’s essay.

So while I don’t think Kralick has the details of the history quite right, she’s definitely correct to suggest that scientists are having this conversation right now — or not so much having a conversation as making declarations ex cathedra — as a direct result of intellectual movements that began in humanities scholarship twenty-five years ago.

So for those of you who think that the humanities are marginal and irrelevant, put that in your mental pipe and contemplatively smoke it for a while.

Many years ago the great American poet Richard Wilbur wrote a poem called “Shame,” in which he imagined “a cramped little state with no foreign policy, / Save to be thought inoffensive.”

Sheep are the national product. The faint inscription
Over the city gates may perhaps be rendered,
“I’m afraid you won’t find much of interest here.”

The people of this nation could not be more overt in their humility, their irrelevance, their powerlessness. But …

Their complete negligence is reserved, however,
For the hoped-for invasion, at which time the happy people
(Sniggering, ruddily naked, and shamelessly drunk)
Will stun the foe by their overwhelming submission,
Corrupt the generals, infiltrate the staff,
Usurp the throne, proclaim themselves to be sun-gods,
And bring about the collapse of the whole empire.

Hi there scientists. It’s us.

how to evaluate a strong but disputable claim

This from John D. Cook is a great example of how to respond to strong but highly disputable scientific claims — in this case Michael Atiyah’s claim to have proven the Riemann hypothesis:

Atiyah’s proof is probably wrong, just because proofs of big theorems are usually wrong. Andrew Wiles’ proof of Fermat’s Last Theorem had a flaw that took a year to patch. We don’t know who Atiyah has shown his work to. If he hasn’t shown it to anyone, then it is almost certainly flawed: nobody does flawless work alone. Maybe his proof has a patchable flaw. Maybe it is flawed beyond repair, but contains interesting ideas worth pursuing further.

The worst case scenario is that Atiyah’s work on the fine structure constant and Todd functions is full of holes. He has made other big claims in the last few years that didn’t work out. Some say he should quit doing mathematics because he has made big mistakes.

I’ve made big mistakes too, and I’m not quitting. I make mistakes doing far less ambitious work than trying to prove the Riemann hypothesis. I doubt I’ll ever produce anything as deep as a plausible but flawed proof of the Riemann hypothesis.

Fantastic. Would that we had more people who think this way.

There’s no way we could ever carry out any experiment to test for the multiverse’s existence in the world, because it’s not in our world. It’s an article of faith, and not a very secure one. What’s more likely: a potentially infinite number of useless parallel universes, or one perfectly ordinary God?

But when you divide the brain into bitty bits and make millions of calculations according to a bunch of inferences, there are abundant opportunities for error, particularly when you are relying on software to do much of the work. This was made glaringly apparent back in 2009, when a graduate student conducted an fM.R.I. scan of a dead salmon and found neural activity in its brain when it was shown photographs of humans in social situations. Again, it was a salmon. And it was dead.

Do You Believe in God, or Is That a Software Glitch? – The New York Times. I read this immediately after yet another story lamenting the public’s inexplicable reluctance to accept the verdicts of experts.
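The statistical trap behind that dead-salmon result is the multiple-comparisons problem: run enough significance tests and some will "succeed" by chance alone. Here is a minimal simulation of the idea — the voxel count and threshold are illustrative assumptions on my part, not the study's actual numbers:

```python
import random

# Illustrative sketch of the multiple-comparisons problem: test many
# brain "voxels" for activity when there is no real signal at all,
# using an uncorrected p < 0.05 threshold.
random.seed(0)

n_voxels = 100_000   # assumed number of independent statistical tests
alpha = 0.05         # conventional uncorrected significance threshold

# Under the null hypothesis (no neural activity), each test's p-value
# is uniform on [0, 1], so each voxel has a 5% chance of a false alarm.
p_values = [random.random() for _ in range(n_voxels)]
false_positives = sum(p < alpha for p in p_values)

print(false_positives)  # roughly alpha * n_voxels, i.e. on the order of 5,000
```

Thousands of spurious "active" voxels, in other words, from a brain that is doing nothing at all — which is why the field now insists on corrections for multiple comparisons.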

What I learned as a hired consultant to autodidact physicists

An estimated 10 billion people will inhabit that warmer world. Some will become climate refugees—moving away from areas where unbearable temperatures are the norm and where rising water has claimed homes. In most cases, however, policy experts foresee relatively small movement within a country’s borders. Most people—and communities, cities and nations—will adapt in place. We have highlighted roughly a dozen hotspots where climate change will disrupt humanity’s living conditions and livelihoods, along with the strategies those communities are adopting to prepare for such a future.

— What Life Will Be Like on a Much Warmer Planet – Scientific American. This article + infographic from SciAm (paywalled, I think, and if so, sorry) is pretty good, but I’d love to see another post on places that will benefit from climate change.

Now, before I go any further: I think anthropogenic climate change is real and is going to be, overall, enormously destructive. I favor serious global governmental intervention to head off, if possible, the worst of it.

But it’s not going to be bad for everyone, and it would be fascinating to learn who will benefit and how. But few journalists or scientists want to tell those stories, for fear that they’ll reduce public concern.

Bradley Voytek:

The fact that I, a practicing neuroscientist, can openly admit to giving a shit about the human side of neuroscience without fearing “outing” myself as a soft thinker is in no small part due to the artistry of Dr. Sacks’ blend of scientific rationality and human empathy. That’s an incredibly difficult line to walk when you’re faced with the existential reality that the very thing that makes us who we are can be changed in some way — for example by neurological trauma or injury — and can therefore change basic aspects of our perception and personality. 

Dr. Sacks, through sheer force of compassion, reminded us, as a scientific field, that the very thing that makes neuroscience most frightening — its ability to expose our humanness as being tied to our physical self — is also why it’s so important for us to pursue it. The promise of neuroscientific advancement is the reduction of suffering, and in many ways Dr. Sacks was our empathic lighthouse in the scientific storm of advancement, guiding us toward that humanistic goal.

Edward Wilson, an old master.


You need not see what someone is doing
to know if it is his vocation,

you have only to watch his eyes:
a cook mixing a sauce, a surgeon

making a primary incision,
a clerk completing a bill of lading,

wear the same rapt expression,
forgetting themselves in a function.

How beautiful it is,
that eye-on-the-object look.

— W. H. Auden, “Sext”

It follows also that this new vision of ‘natural theology’ is equally concerned, let me also state at the start here, to be flexible in a variety of ways for use in different contexts and genres, for different audiences, and by means of varying forms of communication. The term ‘apologetics’, unfortunately, tends to come with as much bag and baggage from the era of brash modernity as does its cousin ‘natural theology’; but its history of association with rationalistic brow-beating is one we need to live down. The art of giving a reasoned, philosophically- and scientifically-related, account of the ‘hope that is in us’ in a public space is a Christian duty, and it may take a great variety of forms. As discussed last time in relation to Nicholas Wolterstorff’s analysis of Thomas Aquinas’s own variety of uses for his own Five Ways, ‘natural theology’ must indeed at times be used apologetically and even polemically, when the occasion demands it: that is, if one is called to public debate in the university, in public political contestation, or in the press. There is a huge cultural interest in seeing theologians and philosophers of religion perform this undertaking in discussion with secular science, and we undermine our own credibility if we fail to take on this task with grace, clarity and humour.

But more often, I find, I am called to this task as a believer, as an academic, or as a priest, in quieter, less overt, but no less significant public contexts: in being asked intrigued questions about evolution and theology by the seeker who wanders into Ely cathedral looking for something, she knows not what; by the half-believer who wonders if science does indeed render Christianity invalid; by the generation of my children’s age for whom in so many cases the church has seemingly lost all intellectual and moral credibility; for those hoping to deepen their faith spiritually or make it more intellectually mature; and for the doubting amongst the faithful. The disposition, attentive prayerfulness and bodily grace with which these conversations must go on is especially crucial: this task is not about the soap-box, but it’s not for the faint-hearted or defensive either. It has to be as philosophically and scientifically sophisticated as it is spiritually and theologically cogent; in short, it must not merely dazzle; it must more truly invite and allure.

— Sarah Coakley, from her 2012 Gifford Lectures (PDF)

genetic synecdoche

Together with philosopher David Wasserman, Asch wrote in 2005 that using genetic tests to screen out a fetus with a known disability is evidence of pernicious “synecdoche.” Ordinarily, synecdoche is a value-neutral figure of speech, in which some single part stands for the whole—as in the common use of “White House” to stand for the executive branch of government. But Asch and Wasserman’s meaning was more loaded: prenatal genetic tests, they argued, too often let a single trait become the sole characteristic of a fetus, allowing it to “obscure or efface the whole.” In other words, genetic data, once known, generally become the only data in the room. Taking a “synecdochal approach” to prenatal testing, Asch and Wasserman warned—in the era just prior to consumer genetic sequencing—allows one fact about a potential child to “overwhelm and negate all other hoped-for attributes.”

We won’t know what Asch would have made of 23andMe, designer babies, or broader claims for personal genomics. But her intellectual legacy only grows more relevant in the era of ever-cheaper, personalized genetic data. Asch understood that there are plenty of things technologies like prenatal genetic testing can tell us. But the choices and challenges in defining a life worth living, and living well—it may be that these aren’t technological problems at all.

But what was the actual impact of coffeehouses on productivity, education and innovation? Rather than enemies of industry, coffeehouses were in fact crucibles of creativity, because of the way in which they facilitated the mixing of both people and ideas. Members of the Royal Society, England’s pioneering scientific society, frequently retired to coffeehouses to extend their discussions. Scientists often conducted experiments and gave lectures in coffeehouses, and because admission cost just a penny (the price of a single cup), coffeehouses were sometimes referred to as “penny universities.” It was a coffeehouse argument among several fellow scientists that spurred Isaac Newton to write his “Principia Mathematica,” one of the foundational works of modern science.

Coffeehouses were platforms for innovation in the world of business, too. Merchants used coffeehouses as meeting rooms, which gave rise to new companies and new business models. A London coffeehouse called Jonathan’s, where merchants kept particular tables at which they would transact their business, turned into the London Stock Exchange. Edward Lloyd’s coffeehouse, a popular meeting place for ship captains, shipowners and traders, became the famous insurance market Lloyd’s.

And the economist Adam Smith wrote much of his masterpiece “The Wealth of Nations” in the British Coffee House, a popular meeting place for Scottish intellectuals, among whom he circulated early drafts of his book for discussion.

Space colonies. That’s the latest thing you hear, from the heralds of the future. President Gingrich is going to set up a state on the moon. The Dutch company Mars One intends to establish a settlement on the Red Planet by 2023. We’re heading towards a “multi-planetary civilization,” says Elon Musk, the CEO of SpaceX. Our future lies in the stars, we’re even told.

As a species of megalomania, this is hard to top. As an image of technological salvation, it is more plausible than the one where we upload our brains onto our computers, surviving forever in a paradise of circuitry. But not a lot more plausible. The resources required to maintain a colony in space would be, well, astronomical. People would have to be kept alive, indefinitely, in incredibly inhospitable conditions. There may be planets with earthlike conditions, but the nearest ones we know about, as of now, are 20 light years away. That means that a round trip at 10 percent of the speed of light, an inconceivable rate (it is several hundred times faster than anything we’ve yet achieved), would take 400 years.

But never mind the logistics. If we live long enough as a species, we might overcome them, or at least some of them: energy from fusion (which always seems to be about 50 years away) and so forth. Think about what life in a space colony would be like: a hermetically sealed, climate-controlled little nothing of a place. Refrigerated air, synthetic materials, and no exit. It would be like living in an airport. An airport in Antarctica. Forever. When I hear someone talking about space colonies, I think, that’s a person who has never studied the humanities. That’s a person who has never stopped to think about what it feels like to go through an average day—what life is about, what makes it worth living, what makes it endurable. A person blessed with a technological imagination and the absence of any other kind.

In English we speak about science in the singular, but both French and German wisely retain the plural. The enterprises that we lump together are remarkably various in their methods, and also in the extent of their successes. The achievements of molecular engineering or of measurements derived from quantum theory do not hold across all of biology, or chemistry, or even physics. Geophysicists struggle to arrive at precise predictions of the risks of earthquakes in particular localities and regions. The difficulties of intervention and prediction are even more vivid in the case of contemporary climate science: although it should be uncontroversial that the Earth’s mean temperature is increasing, and that the warming trend is caused by human activities, and that a lower bound for the rise in temperature by 2200 (even if immediate action is taken) is two degrees Celsius, and that the frequency of extreme weather events will continue to rise, climatology can still issue no accurate predictions about the full range of effects on the various regions of the world. Numerous factors influence the interaction of the modifications of climate with patterns of wind and weather, and this complicates enormously the prediction of which regions will suffer drought, which agricultural sites will be disrupted, what new patterns of disease transmission will emerge, and a lot of other potential consequences about which we might want advance knowledge. (The most successful sciences are those lucky enough to study systems that are relatively simple and orderly. James Clerk Maxwell rightly commented that Galileo would not have redirected the physics of motion if he had begun with turbulence rather than with free fall in a vacuum.)

The emphasis on generality inspires scientific imperialism, conjuring a vision of a completely unified future science, encapsulated in a “theory of everything.” Organisms are aggregates of cells, cells are dynamic molecular systems, the molecules are composed of atoms, which in their turn decompose into fermions and bosons (or maybe into quarks or even strings). From these facts it is tempting to infer that all phenomena—including human actions and interaction—can “in principle” be understood ultimately in the language of physics, although for the moment we might settle for biology or neuroscience. This is a great temptation. We should resist it. Even if a process is constituted by the movements of a large number of constituent parts, this does not mean that it can be adequately explained by tracing those motions.

The Trouble with Scientism. An incredibly important and provocative essay by Philip Kitcher.

“So we aren’t any closer to unification than we were in Einstein’s time?” the historian asked.

Feynman grew angry. “It’s a crazy question! … We’re certainly closer. We know more. And if there’s a finite amount to be known, we obviously must be closer to having the knowledge, okay? I don’t know how to make this into a sensible question…. It’s all so stupid. All these interviews are always so damned useless.”

He rose from his desk and walked out the door and down the corridor, drumming his knuckles along the wall. The writer heard him shout, just before he disappeared: “It’s goddamned useless to talk about these things! It’s a complete waste of time! The history of these things is nonsense! You’re trying to make something difficult and complicated out of something that’s simple and beautiful.”

Across the hall Murray Gell-Mann looked out of his office. “I see you’ve met Dick,” he said.

The liberal camp includes many thinkers I admire, and it has produced some of the more eloquent reflections on biotechnology’s implications for human affairs. But at least in the United States, the liberal effort to (as the Goodman of 1980 put it) “monitor” and “debate” and “control” the development of reproductive technologies has been extraordinarily ineffectual. From embryo experimentation to selective reduction to the eugenic uses of abortion, liberals always promise to draw lines and then never actually manage to draw them. Like Dr. Evans, they find reasons to embrace each new technological leap while promising to resist the next one — and then time passes, science marches on, and they find reasons why the next moral compromise, too, must be accepted for the greater good, or at least tolerated in the name of privacy and choice. You can always count on them to worry, often perceptively, about hypothetical evils, potential slips down the bioethical slope. But they’re either ineffectual or accommodating once an evil actually arrives.

Tomorrow, they always say — tomorrow, we’ll draw the line. But tomorrow never comes.

Ross Douthat
