On Saturday at 12:01 a.m. I turned off my social media. I have a private IG account for looking at nice pictures, and Twitter lists for news, but I’m not posting on or participating in the public internet for the next several months. I tell people on my newsletter, all the time, to tune their internet connections until they are useful and fun. The public internet stopped being fun for me some years ago, and I disconnect from it for half of each year at least. I like newsletters, blogs and RSS, podcasts, email, messaging apps and complete thoughts. The public network turned into something I don’t really enjoy or get anything out of.
The NPR.org audience has grown dramatically in recent years, to between 25 and 35 million unique visitors each month. But far less than 1% of that audience is commenting, and the number of regular comment participants is even smaller. Only 2,600 people have posted at least one comment in each of the last three months — 0.003% of the 79.8 million NPR.org users who visited the site during that period.
But such a number also suggests how effectively some of the nastier people in our society have employed comment threads — and Twitter — as their megaphones. So if we’re going to avert an outright tragedy of the online commons, we’ll need to find ways to reduce the influence of those people to a level commensurate with their actual representation in the population. NPR’s closure of comments is a step in the right direction.
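NPR’s arithmetic checks out; a quick sketch, using only the figures quoted above:

```python
# Figures as quoted in the NPR passage above
regular_commenters = 2_600       # posted a comment in each of the last three months
unique_visitors = 79_800_000     # NPR.org users over the same period

share = regular_commenters / unique_visitors * 100
print(f"{share:.3f}%")  # → 0.003%
```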
Charles Taylor explains many (most?) internet debates — and a great many others from the past two hundred years. If you ever wonder why people on Twitter (serious people, not mere trolls) can get so extreme in their policing of deviations from approved behavior, see “The Perils of Moralism,” in Dilemmas and Connections: Selected Essays (emphases mine):
Modern liberal society tends toward a kind of “code fetishism,” or nomolatry. It tends to forget the background which makes sense of any code — the variety of goods which rules and norms are meant to realize — as well as the vertical dimension which arises above all these.
We can see this above in relation to contemporary Anglo-Saxon moral philosophy, as well as in the drive to codification in liberal society. But the sources go back deeper in our culture. I want to argue that it was a turn in Latin Christendom which sent us down this road. This was the drive to reform in its various stages and variants — not just the Protestant Reformation, but a series of moves on both sides of the confessional divide. The attempt was always to make people over as more perfect practicing Christians, through articulating codes and inculcating disciplines. Until the Christian life became more and more identified with these codes and disciplines.
In other words, this code-centrism came about as the by-product of an attempt to make over the lives of Christians, and their social order, so as to make them conform thoroughly to the demands of the Gospel. I am talking not of a particular, revolutionary moment, but of a long, ascending series of attempts to establish a Christian order, of which the Reformation is a key phase…. (351)
Code fetishism means that the entire spiritual dimension of human life is captured in a moral code. Kant proposes perhaps the most moving form of this (but perhaps the capture wasn’t complete in his case). His followers today, be they Rawls or Habermas or others again, carry on this reduction (although Habermas seems to have had recent second thoughts).
Modern culture is marked by a series of revolts against this moralism, in both its Christian and non-Christian forms. Think of the great late nineteenth-century reaction in England against evangelical “puritanism” that we associate with names as diverse as Arnold, Wilde, and later Bloomsbury; or think of Ibsen; or of Nietzsche and all those who follow him, including those rebelling against the various disciplines that have helped constitute this modern moralization, such as our contemporary, Michel Foucault.
But these reactions start earlier. The code-centered notion of order and its attendant disciplines begin to generate negative reactions from the eighteenth century on. These form, for instance, the central themes of the Romantic period. Many people found it hard to believe, even preposterous, that the achievement of this code-bound life should exhaust the significance of human existence. It’s almost as though each form of protest were adding its own verse to the famous Peggy Lee song: “Is that all there is?” (353)
It turns out that the NSA’s domestic and world-wide surveillance apparatus is even more extensive than we thought. Bluntly: The government has commandeered the Internet. Most of the largest Internet companies provide information to the NSA, betraying their users. Some, as we’ve learned, fight and lose. Others cooperate, either out of patriotism or because they believe it’s easier that way.
I have one message to the executives of those companies: fight.
Do you remember those old spy movies, when the higher ups in government decide that the mission is more important than the spy’s life? It’s going to be the same way with you. You might think that your friendly relationship with the government means that they’re going to protect you, but they won’t. The NSA doesn’t care about you or your customers, and will burn you the moment it’s convenient to do so.
But it is the concept which runs alongside the constant and essentially unconsented gathering of data which is most mendacious: the contention that privacy is a luxury, a bygone; an unnecessary and even regressive notion in a technological age of openness and a hindrance to the safeguarding of a just society. It is precisely in a technological society that privacy emerges as a central, vital plank of legitimate democratic function. It might be different if openness were universal, if we could scrutinize the choices of our leaders and hold them to account; if we could get unvarnished access to information about the things we buy and the services to which we subscribe, and know the probable consequences of our decision rather than be soothed with pablum and misdirections; if we were encouraged and enabled to participate in the creation of the world, rather than sidelined as the governed, the consumers. But this is not an era of radical transparency across the board. It is not an era of growing direct democracy. It is the era of transparency for the masses, of autocracy dressed as liberty. The powerful are concealed by law and contract, and by power. The rest of us are exposed and encouraged to think of this exposure as freedom.
E-mail isn’t that different from mail. The real divide, historically, isn’t digital; it’s literary. The nineteenth century, in many parts of the West, including the United States, marked the beginning of near-universal literacy. All writing used to be, in a very real sense, secret, except to the few who knew how to read. What, though, if everyone could read? Then every mystery could be revealed. A letter is a proxy for your self. To write a letter is to reveal your character, to spill out your soul onto a piece of paper. Universal literacy meant universal decipherment, and universal exposure. If everyone could write, everyone could be read. It was terrifying.
In 1971, however, a keyboard with a vestigial @ symbol inherited from its typewriter ancestors found itself hooked up to an ARPANET terminal manned by Ray Tomlinson, who was working on a little program he’d come up with in his goofing-off time to send messages from computer to computer. Tomlinson ended up using the @ symbol as the fulcrum of the lever that ultimately ended up lifting the world into the digital age: email.
“It’s difficult to imagine anyone in Tomlinson’s situation choosing anything other than the ‘@’ symbol, but his decision to do so at the time was inspired,” explains Houston on his blog. “Firstly, it was extremely unlikely to occur in any computer or user names; secondly, it had no other significant meaning for the operating system on which it would run, and lastly, it read intuitively – user ‘at’ host.”
It was as simple as that, but through this one serendipitous accident of typography, @ became the navel of the digital body we now call the Internet. It stopped being a sign for how much something cost and became a symbol of an infinite number of end points on a digital umbilicus: an origin, a destination, or some point–terminating in a human–in the system in-between.
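Tomlinson’s “user at host” convention survives unchanged in every address we use today. A minimal sketch of the split, done from the right as mail software does (the sample address is invented for illustration):

```python
def split_address(address: str) -> tuple[str, str]:
    """Split an email address into (user, host) at the rightmost '@'.

    Splitting from the right matters: some legacy quoted forms allow
    '@' inside the user part, but the host always follows the last one.
    """
    user, sep, host = address.rpartition("@")
    if not sep or not user or not host:
        raise ValueError(f"not a user@host address: {address!r}")
    return user, host

print(split_address("someone@example.com"))  # → ('someone', 'example.com')
```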
Commentators often attempt to refute the nothing-to-hide argument by pointing to things people want to hide. But the problem with the nothing-to-hide argument is the underlying assumption that privacy is about hiding bad things. By accepting this assumption, we concede far too much ground and invite an unproductive discussion about information that people would very likely want to hide. As the computer-security specialist Schneier aptly notes, the nothing-to-hide argument stems from a faulty “premise that privacy is about hiding a wrong.” Surveillance, for example, can inhibit such lawful activities as free speech, free association, and other First Amendment rights essential for democracy.
The deeper problem with the nothing-to-hide argument is that it myopically views privacy as a form of secrecy. In contrast, understanding privacy as a plurality of related issues demonstrates that the disclosure of bad things is just one among many difficulties caused by government security measures. To return to my discussion of literary metaphors, the problems are not just Orwellian but Kafkaesque. Government information-gathering programs are problematic even if no information that people want to hide is uncovered. In The Trial, the problem is not inhibited behavior but rather a suffocating powerlessness and vulnerability created by the court system’s use of personal data and its denial to the protagonist of any knowledge of or participation in the process. The harms are bureaucratic ones—indifference, error, abuse, frustration, and lack of transparency and accountability.
A while back the Atlantic Tech Channel’s Megan Garber was entertaining us with adventures in Google’s search-autocompletion features: what she called Google Psyche. (Google gives their own explanation of the feature here, but essentially it makes suggestions for you based on the frequency of previous searches.)
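The core mechanism is simple enough to sketch: rank the past queries that share your prefix by how often they were searched. This toy version (the search history is invented for illustration; Google’s real system is far more elaborate) shows the idea:

```python
from collections import Counter

def autocomplete(prefix: str, past_searches: list[str], n: int = 5) -> list[str]:
    """Suggest completions for a prefix, ranked by how often each full
    query appears in the search history -- frequency-based suggestion."""
    counts = Counter(q.lower() for q in past_searches)
    matches = [(q, c) for q, c in counts.items() if q.startswith(prefix.lower())]
    matches.sort(key=lambda qc: -qc[1])  # most frequent first
    return [q for q, _ in matches[:n]]

history = [
    "michael donaghy machines", "michael donaghy the present",
    "michael donaghy machines", "michael donaghy mbta",
]
print(autocomplete("michael donaghy", history))
# → ['michael donaghy machines', 'michael donaghy the present', 'michael donaghy mbta']
```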
Again, all this is pretty funny – but it may even be useful, at least to some of us. Maybe even academics like me.
Recently I was looking for some poems by the late Michael Donaghy, and saw this:
(The “mbta” Michael Donaghy is not the poet.) These results tell me that the two most searched-for poems by Donaghy are “The Present” and “Machines”: the former, presumably, because it might plausibly be read at a wedding, but the latter because – well, because it’s just utterly beautiful, a fabulously intricate yet accessible poetic construction. “Machines” is my favorite poem by Donaghy, but Google’s algorithm suggests that it is beloved by others as well, something I didn’t suspect until I ran that search.
When I searched the name of another poet, Jane Kenyon, here’s what I saw:
It surprised me a bit to find “Otherwise” listed before what I always think of as the definitive Jane Kenyon poem, the simple but perfectly graceful “Let Evening Come”, but actually running the search for “Otherwise” revealed to me that it is often taught in American high schools.
Clearly high-school literature classes contribute heavily to Google’s algorithm, as witnessed by a search for “John Cheever,” whose most anthologized story is clearly “The Swimmer”:
Or E. B. White, whose fine (but in my view somewhat overrated) essay “Once More to the Lake” is taught so frequently that it jumps right to the top of a list from which Charlotte’s Web is completely absent:
But other factors play a role as well. If you do a search for David Foster Wallace, you’ll see that the only work of his that shows up is his Kenyon College commencement address “This is Water” – no Infinite Jest, no Pale King – but if you add “story” to your search string you get this:
Clearly the chief interest is in “The Depressed Person,” a 1998 story thought to be revelatory of Wallace’s own experiences with depression.
It seems to me that this information is not trivial: anyone interested in a particular author or topic could, with sufficient discipline and knowledge of how to make a screen cap, track shifts in public interest and knowledge over time. It would be like Ngrams for searches instead of published material.
Google is of course already providing us with its own accounts of its most searched-for terms, in the Google Zeitgeist; but I’m suggesting the value of tracking searches that don’t rise so high in the public consciousness, but are significant to scholars and other researchers who focus on particular topics. Because I’m a literature teacher, I focused here on literature, but this kind of tracking could be done in almost any field. Digital humanists – and social scientists, and historians of science, and Lord knows who else – take note!
I propose we abandon the Internet, or at least accept the fact that it has been surrendered to corporate control like pretty much everything else in Western society. It was bound to happen, and its flawed, centralized architecture made it ripe for conquest. Just as the fledgling peer-to-peer economy of the Late Middle Ages was quashed by a repressive monarchy that still had the power to print money and write laws, the fledgling Internet of the 21st century is being quashed by a similarly corporatist government that has its hands on the switches through which we mean to transact and communicate. It will never truly level the playing fields of commerce, politics, and culture. And if it looks like that does stand a chance of happening, the Internet will be adjusted to prevent it.
Of course, the model that Zuckerberg is hoping to replace isn’t the Peel show but the search engine. If Zuckerberg gets his way, Facebook recommendations will replace Google searches as the main route by which we navigate to websites. This would hardly be a paradigm shift; more a tidying up, combining what are currently two steps into one. With a search engine, you have to know more or less what you’re looking for before you begin: there’s an implied recommendation preceding most things we type into Google. What Zuckerberg’s betting on is that those recommendations will increasingly be made online, with a direct link, so the work you now ask Google to do will already have been done.
As for the first question, why people are so unfussed by Facebook’s lack of privacy, it must in part be because the site fosters the illusion that you’re among friends. The success of Facebook implies that most people are more comfortable thinking of the internet as an extension of their offline lives, where everything and everyone they are likely to encounter is already known to them. Which is all very well, but it does mean that the overweight kids in glasses get bullied there too.