Yes, it’s understandable for conservatives to worry that if Silicon Valley censors the likes of Molyneux, it will end up censoring them. It’s sensible for them to join parts of the left in worrying about the concentrated power over information that the stewards of social-media platforms enjoy. And it’s necessary for them to recognize that the influence of redpillers and white-identitarians reflects their own failure, across the decades of movement-conservative institution building, to create something that seems more compelling to fugitives from liberalism than the Spirit of the Reddit Thread.
With all that said, though, a humane conservatism should still be able to thrive in a world where white nationalists have trouble monetizing their extremism, in which YouTube algorithms are built to maximize something other than addiction.
I’m not sure what Ross means in the last sentence I’ve quoted by “should.” Does he mean that “humane conservatism” is likely to thrive, or that if the system is fair it ought to be able to do so? I doubt the first and doubt the conditional of the second.
Here’s the situation as I see it. First, as Alexis Madrigal has recently written, the big social media companies will from now on find it harder to take refuge in the claim that they are “merely platforms”:
These companies are continuing to make their platform arguments, but every day brings more conflicts that they seem unprepared to resolve. The platform defense used to shut down the why questions: Why should YouTube host conspiracy content? Why should Facebook host provably false information? Facebook, YouTube, and their kin keep trying to answer, We’re platforms! But activists and legislators are now saying, So what? “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election,” Nancy Pelosi said in the wake of the altered-video fracas.
If you can’t plead platform neutrality, what do you do? Well, these companies being what they are, they’ll write algorithms to try to filter content. But the algorithms will often fail — after all, they can’t tell the difference between sites that promote hatred and sites that seek to combat it.
Where does that leave you? As Will Oremus points out, it leaves you with mob rule:
What should be clear to both sides, by now, is the extent to which these massive corporations are making up the rules of online speech as they go along. In the absence of any independent standards or accountability, public opinion has become an essential part of the process by which their moderation policies evolve.
Sure, online platforms have policies and terms of service that run thousands of words, which they enforce on a mass scale via software and a bureaucratic review process. But those rules have been stitched together piecemeal and ad hoc over the years to serve the companies’ own needs — which is why they tend to collapse as soon as a high-profile controversy subjects them to public scrutiny. Caving to pressure is a bad look, but it’s an inevitable feature of a system with policies that weren’t designed to withstand pressure in the first place.
Whatever should happen to humane conservatism on the internet, I don’t know what will happen, though as a person who is somewhat conservative and who would like to be humane, I wish I knew. In light of all the above, one thing seems nearly certain to me: If I were on a major social media service and a vocal group of that site’s users started calling me homophobic or transphobic or a white supremacist and demanded that I be banned, I would be banned.