All the main social media companies say they don’t promote hate on their platforms and take action to stop it. They each have algorithms that offer us content based on things we’ve posted, liked or watched in the past. But it’s difficult to know what they push to each user.
“One of the only ways to do this is to manually create a profile and see the kind of rabbit hole it might be led down by the platform itself, once you start to follow certain groups or pages,” explains social media expert Chloe Colliver, who advised me on the experiment.
So I set up a fake account: a man called Barry.
Like my trolls, Barry was mainly interested in anti-vax content and conspiracy theories, and followed a small amount of anti-women content. He also posted some abuse on his own profile, so that from the start the algorithms could detect an account that used abusive language about women. But unlike my trolls, he didn’t message any women directly.
Over two weeks, I logged in every couple of days and followed recommendations, posted to Barry’s profiles, liked posts and watched videos.
After just a week, the top recommended pages to follow on both Facebook and Instagram were almost all misogynistic. By the end of the experiment, the sites were pushing Barry far more anti-women content than when the account was created. Some of it involved sexual violence, including disturbing memes about sex acts and posts condoning rape, harassment and gendered violence.
As I keep saying: for the social media companies, hatred isn’t a bug, it’s a feature. It promotes engagement.