YouTube algorithms don’t turn unsuspecting masses into extremists, new study finds

“Over the years of reporting on internet culture, I have heard countless versions of [this] story: an aimless young man – usually white, frequently interested in video games – visits YouTube looking for direction or distraction and is seduced by a community of far-right creators,” wrote Kevin Roose for The New York Times in 2019. “Some young men discover far-right videos by accident, while others seek them out. Some travel all the way to neo-Nazism, while others stop at milder forms of bigotry.”

Never one to shy away from alarmism, The Daily Beast published a headline in 2018 calling YouTube’s algorithm a “far-right radicalization factory” and claiming that an “unofficial network of fringe channels is dragging YouTubers down the rabbit hole of extremism”. MIT Technology Review likewise sounded the alarm in 2020 about how “YouTube’s algorithm seems to direct people to alt-right videos”.

A new study led by Annie Y. Chen of the City University of New York, Brendan Nyhan of Dartmouth, Jason Reifler of the University of Exeter, Ronald E. Robertson of Stanford, and Christo Wilson of Northeastern complicates these popular narratives. “Using paired behavioral and survey data provided by participants recruited from a representative sample (n=1,181), we show that exposure to videos from alternative and extremist channels on YouTube is highly concentrated among a small group of people with prior high levels of gender and racial resentment,” the researchers write. “These viewers typically subscribe to these channels (causing YouTube to recommend their videos more often) and often follow external links to them. Contrary to the ‘rabbit hole’ narrative, non-subscribers are rarely recommended videos from alternative and extremist channels and rarely follow such recommendations when offered.”

The researchers were specific in their definition of what constitutes a “rabbit hole.” They also distinguished between “alternative” content – Steven Crowder, Tim Pool, Candace Owens – and “extremist” content – Stefan Molyneux, David Duke, Mike Cernovich. (Methodological details are described in the study.)

Less than 1% of those studied – 0.6%, to be exact – were responsible for an astonishing 80% of the viewing time on channels deemed extremist. And only 1.7% of participants were responsible for 79% of the viewing time on channels deemed alternative. These participants generally found such videos because they had watched similar videos before.

But how many of the people the researchers studied watched innocuous videos and were steered by the algorithm from those videos to extremist content? That happened in 0.02% of the total video visits studied, for a grand total of 108 times. If you apply an even stricter definition of a rabbit hole and exclude cases in which a viewer who was already subscribed to a similar extremist or alternative channel followed an algorithmic suggestion to such a channel, rabbit holes become even rarer, accounting for 0.012% of all video visits.
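For a sense of scale, here is a quick back-of-envelope check of those figures – a sketch that simply takes the reported numbers (108 events at 0.02% of visits) at face value; the study’s exact visit counts may differ slightly due to rounding:

```python
# Back-of-envelope check of the rabbit-hole figures reported above.
# Assumption: the article's numbers are exact (108 events = 0.02% of all video visits).

rabbit_hole_events = 108
share_of_visits = 0.0002  # 0.02% expressed as a fraction

# Implied total number of video visits in the dataset (~540,000).
total_visits = rabbit_hole_events / share_of_visits
print(f"Implied total video visits: {total_visits:,.0f}")

# Under the stricter definition (0.012% of visits), the implied event count (~65).
strict_share = 0.00012
print(f"Implied rabbit-hole events under the stricter definition: {total_visits * strict_share:.0f}")
```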

Basically, the narrative that hordes of unwitting YouTube viewers suddenly stumble upon far-right content and become fascinated by it doesn’t hold much water.

Instead, it is the people who have already chosen to watch fringe videos who are likely to be fed extremist content by the algorithm. These are people who, judging by their own browsing habits, were already curious about Alex Jones, who consume content about Pizzagate and about chemicals in the water supply turning frogs gay, and who veer toward the Cernovich or Duke types. (Duke himself was banned from YouTube two years ago.)

Many tech critics and politicians have been spreading fear about algorithmic rabbit holes, touting these supposedly powerful algorithms as the reason people become radicalized and calling for regulatory crackdowns to address the problem. But this new study provides decent evidence that the rabbit hole problem is overblown – and that, perhaps, the steps YouTube took to change its algorithm around 2019 account for the relative absence of rabbit holes. The changes “appear to have affected the spread of some of the worst content on the platform, reducing both recommendations to conspiratorial content on the platform and the sharing of YouTube conspiracy videos on Twitter and Reddit,” the researchers write.

“Our research suggests that the role of algorithms in driving people to potentially harmful content is seemingly overstated in the post-2019 YouTube debate,” Nyhan told Reason, “but people with extreme views do find extremist content on the platform. The fact that they often search for this type of content does not mean that YouTube should escape scrutiny for providing free hosting, subscriptions, etc., to the channels in question – these are choices the company has made.”

Those algorithm tweaks haven’t stopped tech journalists and politicians from frequently invoking YouTube radicalization, whether or not their rabbit hole criticism still holds true today.
