Sympathy for the devil is running out

Earlier this week, YouTube announced that it was banning the accounts of several prominent anti-vaccine voices from its platform as part of an effort to remove public health misinformation. Though YouTube, which is owned by Google, already had a policy in place to remove videos peddling COVID-19 misinformation, the recent catch-all decision expands the platform's rules to cover misleading claims about any vaccine.

The new policy aligns YouTube with peer tech giants Facebook and Twitter, both of which have enacted rules allowing them to remove posts with erroneous claims about vaccines. And yet these policies are hardly complete solutions: Facebook remains a popular forum for unfounded public health claims, and Twitter gives users a generous five strikes before permanently banning them for violating its COVID-19 misinformation policy.

But for YouTube, over a year and a half into the pandemic, the decision seems far too little, far too late.

For the last decade, information integrity researchers have been sounding the alarm: social media platforms and the proliferation of anti-vaccine content are powerful drivers of vaccine skepticism and refusal. And recent studies conducted during the coronavirus pandemic highlight misinformation's role in slowing vaccination rates in conservative states.

As for the type of misinformation? YouTube videos have a particular virality, and clips tend to rack up tens of millions of views across platforms like Twitter and Facebook. While misinformation's strength is that it proliferates everywhere, all the time, its origins trace back to a few key actors. A recent report from the Center for Countering Digital Hate found that a group of 12 individuals, dubbed the "Disinformation Dozen," was responsible for 65% of all anti-vaccine misinformation on social media in the United States.

After the report’s publication, the White House felt the pressure to respond. In July, President Joe Biden sharply criticized tech companies for allowing the misinformation wildfire to burn. Asked by a reporter what his message was to Facebook and other platforms, Biden said: “They’re killing people.”

Only in 2021 could a dozen determined anti-vaxxers be responsible for an unimaginable tidal wave (no, a hurricane) of misinformation. But more than a damning lesson in the amplifying effect of social media platforms, this report revealed the deepening cracks within big tech's regulatory and content frameworks.

Efforts to ban vaccine misinformation are, on balance, good — and, above all, needed. But initial efforts have been lackluster, especially when considering that Pandora’s box has already been ripped open — and attempts to neatly sweep the evils of misinformation, extremism, hyperpolarization and science denial back into the box are failing.

Worse, social media firms continue to tout their platforms as critical beacons of free speech and expression, as vital tools for connectivity and togetherness, and as drivers of innovation and technological progress. Though this is true to an extent (and it would be naïve to deny how social media has positively impacted the world, mobilized communities and created advocacy networks that otherwise would not exist), the clear and present harms caused by these platforms are rearing their ugly heads. And while these companies peddle revised policies, whether as strategic saves, PR pushes or something in between, we cannot sit idly by while Silicon Valley works out how to save face.

Instead, we need accountability — now more than ever.

The world is experiencing several pandemics at once. The coronavirus has taken the lives of over 4.5 million people worldwide, with 704,000 lives lost in the United States alone. And the concurrent infodemic, a term coined by political scientist David Rothkopf as a blend of "information" and "epidemic," has only exacerbated public health concerns.

In 2003, Rothkopf wrote that the SARS epidemic was "the story of not one epidemic but two, and the second epidemic, the one that… largely escaped the headlines… ha[d] implications that [were] far greater than the disease itself." The epidemic was harder to control because of a severe information crisis, driven in part by the public's growing reliance on the Internet. But the media landscape of 2021 is vastly different from the one Rothkopf wrote of in 2003.

The contemporary American trifecta of a health crisis, a trust crisis and an information crisis has created a near-perfect storm of events that is breeding a deeply fractured sociopolitical environment. And as with any storm, we must look to the eye of the hurricane to figure out the best course of action.

With the infodemic, the nexus of it all lies in social media. And though the tug-of-war between the government and tech firms is never-ending, accountability must be demanded. If platforms cannot hold themselves to account internally, external pressure must be mounted.

A promising bill in the Senate might be the first step. The Health Misinformation Act, introduced in late July by Sen. Amy Klobuchar (D-MN) and Sen. Ben Ray Luján (D-NM), would make social media companies responsible for the spread of misinformation on their platforms. The bill would strip the legal liability shield that large tech firms rely on for immunity from lawsuits under Sec. 230 of the Communications Decency Act, altering the section's language to revoke immunity in cases where health misinformation is created and promoted through a platform's algorithm during a declared national public health crisis.

This attempt to hold platforms accountable marks a dramatic turn from the status quo, where these companies are legally protected from the inordinate amounts of user-created content — no matter how false — their platforms host.

Free speech advocates believe that this carveout would have a chilling effect on free expression. And tech firms fear that any threat to their prized shield would have unforeseen consequences for the open Internet. But Big Tech, in touting small policy changes like the one YouTube announced this week and in reiterating its platforms' strength as integral tools for good, is attempting to elicit empathy to fuel a policy of apathy.

But sympathy for the devil is running out.

Earlier this week, Facebook whistleblower Frances Haugen testified before Congress about the social network's threat to democracy. And though Mark Zuckerberg took to the platform to post a 1,300-word defense in an attempt to undermine her claims, the Facebook CEO floundered. Haugen recommended a slate of changes to rein in the company, including an overhaul of Sec. 230, to hold the platform and others accountable.

Of Haugen's hours-long testimony, the most damning revelation was that Facebook is well aware of the harm it causes. Haugen said that the platform "amplifies division, extremism and polarization," and that the company's own research confirms these dangers.

But instead of acting to mitigate this harm, it chooses to look the other way.

The war on information in the digital age can often feel nebulous and out of reach. But the infodemic is very real, and its consequences are life and death. Passing the Health Misinformation Act, among other pieces of congressional legislation, is just one step on the path to checking social media platforms, reining in the spiraling public health consequences of misinformation and creating a more resilient society.

As Adrienne LaFrance writes in The Atlantic: “The freedom to destroy yourself is one thing. The freedom to destroy democratic society is quite another.”
