
The First Amendment Security Blanket

Prior to January 24th, 2022, if you were to open Spotify and click the “Podcasts & Shows” tab, “The Joe Rogan Experience” would take up the entire top half of your screen. This Spotify exclusive is the largest podcast in the world, one where notoriously controversial comedian Joe Rogan has smoked marijuana with Elon Musk and talked politics with Bernie Sanders. The show launched Spotify into the podcast world, allowing the company to generate revenue through content other than music. But Spotify is no longer advertising its top podcast the way it used to. Instead, it is tasked with justifying why the show should remain on its platform.

FINANCES OVER FACTS

On January 24th, folk-rock artist Neil Young gave Spotify an ultimatum in a now-deleted letter: remove his music or remove Joe Rogan’s podcast. “I am doing this because Spotify is spreading fake information about vaccines – potentially causing death to those who believe the disinformation being spread by them,” he wrote.

This was a direct response to Joe Rogan’s December 30th, 2021 episode, in which he and Robert Malone, a virologist who contributed to early mRNA vaccine research, spent over three hours discussing theories and promoting baseless claims about the COVID-19 pandemic and vaccines. Despite his medical background, Malone claimed the world has been “hypnotized by the mainstream media and Tony Fauci” into believing that COVID-19 vaccines are necessary, and he compared vaccination campaigns to Nazi medical experiments. Joe Rogan did not push back. He agreed with Malone, questioning the “trend” of vaccination and touting his own unvaccinated status.

These baseless claims faced backlash on social media, but no real action was taken until Neil Young stepped forward. Following his lead, musicians Joni Mitchell, India Arie, and Nils Lofgren demanded that their music be taken down as long as Rogan remained on the platform. A coalition of hundreds of doctors, scientists, and health professionals called “The Joe Rogan Experience” “a menace to public health,” and even the White House urged Spotify to do more.

But, ultimately, Spotify is another profit-driven corporation, one that would much rather lose the fans of smaller artists than the over 200 million dollars it pulls in from “The Joe Rogan Experience.” Still, it could face significant consequences if the movement extended to more popular, higher-earning artists, so some action needed to be taken. As the controversy gained attention on social media, Spotify lost an estimated 2 billion dollars in market value. To mitigate criticism and stem further financial losses, CEO Daniel Ek published a blog post committing to “do more to provide access to widely-accepted information from the medical and scientific communities.” Spotify then announced that podcasts discussing COVID-19 would carry content advisories linking listeners to fact-based information.

THE MISINFORMATION PANDEMIC

While misinformation is certainly not novel, the pandemic has exacerbated it and revealed just how egregious it can be. With an increased dependency on phones for social interaction, users rely more heavily on social media as a source of information. And with COVID-19 so consequential yet unfamiliar, individuals are susceptible to believing propaganda that fits their narrative. According to a 2021 Pew Research Center study, 86% of U.S. adults said they get news from a digital device such as a smartphone. While accurate information is certainly available on mainstream platforms, “clickbait” is what gets attention and spreads quickly. That engagement generates profit, so platforms have little interest in policing misinformation: they care less about whether information is true or false than about whether viewers engage with it. And because these platforms rarely face repercussions for the disasters that follow the spread of misinformation, they have no incentive to risk their profit. This is how misinformation is able to infiltrate genuine news sources.

We have seen the dangers of fast-spreading lies most clearly when a person in power broadcasts false information on social media. Using Twitter, Donald Trump convinced tens of millions of Americans, nearly a third of the US population, that the 2020 election was fraudulent, culminating in the attack on the Capitol and the deaths of police officers. Similarly, Joe Rogan, with an estimated 11 million listeners per episode, holds a great deal of power to amplify claims that can exacerbate a public health catastrophe and prolong a seemingly never-ending pandemic.

As disasters ensue and criticism mounts, private entities are increasingly held responsible for the disinformation originating on their platforms. This has produced some improvements, such as the aforementioned content advisories Spotify now attaches to anything discussing COVID-19. Instagram, Twitter, and Facebook apply similar fact-checking warnings to posts containing potentially inaccurate information. But these can easily be ignored or dismissed with a simple swipe. While fear of losing engagement and profit is likely at the center of companies’ reluctance to remove dangerous content entirely, they rely on the First Amendment to justify that decision.

SPEECH VS REACH

Defining what “freedom of speech” permits has been an ongoing debate since the First Amendment was written into the Constitution. There are recognized exceptions, such as libel, slander, and incitement, but even these terms are open to interpretation and vary with one’s personal beliefs. The proliferation of social media has intensified the conversation about what should be permitted, as individuals are held less accountable when hiding behind a screen.

There is a distinct difference between “censorship” and “content moderation.” Censorship is the suppression of speech and ideas, which silences minority viewpoints and can be damaging to marginalized groups. Content moderation, on the other hand, allows private companies to establish community guidelines. This includes flagging harmful misinformation and prohibiting hate speech, both of which are crucial in protecting public safety.

In the US, claims of “censorship” are applied recklessly to any instance where individuals feel prevented from saying whatever they want. When Twitter banned Donald Trump, it was not infringing on his free speech; it was removing the reach he had used to incite violence in the January 6th insurrection. Similarly, if Spotify were to remove Joe Rogan from its platform, it would simply be removing his ability to reach its audience out of caution for the public good.

Since we cannot count on corporations to prevent the reach of misinformation, policy efforts may be necessary. California is in the process of introducing a bill targeting doctors and medical websites that spread inaccurate information about COVID-19. But in the U.S., it is notoriously difficult to pass legislation restricting any sort of speech, even speech that is dangerously untrue. Not only do such policies provoke outrage from citizens, but the government remains loyal to the idea that our nation is built on the public exchange of ideas. In United States v. Alvarez (2012), the Supreme Court struck down the Stolen Valor Act, which punished individuals who made false claims about their own military service, as a violation of free speech; the justices believed the issue should be resolved through free and open debate rather than government intervention. The government’s remedy for COVID-19 misinformation follows the same philosophy: the answer to untrue speech is more speech.

This tension between censorship and misinformation is felt around the world, and demand for content moderation legislation has reached new heights. In Germany, for instance, the Network Enforcement Act, passed in 2017, requires social media platforms to remove content that is “clearly illegal.” The law targeted a resurgence in far-right violence, specifically Holocaust denial and Nazi propaganda. In 2020, amid feelings that the law was too lenient, Germany passed an amendment giving the Federal Office of Justice supervisory powers to better identify illegal speech on social media; it also requires platforms not just to remove violent hate speech but to report it to the police. The Network Enforcement Act has been criticized as unconstitutional with regard to free speech. Yet Germany is still ranked among the freest countries, as restricting offensive and dangerous speech is not deemed an infringement on the right to expression. Most importantly, restricting threatening language on social media is necessary to prevent the reach of these dangerous sentiments.

Legislation similar to Germany’s Network Enforcement Act would not be well received in the United States. And as long as the government removes itself from speech-related issues, as it does now, the burden of minimizing “fake news” falls on the private sector. But while companies have an obligation to prevent the spread of disastrous information on their platforms, they should not necessarily be held accountable for every message from every creator; it is unrealistic to expect them to police each piece of content. Joe Rogan, after all, is ultimately a comedian putting out an entertainment podcast. Some responsibility must fall on listeners to identify misinformation and critically examine the content they encounter online, and on creators to be responsible in how they disseminate information. Preventing misinformation from destroying our society requires mediation from the government and accountability from platforms and their users. But most importantly, we must reframe the meaning of “freedom of speech” so that companies and creators cannot hide behind the First Amendment while prioritizing profit at the expense of public wellbeing.

Featured Image Source: iStock
