Photo caption: A Twitter logo hangs outside the company's San Francisco offices. About 13 percent of Americans regularly get their news from Twitter.

Elon Musk’s takeover of Twitter led many to reflect on the role social media plays in their lives and, perhaps more importantly, to examine what standards social media companies should be held to.

According to the Pew Research Center, nearly seven in 10 Americans use social media, with about half of Americans getting their news from social media “often” or “sometimes.”

The consequences of lax or misguided stewardship of social media platforms are clear, as these platforms serve as an important intermediary between a large portion of users and the rest of the web, often functioning as a one-stop homepage of sorts. According to Pew Research, about a third of Americans regularly get their news from Facebook. The second-largest social media site for news is YouTube, from which 22 percent of Americans regularly get their news. About 13 percent of Americans regularly get their news from Twitter.

When these sites push fake news stories or other misinformation or disinformation, it can warp people’s sense of reality and prompt them to act on dangerous lies. The Jan. 6 insurrection at the U.S. Capitol was prompted by former President Donald Trump’s unsubstantiated claims that the 2020 presidential election was stolen. As a consequence, Trump was banned from many social media platforms, including Facebook — which will revisit his ban early next year “to assess whether the risk to public safety has receded” — and Twitter, which recently reinstated his account under the leadership of Musk in a misguided attempt to restore free speech on the platform.

Both platforms’ approaches to handling Trump highlight the issues with social media companies being left to regulate themselves, chief among them that self-regulation leads to inconsistent standards across platforms. By banning Trump and subjecting his account to continued review on the basis of its threat to public safety, Facebook has drawn an important line as to what will not be tolerated on its platform. In his public remarks, Musk seems not to want Twitter to censor speech unless it falls outside the protection of the law.

“By ‘free speech,’ I simply mean that which matches the law,” Musk wrote in an April Tweet. “I am against censorship that goes far beyond the law.”

He has also noted that speech that is “destructive to the world” should be subject to penalties. His reinstatement of Trump’s account makes it unclear what the standards for Twitter will actually be, especially because Twitter had previously banned the account permanently “due to the risk of further incitement of violence.” The manner in which Trump’s Twitter account was reinstated also highlights the drastically different standards that self-regulation of social media platforms allows. Facebook delegated the status of his account to a team of professionals via its oversight board, whereas Musk let some Twitter users decide whether Trump’s account should be reinstated in a public poll — a move that has been widely criticized as unprofessional and irresponsible.

It’s worth noting that for all of Musk’s talk of free speech, the First Amendment does not give users the right to post whatever they want on social media. They are platforms owned by private companies, and their governing documents usually come in the form of their terms of service that outline community standards, which social media companies have broad freedom to shape as they want. With that being said, there are key areas in which community standards from social media companies converge and diverge.

A study published last year by the Carnegie Endowment for International Peace conducted a deep analysis of the community standards of 13 social media companies, including Twitter and Facebook, recognizing them as the “de facto regulators of online speech” that have “substantial autonomy and responsibility to draw their own lines between acceptable and unacceptable content.” It notes that most platforms broadly ban harmful user content on which “society as a whole has achieved greater agreement” — spam, “terrorist-related” speech as well as “contraband, copyright violations, violent threats, and impersonating specific individuals” — but they differ in their approaches to certain issues like disinformation and misinformation.

“Norms and laws about other areas — for example, false information and hate speech — are much more globally diverse and contested,” it reads. “Platform community standards seem to reflect this reality. ... The number, nature, and scope of these special cases vary greatly from platform to platform.”

The study infers a correlation between societal consensus and platform consensus on “legitimate and illegitimate discourse,” suggesting that “societies may need to develop more settled boundaries between legitimate and illegitimate discourse before platforms converge on common ways of handling many influence operations.”

However, in a world where some still believe “alternative facts” and bogus conspiracy theories, as we’ve seen in recent election cycles and throughout the pandemic, finding societal consensus on what even constitutes a fact can often feel like an uphill battle. It is clear, though, that the flow of false information on social media must be addressed, as it can have grave real-world consequences if left unattended.

Precisely what the standards for social media platforms should be is an existential question we must grapple with seriously. The negative effects of misinformation, disinformation and digital echo chambers are well documented, but they can be mitigated with the proper guardrails.

Mitchell Chapman is The Eagle’s weekend news editor, as well as a columnist.