

That gust of wind you felt coming from Silicon Valley on Wednesday morning was the social media industry’s tentative sigh of relief.

For the past four years, executives at Facebook, Twitter, YouTube and other social media companies have been obsessed with a single, overarching goal: to avoid being blamed for wrecking the 2020 U.S. election as they were in 2016, when Russian trolls and disinformation peddlers ran roughshod over their defenses.

So they wrote new rules. They built new products and hired new people. They conducted elaborate tabletop drills to plan for every possible election outcome. And on Election Day, they charged huge, round-the-clock teams with batting down hoaxes and false claims.

So far, it appears those efforts have averted the worst. Despite the frantic (and utterly predictable) attempts from President Donald Trump and his allies to undermine the legitimacy of the vote in the states where he is losing, there have been no major foreign interference campaigns unearthed this week, and Election Day itself was relatively quiet. Fake accounts and potentially dangerous groups have been taken down quickly, and Facebook and Twitter have been unusually proactive about slapping labels and warnings in front of premature claims of victory. (YouTube was a different story, as evidenced by the company’s slow, tepid response to a video that falsely claimed that Trump had won the election.)

The week is young, of course, and there’s still plenty of time for problems. Election-related disinformation is already trending up — some of it targeted at Latinos — and will only increase as votes are challenged in the courts and conspiracy theorists capitalize on all the uncertainty to undermine confidence in the eventual results.

But the platforms’ worst fears haven’t yet materialized. That’s a good thing and a credit to the employees of those companies who have been busy enforcing their rules.

At the same time, it’s worth examining how Twitter, Facebook and YouTube are averting election-related trouble, because it sheds light on the very real problems they still face.

For months, nearly every step these companies have taken to safeguard the election has involved slowing down, shutting off or otherwise hampering core parts of their products — in effect, defending democracy by making their apps worse.

They added friction to processes, like political ad buying, that had previously been smooth and seamless. They brought in human experts to root out extremist groups and manually intervened to slow the spread of sketchy stories. They overrode their own algorithms to insert information from trusted experts into users’ feeds. And as results came in, they relied on the calls made by news organizations like The Associated Press rather than trusting that their systems would naturally bring the truth to the surface.

Nowhere was this shift more apparent than at Facebook, which for years envisioned itself as a kind of post-human communication platform. Mark Zuckerberg, the company’s chief executive, often spoke about his philosophy of “frictionless” design — making things as easy as possible for users. Other executives I talked to seemed to believe that ultimately, Facebook would become a kind of self-policing machine, with artificial intelligence doing most of the dirty work and humans intervening as little as possible.

But in the lead-up to the 2020 election, Facebook went in the opposite direction. It put in place a new, cumbersome approval process for political advertisers and blocked new political ads in the period after Election Day. It throttled false claims and put in place a “virality circuit breaker” to give fact-checkers time to evaluate suspicious stories. And it temporarily shut off its recommendation algorithm for certain types of private groups to lessen the possibility of violent unrest.

All of these changes did, in fact, make Facebook safer. But they also involved dialing back the very features that have powered the platform’s growth for years. It’s a telling act of self-awareness, as if Ferrari had realized that it could only stop its cars from crashing by replacing the engines with go-kart motors.

“If you look at Facebook’s election response, it was essentially to point a lot of traffic and attention to these hubs that were curated by people,” said Eli Pariser, a longtime media executive and activist who is working on Civic Signals, a new project that is trying to reimagine social media as a public space. “That’s an indication that ultimately, when you have information that’s really important, there’s no substitute for human judgment.”

Twitter, another platform that for years tried to make communication as frictionless as possible, spent much of the past four years trying to pump the brakes. It brought in more moderators, revamped its rules and put more human oversight on features like Trending Topics. In the months leading up to the election, it banned political ads and disabled sharing features on tweets containing misleading information about election results, including some from the president’s account.

YouTube didn’t act nearly as aggressively this week, but it has also changed its platform in revealing ways. Last year, it tweaked its vaunted recommendation algorithm to slow the spread of so-called borderline content. And it started promoting “authoritative sources” during breaking news events to prevent cranks and conspiracy theorists from filling up the search results.

All of this raises the critical question of what, exactly, will happen once the election is over and the spotlight has swiveled away from Silicon Valley. Will the warning labels and circuit breakers be retired? Will the troublesome algorithms get turned back on? Do we just revert to social media as normal?


Camille François, chief innovation officer of Graphika, a firm that investigates disinformation on social media, said it was too early to say whether these companies’ precautions had worked as intended. But she conceded that this level of hypervigilance might not last.

“There were a lot of emergency processes put in place at the platforms,” she said. “The sustainability and the scalability of those processes is a fair question to ask.”

Pariser said that the platforms’ work to prevent election interference this year raised bigger questions about how they will respond to other threats.


“These platforms are used for really important conversations every day,” Pariser said. “If you do this for U.S. elections, why not other countries’ elections? Why not climate change? Why not acts of violence?”

These are the right questions to ask. The social media companies may have gotten through election night without a disaster. But as with the election itself, the real fights are still ahead.

This article originally appeared in The New York Times.
