Facebook faced its own rebellion amid the Capitol riot

On Jan. 6, as Trump supporters stormed the U.S. Capitol, battling police and forcing lawmakers to flee, an insurrection of a different kind was taking shape inside the world's largest social media company.


Thousands of miles away in California, Facebook engineers were racing to tighten internal controls that would limit misinformation and inciteful material. The emergency actions, some of which had been rolled back after the 2020 election, included banning Trump, freezing hate-speech-related comments, filtering out the "Stop the Steal!" rallying cry and empowering content moderators to act more assertively by declaring the U.S. a "Temporary, High Risk Location" for political violence.

At the same time, frustration erupted inside Facebook over what some employees saw as the company's incoherent and halting response to rising extremism in the U.S.

"Haven't we had enough time to figure out how to manage discourse without enabling violence?" one employee posted on an internal message board during the Jan. 6 chaos. "We have been fueling this fire for years and we shouldn't be surprised that it's now out of control."

That question still hangs over the company today, as Congress and regulators investigate Facebook's role in the Jan. 6 riots.

New internal documents provided by former Facebook employee-turned-whistleblower Frances Haugen provide a rare glimpse into how the company appears to have simply stumbled into the Jan. 6 riot. It quickly became clear that despite years of scrutiny for failing to adequately police its platform, the social network had missed how riot participants spent weeks vowing, on Facebook itself, to stop Congress from certifying Joe Biden's victory.

The documents also appear to support Haugen's claim that Facebook put its growth and profits ahead of public safety. They open the clearest window yet into how Facebook's conflicting impulses to safeguard its business and protect democracy clashed in the days and weeks leading up to the attempted coup of Jan. 6.

This story is based in part on disclosures Haugen made to the Securities and Exchange Commission that were provided to Congress in redacted form by Haugen's legal counsel. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.

Facebook's "Break The Glass" emergency measures, which were put into place Jan. 6, were basically a set of options that could be used to stop the spread of violent or dangerous content. These emergency measures were first implemented by the social network in the lead-up to the bitter 2020 elections. According to an internal spreadsheet that analyzed the company's response, 22 of these measures were removed at some time after the election.

"As soon as they won the election, they turned them off or changed the settings back to the way they were before," Haugen said in an interview with "60 Minutes."

An internal Facebook report following Jan. 6, previously reported by BuzzFeed, faulted the company's "piecemeal" approach to the rapid growth of "Stop the Steal" pages, related misinformation sources, and violent and inciteful comments.

Facebook says the situation is more nuanced and that it calibrates its controls to respond quickly to spikes in hateful or violent content, as it did on Jan. 6. The company said it is not responsible for the actions of the rioters and that having stricter controls in place earlier would not have helped.

Spokeswoman Dani Lever said Facebook's decisions to phase safety measures in or out took into account signals from its platform as well as information from law enforcement. "When these signals changed, so did our measures."

Lever stated that some of the measures were in place well into February, while others are still active today.

Some employees were unhappy with Facebook's handling of problematic content even before the Jan. 6 riots. One employee who left the company in 2020 wrote a lengthy note charging that Facebook was holding back promising tools, backed by strong research, out of "fears about public and policy stakeholder reactions" (translation: worries about negative reactions from Trump supporters and investors).

"Similarly (but even more concerning), have I seen already built and functioning safeguards being stripped back for the same reasons," said the employee, whose identity is not revealed.

Extensive research Facebook conducted before the 2020 campaign found that its algorithm could pose a serious danger of spreading misinformation and radicalizing users.

A 2019 study entitled "Carol's Journey to QAnon: A Test User Study of Misinfo & Polarization Risks Encountered Through Recommendation Systems" described the results of an experiment conducted with a test account set up to reflect the views of a 41-year-old North Carolina woman who was a strong conservative but not an extremist. The test account, created under the fake name Carol Smith, indicated a preference for mainstream news sources such as Fox News, followed humor groups that mocked liberals, and was a fan of Melania Trump.

The study revealed that within a day, Facebook's page recommendations had become "quite troubling and polarizing." The algorithm started recommending extremist content by day 2. This included a QAnon-linked page that the fake user did not join, as she was not naturally drawn to conspiracy theories.

A week later, the test subject's feed contained "a barrage" of extreme, conspiratorial and graphic content, including posts resurrecting the false Obama birther lie and linking the Clintons to the murder of an Arkansas state senator. Much of the content was promoted by dubious groups run from overseas or by administrators with a history of violating Facebook's rules on bot activity.

These results led the researcher, whose name was withheld by the whistleblower, to recommend safety precautions, including removing content with known conspiracy references, disabling "top contributor" badges for misinformation commenters, and lowering the number of followers required before Facebook verifies a page administrator's identity.

The response from other Facebook employees who had read the research was almost unanimously positive.

"Hey! "Hey! "Do you know of any concrete results from this?"

Facebook said the study was one of many examples of its commitment to continually studying and improving its platform.

A second study, "Understanding Harmful Topic Communities," was turned over to congressional investigators. It discussed how like-minded people who embrace a borderline topic or identity can form "echo chambers" for misinformation that normalizes harmful attitudes, spurs radicalization and can even justify violence.

Examples of such dangerous communities include QAnon and hate groups promoting theories of a race war.

The study concluded that the risk of offline violence or harm becomes more likely when like-minded individuals come together and support one another to act.

Charging documents filed by federal prosecutors against those alleged to have stormed the Capitol offer examples of such like-minded people coming together.

Prosecutors say a leader of the Oath Keepers militia used Facebook to discuss forming an "alliance" with other extremist groups, such as the Proud Boys, and to coordinate plans ahead of the riot at the Capitol.

"We have decided that we will work together and shut this s---t down," Kelly Meggs, described by authorities as the leader of the Florida chapter of the Oath Keepers, wrote on Facebook, according to court records.