It’s been a challenging period for Facebook, in which it’s been confronted with various issues related to political misinformation, conspiracy theories, anti-vax content, hate speech, management of its moderation staff, questions around regional data restrictions, antitrust investigations, and more. Throughout this, and amid the ongoing restrictions of the COVID-19 pandemic, Facebook has continued to implement new measures and systems to combat its most pressing issues, as best it can.
But has it done enough? And as more of these issues are identified, a key question that many are now asking is: ‘Is Facebook good for society?’
The answer, increasingly, seems to be no, but in what may be a more telling insight, a recent internal survey at The Social Network found that the company’s own staff, who have generally been positive about the platform’s impact, are now beginning to raise more questions about its broader effects.
As reported by BuzzFeed News:
“[Facebook’s] semi-annual “Pulse Survey,” taken by more than 49,000 employees over two weeks in October, showed workers felt strained by office shutdowns and were continuing to lose faith that the company was improving the world. Only 51% of respondents said they believed that Facebook was having a positive impact on the world, down 23 percentage points from the company’s last survey in May and down 5.5 percentage points from the same period last year.”
That’s a significant drop, and while, as noted, employee sentiment has been declining for some time, the numbers show that even those within Facebook, with intimate knowledge not only of the decisions being made, but of why they’ve been implemented, are now questioning the company’s true motivations.
Hints at the same have been slowly leaking from the company over the past year – back in September, Facebook engineer Ashok Chandwaney quit the company over its failure to address concerns around racism, disinformation and incitements to violence on the platform.
“Facebook is choosing to be on the wrong side of history,” Chandwaney noted in an open letter announcing his decision.
Various other Facebook staffers have also quit, citing concerns over the company’s policy decisions, while former Facebook director of monetization Tim Kendall, in testimony before Congress, highlighted significant flaws in the company’s motivations and approach, noting that:
“Facebook and their cohorts worship at the altar of engagement and cast all other concerns aside, raising the voices of division, anger, hate and misinformation to drown out the voices of truth, justice, morality, and peace.”
Which has been one of the key concerns leveled at Facebook: that it has no real interest in cracking down on dangerous and divisive hate speech, because such content sparks debate and interaction, and that engagement is what keeps people coming back.
Even if they’re angry. Even if that anger spills over into real-world incidents.
And there are clear examples of this. Take QAnon, a dangerous movement of conspiracy theorists and peddlers of misinformation, in various forms, which has for years been linked to numerous cases of real-world violence. An internal investigation conducted by Facebook earlier this year, and leaked to NBC News, found that the platform had provided a home for thousands of QAnon groups and Pages, with millions of members and followers.
Experts had been warning Facebook about QAnon and the risks it posed since 2016, yet only last month did Facebook finally act, taking steps to eliminate QAnon content entirely from its networks.
Why did it take so long? The warnings have been there for years, but Facebook refused to act.
Anti-vaxxers, too, have been a noted concern on Facebook for a long time, with The Social Network finally beginning to take stronger action against anti-vax misinformation in March last year. Hate speech on the platform is another major issue, one which sparked the July Facebook ad boycott, led by civil rights leaders. Thus far, Facebook has refused to update its policies in response.
Given these various considerations, it’s difficult to conclude anything other than that Facebook refuses to take action because it profits from engagement.
Indeed, in his further comments to Congress, Kendall said that:
“There’s no incentive to stop [toxic content] and there’s incredible incentive to keep going and get better. I just don’t believe that’s going to change unless there are financial, civil, or criminal penalties associated with the harm that they create. Without enforcement, they’re just going to continue to be embarrassed by the mistakes, and they’ll talk about empty platitudes… but I don’t believe anything systemic will change… the incentives to keep the status quo are just too lucrative at the moment.”
Engagement, says Kendall, drives all decisions at Facebook, with no consideration for the potential harms caused. Indeed, Facebook CEO Mark Zuckerberg has repeatedly noted that he would prefer the company not to make moderation decisions, instead letting its users decide what’s acceptable, under the umbrella of free speech.
As per Zuckerberg’s speech at Georgetown University in October last year:
“I believe in giving people a voice because, at the end of the day, I believe in people. And as long as enough of us keep fighting for this, I believe that more people’s voices will eventually help us work through these issues together and write a new chapter in our history – where from all of our individual voices and perspectives, we can bring the world closer together.”
Which highlights the flaw in Zuckerberg’s approach: he errs on the side of optimism, as opposed to confronting the reality.
Is that because of Zuckerberg’s stated belief in free expression, or is it because Facebook, his company, benefits from the same?
For its part, Facebook has repeatedly trotted out the line ‘we do not benefit from hate’.
But is that true? And when you weigh these various elements and concerns on balance, should Facebook be doing more to address them?
Is Facebook good for society?
It’s a key question that will come under even more scrutiny in the wake of the US Election. But clearly, even those involved in the company’s decisions are starting to ponder its broader impacts.