Thursday, July 26, 2018

Facebook Faces The Expensive Truth Behind Hate And Deception


This morning’s (Thursday, July 26) headline in the Washington Post documents the horrible day Facebook stock had on Wednesday, tanking 24 percent in after-hours trading and wiping out billions (with a b) of dollars in shareholder paper wealth (Zuckerberg himself lost several billion, which, to me, is a lot, but maybe for him it’s not more than decimal dust). By Thursday evening, Facebook stock had taken the greatest single-day dive of any stock in Wall Street history: $100 billion, or 19 percent. Market analysts were nearly unanimous in pointing to Facebook’s ongoing privacy and security issues, as well as the exodus of Facebook users, as key factors in the company’s stock decline.

According to the Thursday morning Post story, reported by Elizabeth Dwoskin and Haley Tsukayama, Facebook’s value plummeted “…after sales growth did not meet expectations — and the social media giant said it would slow even further in the months ahead. The numbers suggest that the political and social backlash against Facebook, and its costly response to it, is starting to affect the business.”

After a brief review of Facebook’s quarterly profits ($5.1 billion), the story continued: “‘We run the company for the long term, not just for this quarter,’ said Mark Zuckerberg, Facebook’s founder and CEO, referring to the losses on a call with investors. The stock slide Wednesday afternoon wiped out as much as $150 billion of Facebook’s value, and Zuckerberg himself lost billions of dollars.”

Years from now, when Facebook is either gone or has transmogrified into the “next big thing,” and Zuckerberg has shuffled off this mortal coil unable to take his billions with him, business professors and historians will take the sentence “The numbers suggest that the political and social backlash against Facebook, and its costly response to it, is starting to affect the business” and use it to offer a partial, but testable, theory about the company’s demise.

Will those future teachers and historians say Facebook ultimately failed because its user base abandoned a platform where hate speech, personal attacks, and rampant trolling could not be controlled? Will they theorize that Facebook/Zuckerberg refused to set a barrier against hate speech onerous enough to drive off haters and trolls, so that the haters were the ones who stayed while everyday Facebook users skedaddled as part of the #DeleteFacebook campaign? Will the theory hold that no matter how many moderators Facebook hired in 2018 (many in response to the Europeans’ crackdown on Facebook), hate speech would still find its way onto the platform, like groundwater seeping into the tiniest cracks of a high-rise foundation, finding its way into the heart of the building and eventually rotting the structure from within?

Or is the whole hate speech issue so intractable that no practical, non-draconian measure will purge haters and hurters from Facebook? Is it possible that the history of the fall of Facebook will demonstrate that any attempt to regulate hateful language will fall victim to the petard disguised as free speech?

In the abstract, Facebook/Zuckerberg has no obligation or social contract to reduce or eliminate hateful speech on the platform. A corporate mission statement, European sanctions, or Congressional testimony notwithstanding, the platform is, in its initial state, self-selecting and binary; we can choose to use it or not. On or off. If we are offended by certain forms of speech that run counter to our beliefs, manners, sensibilities, or expectations, we have options ranging from direct confrontation or rebuttal to unfollowing, unfriending, reporting to FB, or walking away from FB and finding another means of communicating with the people with whom we choose to associate. We can write a blog about it, or a newspaper article, or a letter to Zuckerberg himself (see below). Or we can do nothing. On or off. Completely binary. That’s all a computer really does. No matter how complex the coding, how brilliant the concept, how intricate the algorithms, it’s all 0s and 1s…open the gate, close the gate…on or off. Use Facebook or don’t use Facebook. A billion people do; six billion people don’t.

Again, still in the abstract, Facebook/Zuckerberg cannot be a perfect arbiter of myriad shades of hate speech because hate speech is too often nuanced just enough to allow some of it to slide past the hate-defining fiats of justice. It can be heartbreakingly painful and unfair (as it is to Noah Pozner’s parents), but it’s not always “fire in the theater” under the law. I know what hate speech is; I can identify it, single it out, decry it, raise it up for the multitudes to see and shame it.

But in the real world, where human perceptions and misperceptions differ by nanometers of emotional separation, my hate speech is not necessarily your hate speech. Maybe it’s your “I don’t really care for that” speech, or your “I wasn’t raised like that” speech, or your “It’s their right to say it” speech. But it’s not your hate speech; at best, it’s your “inconvenient” speech.

You know it makes me uncomfortable; you know it has the capacity to hurt, even seriously harm, even incite physical rage or murderous acts. But, hey, it still has not risen to your threshold of hate. And because your bar is set where my bar is not, and because neither of our bars is clearly defined or codified (despite Facebook’s Community Standards, see below), all Facebook/Zuckerberg can do is acknowledge the existence of your bar and my bar as if they were on the same level, or on no level at all.

It’s as if you and I and everyone who uses Facebook were trapped in a social media version of Heisenberg’s uncertainty principle: as far as Facebook is concerned, our exact positions (our intentions and our perceptions) are unknowable, because when we are using Facebook, some of us see hate as a particle of something bad, and some of us see hate as a wave of something not bad. How we arrive at our definitions is not quantifiable. Facebook, in the abstract, just sees the words, not my understanding of them, not your understanding of them. Facebook cannot know intent. Facebook cannot perceive. And haters, deniers, and producers of fake news love those corporate disabilities.

Does a Facebook moderator have the capacity to intuit the dark motives of a hater or scheming denier who enters the platform under a false flag, only to throw off the disguise and scream obscenities, hurl verbal excrement across our screens, and steal away into the ether of ambiguity or anonymity?  The possibility of such ambiguity and false flags is built into Facebook’s Community Standards.

Here is Facebook’s own take on hate speech and voice:

Hate Speech: “We do not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.”

“We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation. We separate attacks into three tiers of severity, as described below.”

“Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others. Similarly, in some cases, words or terms that might otherwise violate our standards are used self-referentially or in an empowering way. When this is the case, we allow the content, but we expect people to clearly indicate their intent, which helps us better understand why they shared it. [Italics mine] Where the intention is unclear, we may remove the content.”

“We allow humor and social commentary related to these topics.”

Voice: “Our mission is all about embracing diverse views. We err on the side of allowing content, even when some find it objectionable, unless removing that content can prevent a specific harm. Moreover, at times we will allow content that might otherwise violate our standards if we feel that it is newsworthy, significant, or important to the public interest. We do this only after weighing the public interest value of the content against the risk of real-world harm.” [Italics mine]

I can’t be the only person who sees clearly the disconnect between the hard-line eschewing of hate speech and the too-fuzzy permission to “err on the side of allowing content” that some find “objectionable.” How are those standards and definitions working out for the parents of Sandy Hook shooting victim Noah Pozner?

In their July 25 open letter to Mark Zuckerberg, Leonard Pozner and Veronique De La Rosa, Noah’s parents, wrote, in part,

“Facebook plays a mammoth role in exposing the world’s masses to information. That level of power comes with the tremendous responsibility of ensuring that your platform is not used to harm others or contribute to the proliferation of hate. Yet it appears that under the guise of free speech, you are prepared to give license to people who make it their purpose to do just that.”

Pozner and De La Rosa continued: “In your recent interview with Kara Swisher of Recode, you were asked why Facebook would allow an organization to post a conspiracy theory claiming that the Sandy Hook massacre was staged. While you implied that Facebook would act more quickly to take down harassment directed at Sandy Hook victims than, say, the posts of Holocaust deniers, that is not our experience. In fact, you went on to suggest that this type of content would continue to be protected and that your idea for combating incendiary content was to provide counterpoints to push ‘fake news’ lower in search results. Of course, this provides no protection to us at all.” 

Noah’s parents are far from outliers in the hate speech issue. They, like millions of other Facebook users, are seeing what happens when a technology-dependent organization, founded on a novel idea and with an initially limited scope, exceeds any humanly moderated or humanly written algorithmic boundaries intended to regulate individual intentions and perceptions. Hence the Post article’s sobering observation: “The numbers suggest that the political and social backlash against Facebook, and its costly response to it, is starting to affect the business.”

Look again at the two snippets I italicized earlier from Facebook’s Community Standards:

“When this is the case, we allow the content, but we expect people to clearly indicate their intent, which helps us better understand why they shared it.  Where the intention is unclear, we may remove the content.”
“Moreover, at times we will allow content that might otherwise violate our standards if we feel that it is newsworthy, significant, or important to the public interest. We do this only after weighing the public interest value of the content against the risk of real-world harm.”

Phrases like “…we expect people to clearly indicate their intent” and “…if we feel that it is newsworthy, significant, or important to the public interest…” beg for parsing because they are freighted with human judgment (we expect, if we feel, only after weighing…) that simply cannot be applied across a platform judging the actions of more than one billion human beings, each completely unique in their own intentions and perceptions.

Absent the will to make a fundamental change (to take a huge risk), Mark Zuckerberg and the thousands (tens of thousands) of men and women Facebook employs and continues to hire as moderators are incapable of cleansing the social media site of haters, deniers, trolls, and willful proponents and purveyors of human pain and suffering. It’s not that the company couldn’t at least try to forcefully block Facebook abusers. But half-measures are not going to work. Pushing certain unlikeable groups into lower Facebook niches is not a solution; it’s just testimony pablum for a Congress populated by too many clueless, aging curmudgeons to whom social media is something their young staffers handle.

Zuckerberg has it in his power to make an unambiguous statement, and a follow-on plan, to limit, if not eliminate, access to Facebook by any individual or entity who plots and executes what Noah Pozner’s parents have already experienced. It’s not a free speech issue. It’s just not. It’s a moral, ethical, and humanitarian issue over which Zuckerberg and Facebook, not the U.S. Constitution, have complete authority, if only they had the will. And when Zuckerberg or others use the shields of free speech and soft definitions of individual license to avoid making a profit-jeopardizing, but humanely imperative, decision to crack down on the haters, abusers, and fake-newsers, that is an acknowledgment that neither he, nor his board, nor his shareholders are willing to evolve socially and sensibly and come to terms with a subject much larger than themselves.

Facebook either stands for something, or it stands for nothing, and when it comes to hate and cruelty and deception, there is no middle ground upon which to stand. This is not an abstract notion; this is the reality of our time, illuminated by our frustration, paid for by millions of innocent targets of hate. This social brutality via electrons has got to stop. In this defining moment for Facebook, hate in all its forms must be ushered forcefully through the door leading to ignominy and obscurity. I will be more than happy to lock the door behind it.

1 comment:

  1. As always, excellent piece. There are days I get so angry & fed up with the lies and junk I see on FB, I swear I'm going to shut it down. But mostly I am grateful to be in touch with friends from all walks of my life, friends that I would NEVER be able to keep up with daily in "non-Facebook" ways. It allows me to connect with people I truly care about & miss from work or childhood. It's also a way to rejoice with others about good news, and sadly, to comfort and grieve when the news is not good. Truthfully, I would miss it on many levels, but sometimes it scares me...
