Most of the company’s reviewers are low-paid contractors overseas who spend an average of just a few seconds on each post. A National Public Radio investigation last year found that they apply Facebook’s standards inconsistently, echoing previous research by other outlets.
Zeynep Tufekci, an associate professor at the University of North Carolina who studies online speech issues, said that Facebook designed Live to notify a user’s friends automatically about a live feed — something guaranteed to appeal to publicity seekers of all sorts.
“It was pretty clear to me that this would lead to on-camera suicides, murder, abuse, torture,” she said. “The F.B.I. did a pretty extensive study of school shooters: The infamy part is a pretty heavy motivator.”
Facebook has no intention of dialing back its promotion of video, including Live, telling investors on a conference call Wednesday that it would continue to rank video high in users’ news feeds and add more advertising within live videos and clips.
Advertising is Facebook’s lifeblood, accounting for most of the company’s revenue and profit. In the first quarter, the company earned $3.1 billion, up 76 percent from the previous year.
Debra Aho Williamson, an analyst with the research firm eMarketer, said that all the negative publicity about Facebook’s problems with horrific content and fake news appears to have hurt user satisfaction levels. Adding more content monitors is aimed at reassuring Facebook’s 1.94 billion users, she said.
“If people feel safe on Facebook, they will be more engaged and will use it more often,” Ms. Williamson said. “And if they use it more often, there will be more inventory for advertising.”
The company is trying to strike a balance between censorship and free speech. Facebook video has been used to share millions of personal stories and to document events of immense public interest, such as a series of police shootings of unarmed black men that sparked a national conversation about race and law enforcement.
Although there is little question that live-streamed murder does not belong on the service, the company has come under fire when it has stopped violent broadcasts like Korryn Gaines’s fatal standoff with police in Maryland last year.
“All policies need to recognize that distressing speech is sometimes the most important to a public conversation,” said Lee Rowland, a senior staff attorney at the American Civil Liberties Union who works on free speech issues.
She said that the decision to hire more moderators can only help the company make better judgments, especially about live events where fast decisions can be critical. “Humans tend to have more nuance and context than an algorithm,” Ms. Rowland said.
But Ms. Rowland said Facebook must also be clearer with the public about the rules it uses to make those calls.
Mr. Zuckerberg called the recent episodes of violence “heartbreaking” and said the company wanted to make it simpler and faster for reviewers to spot problems and call in law enforcement when needed.
In the conference call with investors, he said that artificial intelligence tools would eventually allow reviewers to do a better job of reviewing content. “No matter how many people we have on the team, we’ll never be able to look at everything,” he said.
Facebook is not the only internet company to wrestle with these problems. Google has struggled with similar issues involving its YouTube video service and an automated advertising system that sometimes places marketers’ ads next to questionable content.
Philipp Schindler, Google’s chief business officer, said in an interview this week that, like Facebook, his company believed the internet was so vast that machine learning had to work hand in hand with human reviewers to improve vetting.
“We don’t think the problem over time should involve humans, because of the scale of the problem,” he said. “But we are definitely using humans. We have invested pretty heavily in humans because they are training the machine learning.”