Facebook says posts with graphic violence rose in early 2018

In the first quarter of 2018, Facebook moderated 2.5m pieces of hate speech, 1.9m pieces of terrorist propaganda, 3.4m pieces of graphic violence and 21m pieces of content featuring adult nudity and sexual activity.

It also disabled 583m fake accounts over the same period, most of them "within minutes of registration", and said that it prevents "millions of fake accounts" from registering on a daily basis.

Some 837 million pieces of spam were removed in Q1 2018, all of which were found and flagged by Facebook's systems before anyone even reported them.

Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimated 16 to 19 late last year. To be clear, this does not mean that 0.22% of the content posted on Facebook contained graphic violence, only that views of graphic content accounted for an estimated 0.22% to 0.27% of total views.
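
As a back-of-the-envelope illustration of how this views-based prevalence metric behaves, here is a minimal sketch in Python; the function name and every figure in it are invented for illustration and do not come from Facebook's report:

```python
# Prevalence as Facebook defines it: views of violating content per 10,000
# total content views, i.e. a share of *views*, not a share of *posts*.
# All numbers below are made up purely for illustration.

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Estimated violating views per 10,000 content views."""
    return 10_000 * violating_views / total_views

# Suppose a single violating post among 1,000 posts draws 2,500 of
# 1,000,000 total views: only 0.1% of posts violate the rules, yet
# prevalence comes out at 25 per 10,000 views, inside the 22-to-27
# range Facebook reported for graphic violence.
print(prevalence_per_10k(violating_views=2_500, total_views=1_000_000))  # 25.0
```

A heavily shared violent post therefore moves this metric far more than a rarely seen one, which is why prevalence by views can shift even when the number of violating posts does not.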

Though Facebook touted how aggressively it removes such content, the average user may not notice much change.

Last week, Alex Schultz, the company's vice president of growth, and Guy Rosen, its vice president of product management, walked reporters through exactly how the company measures violations and how it intends to deal with them. Content that Facebook's systems did not catch first was acted on only after Facebook users flagged it for review.

Facebook has released its Community Standards Enforcement Report, which details the actions the firm has taken against content that is not allowed on its platform: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.

The 1.9m pieces of terrorist propaganda represented an increase of roughly three quarters from 1.1m during the previous quarter, which Facebook attributed to improvements in its ability to find such content using photo-detection technology. "We use technology, combined with people on our teams, to detect and act on as much violating content as possible before users see and report it", the company said.

And backing up the company's AI tools are thousands of human reviewers who manually pore over flagged content, trying to determine if it violates Facebook's community standards.

Graphic violence: During Q1, Facebook took action against 3.4 million pieces of content for graphic violence, up 183% from 1.2 million during Q4.
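
The quarter-over-quarter percentages in these figures follow directly from the raw counts; as a quick arithmetic check (the helper function below is ours, not anything from Facebook):

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from an old count to a new one."""
    return 100 * (new - old) / old

# Graphic violence: 1.2 million pieces actioned in Q4, 3.4 million in Q1.
print(round(pct_change(1.2e6, 3.4e6)))  # 183, matching the stated 183%

# Terrorist propaganda: 1.1 million in Q4, 1.9 million in Q1.
print(round(pct_change(1.1e6, 1.9e6)))  # 73, i.e. up by roughly three quarters
```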

While artificial intelligence is able to sort through nearly all spam, content glorifying al-Qaeda and ISIS, and most violent and sexually explicit content, it is not yet able to do the same for attacks on people based on personal attributes like race, ethnicity, religion, or sexual and gender identity, the company said in its first-ever Community Standards Enforcement Report. "Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, seven to nine views were of content that violated our adult nudity and pornography standards".

Facebook says AI has played an increasing role in flagging this content. Most recently, the scandal involving digital consultancy Cambridge Analytica, which allegedly improperly accessed the data of up to 87 million Facebook users, put the company's content moderation into the spotlight. More generally, machine-learning systems need large amounts of training data to recognise meaningful patterns of behaviour, and such data is often lacking in less widely used languages or for cases that are not often reported.
