How many fake accounts did Facebook quash so far this year?


Now it has released the numbers in a Community Standards Enforcement Report.

The report details Facebook's enforcement efforts from October to March and covers hate speech, fake accounts and spam, terrorist propaganda, graphic violence, adult nudity and sexual activity.

While this level of transparency from Facebook is welcome, the report also reveals the sheer extent of the misinformation, fake accounts, and abusive content the company is now dealing with.

Facebook took down 837 million pieces of spam in Q1 2018, almost 100% of which it found and flagged before anyone reported it.

To distinguish the many shades of offensive content, Facebook separates them into categories: graphic violence, adult nudity/sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

While the removal of 583 million fake Facebook accounts is perhaps the biggest takeaway from this report, the company pointed out that its flagging and removal metrics had improved compared with previous quarters, thanks in part to improvements in photo detection technology that can identify both old and newly posted content. It believes about 3-4 percent of active Facebook accounts on the site in Q1 were still fake.

Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimate of 16 to 19 late last year. And while Facebook's detection software has improved over time, it flagged only 38 percent of the hate speech spread across its platform before users reported it.


Facebook's detection technology "still doesn't work that well" in the hate speech arena and needs to be checked by the firm's review workers, Mr Rosen said.


Facebook on Tuesday unveiled for the first time a transparency report showing an increasing number of posts identified as containing graphic violence in the first quarter of 2018.

Facebook says it is often asked how it decides what is allowed on the platform, and how much bad content is out there. Hate speech remains the hardest category to police: "We tend to find and flag less of it, and rely more on user reports, than with some other violation types".

"In addition, in many areas - whether it's spam, porn or fake accounts - we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts".

"Today's report gives you a detailed description of our internal processes and data methodology". In other words, the social media company still relies on its users and human reviewers to catch hate speech, and it will take some time for its AI to learn to recognize sarcasm and reliably detect abusive speech.