A tablet screen displays the logo of the social networking site Facebook through a magnifying glass, in Bogota, on March 22, 2018. (LUIS ACOSTA / AFP)
SAN FRANCISCO - Facebook on Tuesday unveiled its first transparency report of this kind, which shows an increasing number of posts identified as containing graphic violence in the first quarter of 2018.
"Of every 10,000 content views, an estimated 22 to 27 contained graphic violence, compared to an estimated 16 to 19 last quarter," the report said.
The report said the growth was possibly the result of a higher volume of graphic violence content being shared on Facebook in the first three months of this year.
Facebook defines graphic violence as content that glorifies violence or celebrates the suffering or humiliation of others; such content, it says, may be covered with a warning and prevented from being shown to underage viewers.
The report said that in the first quarter Facebook removed, or placed a warning screen in front of, 3.4 million pieces of graphic violence content, nearly triple the 1.2 million a quarter earlier.
Facebook, the world's largest social media company, said it recently developed metrics to review the content shared on its platform, and the transparency report covers content posted from October 2017 through March 2018.
The content audited included graphic violence, hate speech, adult nudity and sexual activity, spam, terrorist propaganda (ISIS, al-Qaeda and affiliates), and fake accounts.
Facebook took action against 2.5 million pieces of hate speech content in the first quarter, up 56 percent from the previous quarter.
It also took action on 837 million pieces of content for spam, 21 million for adult nudity or sexual activity, and 1.9 million for promoting terrorism.
A total of 583 million fake accounts were disabled in Q1 2018, down from 694 million in Q4 2017, according to the report.
"We estimate that fake accounts represented approximately 3% to 4% of monthly active users on Facebook during Q1 2018 and Q4 2017," the report said.
Facebook CEO Mark Zuckerberg said in a post on Tuesday that the company is employing artificial intelligence tools to remove spam before users report it.
"Most of the fake accounts were removed within minutes of being registered," he said, adding that his top priorities this year are keeping people safe and developing new ways to improve governance.