Is Facebook Doing Enough?

By Carlos Gamino

Facebook has come under fire lately for a series of scandals involving personal information, but what is the company doing to keep its users safe from objectionable content on the site?

In a recent blog post, the social media giant said it has published the guidelines its review team uses to decide what’s allowed to stay on the site and what’s too objectionable to leave up. Those guidelines are applied each time someone objects to something they see on a timeline or in a news feed.

Facebook also released a report on how much content it removes involving graphic violence, nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts. For example, 27 of every 10,000 posts display graphic violence severe enough to be removed from the site. According to the report, Facebook acted on “bad content” before any users reported it 85.6 percent of the time; the remaining 14.4 percent of the time, its algorithms failed to detect the content and users had to report it.

None of this covers the fake news that experts believe could have affected the 2016 presidential election. To address that, Facebook has unveiled reporting tools that let users flag fake news, which the company sometimes removes or warns other users about.

Mark Zuckerberg, Facebook’s CEO, admitted that the company didn’t do enough to protect its users. Moving forward, he said, “We need to take full responsibility for the outcome of how people use [our] tools.” He also said that by the end of this year, Facebook will employ 20,000 “digital security employees” to help protect user data.

What Do You Think?

Does Facebook do enough to protect its users, or would you like to see more effort from the $100-billion company? I’d love to hear your thoughts, so please share them on my Twitter feed… or on Facebook.

Carlos Gamino