Facebook Users Look For Answers As Company’s AI Goes Haywire After Moderators Were Sent Home


By Chris White

Facebook said Tuesday that a bug in the company’s anti-spam system, which was randomly and mistakenly flagging user content, is unrelated to any changes in the workforce due to coronavirus.

Twitter users tweeted images of a warning they received from Facebook suggesting their content violated company policies against spam. The content was flagged due to a bug rather than a lack of human oversight caused by social distancing, according to one Facebook security official.

“We’re on this – this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce. We’re in the process of fixing and bringing all these posts back. More soon,” Guy Rosen, Facebook’s vice president of safety and integrity, said in a tweet addressing the complaints.

Rosen was responding to a tweet Tuesday night from Facebook’s former head of security, Alex Stamos, who said from his vantage point the problem looks like “an anti-spam rule at FB is going haywire.”

Stamos added: “We might be seeing the start of the ML going nuts with less human oversight.” He also reminded people on Twitter that Facebook sent home their content moderators on Monday over concerns related to the coronavirus.

Facebook spokesman Andy Stone directed the Daily Caller News Foundation to Rosen’s tweet for further explanation.

Facebook decided that my posting of this Times of Israel article is spam. (It’s not spam.)

— Mike Godwin (@sfmnemonic) March 17, 2020

This is all over Facebook. @alexstamos do you know the explanation? So many people posting similar messages and distrust spreads very fast…

— Tamsin Shaw (@ProfessorShaw) March 17, 2020

All of @jdforward content has been taken down from @Facebook — from our institutional pages, from individual pages/organic shares — just as news breaking of #coronavirus in Hasidic neighborhoods in Brooklyn. People got messages saying we “violated community standards” 1/2

— Jodi Rudoren (@rudoren) March 17, 2020

Twitter and Google’s YouTube were among the big tech companies to announce Monday that their artificial intelligence tools will now be taking on more responsibility for content moderation due to social distancing.

“We’re working to improve our tech,” Twitter noted in a statement, adding that “this might result in some mistakes.” Big tech companies often blame artificial intelligence systems for mistakenly removing or restricting user content that does not violate their policies.

Twitter, for instance, suggested in April 2019 that its automated system was partially to blame for the suspension of a pro-life group.

“When an account violates the Twitter Rules, the system looks for linked accounts to mitigate things like ban evasion,” a company spokeswoman told the Daily Caller News Foundation in April 2019. “In this case, the account was mistakenly caught in our automated systems for ban evasion.”

The spokeswoman was referring to an account called “Unplanned,” which promoted a movie about a former abortion clinic director who became pro-life. The system is designed to suspend so-called sock-puppet accounts connected to a profile that violated company policies, according to the spokeswoman.

Facebook Users Look For Answers As Company’s AI Goes Haywire After Moderators Were Sent Home is original content from Conservative Daily News.
