Meta AI Under Fire Over ‘Vile’ Child Exploitation Accusations as Users Report Mental Distress, Lost Livelihoods

Several Instagram users have accused tech giant Meta of wrongly banning their accounts for alleged violations of child sexual exploitation policies, describing the experience as emotionally devastating and deeply damaging to their mental health and livelihoods.

Users said they were abruptly locked out of their accounts after Meta’s artificial intelligence system flagged them for breaching its child protection rules, an allegation they strongly deny and say was never clarified.

“I’ve lost endless hours of sleep, felt isolated. It’s been horrible, not to mention having an accusation like that over my head,” said David, a user from Aberdeen, whose Instagram, Facebook, and Messenger accounts were disabled without clear explanation on June 4.

His appeal was denied, leaving him in limbo until his case was raised by journalists with Meta on July 3. Within hours, his account was reinstated with an email apology stating, “We’re sorry that we got this wrong.”

David’s case is not unique: more than 100 people have reportedly come forward claiming they were wrongly banned by Meta, and more than 27,000 have signed a petition accusing the platform’s moderation system of unjustly flagging users and offering little recourse.

For Faisal, a London-based student and digital artist, the ban cut off a growing stream of income from Instagram commissions. “They falsely accused me of a crime I never committed. It damaged my mental health,” he said. His account was also restored only after media intervention.

Another user, Salim, called the situation “outrageous,” highlighting the near-total silence from Meta during appeals. “AI is labeling ordinary people as criminal abusers. Appeals are largely ignored,” he said.

Meta declined to comment on individual cases or confirm whether its automated systems were responsible for the recent wave of bans. 

However, the company maintains that it uses a combination of AI and human review to detect harmful content and refers verified cases of child exploitation to the US-based National Center for Missing and Exploited Children (NCMEC).

While Meta claims there’s no widespread issue, internal recognition may be emerging. In South Korea, a National Assembly committee chair recently disclosed that Meta acknowledged the possibility of wrongful suspensions in her country.

Social media researcher Dr. Carolina Are of Northumbria University said Meta’s lack of transparency makes it difficult to determine what is going wrong. “They often don’t explain what triggered the deletion. It leaves people in the dark.”

Users say the emotional toll has been severe, ranging from loss of income and irreplaceable memories to fear of reputational damage, especially given the serious nature of the accusations.

Meta has not confirmed whether these wrongful bans could appear on background checks or what steps are being taken to improve its appeals process.

For now, affected users say the damage has already been done.

Erizia Rubyjeana

The post Meta AI Under Fire Over ‘Vile’ Child Exploitation Accusations as Users Report Mental Distress, Lost Livelihoods appeared first on Arise News.
