Meta Ordered To Remove AI-Generated Deepfakes

Meta faces pressure from the Oversight Board to overhaul policies as deepfake technology outpaces content moderation

Meta, the parent company of Facebook and Instagram, has been directed by its Oversight Board to remove AI-generated pornographic images of public figures. The decision followed two cases involving explicit images of women public figures, one from India and one from the United States, according to reports.

The Board found Meta's existing policies on such content vague and ineffective. In particular, the rule against 'derogatory sexualised photoshop' was deemed inadequate for addressing the growing problem of AI-generated deepfakes.

To strengthen enforcement, the Board recommended that Meta move this content under its Adult Sexual Exploitation policy and rename that policy 'Non-Consensual Sexual Content'. It also suggested replacing the terms 'derogatory' and 'photoshop' with more accurate and inclusive language that reflects the evolving methods of image manipulation.

The Oversight Board also raised concerns about Meta's handling of user reports, pointing to instances where complaints were closed without a thorough review. Given the severe harm that deepfake intimate images inflict on victims, the Board underscored the urgent need for effective action.

The ruling underscores the challenge social media platforms face in keeping pace with rapid advances in AI. As deepfake tools become more accessible, platforms will need to update their policies and enforcement strategies to protect users from such content. The Board's decision marks a step toward holding digital companies accountable for preventing the spread of non-consensual sexual imagery.
