Can OpenAI's Sora Intensify Misinformation Worries In Media?

Deepfakes have been wreaking havoc for quite some time, possibly more than a decade, by enabling fraud, scams and identity theft. It wasn’t until 2023 and 2024, however, that they really made headlines, with several high-profile incidents involving celebrities such as Bollywood actor Rashmika Mandanna, cricketing legend Sachin Tendulkar and global pop icon Taylor Swift. With AI-powered tools now readily available to almost anyone, deepfakes are causing more chaos than ever before.

Now, with the emergence of Sora, an AI model from OpenAI that can generate strikingly realistic minute-long videos from mere text prompts, the trajectory of technological innovation presents an even sharper double-edged sword. Sora is unmistakably part of the remarkable advances in artificial intelligence and its creative applications we have seen over the past two years, but it also raises profound concerns about the proliferation of deepfakes and the spread of disinformation, problems that were already rampant before the latest AI model arrived.

Is Sora A Threat Right Now?

The straightforward answer is: not yet. At present, OpenAI’s Sora model is undergoing rigorous testing by red teams focused on adversarial "misinformation, hateful content, and bias" testing. However, another crucial question comes to the fore: when will it be accessible to the public? The answer to that question will reveal Sora’s true impact. Given OpenAI’s track record, one can anticipate efforts to expedite the testing phase and aim for completion within months rather than a year-long timeline.

To its credit, OpenAI has also been forthcoming about the need to prioritise safety when building AI systems that may have huge repercussions for society. The company says it is developing tools to detect misleading content, including a detection classifier designed specifically to identify videos generated by the Sora model. Further, the company says it plans to incorporate C2PA metadata in the future if the model is deployed in an OpenAI product.
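OpenAI has not detailed how its C2PA integration will work. Purely as an illustration of the idea behind content credentials, the sketch below checks a hypothetical JSON manifest shipped alongside a video for C2PA-style fields identifying the generator. The sidecar file name, manifest layout and field names here are assumptions for illustration, not OpenAI’s or the C2PA specification’s actual format (real C2PA manifests are binary structures embedded in the asset and verified against certificate chains).

```python
import json
from pathlib import Path

# Fields a C2PA-style provenance manifest typically asserts: who or what
# created the asset and how. These names are illustrative, not the real spec.
REQUIRED_FIELDS = {"claim_generator", "created_with", "actions"}

def read_provenance(video_path: str) -> dict | None:
    """Load a hypothetical JSON sidecar manifest sitting next to the video."""
    sidecar = Path(video_path).with_suffix(".manifest.json")
    if not sidecar.exists():
        return None  # no provenance data at all
    return json.loads(sidecar.read_text())

def looks_ai_generated(manifest: dict) -> bool:
    """Flag the asset if its manifest declares an AI-generation step."""
    if not REQUIRED_FIELDS.issubset(manifest):
        return False  # malformed manifest: treat as unverifiable, not as AI
    return "ai_generated" in manifest.get("actions", [])

if __name__ == "__main__":
    manifest = read_provenance("clip.mp4")
    if manifest is None:
        print("No provenance metadata; origin unknown.")
    elif looks_ai_generated(manifest):
        print(f"Declared AI-generated by: {manifest['claim_generator']}")
    else:
        print("Manifest present; no AI-generation claim.")
```

The design point that C2PA makes cryptographically enforceable is that provenance travels with the file itself, rather than relying on viewers to spot visual artefacts.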

Drawing on safety methods already utilised in products like DALL-E 3, OpenAI is reportedly preparing similar safeguards for Sora’s deployment. For instance, text classifiers will screen and reject input prompts that violate usage policies, while image classifiers will review every generated video frame for compliance with those guidelines before it reaches the user.
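OpenAI has not published the internals of these classifiers. As a rough sketch of how such a two-stage filter could be wired together, the example below screens the text prompt with OpenAI’s public moderation endpoint and then runs a placeholder check over generated frames; the `render_video` and `frame_is_compliant` stubs are assumptions for illustration, not Sora’s actual pipeline.

```python
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY to be set

client = OpenAI()

def prompt_is_allowed(prompt: str) -> bool:
    """Stage 1: reject prompts that the moderation endpoint flags."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

def render_video(prompt: str) -> list[bytes]:
    """Hypothetical stand-in for the video model itself."""
    return [b"\x00" * 16]  # dummy frame bytes for illustration

def frame_is_compliant(frame_bytes: bytes) -> bool:
    """Stage 2 placeholder: a real system would run a trained image
    classifier over each decoded frame. This stub is purely illustrative."""
    return True  # assumption: stand-in for an image-classifier verdict

def generate_safely(prompt: str) -> list[bytes] | None:
    if not prompt_is_allowed(prompt):
        return None  # blocked before any compute is spent on generation
    frames = render_video(prompt)
    if not all(frame_is_compliant(f) for f in frames):
        return None  # generated, but withheld before the user sees it
    return frames
```

The two stages guard different failure modes: prompt screening catches stated intent, while frame review catches policy-violating output that an innocuous-looking prompt can still produce.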

However, even with these measures in place, OpenAI acknowledges the inherent unpredictability of how people will use its technology. As the company itself states, it cannot foresee all the positive applications, nor all the potential abuses. This admission, especially from a company at the cutting edge of AI, underscores the potential for alarming scenarios, particularly misinformation and disinformation campaigns built on image and video content that blurs the line between reality and fabrication.

According to the WEF report, India, with nearly 50 per cent internet penetration, ranks first in "National risk perceptions in the context of upcoming elections" for 2024.

Source: World Economic Forum’s The Global Risks Report 2024

Speaking on the issue of deepfakes, Ivana Bartoletti, Global Chief Privacy & AI Governance Officer at Wipro, says, “Considering that more than 60 countries are entering election mode this year, it is vital that we remain vigilant. Deepfakes have become accessible to everyone, posing a significant risk as these manipulations allow the creation and dissemination of realistic audio and video content featuring individuals saying and doing things they never actually said or did. The consequences extend beyond the digital realm, as online disinformation and coordination can spill over into real-world violence.”

Bartoletti stresses the importance of companies taking responsibility for combating deepfakes and disinformation to ensure public safety. She suggests measures such as investing in advanced detection technologies, collaborating with experts to develop debunking methods and promoting media literacy and critical thinking among the public.

Meanwhile, Nilesh Tribhuvann, founder and managing director of White & Brief, Advocates & Solicitors, feels that addressing misinformation from generative AI tools requires a multi-faceted approach, and he suggests a four-pronged solution.

Aaron Bugal, Field CTO – Asia Pacific and Japan at Sophos, opines that digitally signed videos can serve as a means to verify the trustworthiness of content. He explains, “Much like how certificates are used to validate website security and email communications, the same could be used for validating digital media. As technology evolves and deepfake production times shrink and quality vastly improves, a point may come where it’s impossible to distinguish a deepfake from real recorded content; therefore, validating content as true using a signing or verification process is needed.”
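Bugal’s signed-media idea maps directly onto standard public-key signatures. The sketch below, a minimal illustration using the Python `cryptography` library, signs the SHA-256 digest of a video file with an Ed25519 key and verifies it later; the file name and placeholder content are assumptions, and in practice the publisher’s public key would be distributed via a certificate authority, much like TLS certificates, rather than generated in the same script.

```python
import hashlib
from pathlib import Path
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

Path("broadcast.mp4").write_bytes(b"dummy video bytes")  # placeholder content

def file_digest(path: str) -> bytes:
    """SHA-256 of the video file; the signature covers this digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Publisher side: sign the content at the point of recording or release.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(file_digest("broadcast.mp4"))

# Viewer side: verify against the publisher's public key. Any edit to the
# file changes the digest, and verification then fails.
public_key = private_key.public_key()
try:
    public_key.verify(signature, file_digest("broadcast.mp4"))
    print("Content matches what the publisher signed.")
except InvalidSignature:
    print("Content was altered or was never signed by this publisher.")
```

Note that, as Bugal suggests, this verifies that content is unchanged since signing; it says nothing about whether the signed content was truthful in the first place, which is why provenance and media literacy efforts still matter.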

