The Future of Video Manipulation and AI-Enhanced Disinformation

Amid all the buzz around OpenAI’s unveiling of its AI video generator, Sora, a feature mentioned only in passing may hold the most significant implications for the future of AI video and disinformation. This feature allows Sora to “take an existing video and extend it or fill in missing frames”, and it could present unprecedented challenges for EU video content regulators in the broader fight against disinformation.

The Power of Sora’s Video Extension Capabilities

Sora’s ability to seamlessly extend or modify existing videos opens up near-limitless possibilities for creating deceptive content. Imagine a video of a political speech that starts with genuine footage but transitions into an AI-generated ending in which the politician appears to say something entirely fabricated. Or picture real footage of a terrorist attack altered to implicate an AI-generated person of a different ethnicity as the perpetrator. These scenarios illustrate the potential for what might be called “time-shifted AI videos”: blends of real and fabricated footage, combined so seamlessly that they could be nearly impossible to distinguish from wholly authentic videos.

OpenAI’s Sora Technical Report

This feature’s brief mention in OpenAI’s Sora technical report sheds some light on the extensive capabilities of the technology:

“All of the results [released so far] show text-to-video samples. But Sora can also be prompted with other inputs, such as pre-existing images or video. This capability enables Sora to perform a wide range of image and video editing tasks – creating perfectly looping video, animating static images, extending videos forwards or backwards in time, etc.”

That last bit is the kicker: “extending videos forwards or backwards in time”. It hints at Sora’s potential to rewrite video narratives by manipulating footage. This capability could enable bad actors to create highly convincing disinformation by altering the context or content of videos in ways that are difficult to detect.

The development of Sora’s advanced video editing features could represent a leap in the evolution of disinformation. Traditional fake news and deepfakes can often be identified by inconsistencies or factual inaccuracies, but they could be superseded by these highly realistic, AI-altered videos. The ability to extend or modify real videos could be weaponised to create content that is not only misleading but also extremely difficult to detect.

An AI-generated image created on OpenAI’s Sora platform, depicting a woman walking down an Asian city street at night.
Source: OpenAI, https://openai.com/index/sora/

Challenges and the Path Forward for EU Video Content Regulators

With OpenAI expanding its operations in Dublin and actively recruiting engineers to build a large-scale video infrastructure platform, the regulatory landscape in Ireland and across the EU will have to adapt to keep pace. Video content regulators will need to develop new strategies and tools to identify and mitigate the impact of time-shifted AI videos. This task will be particularly daunting given the seamless nature of these manipulated videos.

Addressing the challenges posed by Sora’s capabilities will require a multifaceted approach. The development of technological solutions and tools capable of identifying subtle manipulations in video content will be crucial. Collaboration between tech companies, regulators, and independent researchers will also be key to staying ahead of emerging disinformation techniques. And it will be vital to raise awareness of the potential risks of AI-generated disinformation and to educate the public on how to critically evaluate video content.
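To give a concrete, if deliberately simplified, sense of what such detection tools might look like, the sketch below compares a suspect clip against verified original footage frame by frame using perceptual hashes, and flags the point where the two begin to diverge. This is only an illustrative sketch, not a production detector: it assumes the verified original footage is available for comparison, and the file names and distance threshold are hypothetical placeholders.

```python
# Illustrative sketch only: locate where a suspect clip diverges from verified
# original footage by comparing per-frame perceptual hashes.
# Assumes the opencv-python, Pillow and imagehash packages are installed;
# the file names and threshold below are hypothetical placeholders.
import cv2
import imagehash
from PIL import Image


def frame_hashes(path, max_frames=500):
    """Return one perceptual hash per frame for up to max_frames frames."""
    cap = cv2.VideoCapture(path)
    hashes = []
    while len(hashes) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        hashes.append(imagehash.phash(Image.fromarray(rgb)))
    cap.release()
    return hashes


def first_divergence(original_path, suspect_path, threshold=10):
    """Return the index of the first frame pair whose hash distance exceeds the threshold."""
    original = frame_hashes(original_path)
    suspect = frame_hashes(suspect_path)
    for i, (a, b) in enumerate(zip(original, suspect)):
        if a - b > threshold:  # subtracting two hashes gives their Hamming distance
            return i
    return None


if __name__ == "__main__":
    idx = first_divergence("original_speech.mp4", "suspect_speech.mp4")
    if idx is None:
        print("No divergence found in the frames compared.")
    else:
        print(f"Footage begins to diverge around frame {idx}.")
```

Real-world detection would have to cope with re-encoding, cropping and, above all, cases where no verified original exists, which is precisely why provenance standards and cross-industry collaboration matter here.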

OpenAI’s Sora has the potential to revolutionise video generation and editing, offering remarkable creative possibilities. However, it also introduces significant risks in terms of disinformation. We will need new tools, comprehensive strategies, and strong regulations to ensure that these exciting AI video technologies are used responsibly.
