Sora, OpenAI’s new text-to-video app, is meant to be a social AI playground where users can create imaginative AI videos of themselves, their friends, and celebrities, inspired by the ideas of others.
When the app launched last week, its social controls, which let users decide whether their likeness can appear in other people's videos, appeared to address the most pressing consent issue surrounding AI-generated video.
But with Sora sitting at the top of the iOS App Store with over 1 million downloads, experts are concerned that the internet could be flooded with historical misinformation and deepfakes of deceased historical figures, who cannot consent to or opt out of appearing in Sora's videos.
The app can generate short videos of deceased celebrities in scenarios they never experienced, in less than a minute. Examples include Aretha Franklin making soy candles, Carrie Fisher trying to balance on a slackline, Nat King Cole ice skating in Havana, and Marilyn Monroe teaching Vietnamese to schoolchildren.
This is a nightmare for people like attorney Adam Streisand, who at one time represented several celebrity estates, including Monroe’s estate.
“The challenge to AI is not the law,” Streisand said in an email, noting that California courts have long protected celebrities “from the reproduction of images and sounds by AI.”
“The question is whether a non-AI judicial process that relies on humans can play an almost five-dimensional game of whack-a-mole.”
Sora’s videos range from the absurd to the entertaining to the confusing. Beyond celebrities, many of Sora’s videos display convincing deepfakes of manipulated historical moments.
For example, NBC News was able to produce realistic videos of President Dwight Eisenhower confessing to accepting millions of dollars in bribes, British Prime Minister Margaret Thatcher claiming that the “so-called D-Day landings” were exaggerated, and President John F. Kennedy announcing that the moon landings were “not a triumph of science, but a hoax.”
The ability to generate such deepfakes of the dead, who cannot consent, has already drawn complaints from their families.
In an Instagram Story posted Monday about Sora's videos featuring Robin Williams, who died in 2014, Williams' daughter Zelda wrote, "If you have any common sense, please don't do this to him, to me, to anyone. It's ridiculous, it's a waste of time and energy, and believe me, that's not what he would want."
Bernice King, Martin Luther King Jr.'s daughter, wrote on X, "I agree concerning my father. Please stop." Dr. King's famous "I Have a Dream" speech has been continually manipulated and remixed on the app.
George Carlin's daughter said in a post on Bluesky that his family is "doing our best to combat" deepfakes of the late comedian.
A Sora-generated video depicting “horrific violence” involving famous physicist Stephen Hawking also skyrocketed in popularity this week, with many examples circulating on X.
An OpenAI spokesperson told NBC News, “While we strongly believe in free speech when it comes to portraying historical figures, we ultimately believe that public figures and their families should be able to control how their likenesses are used. In the case of recently deceased public figures, authorized representatives and estate owners can request that their likeness not be used in Sora’s cameo.”
OpenAI CEO Sam Altman said in a blog post last Friday that the company will soon "give rights holders even more control over character generation," referring to a wider range of content. "We're very excited about this new kind of 'interactive fan fiction,' and we're hearing from many rights holders who think this new kind of engagement will create a lot of value for them, but who want the ability to dictate how their characters can be used, including not being used at all."
As OpenAI's policies for Sora rapidly evolve, some commentators argue that the company's release-first, restrict-later approach is deliberate, demonstrating the app's power and reach to both users and intellectual property owners.
Liam Mayes, a lecturer in Rice University's media studies program, believes that increasingly realistic deepfakes could have two major social effects. First, "we could see people being defrauded by impersonations of those they trust, large and powerful corporations using coercion, and nefarious actors undermining the democratic process," Mayes said.
At the same time, the inability to distinguish deepfakes from real videos may erode trust in authentic media. "We may see a loss of trust in all kinds of media organizations and institutions," Mayes said.
As founder and chairman of CMG Worldwide, Mark Rosler has managed the intellectual property and licensing rights of more than 3,000 deceased entertainment, sports, historical and music luminaries, including James Dean, Neil Armstrong and Albert Einstein. Rosler said Sora is just the latest technology to raise concerns about preserving the legacies of such figures.
“Abuse exists and will continue to occur, as it has always done with celebrities and their valuable intellectual property,” he wrote in an email. “When we started representing deceased individuals in 1981, the Internet did not yet exist.”
“New technologies and innovations help keep alive the legacy of the many historic and iconic figures who shaped and influenced our history,” Rosler added, noting that CMG will continue to represent the interests of its clients in AI applications like Sora.
To distinguish between real videos and Sora-generated videos, OpenAI has implemented several tools that allow users and digital platforms to identify Sora-generated content.
Each video contains several layers of provenance information: invisible watermark signals, visible watermarks, and behind-the-scenes technical metadata that identifies the content as AI-generated.
But some of these layers are easily removable, said Sid Srinivasan, a computer scientist at Harvard University. “Visible watermarks and metadata deter casual abuse with some friction, but are easy enough to remove and won’t stop more determined attackers.”
Srinivasan said invisible watermarks and associated detection tools are likely the most reliable approach. “Eventually, video hosting platforms will likely require access to such detection tools, and there is no clear timeline for broader access to such internal tools.”
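To illustrate how the metadata layer works in practice: provenance manifests such as C2PA records are typically embedded inside a video container's box structure, so a platform can scan a file for candidate metadata boxes before running heavier detection. The sketch below is a minimal, hypothetical example (not OpenAI's or any platform's actual tooling) that walks the top-level boxes of an MP4-style (ISO BMFF) file and flags `uuid` boxes, which commonly carry such manifests:

```python
import io
import struct

def top_level_boxes(stream):
    """Yield (box_type, size) for each top-level box in an ISO BMFF (MP4) stream.

    Each box starts with a 4-byte big-endian size (including the 8-byte
    header) followed by a 4-byte ASCII type code.
    """
    while True:
        header = stream.read(8)
        if len(header) < 8:
            break
        size, box_type = struct.unpack(">I4s", header)
        if size < 8:
            # Extended-size or malformed box; a real parser would handle
            # 64-bit sizes, but we stop here in this sketch.
            break
        yield box_type.decode("ascii", errors="replace"), size
        stream.seek(size - 8, io.SEEK_CUR)  # skip the box payload

def provenance_candidates(data: bytes):
    """Return the types of boxes that commonly hold embedded provenance
    metadata (C2PA manifests usually live in a top-level 'uuid' box)."""
    return [t for t, _ in top_level_boxes(io.BytesIO(data)) if t == "uuid"]
```

This only locates where a manifest might live; verifying one requires parsing and cryptographically validating its contents, which is exactly the kind of internal tooling Srinivasan notes is not yet broadly available.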
Wenting Zheng, assistant professor of computer science at Carnegie Mellon University, echoed this view, saying, “To automatically detect AI-generated material on social media posts, it would be beneficial for OpenAI to share tools that track images, audio, and video with platforms to help people identify AI-generated content.”
When asked to elaborate on whether OpenAI was sharing these detection tools with other platforms such as Meta and X, an OpenAI spokesperson referred NBC News to a general technical report. The report does not contain such detailed information.
Ben Colman, CEO and co-founder of deepfake detection startup Reality Defender, said some companies are relying on AI to detect AI output to better identify real footage.
"Humans are flawed and fallible; we miss what we can't see and what we can't hear, even if we're trained on this, like some of our competitors," Colman said.
At Reality Defender, "AI is being used to detect AI," Colman told NBC News. "AI-generated videos may look realistic to you and me, but AI can see and hear things that we can't."
Similarly, McAfee’s Scam Detector software “listens to the audio of the video, detects an AI fingerprint, and analyzes it to determine whether the content is real or generated by AI,” said Steve Grobman, McAfee’s chief technology officer.
But Grobman added: “New tools are making fake videos and audio seem more authentic all the time, and one in five people say they or someone they know has already been a victim of a deepfake scam.”
The quality of deepfakes also varies by language, as current AI tools for commonly used languages such as English, Spanish, and Chinese are much more powerful than those for less commonly used languages.
“As new AI tools emerge, we are regularly evolving our technology and expanding beyond English to cover more languages and contexts,” Grobman said.
Concerns about deepfakes have made headlines before. Less than a year ago, many observers predicted that deepfakes would run rampant in the 2024 election. Those predictions largely did not come to pass.
But until this year, AI-generated images, audio, and video were relatively easy to distinguish from real content. Many commentators say the models released in 2025 come particularly close to the real thing, threatening the public's ability to discern authentic, human-generated information from AI-generated content.
Google’s Veo 3 video generation model, released in May, was described at the time as “frighteningly accurate” and “dangerously lifelike,” leading one critic to wonder, “Are we doomed?”
