Master Your AI Video Presence with Sora

Summary
– Sora now lets users control how and where deepfake versions of themselves appear in the app, addressing user concerns.
– The update is part of broader weekend changes to stabilize Sora and manage chaos in its feed, which functions like a TikTok for deepfakes.
– Users can block their AI doubles from appearing in political content or specific contexts, stop them from saying certain words, and add preferences like wearing particular accessories.
– Despite safeguards, past AI tools have been bypassed for harmful content, and Sora’s watermark has already been skirted, raising reliability concerns.
– Since launch, Sora has contributed to AI-generated content proliferation, with loose controls leading to issues like mocking videos of OpenAI’s CEO.
Getting a handle on your digital twin just became more straightforward. Sora, OpenAI’s platform for creating short AI-generated videos, has rolled out new user controls designed to give people greater authority over their virtual likenesses. This move arrives as a wave of synthetic content begins to saturate online spaces, prompting developers to address growing public apprehension.
The latest features are part of a series of weekend updates focused on stabilizing the application and managing the unpredictable nature of its content feed. Often described as a TikTok for deepfakes, Sora enables users to produce ten-second clips of virtually anything, including AI-generated replicas of themselves or others, complete with voice. OpenAI refers to these digital personas as “cameos,” though some observers view them as a potential catalyst for widespread misinformation.
According to Bill Peebles, the leader of the Sora team at OpenAI, users can now impose restrictions on how their AI-generated doubles are utilized within the app. You could, for instance, block your digital self from featuring in politically charged videos, prohibit it from using specific language, or even stop it from appearing in scenes involving a particular item, like a disliked condiment.
Adding a layer of personalization, OpenAI staffer Thomas Dimson mentioned that users can also set preferences for their virtual counterparts. This could involve instructing your AI double to consistently wear a specific piece of clothing, such as a hat proclaiming you the “#1 Ketchup Fan” in every video it stars in.
While these new safeguards are a positive step, the track record of AI systems suggests determined individuals may find ways to bypass them. Previous instances involving chatbots have shown they can sometimes be manipulated into providing dangerous information. The platform’s existing safety measures have already been tested; its watermarking system, for example, has proven relatively easy to circumvent. Peebles acknowledged this vulnerability, stating the company is actively “working on” improving that particular feature.
Peebles also confirmed that Sora’s development will continue to “hillclimb on making restrictions even more robust,” with plans to introduce additional user control mechanisms in the future.
In the short time since its launch, the app has contributed to the internet’s growing collection of AI-generated content. The initial cameo controls, which offered little more than a yes-or-no choice over whether groups like mutual followers, approved contacts, or “everyone” could use your likeness, proved far too permissive. The platform’s most prominent involuntary participant, OpenAI CEO Sam Altman, perfectly illustrates the potential for misuse: his likeness has featured in a host of satirical videos depicting him in absurd scenarios, from committing petty theft to rapping and even barbecuing a fictional character.
(Source: The Verge)