Once again, tech demos are giving us trust issues.

Demo Derby

As the saying goes: never trust a tech demo.

Back in February, ChatGPT creator OpenAI revealed Sora, its new text-to-video AI. Though Sora still isn't available for public use, the announcement was a success — the product sent Silicon Valley buzzing, and several purportedly Sora-generated clips even went fairly viral.

But at least one of those viral Sora clips, a roughly two-minute video titled "Air Head" — which was so impressive that we blogged about it at the time — had a bit more human intervention than OpenAI initially suggested.

As Patrick Cederberg, a creative director at Shy Kids, the production studio that actually created the clip, recently told FXGuide, "Air Head" required quite a bit of post-production FX magic to achieve the impressive final product — an important revelation, considering that OpenAI presented it without any disclaimer that extra editing was required.

Sunny

The short is pretty charming. The two-minute video follows a yellow balloon-headed man named Sunny, whose disembodied voice provides an inspiring narration about embracing our big ideas and what makes us different as he and his helium cranium traverse cities, office spaces, meadows, and more.

But as Cederberg told FXGuide, creating Sunny wasn't as easy as inputting a prompt and pressing a button. Some of Sora's interpretations of the Sunny character were — to put it bluntly — freakish, with the bot embedding a nightmare-fuel human visage into the balloon. Sora would also sometimes depict the balloon in the wrong color, which for some scenes meant that Shy Kids had to isolate and re-color the balloon in post using Adobe After Effects.

This notably occurred in a scene in which Sunny is seen chasing after his floating balloon head through a public park. The balloon was originally depicted as red, not yellow. That same scene also required that Shy Kids eliminate more unwanted artifacts, including a mannequin-esque head that Sora had given Sunny's human body.

Speed was also an issue. Sora tends to generate videos in a slow-motion style, Cederberg said — and getting its outputs to flow like a conventional movie or video clip was apparently no small undertaking.

"There was quite a bit of adjusting timing," Cederberg told FXGuide, "to keep it all from feeling like a big slowmo project."

Powerful Yet Flawed

Overall, Cederberg told FXGuide that "getting to play" with Sora was "interesting."

"It's a very, very powerful tool," he added, "that we're already dreaming up all the ways it can slot into our existing process."

But like other generative AI programs, Sora is still unpredictable, and clearly still has its flaws. (Cederberg, for his part, told FXGuide that "control" over Sora's outputs "is still the thing that is the most desirable and also the most elusive at this point.")

And though OpenAI's demos might've had you thinking otherwise, it seems that Sora still has a long way to go before it's generating studio-ready movie clips without some good old-fashioned human intervention.

More on Sora: In Cringe Video, OpenAI CTO Says She Doesn’t Know Where Sora’s Training Data Came From
