
A still from the AI remake of The Wizard of Oz
When Dorothy Gale was picked up by a twister from her family’s Kansas farm 86 years ago, she was taken to a magical place called Oz. It turns out, Dorothy’s journey isn’t complete, and soon she will be landing in another magical place: The Sphere in Las Vegas.
At the start of its Google Cloud Next conference yesterday, Google Cloud pulled back the green curtain on one of its latest endeavors: using generative AI to help remake The Wizard of Oz in super-high definition to fit Sphere's massive, curved screen. The movie is expected to debut in the landmark Vegas theater on August 28.
Google Cloud gave a group of Next attendees a sneak peek at snippets from the movie, dubbed The Wizard of Oz at Sphere, as well as the AI that makes it work. The cinematic effort was massive and unique: the film was remade entirely with AI, yet it relies solely on the source material. No new scenes or lines are being added to the remake of Victor Fleming's original 1939 movie.
The remake is a joint effort of Sphere Studios, Magnopus, Warner Bros. Discovery, and Google. From the moment the remake was conceived, there were questions about how, exactly, to pull it off, and the companies have spent years working out the answer.
"We talked about doing it in different ways," says Jane Rosenthal, the Academy and Emmy Award-nominated producer of the remake. "We realized that we really needed to do it with AI."
One of the biggest challenges was figuring out how to fit the original film, shot in the 4:3 format, onto Sphere's massive 160,000-square-foot curved screen, the highest-definition screen in the world.
While Fleming's original was among the first films shot with a Technicolor 35mm motion-picture camera, and one of the earliest features made in color, it lacks the visual resolution demanded by today's high-definition screens.
The solution: use Google generative AI models, including Gemini, Veo, and Imagen, to upscale the original's celluloid-based images to the ultra-high 16K resolution demanded by the Sphere's 268 million pixels.
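The details of that pipeline are not public, but a 16K frame is far too large for a single model pass, so super-resolution at this scale is typically tiled: the frame is split into patches, each patch is upscaled, and the results are reassembled. Here is a minimal Python sketch of that idea, where super_resolve is a hypothetical stand-in for a generative upscaler (plain resampling keeps the script runnable as written):

```python
from PIL import Image

TILE = 512    # source patch size fed to the model
SCALE = 8     # e.g., a ~2K scan -> ~16K output

def super_resolve(tile: Image.Image, scale: int) -> Image.Image:
    """Hypothetical stand-in for a generative super-resolution model;
    Lanczos resampling here so the sketch runs without one."""
    w, h = tile.size
    return tile.resize((w * scale, h * scale), Image.LANCZOS)

def upscale_frame(frame: Image.Image) -> Image.Image:
    """Upscale one film frame patch by patch, then reassemble."""
    w, h = frame.size
    out = Image.new("RGB", (w * SCALE, h * SCALE))
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            patch = frame.crop((x, y, min(x + TILE, w), min(y + TILE, h)))
            out.paste(super_resolve(patch, SCALE), (x * SCALE, y * SCALE))
    return out

if __name__ == "__main__":
    frame = Image.open("oz_frame_0001.png")  # hypothetical scanned frame
    upscale_frame(frame).save("oz_frame_0001_16k.png")
```

In practice the patches would be overlapped and blended to hide seams, and a video model also has to keep the synthesized detail consistent from frame to frame.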
The second big challenge was filling out the scenes to fit the Sphere's wide-angle, immersive view.
The original 102-minute film featured many close-up shots, a necessity given the limited resolution of the day's cameras. But simply blowing up the original movie's 4:3 close-ups on the Sphere's massive 240-foot-tall wraparound screen would not only fail to take advantage of the one-of-a-kind $2.3 billion venue; it just wouldn't look very good.
The solution: a new technique called "outpainting" that enables the filmmakers to fill in more of the movie set than made it into the final film.
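Mechanically, outpainting starts from the original frame centered on a larger canvas, plus a mask telling a generative model which border regions to synthesize. A minimal sketch of preparing those inputs, assuming a hypothetical generative_fill call (the production pipeline's actual models and interfaces are not public):

```python
from PIL import Image

def make_outpaint_inputs(frame: Image.Image, target_w: int, target_h: int):
    """Center the original 4:3 frame on a wider canvas and build a mask
    marking the empty borders the model should fill (255 = generate)."""
    canvas = Image.new("RGB", (target_w, target_h))
    mask = Image.new("L", (target_w, target_h), 255)
    x = (target_w - frame.width) // 2
    y = (target_h - frame.height) // 2
    canvas.paste(frame, (x, y))
    mask.paste(Image.new("L", frame.size, 0), (x, y))  # 0 = keep original
    return canvas, mask

# Hypothetical usage with an unspecified generative model:
# canvas, mask = make_outpaint_inputs(Image.open("oz_frame.png"), 7680, 2160)
# expanded = generative_fill(canvas, mask, prompt="1939 MGM soundstage set")
```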
This required painstaking attention from the filmmakers, who had to track items, such as Dorothy's ruby slippers, that we know exist in a scene but don't appear in every frame. The team went back through the original production's notes and outtakes to expand the field of view as accurately as possible.
Outpainting also relied heavily on Google's AI models to "fill in" the additions to the scenes. The filmmakers fine-tuned the models to "learn" specifics about the characters, everything from the Tin Man's lurching gait to the details of Dorothy's freckles. The models then generated video consistent with the outpainting-based expansion of the original film.
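The specifics of that fine-tuning are not public, but pipelines like this generally start from a curated dataset of captioned character images. As a rough illustration, here is a sketch that pairs hand-picked character crops with captions in a JSONL manifest, a common fine-tuning input format; the directory layout and caption template are assumptions for illustration only:

```python
import json
from pathlib import Path

def build_finetune_manifest(root: str, out: str) -> None:
    """Pair curated character crops (frames/<character>/*.png) with
    captions so a generative model can learn consistent character detail."""
    records = []
    for img in sorted(Path(root).glob("*/*.png")):
        character = img.parent.name  # e.g., "tin_man", "dorothy"
        records.append({
            "image": str(img),
            "caption": f"{character.replace('_', ' ')}, 1939 Technicolor film still",
        })
    Path(out).write_text("\n".join(json.dumps(r) for r in records) + "\n")

build_finetune_manifest("frames", "finetune_manifest.jsonl")
```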
It wasn’t always clear whether the AI-based remake would work. “When you have innovation like this, you don’t always know where it’s going to go,” says Jim Dolan, executive chairman and CEO of Sphere Entertainment. “You have to be able to take a leap of faith. What you’re going to see in The Wizard of Oz at Sphere is clearly a leap of faith.”
Google's top DeepMind researchers also weren't sure whether it would work. As the company's representatives shared in yesterday's preview at the Sphere, they were developing the technology and the techniques to remake the film on the fly, working without a net. They worked persistently through the technical challenges, however, and may have created a new film genre along the way, which would be a fitting footnote to the film's legacy.
“The models, they’re wildly innovative,” Steven Hickson, a Google DeepMind researcher on the project, says in a blog post. “We’d find something we can’t do, we think it’s impossible, and then a month later we’re like, actually, maybe we can do that.”