
Google Gemini Omni video model debuts with early demos

Originally published on: May 12, 2026
Summary

– A new video generation model called “Omni” is appearing in Gemini, described by Google as a tool for remixing videos, editing in chat, and using templates.
– Metadata suggests “Omni” is an extension of Google’s existing Veo video generation model.
– A demo of a professor writing trigonometric proofs on a chalkboard produced a fairly realistic video, handling text well despite some obvious flaws.
– A second demo of two men eating spaghetti at a seaside restaurant also yielded realistic results, passing a version of the “Will Smith test.”
– Google has not officially announced “Omni,” but may reveal more at I/O 2026, following its commitment to video generation after OpenAI discontinued Sora.

A fresh video generation model appears to be surfacing within Gemini, and early demos suggest it’s already turning heads. Dubbed “Omni,” the tool is producing clips that feel surprisingly polished, even if some digital fingerprints remain.

Google has long leaned on Veo for AI-powered video creation, but Omni seems to be something new. One Gemini user received a prompt to “Create with Gemini Omni,” with Google describing it as “our new video generation model. Remix your videos, edit directly in chat, try a template, and more.” The exact relationship between Omni, Veo, and the broader Gemini ecosystem isn’t yet clear, though metadata hints that Omni may be an extension of Veo.

Still, the output speaks for itself. One demo fed the model a prompt about a professor writing out a trigonometric proof on a traditional chalkboard, explaining each step. The resulting video handles the on-screen text with impressive clarity and delivers a fairly realistic classroom scene. Yes, there are still obvious tells, the subtle glitches that give away the clip's AI origin, but the overall execution is strong.

A second test referenced the infamous “Will Smith eating spaghetti” benchmark. The prompt described two men, one of them a distinguished African-American man in his 50s, approaching a circular table with fine linens and cutlery at an upscale seaside restaurant to share a plate of spaghetti. The output again landed on the impressive side, handling the complex scene composition and natural movement without falling apart.

The user who triggered these demos also noticed a “usage” tab appear. Those two prompts consumed 86% of the daily quota on an AI Pro plan, though some Gemini Flash usage that same day ate into the remainder. This aligns with Google’s recent hints about introducing more explicit usage limits for its AI services.

Google hasn’t officially announced Gemini Omni yet. But the company has stated that “video’s here to stay,” doubling down on the technology after OpenAI’s decision to shutter its Sora video generation model earlier this year. With I/O 2026 just weeks away, that event seems the most likely stage for a formal unveiling of Google’s next move in AI video generation.

(Source: 9to5google.com)

Topics

video generation (95%), gemini omni (93%), google veo (88%), ai model testing (85%), usage limits (82%), will smith test (80%), generative AI (79%), openai sora (75%), i/o 2026 (73%), mathematical proofs (70%)