
Idomoo Strata: First AI Model for Layered Video Creation

Summary

– Idomoo is launching Strata, a foundation AI model that generates video as separate, editable layers for elements like text and animation, unlike standard models that create a single flat file.
– This layered approach directly challenges diffusion-based video generators, aiming to integrate AI video into professional workflows where independent adjustment is standard.
– Strata operates within Idomoo’s Lucas AI agent, enforcing brand guidelines by analyzing a company’s content to apply consistent design, narrative, and assets to generated videos.
– The model enables sophisticated personalization by allowing real-time data injection into specific layers, enhancing Idomoo’s existing service of creating personalized videos for major enterprise clients.
– The model is currently in early access with select customers, marks a shift to proprietary AI for the company, and its claim to be “first for layered video” is pending independent assessment.

A fundamental shift is underway in how artificial intelligence creates video content. While current diffusion-based video generators produce a single, uneditable file, a new approach is emerging that directly addresses the needs of professional production. Idomoo, an Israeli enterprise specializing in personalized video, has introduced Strata, a foundation model it describes as the first AI built specifically for layered video output. This technology moves beyond generating static pixels to instead create a structured, editable composition, challenging the architectural norms that have limited AI’s role in professional workflows.

Today’s dominant AI video models all share a critical limitation: they output a flat video file. This means any edit, whether changing text, adjusting an animation, or swapping a background, requires essentially starting the generation process over from scratch. This rigidity has prevented these tools from integrating into professional environments, where video is always constructed from separate, independently adjustable layers for elements like graphics, footage, and audio.

Idomoo’s Strata model aims to solve this by generating what the company terms a production-ready video blueprint. Instead of a final pixel render, Strata produces separate, editable layers containing typography, animation, motion paths, and synchronized audio. As explained by Idomoo’s co-founder and CTO, Danny Kalish, the model generates structure rather than just pixels. This architectural difference is key: standard diffusion models blend all elements into a single tensor, baking relationships into the pixels themselves. Strata is designed to solve a different computational problem, defining the full composition’s placement, contrast, movement, and timing across all layers simultaneously.
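To make the distinction concrete, here is a minimal sketch of what a layered blueprint might look like as a data structure, in contrast to a flat pixel render. Idomoo has not published Strata’s actual output schema; every class and field name below is an illustrative assumption.

```python
from dataclasses import dataclass, field

# Hypothetical model of a layered composition. In a flat diffusion
# output, the whole video is one pixel tensor; here, each element
# stays independently addressable and editable.

@dataclass
class Layer:
    kind: str        # e.g. "typography", "animation", "audio" (assumed kinds)
    content: dict    # layer-specific payload: text, asset reference, motion path
    start_ms: int = 0
    duration_ms: int = 0

@dataclass
class Blueprint:
    width: int
    height: int
    layers: list = field(default_factory=list)

    def replace(self, kind: str, content: dict) -> None:
        """Edit one layer without regenerating the others."""
        for layer in self.layers:
            if layer.kind == kind:
                layer.content = content

bp = Blueprint(1920, 1080, [
    Layer("typography", {"text": "Hello"}, 0, 3000),
    Layer("animation", {"path": "slide-in"}, 0, 1000),
])
# A text change touches only its own layer; the animation is untouched.
bp.replace("typography", {"text": "Welcome back"})
```

The point of the structure is editability: with a flat file, the equivalent change would mean regenerating the entire video.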

The model is a core component of Lucas, Idomoo’s AI video agent, which operates on the company’s existing video platform. A significant technical capability is its brand awareness function. Lucas can analyze a company’s approved content to extract a Brand DNA profile covering design, narrative, and assets. Strata then applies these guidelines (enforcing specific typography, motion cues, color values, and tone of voice) to every video generated. The goal is to move beyond the template-based workarounds common in many AI video products, where content is forced into preset layouts, often resulting in a recognizable visual compromise. Strata designs a unique, custom blueprint for each project.
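A layered output also makes brand enforcement straightforward to reason about: guidelines can be applied per layer rather than baked into pixels. The sketch below is an assumption about how such a rule pass could work; the profile fields and function names are invented for illustration and are not Idomoo’s API.

```python
# Hypothetical "Brand DNA" profile extracted from approved content.
# Field names are illustrative assumptions.
BRAND_DNA = {
    "font": "AcmeSans",
    "primary_color": "#0A2540",
    "tone": "friendly",
}

def apply_brand(layers: list[dict], dna: dict) -> list[dict]:
    """Enforce brand typography and color on every typography layer,
    leaving other layer kinds (footage, audio, ...) untouched."""
    for layer in layers:
        if layer.get("kind") == "typography":
            layer["font"] = dna["font"]
            layer["color"] = dna["primary_color"]
    return layers

branded = apply_brand(
    [{"kind": "typography", "text": "Spring offer"},
     {"kind": "footage", "asset": "b-roll.mp4"}],
    BRAND_DNA,
)
```

Because each element lives in its own layer, the rules never have to fight a pre-rendered image, which is the failure mode of template-based overlays.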

This layered architecture also unlocks more sophisticated video personalization. Because the output is structured, individual data fields (such as names, account details, or product images) can be injected into specific layers in real time. This capability is central to Idomoo’s existing business, which serves major clients like JPMorgan Chase, Verizon, and American Airlines for personalized customer communications and marketing. Strata elevates this personalization by operating at the foundational composition level, not merely as an overlay on a pre-rendered clip.
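The per-layer injection described above can be sketched in a few lines. The placeholder syntax and layer shape here are assumptions for illustration, not Idomoo’s actual template format.

```python
def inject(layers: list[dict], data: dict) -> list[dict]:
    """Fill {placeholder} tokens in text layers with per-viewer data,
    returning new layer dicts so the shared blueprint is not mutated."""
    out = []
    for layer in layers:
        content = dict(layer)  # shallow copy per viewer
        if "text" in content:
            content["text"] = content["text"].format(**data)
        out.append(content)
    return out

layers = [
    {"kind": "typography", "text": "Hi {name}, your balance is {balance}."},
    {"kind": "footage", "asset": "intro.mp4"},
]
personalized = inject(layers, {"name": "Dana", "balance": "$120"})
```

Running the same blueprint against a different record yields a different video with no re-render of the non-text layers, which is what distinguishes composition-level personalization from overlaying text on a finished clip.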

The company is managing the launch cautiously. An early access version is currently being tested by several of its largest enterprise customers and is available through the Lucas AI Video Agent. Idomoo has not yet disclosed which clients are participating, what benchmarks the model has been tested against, or how its output quality compares to standard diffusion models. Its claim of being the “first foundation model purpose-built for layered video” is its own assertion and remains unverified by independent assessment. The underlying Strata technology is currently patent pending.

This launch marks a notable strategic shift for Idomoo. Until now, the company’s platform documentation stated it used off-the-shelf AI models. With Strata, it is repositioning from a company that applies AI to video to one building foundational AI for video. Founded in 2007 and based in Ra’anana, Idomoo has raised $27 million in funding. Its nearly two decades of experience building a personalized video platform for enterprises has provided it with a substantial repository of structured video production data, a key asset for training a specialized model like Strata. The true measure of its architectural promise will become evident as early access testing transitions to full production deployments.

(Source: The Next Web)
