Google Veo 3 Now Generates Videos from Images

▼ Summary
– Google has added an image-to-video generation feature to its Veo 3 AI video generator via the Gemini app.
– The feature was previously available in Google’s Flow tool, launched in May at the I/O developer conference.
– Veo 3-powered video generation is now accessible in over 150 countries but is limited to Google AI Ultra and Pro plan users with a three-creations-per-day cap.
– Users can create videos by uploading a photo and adding sound descriptions, with options to download or share the output.
– Over 40 million videos have been created using Veo 3, and all outputs include visible and invisible watermarks for AI identification.

Google’s Veo 3 now lets users transform still images into dynamic videos through the Gemini app, another step forward in AI-powered content creation. The capability, first introduced in Google’s Flow tool at the I/O developer conference in May, has rolled out to more than 150 countries in recent weeks.
For now, the feature is exclusive to subscribers of the Google AI Ultra and Google AI Pro plans, who can generate up to three videos per day, with no rollover of unused generations. To create a clip, users select the “Videos” option in the prompt box, upload an image, and optionally describe the desired audio. The resulting videos can be downloaded or shared directly.
Since its debut seven weeks ago, more than 40 million videos have been generated across Gemini and Flow. To maintain transparency, every Veo 3-produced video carries a visible “Veo” watermark alongside an invisible SynthID digital watermark, part of Google’s effort to distinguish AI-generated media. Earlier this year, the company also introduced tools to detect SynthID-marked content, reinforcing its commitment to responsible AI use.
The addition of image-to-video generation underscores Google’s push to make advanced AI tools more widely accessible while addressing growing concerns around digital authenticity.
(Source: TechCrunch)