Google’s Nano Banana AI Makes Photos Unreliable

Summary
– Google has released Nano Banana Pro, an advanced AI image model that makes it increasingly difficult to distinguish real photos from AI-generated ones.
– The Pro model improves realism by eliminating previous AI image flaws like blurriness or overly smooth textures.
– Nano Banana Pro, built on Gemini 3 Pro, is available through a $19.99-per-month subscription, though users get two free image generations per day.
– The technology includes few safeguards beyond identification measures: SynthID invisible digital watermarking and a visible watermark.
– Experts express concern about the ease of creating realistic fake images of public figures and the need for better AI image labeling.
Distinguishing between authentic photographs and artificial intelligence creations is growing more challenging with the launch of Google’s advanced Nano Banana Pro AI image model. This upgraded system builds on the capabilities of its predecessor, delivering a degree of realism that makes it tough for the average viewer to detect digital fabrication.
Content creator Jeremy Carrasco recently told NBC News that people are likely already being tricked by AI-generated photos. “You will be fooled by an AI photo, and you probably already have been but didn’t know it,” he remarked. Carrasco pointed out that earlier tells, such as unusual blurring, an overly glossy appearance, or unnatural smoothness, have largely been eliminated in this new version.
Built on the powerful Gemini 3 Pro architecture, Nano Banana Pro is a subscription-based service available for $19.99 per month. However, Google does offer a limited free tier, allowing up to two image generations daily without charge. The accessibility and sophistication of this tool represent what experts are calling a significant “escalation” in synthetic media.
A major concern revolves around the potential misuse of personal likenesses. While each generated image includes SynthID, Google’s invisible digital watermark, along with a visible watermark, there are minimal barriers preventing users from replicating the appearance of public figures or private individuals. Carrasco emphasized the risks, stating, “The idea that anyone can become a ‘Photoshop pro’ overnight and use these celebrities or politicians’ likeness is obviously frightening.”
Independent tests by PetaPixel using the free version of Nano Banana Pro revisited a popular experiment from March 2024, where iconic historical photos were recreated by AI. The results demonstrated a remarkable leap in output quality. On platforms like X and Reddit, users have been actively posting their own creations, sparking both amazement and scrutiny.
One notable post from X user Sid compared images produced by the base Nano Banana model against the Pro version. The Pro model achieved far greater realism, though sharp-eyed observers noted minor anatomical inaccuracies, such as a bartender’s fingers being positioned incorrectly. Similarly, a Reddit thread showcased the model’s ability to generate convincing depictions of well-known tech leaders, further illustrating its advanced capabilities.
As AI-generated imagery becomes nearly indistinguishable from real photos, the need for clear labeling or reliable detection methods grows more urgent. Although Gemini 3 includes a feature that allows users to ask whether an image is AI-generated, it remains uncertain whether people will consistently use such tools during casual social media browsing. This technological progress underscores a pressing question: in a world flooded with synthetic media, how can we trust what we see?
(Source: PetaPixel)
