A photo trend recently going viral on social media has turned heads—and raised serious concerns. Google Gemini’s Nano Banana AI editing tool is transforming selfies into retro-Bollywood-style saree portraits, complete with golden hues, cinematic lighting, flowing sarees, and expressive visuals. But one user, @jhalakbhawani, noticed something chilling: the AI-generated image showed a mole on her left hand—something she has in real life, but which was not visible in the original photo she uploaded. How did the AI “know”? This incident has triggered debate over facial data usage, hidden identity inference, and whether tools like Gemini cross privacy boundaries.
What Is Google Gemini Nano Banana?
- Nano Banana is an image-editing mode within Google Gemini. It lets users upload selfies and apply styles or prompts (e.g. “vintage saree portrait”) to generate a new stylized image.
- It’s gone viral: thousands of users are trying the trend, posting edited images on Instagram and elsewhere.
- The edits are visually strong, but some users say they are “too real,” with surprising details (e.g. the mole case) raising questions about what data the model was trained on or is drawing from.

The “Mole Mystery” & Identity Concerns
- In the reported case, the user uploaded a photo in which she wore long sleeves and the mole was not visible. Yet the AI edit added the mole to the result. She found this “creepy,” hinting at possible use of prior data about her, or inference from it.
- Experts say such detail could come from training datasets, metadata, or facial recognition techniques, rather than from that specific image alone. It could also be a “hallucination” by the AI—but when those hallucinations align with a user’s actual features, it raises alarm.
- This ability of AI to infer attributes that were never shown raises concerns about identity protection, privacy intrusion, and possible misuse (deepfakes, impersonation).
Privacy & Data Use Concerns
Facial Data & Inference
- Faces are biometric data. Laws in many jurisdictions treat biometric identifiers as sensitive. If AI tools collect, store or infer facial traits beyond what you uploaded, issues of consent, notice, and data minimization arise.
- In India, biometric data is increasingly regulated: the Digital Personal Data Protection Act, 2023 has been enacted, though it is not yet fully in force, and the older SPDI Rules under the IT Act already treat biometric information as sensitive personal data. There is concern about whether Gemini’s policies sufficiently disclose how image data is used. Reports say Google allows users to disable some data collection/training, but many users may not be aware of this.
Data Retention, Training & Third Parties
- Uploaded images may be stored temporarily for processing, but questions remain: Are they retained longer? Are they used to train models? Are they shared with third parties?
- The lack of clarity around these processes contributes to user distrust. Legal frameworks around data protection often demand transparency, informed consent, and ability to delete or opt out.
Safety, Deepfakes & Misuse
- When identity features are known or inferred, there is a risk of deepfake creation, impersonation, fraud, and reputational harm.
- Even non-malicious “creepy” edits (like mole appearance) can lead to psychological discomfort or privacy harm.
Legal & Regulatory Angles
Relevant Laws & Liability
- Privacy / Data Protection Laws: India’s Digital Personal Data Protection Act, 2023 has been enacted but is not yet fully operational, as its implementing rules are still being finalized. Once in force, uploading faces and having them used for purposes not clearly disclosed could fall foul of its consent and purpose-limitation requirements.
- IT Act, 2000 / IT Rules: The SPDI Rules, 2011 under the IT Act classify biometric information as “sensitive personal data” and may apply to unlawful usage of images or misuse of personal data, though their scope is limited.
- Consumer Protection: If users were misled or not clearly informed, or if the tool promises fun edits but causes privacy harms, issues of misrepresentation could emerge.
Consent, Notice & Accountability
- Key legal requirements: users should be clearly informed what data are collected (images, biometric features, metadata), how they are used (just for edit, or for training/inference), where they are stored, whether third parties access them, and for how long.
- There should be mechanisms for opting out (not uploading at all, or declining to have one’s data used for training), deleting data, and clear policies governing both.
- Accountability: If harm arises (identity theft, misuse, deepfake), who can be held liable? The platform (Google), model trainers, or third parties using the images?
What Google Says & What Users Can Do
- Google points to safety features: invisible digital watermarks (SynthID) and metadata tags, official policies that allow certain privacy settings, and promises to comply with regional laws.
- But experts warn that watermarks alone are not enough; users often don’t know how to verify them. Policies may also change, and data collection may be enabled by default unless users actively opt out.
Tips for Users
- Avoid uploading highly sensitive images (intimate photos, identification cards, or anything that exposes more of your identity than the edit requires).
- Strip metadata / EXIF data from images before upload.
- Understand privacy settings; disable data-training sharing if option exists.
- Use the official Google Gemini app / platform; avoid third-party look-alikes or scam sites.
- Be cautious about what prompts you use; avoid revealing personal attributes in your prompts or images.
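The metadata-stripping tip above can be sketched in plain Python. EXIF (and XMP) data in a JPEG lives in APP1 segments (marker 0xFFE1), so removing those segments before upload discards location, device, and timestamp metadata. This is a simplified illustration, not a substitute for dedicated tools such as exiftool; the function name `strip_exif` is just an illustrative choice, and the parser ignores some rare marker types that a production tool would handle.

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of the JPEG with all APP1 (EXIF/XMP) segments removed.

    Simplified sketch: assumes a typical JPEG where every segment between
    SOI and SOS carries a 2-byte length field (true for the common cases).
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")  # keep the Start-Of-Image marker
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            # Unexpected byte: copy the remainder verbatim and stop parsing.
            out += jpeg_bytes[i:]
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:
            # SOS: compressed image data follows; copy the rest unchanged.
            out += jpeg_bytes[i:]
            break
        # Segment length field counts itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg_bytes[i + 2 : i + 4], "big")
        segment = jpeg_bytes[i : i + 2 + length]
        if marker != 0xE1:  # keep every segment except APP1 (EXIF/XMP)
            out += segment
        i += 2 + length
    return bytes(out)
```

The cleaned bytes can then be written back to a file and uploaded instead of the original. Note that some platforms strip metadata on upload anyway, but doing it yourself removes the need to trust that behaviour.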
Ethical & Societal Implications
- Bias and Representation: AI models may incorporate societal biases (skin tone, gender stereotypes). If image editing reflects or amplifies these, it can reinforce discrimination.
- Identity vs Creativity: Where is the line between creative freedom and invasive inference? Trends like the saree edit or vintage portraits may seem harmless, but when details not present in the upload are inferred, users feel their private self is known more than they thought.
- Need for Regulation: The pace of AI development outstrips regulation. Ethical AI and legal guardrails are lagging behind.
Conclusion: Balancing Fun & Privacy
Google Gemini’s Nano Banana trend is fun, visually attractive, and undeniably viral. But the mole mystery and identity inference issues show that even lighthearted AI photo editing carries privacy risks. For users, being aware, cautious, and informed is essential. For regulators, precisely defined rules around facial data, consent, data usage, and accountability are urgently needed. As AI tools become more pervasive, trust depends not just on safety features, but on transparency, choice, and respect for identity.
To read more about Indian laws and legal news, visit Legal Guide India.