Cyble’s Senior Director and Head of Solutions Engineering, Ankit Sharma, on Safe User Data Handling in Ghibli-Style AI Art
In an interview with The Hans India, Ankit Sharma cautioned users about the privacy risks of AI-powered image-generation tools, such as ChatGPT's popular Ghibli-style filter. He urged users to consider the implications of sharing personal data online and stressed the need to understand and protect personal privacy as AI-generated content becomes increasingly prevalent.

AI-Powered Art Filters and Privacy Risks
AI-powered art filters have taken the internet by storm, allowing users to transform their images into stunning Ghibli-style artwork. While these tools showcase the magic of artificial intelligence, beneath their charm lie serious privacy risks.
Cyble’s Senior Director and Head of Solutions Engineering, Ankit Sharma, recently spoke with The Hans India about the growing privacy risks associated with AI-powered image generation. He highlighted that users often upload personal images without considering how these platforms handle, store, or share their data. Without clear policies, images could be retained, repurposed for AI training, or exposed in security breaches.
Beyond privacy, the rise of deepfakes and synthetic media raises concerns about identity theft and biometric fraud. Cybercriminals could exploit stylized images to create fake profiles, manipulate authentication systems, or spread misinformation. As AI-generated content evolves, so do its risks. Users must stay cautious, and companies must enforce strict security measures, transparent data practices, and automated image deletion to prevent misuse.
Potential Privacy Risks of Uploading Images to AI Tools
AI-powered image filters may seem harmless, but they come with inherent privacy risks. The biggest concern is data retention—if the platform stores images after processing, it creates an attractive target for cybercriminals. Even if the company has no malicious intent, weak security controls could lead to leaks or unauthorized access.
Another issue is unintended AI training. Some tools refine their models using user-generated images, potentially feeding biometric data into facial recognition systems without explicit consent. This raises concerns about profiling, surveillance, and data misuse. Users should also be wary of third-party integrations that could expose images to less secure environments, increasing the risk of breaches.
Concerns Over Deepfakes and AI-Generated Content
While Ghibli-style images may seem innocent, they still contain enough facial data to be misused. Cybercriminals can build deepfake datasets using modified AI images, enabling impersonation scams, synthetic identity fraud, or even AI-generated avatars that mimic real people. Manipulated AI-generated images can fuel misinformation campaigns, damage reputations, or be used in extortion attempts.
Exploitation of AI-Generated Images by Cybercriminals
Bad actors can use AI-generated images for a range of fraudulent activities, including social engineering attacks, facial-recognition bypass, blackmail, and synthetic identity fraud. As AI-generated content becomes more convincing, organizations and individuals must remain vigilant about where their images are uploaded and how they might be repurposed.
Measures for Secure Handling of User Data in AI Image Generation
Security should be built into the AI image-generation process from the ground up. Critical safeguards include real-time processing with no storage, end-to-end encryption, strict access controls, clear user consent policies, and routine security audits. By prioritizing privacy-first AI design, companies can give users peace of mind while enjoying creative tools like the Ghibli filter.
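The "real-time processing with no storage" principle above can be sketched in a few lines: the upload is handled entirely in memory, and only a one-way hash is kept for audit purposes. This is an illustrative sketch, not Cyble's or any vendor's actual implementation; apply_style_filter is a hypothetical stand-in for the model-inference step.

```python
import hashlib

AUDIT_LOG = []  # audit trail holds one-way hashes only, never image bytes


def apply_style_filter(image_bytes: bytes) -> bytes:
    """Hypothetical stand-in for the real model-inference step."""
    return image_bytes[::-1]  # placeholder transformation


def process_in_memory(upload: bytes) -> bytes:
    """Handle an upload entirely in RAM: no temp files, no database
    writes, nothing retained for an attacker to target later."""
    result = apply_style_filter(upload)
    # Log a non-reversible hash for auditing; the image itself is never stored.
    AUDIT_LOG.append(hashlib.sha256(upload).hexdigest())
    return result
```

Because the audit trail records only a SHA-256 digest, a breach of the log reveals nothing about the original photo, while still letting operators verify that a given upload was processed.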
Ensuring Deletion of User Images After Processing
A secure AI system should follow a zero-retention policy unless users explicitly request storage. Organizations should automate image deletion, give users control over their data, enforce third-party compliance, and conduct regular privacy audits to verify data-handling practices. Building user trust requires proactive protection of their data.
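An automated deletion sweep of this kind can be sketched as a scheduled job that removes any upload past a grace period unless its owner opted in to storage. This is a minimal illustration under assumed conventions (a "user-id underscore filename" naming scheme and a hypothetical 15-minute retention window), not a production-grade data-lifecycle system.

```python
import os
import time

RETENTION_SECONDS = 15 * 60  # hypothetical 15-minute grace period


def sweep_expired_uploads(upload_dir, opt_in_ids, now=None):
    """Delete uploads older than the retention window, unless the owner
    explicitly opted in to storage. Returns the deleted filenames."""
    now = time.time() if now is None else now
    deleted = []
    for name in os.listdir(upload_dir):
        path = os.path.join(upload_dir, name)
        user_id = name.split("_", 1)[0]  # assumed "<user_id>_<filename>" scheme
        if user_id in opt_in_ids:
            continue  # user explicitly requested storage
        if now - os.path.getmtime(path) > RETENTION_SECONDS:
            os.remove(path)
            deleted.append(name)
    return sorted(deleted)
```

Running such a sweep on a schedule (for example via cron) gives a verifiable deletion guarantee that a privacy audit can check against, rather than relying on ad hoc manual cleanup.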
About Ankit Sharma
Ankit Sharma is the Senior Director and Head of Solutions Engineering at Cyble Inc. He manages a global team of solutions engineers and architects, driving business growth and supporting Cyble Sales through expertise in Program Delivery Management, Technical Sales, and Key Account Management. Ankit specializes in data security, privacy, data governance, compliance management, and cloud security.