AI caricature trend: is it safe?

Md. Zahidur Rabbi

A new social media trend encouraging users to generate AI-powered caricatures of themselves is spreading rapidly across platforms including Facebook, Instagram, and LinkedIn. The trend typically involves uploading a photograph and prompting an artificial intelligence (AI) tool to create an illustrated version of the user based on their profession and personal background.

Many of the images depict animated representations of individuals in work settings, such as classrooms, offices or clinics, often incorporating details about clothing, tools and workplace environments. While similar visuals can be produced using a range of AI tools, including Google’s Gemini, a significant number of viral posts have been generated with OpenAI’s ChatGPT, using prompts such as: “create a caricature of me and my job based on everything you know about me.”

Some users have embraced the trend as a form of digital self-expression. Baijeed Ahamed Protik, an officer in wealth and retail banking at Standard Chartered Bank (SCB), said, “I’m drawn to AI self-representation because it lets me explore how identity, intelligence, and creativity are expressed in digital spaces.”

To produce more personalised results, users frequently provide detailed prompts that include information about their occupation, daily routines and lifestyle. Some share data about where they work, the nature of their responsibilities and aspects of their family life. The images have been widely shared by professionals and students, and some freelancers and technology workers share them to present a creative version of their identity online.

Maizul Islam, an associate specialist at Ollyo, said, “AI has grown significantly, and I’ve noticed that social platforms often push AI-driven content for better reach. I followed the trend to stay relevant while maintaining balance. It does support my digital branding to some extent, but it’s not the core of my professional identity.”

Photos generated with ChatGPT and shared by users on social media.

However, cybersecurity experts warn that combining detailed biographical information with photographs can create a rich digital profile that may be exploited in social engineering or identity impersonation schemes. Information such as job titles, employers, daily schedules and recognisable locations could make fraudulent messages appear more credible.

“The biggest risk is not the illustration itself, but everything people reveal to obtain it. When someone shares details about their work, their family, or their routine, they are unknowingly providing information that can be used for highly targeted fraud or identity impersonation,” said Leandro Cuozzo, Security Analyst from Kaspersky’s Global Research and Analysis Team. “In this context, the cumulative exposure of personal data can become a gateway to social engineering attacks, identity theft, or personalised scams,” he said.

Many users underestimate how much data is stored when they interact with AI platforms. In addition to the final image, services may retain the original photograph, written prompts, usage history and technical data such as IP addresses, depending on their privacy policies. According to OpenAI’s privacy policy, content submitted to tools like ChatGPT – including uploaded photos and written prompts – may be stored and used to improve its services unless users opt out through specific settings.

Google similarly notes that interactions with its Gemini app may be retained and, in some cases, reviewed to enhance system performance. In practice, that means images, prompts and technical data can remain on company servers beyond the moment a caricature is generated. Privacy terms vary by subscription type and region.

There is also widespread confusion about memory features in AI tools. Disabling memory in AI models typically does not prevent the system from processing information contained in a prompt. When a user writes, “create a caricature of me and my job based on everything you know about me,” the model generates a response based on the information provided in that conversation, any memory features the user has enabled, and patterns learned during training. Turning off memory affects long-term personalisation, not the immediate handling of uploaded images or written prompts. Platforms still process the data in real time, and retention is determined by broader privacy policies rather than the memory toggle alone.

In recent weeks, some users have gone beyond professional caricatures, uploading AI-generated images of entire families, including newborn children, without apparent consideration of where those images might reside. Although these illustrations may appear harmless, the original photographs are uploaded to the AI provider’s servers, where they can be stored, processed and retained for system monitoring or service improvement. Children’s images are particularly sensitive because they contribute to a long-term digital footprint over which the child has no control. Unlike a casual social media post, submitting an image to an AI system involves transferring data directly to a company’s infrastructure, where retention policies – not parental intentions – determine how long it persists.

The popularity of the caricature trend highlights the growing role of generative AI in online self-expression, particularly among younger users. However, users should exercise caution and avoid sharing excessive personal details.

How to stay safe

Below are a few tips that might come in handy for users generating these types of AI caricatures:

  • Avoid sharing employer names or specific locations.
  • Do not disclose daily schedules.
  • Use generic prompts instead of detailed biographies.
  • Avoid uploading high-resolution ID-style photos.
  • Review AI platform privacy policies.