As technology rapidly improves, AI-generated celebrity images and likenesses seem to be becoming more common. But have you ever wondered whether the celebs have a say in what's shared? And how can we protect our own likenesses from being used in AI creations?
BBC Bitesize Other Side of the Story has investigated the rights and rules regarding our AI future.
Why are AI images so popular online?
When a series of Instagram stories was posted to Brooklyn Beckham’s official account, one of them alleged that his mother, Victoria Beckham, had “hijacked” the first dance at his wedding.
The moment went viral, with many different memes shared online. Some images showed Victoria doing the splits and striking various dance poses in front of her son and daughter-in-law. These were proven to be AI-generated, but that didn’t stop the memes from trending, with one post gaining 1.6m views on X.
While relatively harmless, these images raise important questions about AI ownership and what is or isn’t allowed when it comes to using someone’s likeness.

What rights do we have over our images?
Other Side of the Story spoke to AI expert Henry Adjer, who works with businesses on AI policy as the technology continues to evolve. When we asked what rights we have over our own image, the answer was a little complicated.
Currently, the UK has no AI-specific laws. While discussions are being held, older laws are being applied instead. One of these is the General Data Protection Regulation, commonly known as GDPR, which governs how user data is processed and handled.
Others include the Online Safety Act, which requires websites to remove harmful content, and the law of passing off, which stops people from creating products too similar to those of an established company or person. The singer Rihanna once won a passing-off dispute against UK clothing brand Topshop, which had sold clothes using her image without her permission.
The problem, Henry said, is that these laws, while helpful, are not AI-specific. “GDPR provides you with data privacy over images of you, of your actual identity, not AI generated versions of you.” GDPR can, however, restrict how private data – such as genuine photographs – is used to train AI to create counterfeits.

Is our data being used to train AI?
Meta, for example, owns popular apps such as WhatsApp, Instagram and Facebook. In 2025, it introduced a new policy allowing its AI to train on the public data of any user over the age of 18 across its platforms.
Any public photos or messages can be used to improve its AI service and the ads users are shown. It cannot go through private data, such as direct messages, but if your data appears publicly on the profile of another user over the age of 18, Meta AI might use it.
Other companies have rolled out similar AI training, and users must find out about it and then opt out if they don’t want the data they have stored with the company used to fuel its AI.
Another problem Henry mentioned is copyright. Copyright gives people a legally protected right to own and manage something, such as a book or a film, for a certain number of years. But without AI-specific laws, copyright offers only limited protection.
Henry explained that in the UK, if someone captures footage of you in a public space, “that is treated as fair game.” And sometimes this data gathering can be helpful: “…if you’re using an app with a map in it, it’s much more likely to be useful in some respects if you share your location data.”
So, as Henry says, “…the idea of sharing some personal data to improve app experiences is not unique to AI. What is unique to AI, is the amount of data that is quite personal that might be shared if you agree, for example, for your entire chat logs to be shared over time, filed and recorded.”
What could AI training result in?
Henry stressed that he wasn’t aware of any company using AI in this way, but there is the possibility of AI being used to create a ‘friend-like’ experience and then advertise products to us.
Henry said: “The more you share, thinking more can be done with it to help, the question is more: do you trust the company that is taking your data to actually help you and have your best interests in mind?”
How can we remain media literate and savvy in a post-AI world?
The best way to stay certain in a world of AI, according to Henry, is by admitting how quickly you can become uncertain and how quickly things can change.
Rules that worked for one image one day may not apply the following week, and AI-generated images are looking ever closer to the genuine article.
By continuously researching and using the trusted resources available, we can remain critical of what we see online and where it has come from.
This article was published in February 2026
