Solve the Story Episode 2: The Noise
The fake video of Misha is spreading fast and the online abuse is growing. Hundreds of people are posting comments and judging her, with some demanding the skatepark be shut down.
Misha feels anxious and overwhelmed, but Sam thinks something strange is going on.
As they dig deeper, they discover that some of the comments look suspicious and begin to question whether they come from real people at all.
Can you work out what’s real, what’s fake, and who might be behind the online noise?
SAM: Previously on Solve The Story…
MISHA: What’d you say?
AARON: It’s up to a thousand views.
MISHA: Delete it and back off.
I never said that.
It doesn’t even sound like me.
What’d you say?
AARON: You’re the only person who could have done this, Sam.
SAM: It weren’t me, I swear.
MISHA: Well, how’d they get the footage?
SAM: I think whoever’s done this has put somebody else’s head on my body.
I could reverse image search it.
Look, I’ve done my reverse image searches and I think I’ve found something… Where is it?
Look here.
You can buy this pic online from a photo library, so we already know it’s 100% fake.
The weird part is the guy that you punch.
His picture is a stock image too that’s been face-swapped onto my body.
MISHA: Look at these comments.
They’re brutal.
I can’t believe all these people hate me so much.
Like, they don’t even know me.
SAM: That’s the thing though.
I don’t think half of them are real people, you know?
MISHA: What do you mean?
SAM: Well, I think they’re fake as well.
Just bots posting over and over again.
Remember when Aaron paid that guy to post hype comments on his videos so they went viral?
It’s pretty much that, just all done by a computer instead.
MISHA: How would you know?
SAM: Here’s another challenge for you.
Have a look at these comments and see if you can spot any signs that would suggest some of them have been posted by bots.
Look at all these repeated comments, yeah?
Weird links, next-level language.
That one’s posted at 2:33am.
And then that, one minute later.
That one doesn’t even make sense.
There’s loads more.
All of them, exactly 60 seconds apart.
MISHA: And they’re all from different accounts.
SAM: Oh yeah, yeah, I checked.
And most of these accounts weren’t even a thing two days ago.
They’ve been created just to post these comments on this specific video.
Look at that profile picture.
It’s clearly another image off the internet, and the only post is the link to the video.
MISHA: I don’t get how people are allowed to do that.
Like, it’s ruining my life.
SAM: Misha, it happens all the time on there.
These bot comments will just ramp up hate using emotional language you feel like you’ve got to respond to, and then real people think it’s alright to do the same thing.
AARON: Hey.
What you found?
SAM: Nothing yet.
AARON: Nothing?
I thought you said you found something.
MISHA: We found someone set up a botnet to post thousands of fake comments.
AARON: What? How would someone do that?
MISHA: I don’t know, Aaron.
Maybe because someone was annoyed with me last week.
AARON: You think I did this?
SAM: You have sort of pulled a stunt like this before, mate, paying for fake hype.
AARON: Yeah, but that was different.
Let me see the comments.
Wow, there’s… there’s a lot.
SAM: What are you looking for exactly?
AARON: Something to prove that I have nothing to do with this.
Here we go.
Look at this.
SAM: Okay, we need your help again.
Head to the Other Side of the Story website and watch the “How to identify bot accounts on social media” video.
Then use your new skills to find another clue.

Episode takeaways
In this episode, you will:
- Understand what bots are and how they are used on social media.
- Spot fake accounts by looking at usernames, profile pictures, and posting patterns (see the sketch after these key points).
- Recognise emotionally manipulative language used to provoke reactions online.
- Think about how online hate can quickly escalate and affect someone’s mental health.
Key points to think about:
- Why do people use bots to increase attention online?
- How can comments change or influence how we perceive a situation?
- Why might fake comments encourage real people to join in?
- What should you do if you’re being targeted by online abuse?
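
If you are curious how the clues Sam lists – repeated comments, brand-new accounts, posts exactly 60 seconds apart – could be checked automatically, here is a minimal Python sketch. The comment data, account names and thresholds are all invented for the example; it shows the idea of pattern-spotting, not how any real platform detects bots.

```python
from datetime import datetime

# Hypothetical comment data invented for this example; a real feed would look different.
comments = [
    {"account": "skate_truth_8821", "account_age_days": 1,
     "text": "Shut the skatepark down NOW!!!", "posted": "2024-05-01 02:33"},
    {"account": "real_voice_1093", "account_age_days": 2,
     "text": "Shut the skatepark down NOW!!!", "posted": "2024-05-01 02:34"},
    {"account": "concerned_user_4410", "account_age_days": 1,
     "text": "Shut the skatepark down NOW!!!", "posted": "2024-05-01 02:35"},
    {"account": "local_parent_77", "account_age_days": 400,
     "text": "I skate there with my kids, this doesn't add up.", "posted": "2024-05-01 09:12"},
]

def parse(when):
    return datetime.strptime(when, "%Y-%m-%d %H:%M")

def flag_suspicious(comments):
    """Flag comments showing the signs Sam spots: duplicate text,
    brand-new accounts, and posts spaced exactly one minute apart."""
    # Count how often each exact comment text appears (copy-and-paste posting).
    text_counts = {}
    for c in comments:
        text_counts[c["text"]] = text_counts.get(c["text"], 0) + 1

    # Sort by posting time so we can measure the gap to the previous comment.
    ordered = sorted(comments, key=lambda c: parse(c["posted"]))

    flagged = []
    for i, c in enumerate(ordered):
        reasons = []
        if text_counts[c["text"]] > 1:
            reasons.append("identical text posted by multiple accounts")
        if c["account_age_days"] <= 2:
            reasons.append("account created in the last two days")
        if i > 0 and (parse(c["posted"]) - parse(ordered[i - 1]["posted"])).total_seconds() == 60:
            reasons.append("posted exactly 60 seconds after the previous comment")
        if reasons:
            flagged.append((c["account"], reasons))
    return flagged

for account, reasons in flag_suspicious(comments):
    print(account, "->", "; ".join(reasons))
```

Running the sketch flags the three copy-and-paste comments and leaves the genuine one alone.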

How to: Identify bot accounts on social media
Seen an account, post, or comment that doesn’t feel quite right? It might not be a real person.
Some bots are harmless, like chatbots that answer questions. But others are designed to trick people, spread misinformation, or flood social media with extreme opinions.
In this video, you’ll learn three simple ways to check whether an account might be run by a bot.
(MUSIC)
JAMES: How to identify bot accounts on social media.
Noticing an account, post, or comment that feels a bit off?
Here's how to check whether it's real or generated by a bot.
Some bots are harmless, like a chatbot, but others are used to trick people, spread fake news, or promote products you don't need.
Check when the account was created.
Look for signs of when the account started, such as the join date on a profile, or how far back the posts go.
If the account is brand new and posting nonstop, that's a red flag.
Examine the posts.
Take a close look at what the account is posting.
Does it constantly share the same link, copy and paste the same emotional messages, or post generic content repeatedly?
Bots and fake accounts often rely on repetitive, overly dramatic, or attention-grabbing posts to provoke reactions.
Assess the profile.
Check if the account feels like a real person.
Be cautious of usernames loaded with random numbers, a generic or missing profile picture, and a minimal bio.
A lack of personal photos, unique content or genuine interactions is a strong indicator that the account may not be real.
If an account ticks these three boxes, it's probably a bot.
The safest move is to block and report it.
Blocking stops the account from contacting you, and reporting alerts the platform so it can investigate and take action.
To do this, tap the three dots on the profile or message, then select block or report.
(MUSIC)
If you’re unsure whether an account is real:
- Check when the account was created.
- Examine the posts.
- Assess the profile.
If you’re still unsure, the safest thing to do is block and report it. Blocking stops further contact, and reporting helps the platform take action to protect others.
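
As a rough illustration, the three checks from the video can be written down as a simple checklist in code. The profile data and the thresholds (seven days, six digits) are made up for this sketch; real platforms use far more sophisticated signals.

```python
# Hypothetical profile data invented for this example; real platforms expose this differently.
account = {
    "username": "user83610284",
    "days_since_created": 1,
    "posts": [
        "WATCH THIS before they delete it!!! http://example.com/clip",
        "WATCH THIS before they delete it!!! http://example.com/clip",
        "WATCH THIS before they delete it!!! http://example.com/clip",
    ],
    "has_profile_photo": False,
    "bio": "",
}

def bot_checklist(account):
    """Apply the video's three checks and return the warnings raised.
    The cut-offs (7 days, 6 digits) are arbitrary choices for the sketch."""
    warnings = []

    # 1. Check when the account was created: brand new and posting non-stop is a red flag.
    if account["days_since_created"] < 7 and len(account["posts"]) >= 3:
        warnings.append("brand-new account that is already posting non-stop")

    # 2. Examine the posts: the same message copied and pasted over and over.
    if len(account["posts"]) > 1 and len(set(account["posts"])) == 1:
        warnings.append("posts the same message repeatedly")

    # 3. Assess the profile: random-number username, no photo, empty bio.
    digits = sum(ch.isdigit() for ch in account["username"])
    if digits >= 6 or not account["has_profile_photo"] or not account["bio"]:
        warnings.append("profile looks generic (random numbers, no photo, or no bio)")

    return warnings

warnings = bot_checklist(account)
for w in warnings:
    print("-", w)
if len(warnings) == 3:
    print("Ticks all three boxes - probably a bot. The safest move is to block and report it.")
```

For this example account all three checks fail, so the sketch ends with the same advice as the video: block and report.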

Classroom film: Critical thinking skills
You will probably see a wide range of content every single day – from videos and memes to AI-generated text – but how can you tell who and what to trust? It’s important to use critical thinking skills when navigating online content. This means checking sources and evidence and looking for signs of manipulation or missing context, so you can recognise trustworthy information and make informed decisions about what you read, watch and share online.
Watch this video to find out more about critical thinking skills.
You can find more teacher notes and classroom resources on our Bitesize for Teachers Solve the Story page.
JAMES: How do you know who and what to trust online? One minute you're reading a news article, the next you're scrolling through comments, memes, and AI-generated advice, all from completely different creators.
It's a lot.
But there are skills and tools you can use to critically analyse online sources, check what you see, stay safer online, and be a responsible digital citizen.
(MUSIC)
Let's start with a challenge. On the screen, you'll see several comments from different users about a single event. Your task is to discuss which of these voices you would trust, and more importantly, why. What clues do you use to decide if the post is reliable? Take a few minutes to debate this in your groups.
(MUSIC)
Deciding who to trust is tricky. Every comment and post you see was generated either by a person or by a computer system, and a fundamental skill is learning to question it before you trust it.
Some online comments share real opinions, while others try to mislead, sell something or push an agenda. Understanding and spotting the difference is key.
This is more important than ever with the rise of Artificial Intelligence, or AI. Generative AI can be used to create incredibly realistic fake images or offer harmful advice that sounds convincing. This means your critical thinking is a key part of staying safe online. Being able to spot AI-made or out-of-context images helps you avoid being misled by what you see online.
So, what are the strategies for verifying information? First, cross-reference what you see. If you read a shocking claim, your first move should be to check if you can find the same story from at least two other reliable, well-known sources. If you can't, that's a major red flag.
Second, analyse the source itself for clues to its trustworthiness. Look at the design. Does it seem professional or is it full of ads and spelling errors? Pay attention to the tone and language. Is it calm and objective, or is it emotional and clearly trying to make you angry or scared?
Now for your second task, you're going to become digital investigators. Take a look at these images. Some of them have been used out of context and others have been generated by AI. Your mission is to investigate their origin. You could use a reverse image search or look for clues in the image itself. Where did it first appear? Has its meaning been twisted as it's been shared? Are there any clues in that image that suggest it may have been generated by AI?
(MUSIC)
Being a critical consumer of media is an active job. It's about asking questions, not just passively scrolling and believing everything you see or read online.
You now have the tools. Question the source, be extra cautious with user comments and AI content, and always cross-reference to check and verify.
The next time you're online and something doesn't look or feel right to you, remember the most important question you can ask. Who do I trust here, and why? That question is your best defence in the digital world.
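
The cross-referencing step James describes – checking whether at least two other reliable, well-known sources carry the same story – can also be sketched in a few lines of Python. The list of trusted outlets and the claims below are placeholders; deciding which sources count as trustworthy is the part only you can do.

```python
# Hypothetical list of outlets; which sources you trust is your own decision.
TRUSTED_SOURCES = {"BBC News", "Reuters", "Associated Press", "The Guardian"}

def cross_reference(claim, reporting_outlets, minimum=2):
    """Return True if at least `minimum` trusted outlets carry the same story."""
    corroborating = TRUSTED_SOURCES & set(reporting_outlets)
    print(f"'{claim}' is carried by {len(corroborating)} trusted source(s)")
    return len(corroborating) >= minimum

# A shocking claim that only appears on one unknown blog: a major red flag.
print(cross_reference("Skatepark to be shut down after violent attack",
                      ["dailyskatenewz.blog"]))

# A story you can also find from at least two reliable, well-known sources.
print(cross_reference("New skatepark funding announced",
                      ["BBC News", "Reuters", "local paper"]))
```

The first claim fails the check because it only appears in one unknown place – exactly the kind of red flag the film warns about.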


