My mother handed me her phone a couple days ago. "Do you think this is true?" she asked, her finger hovering over a video about new curfew laws coming to California, featuring Denzel Washington's take on the matter. I'm so proud that she's learned to ask this question now instead of immediately sharing. But as she waited for my response, I noticed she had already watched the entire video.
We listened together for one second, just one second. The voice was obviously synthetic: that familiar AI-generated voice in almost every low-effort YouTube ad. I checked the video details, and YouTube had marked it as significantly AI-generated content. We didn't need to hear another word.
We are the first generation tasked with teaching our parents how to navigate the world. When a good part of that world exists on the internet, we have to make sure they stay safe and don't fall for scams. I've written before about how all AI videos have become essentially spam at this point, specifically targeting people over a certain age. I've tried to come up with better ways to help my community recognize videos designed to deceive them, and I think I've finally spotted the pattern.
Here's a generation that was taught that intelligent people take their time to form an opinion. But their adversaries are taking full advantage of that very trait. The problem isn't that they're asking whether something is true; that's commendable. The problem is that they're consuming all of the content before making that decision.
They watch the full video. They read the entire article. They listen to the complete AI-generated audio. Then they try to evaluate whether the content makes sense. They share it with their peers and wait for feedback. They read through the comments, which usually support whatever the video claims. The critical mistake they're making is not dismissing the content the moment they realize it may be AI-generated.
For example, when you hear that familiar voice you've come to associate with AI-generated videos, there's no reason to continue watching. You can be forgiven if it's a funny video or a meme. But when that voice is discussing an important subject? Dismiss it immediately. There's no reason to weigh the pros and cons when the premise is already wrong.
If the voice is generated, dismiss it. If the image itself looks synthetic, dismiss it. If the thumbnail has yellow and red text, dismiss it (you won't be able to unsee this pattern now). If the information being discussed in these AI videos is true and actually impactful, serious news outlets will cover it. If there's a curfew in Los Angeles, you won't hear about it first from a YouTuber based in Bangladesh who only posts videos with ethereal thumbnails.
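These checks amount to a short-circuit rule: the first positive signal ends the evaluation, and nothing after it matters. As a programmer, I find it helps to see it written out. Here is a minimal sketch; the field names are my own invention, purely illustrative, since in reality a person applies these checks, not software:

```python
# A sketch of the "dismiss at the first sign" heuristic.
# The signal names below are hypothetical labels, not a real API.

def should_dismiss(video: dict) -> bool:
    """Return True as soon as any single spam signal is present."""
    spam_signals = (
        video.get("voice_is_synthetic", False),         # the familiar AI narrator
        video.get("image_looks_synthetic", False),      # AI-generated visuals
        video.get("thumbnail_yellow_red_text", False),  # the classic spam thumbnail
    )
    # any() stops at the first True: no weighing of pros and cons.
    return any(spam_signals)

# One positive signal is enough; an empty record passes.
print(should_dismiss({"voice_is_synthetic": True}))  # True
print(should_dismiss({}))                            # False
```

The point of the short-circuit is exactly the advice above: once one signal fires, you never reach the step of evaluating the message itself.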
I used to tell my mother not to watch serious news on channels with just a handful of subscribers. Unfortunately, AI scammers now have hundreds of thousands of subscribers, if not millions.
This isn't to say all AI is bad; we can argue about that another time. It's simply to say that serious information doesn't come packaged in AI-generated content. At the first sign that a video is spam, don't try to understand the message it's sharing. Just dismiss it.
My mother nodded as I handed her phone back. One second was all we needed. She has learned something more valuable than how to fact-check a video. She has learned when not to waste her time trying.

Comments (2)
Dylan:
"We are the first generation tasked with teaching our parents how to navigate the world."
This is a really good point. However, as was the case with our parents when they were teaching us, teaching is a skill that not everybody is good at.
I have spoken to my own mother about AI, but the mechanism I used was to convey that AI is at times indistinguishable from reality: sometimes there aren't clear tells. And I think this may have been the wrong approach because it sows a general distrust without a solution.
I like your solution of directly pointing out the identifiable clues, as it gives them the tools to engage with the world instead of just disconnecting from it. But then I worry about a time when those clues will no longer be there, and I think that it would be better if they just disconnected and/or stuck solely to trusted sources. I would rather my parents read the BBC, NYTimes, etc., even with all their warmongering, because I know that their distortions still have at least a kernel of truth in them.
The NYTimes reports on Iran saying there are protests: they may get the number of dead protestors wrong but you can trust that there were protests. AI will generate "news" about a protest that never happened. The problem may not be the Internet but instead Facebook and YouTube.
Ibrahim Diallo (author):
Hey @Dylan, you are right. Just relying on cues to tell if something is AI is not reliable. Even the weird-looking fingers are becoming a thing of the past.
Just like you, I direct my parents to these "trusted" news sites for the same reasons. You can read an entire BBC article and scrutinize it, but you can't do the same with @randomuser1234.
Ironically, I'm glad to know that I am not the only one fighting this fight.