When OpenAI released the first version of Sora, I was excited. For years, I'd had a short story sitting on my hard drive, something I'd written long ago and always dreamed of bringing to life as a short film. The only problem was that I didn't have the expertise to shoot a movie, and my Blender 3D skills had gone rusty from disuse. But Sora promised something different. I could upload my sketches, feed in my script, and generate the film I saw in my mind. The creative barrier had finally been lifted.
But reality was a bit different from the demos. No matter what I tried, my outputs never looked anything like what OpenAI showcased. The scenes were adjacent to what I wanted, not what I actually needed.
It wasn't just Sora. I tested Runway ML, experimented with Veo, and tried to keep my spending reasonable. Every model produced the same kind of thing: something that "looked good" in a superficial way. They excelled at creating cliché scenes, the kind of generic imagery that checks all the technical boxes. But creating something that could fit into a coherent narrative, something with intention and specificity? That was nearly impossible.
When Sora 2 launched, I picked up right where I'd left off. Maybe this time would be different. The videos are more realistic than ever, but the main problem remains unchanged. These tools aren't struggling because they can't generate scenes or dialogue; they certainly can. The issue is that they generate what I've come to call "AI Videos," and that's a distinct category with its own aesthetic fingerprint.
The New Uncanny Valley
Think about how you instantly recognize certain types of content. Suppose I described a video to you right now: fast-paced, someone talking directly to the camera with multiple jump cuts, a ring light's circular reflection visible in their eyes, their bedroom in the background. You would immediately say "TikTok video." The format is hard to miss these days.
AI-generated videos have developed their own unique look. There's a visual quality that marks them, a subtle wrongness that your brain picks up on even when you can't articulate exactly what's off. It's the new uncanny valley, and I feel an intense revulsion whenever I encounter it. I'm not alone in this reaction either. In my small circle of friends and colleagues, we've all developed the same instinctive aversion.
I'm starting to feel the same revulsion toward YouTube Shorts, even when they're created by real people. The reason: YouTube has been secretly using AI to alter real videos, making authentic content look artificially generated. You'll notice people's faces look smoothed or sharpened, and it happens without the creator's knowledge or consent. The line between real and AI-generated content is blurring from both directions.
So if these videos trigger such a negative response in many viewers, where can AI-generated content actually thrive? The answer: with spammers, scammers, rage-baiters, and manipulators.
These bad actors are having a field day with AI video tools. A couple of months ago, I wrote about AI Video Overviews and speculated that Google might eventually start using AI-generated videos as enhanced search results, synthesizing information from multiple sources into custom video summaries. That remains speculative. But for harmful content? That's not speculation; it's happening right now, at scale.
The Primary Victims
The main targets are older adults. My parents and their peer group are constantly sharing AI-generated videos in their group chats and on social media. As I write this, my mother has just sent me a video of Denzel Washington giving life advice. Entirely fabricated, of course.
The content varies widely: health misinformation, sensational fake news, videos claiming Obama has embraced Islam, an elderly Tai Chi master dispensing dubious health tips, Trump supposedly reversing or doubling down on positions he never actually took. The specific claims change daily, but the pattern remains constant.
These videos spread like wildfire despite our collective, repeated efforts to educate people. I've spent countless hours explaining why these videos are fake. In my community group chat, I've pointed out the telltale signs multiple times, like the little cloud icon with eyes that appears on AI-generated content (the Sora watermark). I've shared practical tips: if a video seems too good or too shocking to be true, search Google for the information to verify it. Nothing seems to stick.
Some days, according to these videos, we're at war. Other days, entire cities have burned down, or a tsunami has devastated Los Angeles. You can't debunk faster than this information spreads.
When you find these videos on YouTube, scroll down to the comments. You'll see real people engaging seriously with fabricated content, offering heartfelt responses to synthetic personas, debating points that were never actually made by the people shown in the videos. I get phone calls from family overseas asking about things they believe are happening right at my doorstep. When I ask where they heard it, they forward a WhatsApp video.
There's no easy solution to this crisis.
AI video technology has found its audience, just not the audience the marketing materials promised. These tools weren't really designed to help people like me overcome technical limitations and bring our stories to life. They were made to enable those who want to manipulate, deceive, and exploit people for engagement, profit, or ideology.
I've tried to find legitimate, beneficial use cases for AI video generation. I've thought about educational applications, accessibility features, and experimental art projects. Maybe they exist in theory, but in practice, I keep coming back to the same conclusion.
Right now, every AI video I encounter is harmful. Every single one, without exception.
Either it's directly harmful: spreading misinformation, impersonating real people, manipulating vulnerable viewers. Or it's indirectly harmful: training us to accept a synthetic reality where nothing can be trusted and everything must be questioned. Even the "harmless" AI videos contribute to a broader erosion of trust in visual media.
This technology is devastatingly effective for the purposes bad actors have found for it. The creative barrier I hoped to overcome remains in place. But now there's a new barrier too. The barrier of trust. And that one might be much harder to rebuild.
