What Does It Really Mean to Be an AI Skeptic?

I’m Not a Luddite

So, I call myself an AI skeptic. But honestly? It's more accurate to say I'm a technology skeptic in general. Before you picture me churning butter by candlelight, boycotting smartphones, or mailing handwritten letters, let me clarify: skepticism isn't rejection. It's a pause, a thoughtful question: Does this actually solve a real problem, or is it just the shiny object at the center of the latest hype cycle?

I’ve seen this movie play out before, many times over.

Remember the HTML5 Badge Era?

Rewind to the early 2010s. Developers proudly slapped "HTML5" badges on their websites, displaying them like membership cards to an exclusive, forward-thinking club. "I’m an HTML5 guy!" they’d proclaim, often with a hint of smugness. But what did that really mean for the rest of us?

For my team, it often meant wrestling with spotty browser support while a significant chunk of our clients were still happily, or begrudgingly, using Internet Explorer. Our job wasn’t to wear digital badges or chase fleeting trends; it was to deliver working, reliable products that users could actually, well, use. So, we waited. We strategically added shims where absolutely necessary, avoided unnecessary chaos, and truly embraced HTML5 only when it actually made practical sense and provided tangible benefits. (Spoiler alert: It eventually did become indispensable. But timing mattered immensely.)
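That "shims where absolutely necessary" strategy was really just capability-gated adoption: use the new API when the browser provides it, fall back to something boring when it doesn't. Here's a minimal sketch of the idea; the names are illustrative, not our actual code:

```typescript
// Illustrative sketch of capability-gated adoption.
// The shape we want, whether native or shimmed.
interface KVStore {
  set(key: string, value: string): void;
  get(key: string): string | null;
}

// In-memory fallback standing in for, say, a cookie-based shim
// for browsers (hello, old Internet Explorer) without the new API.
class MemoryStore implements KVStore {
  private data = new Map<string, string>();
  set(key: string, value: string): void {
    this.data.set(key, value);
  }
  get(key: string): string | null {
    return this.data.get(key) ?? null;
  }
}

// Use the shiny new capability only if the runtime actually has it.
function pickStore(native: KVStore | undefined): KVStore {
  return native ?? new MemoryStore();
}
```

The point isn't the code, it's the policy: the feature ships either way, and the new technology earns its place only where it actually exists.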

[Image: HTML5 badge]

Fast-Forward to Today’s "AI Badges"

Right now, the AI landscape feels strikingly similar to the HTML5 era around 2012. Everyone, from startups to established enterprises, seems to be rushing to declare themselves "AI-first," "AI-powered," or "AI-native." It's as if any form of skepticism is an act of heresy against the technological gods. But here's my grounded take: like the HTML5 badge before it, an "AI" badge tells you about a company's marketing, not its product.

My History of Skipping the Hype

My skepticism isn't born of cynicism, but of a pragmatic understanding of technology adoption cycles. Over the years, I've consciously skipped a string of "must-have" trends.

If any of those trends had genuinely become essential, industry-standard tools for delivering value to users, I would have joined in. Gladly. But trying to adopt and integrate every fleeting trend "just in case" is not only exhausting but also incredibly expensive in time, resources, and mental overhead.

What My Skepticism Actually Means

My approach to new technology, especially AI, boils down to a few core principles:

  1. Prioritize the problem, not the tech. Before even considering AI, ask: What specific problem are we trying to solve? Does AI genuinely offer the most efficient, effective, and reliable solution? Or is it a glittery, expensive distraction that complicates things more than it helps?
  2. Wait for the dust to settle. Let the early adopters, the brave pioneers, battle the inevitable bugs, discover the hidden pitfalls, and help shape the best practices. I'll adopt when tools are stable, documentation is robust, and reliable support ecosystems exist. This isn't about being last; it's about being smart.
  3. Trust, but verify. AI output, whether it's code, content, or analysis, should be treated like an intern’s first commit: with thorough scrutiny, rigorous testing, and an understanding that while it can be helpful, it’s not infallible.
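"Trust, but verify" in practice: take whatever the assistant drafted and pin its behavior with your own tests before it goes anywhere near main. A hypothetical sketch, with a made-up helper function standing in for AI-generated code:

```typescript
// Hypothetical: suppose an assistant drafted this slug helper for us.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}

// The "intern's first commit" review: explicit cases, including the
// edge cases a confident-sounding model tends to gloss over.
const cases: Array<[string, string]> = [
  ["Hello, World!", "hello-world"],
  ["  spaced   out  ", "spaced-out"],
  ["---", ""], // nothing usable in, empty slug out
];
for (const [input, expected] of cases) {
  if (slugify(input) !== expected) {
    throw new Error(`slugify(${JSON.stringify(input)}) did not yield ${JSON.stringify(expected)}`);
  }
}
```

Whether the code came from a model or a junior developer, the discipline is identical: the tests, not the author's confidence, decide whether it merges.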

Someday, AI integration might be as fundamental and ubiquitous as electricity or the internet. If that happens, if AI becomes as vital as oxygen for building truly impactful products, then yes, I'll be first in line for my AI oxygen tank. But today? I'm letting the market fight it out while I focus my energy and resources on what actually delivers tangible, undeniable value to users and businesses.

Join me, maybe? You don’t have to hate tech or reject innovation outright. Just stay curious, remain critically engaged, and feel free to skip the badges. Your product, and your sanity, will thank you.

