Why Is Everyone Supposed to Die If Machines Can Think?

How to bully your way out of a bubble.

If you only listen to spokespersons for AI companies, you'll have a skewed view of how AI is actually being integrated into the workplace. You probably don't need to convince a developer to include it in their workflow, but you also can't dictate how they do so. Whenever I sit next to another developer during pair programming, I can't help but feel frustrated by their setup. But I don't complain, because they'd be just as annoyed with mine. The beauty of dev work is that all that matters is the output.

If you use a boilerplate generator like Create React App, few will complain. If you use AI to generate the same code, as long as it works, no one will complain either. If the code is crafted with your own wetware, no one will be the wiser. Developers will use any tool at their disposal to increase their own productivity. But what happens when that thousand-dollar-per-developer-per-month subscription starts to feel expensive? What happens when managers expect a tenfold return on investment, yet sprint velocity doesn't budge?

On one hand, new metrics are created to track developers' use of the tool. In my experience, these are highly inaccurate and vary wildly. On the other hand, companies are using AI as justification for laying off workers. So which metric is to be trusted?

AI isn't simply a solution in search of a problem. It's quite useful. One person will tell you it's great for writing tests, another will praise it for writing utility functions, and another will use it to better understand a requirement. Each is a valid use case. But the question managers keep asking is: "Can we use AI instead of hiring another dev?"

I'm not sure what is supposed to happen if we achieve so-called AGI. Does it mean I no longer have to do code reviews? Is it AGI when the AI stops hallucinating? My shower-thought answer: AGI is an AI that can say "I don't know" when it doesn't know the answer. But I don't think Sam Altman sees that as a selling point.

Why are we supposed to die if a machine can think? Every time someone raises this argument, I think of Thanos. In the Avengers saga, he kills half of all living beings in the universe. It's an act so total and irreversible that the writers had to bend time itself to undo it. And still, fifteen movies later, the franchise keeps going. Each new antagonist has to threaten something, but nothing lands the same way. You already saw the worst. The scale is broken.

The villain is a terrorist from an unnamed country? Gimme a break.

That's what the AI extinction narrative has done to the conversation about AI. By opening with the end of the world, it made every practical concern feel small by comparison. Who wants to talk about sprint velocity and hallucinated function calls when we're supposedly staring down an existential threat? So we don't. We argue about the apocalypse instead. Meanwhile, I am debugging a production incident at 2am, in a codebase that has never once tried to kill me, but has absolutely tried to ruin my weekend.


The reality is quite different from the drama that unfolds online. The longer this AI craze continues, the less I believe we're headed for a dramatic bubble pop. Instead, I think the major players will try to bully their way out of one. And that bullying is already happening on at least three fronts: language, narrative, and money.

Microsoft is leading the language crackdown. They are rounding up critics in their own Copilot Discord servers, banning users who use the now-deemed-derogatory term "Microslop." Nvidia is publicly asking people to stop using the phrase "AI slop." These aren't isolated incidents of corporate thin skin. They are coordinated attempts to police the vocabulary we use to criticize the technology. Control the language, and you go a long way toward controlling the conversation. When you can't call a thing what it is, it becomes harder to argue that the thing exists at all.

On the narrative front, we are told every day that AI is good, innovative, and inevitable. Then we're told it's going to take our jobs. And at the same time, we're told it's an existential threat that could wipe us off the planet. It is simultaneously the best thing that could ever happen to humanity and the worst. I'm reminded of George Orwell's "War is peace, freedom is slavery, ignorance is strength." It's a cognitive trap.

When a technology is framed as both savior and apocalypse, the questions regular people ask are seen as mundane. We can't ask: "Does it work? Is it worth the cost? Are we actually benefiting from this?" Instead, we spend our energy arguing about the end of the world, and the companies keep burning through cash while the narrative burns through our attention.

On the money front, we all witnessed it firsthand with the fiasco involving Anthropic, OpenAI, and the Department of Defense. People were quick to sort the players into the good guys, the bad guys, and the ugly. But to me, it looked like a dispute designed to obscure the problem that has plagued AI companies from the very beginning: they need to make money.

It doesn't matter if a company generates $20 billion a year in revenue when its operating costs double annually. It's still in the red.

Anthropic was making a grand stand, positioning itself as the principled actor fighting against the US war machine. At the same time, they had no issue working with Palantir, a company that makes no secret of its commitment to mass surveillance and its role in powering the machinery of war.

Meanwhile, OpenAI is struggling with its own financial stability. They've just launched ads on their platform, something Sam Altman once described as a last resort. When you're in the red and a customer is willing to pay, principles become a luxury you can do without. Given their history of bending copyright law and converting to a for-profit entity, it's naive to assume there are other principles they wouldn't bend as well. They quickly jumped into the DoD deal, scooping up a $200 million contract to replenish their coffers.

There was one detail in Anthropic's statement that deserved more attention than it got:

We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.

In other words: surveilling citizens is immoral. If you're a non-citizen or a foreigner, you're on your own.


So right now, AI companies are hemorrhaging money, policing the words we use to criticize them, manufacturing existential dread to crowd out any skepticism, and taking defense contracts while performing ethical restraint. And somewhere in the middle of this, we're supposed to believe that only they can save us.

When you're losing money but need to maintain the illusion of infinite growth, you don't wait for the market to correct you. You make the bubble burst feel not just unlikely, but unthinkable. You bully the language, inflate the stakes, and monetize the fear.

As individuals, what are we supposed to do with the useful part of the technology? It helps me write tests. It helps my colleagues parse requirements. Used without hype and within realistic expectations, it is actually a good tool. But "a good tool" doesn't justify the valuations, the layoffs blamed on AI, the defense contracts, or the Discord bans. It doesn't sustain the mythology that has been built around it.

That gap between the tool that exists and the revolution that was promised is precisely what the bullying is designed to keep you from looking at too closely. I still struggle to answer managers who ask me to justify the team's use of the tool. I never had to justify my IDE, or my secret love affair with tmux, before. For now, all I can tell them is: "It's useful, within limits, and that should be enough."

It won't be what they want to hear. But it's more than the industry has managed to say about itself.

