How we get radicalized in America

Published:

by

Be healthy, be young, fall ill. You have a great job of course, you have insurance. It would be ok if the worst thing about health insurance in America was it is hard to navigate. No! The actual problem is that your insurance is incentivized not to cover you at your most vulnerable moment.

You pay them every month. That's money that goes from your paycheck, into their pockets. Now if they cover you, that's money that leaves their pocket, and go into your treatment. There are two ways they can make money. 1. You continue paying every month, and never fall ill. 2. You fall ill, and they deny you care.

Only the second option is an active option. Health Insurance is a scam that we have normalized in the United States. It helps no one, it makes healthcare unaffordable, and you have to fight tooth and nail to get any sort of care. When Luigi was in the headlines, and news anchors were asking how such a young man can get radicalized, I shook my head.

In America, it is our tradition to get 2 jobs. It is our tradition to live paycheck to paycheck. And it is our tradition to get radicalized the moment we get sick. When you get sick, the healthcare industry tries to charge much as they can get away with and the insurance industry tries to deny as much as it can.

Shower Thought: Git Teleportation

Published:

by

In many sci-fi shows, spaceships have a teleportation mechanism on board. They can teleport from inside their ship to somewhere on a planet. This way, the ship can remain in orbit while its crew explores the surface.

But then people started asking: how does the teleportation device actually work? When a subject stands on the device and activates it, does it disassemble all the atoms of the person and reconstruct them at the destination? Or does it scan the person, kill them, and then replicate them at the destination?

This debate has been on going for as long as I can remember. Since teleportation machines exist only in fiction, we can never get a true answer. Only the one that resonates the most.

So, that's why I thought of Diff Teleportation. Basically, this is a Git workflow applied to teleportation. When you step onto a device, we run the command:

$> git checkout -b kirk-teleport-mission-123
$> beam -s kirk -d planet-xyz -o kirk-planet-xyz    # beam is a vibe-coded teleportation command

Then, the machine will have to suspend activity on the master branch. This will make merging the branch much simpler in the future.

# sci-fi git command
$> git suspend master

Now, the person that has been teleported can explore the planet and go about mission 123. While they are doing their job, let's see what flags are supported in beam:

$> beam -h
Usage: beam [OPTION]... [FILE]...
Beam a file to a destination

    -s, --subject               subject to beam
    -d, --destination       destination to beam a subject
    -o, --output                name of the file at the destination
    -D, --Destroy               destroy a subject

When the mission is completed, they can be teleported back. Well, not the whole person, otherwise we end up with a clone.

$> beam -s kirk-planet-xyz -d ss-ent-v3 -o kirk-temp

We could analyze the new data and remove any unwanted additions. For example, we could clean up any contamination at this point. But for the sake of time, I'll explore that another day. As an exercise, run git diff for your own curiosity. For now, all we are interested in is the information that the teleportee has gathered from the planet, which we will merge back into master.

$> git add src/neurons
$> git commit -m "add mission 123 exploration"
$> git stash
$> git stash drop   # Hopefully you've analyzed it.
$> git push origin kirk-teleport-mission-123

I imagine in science fiction, there is an automated way for PR reviews that is more reliable than an LLM. Once that process is completed, we can merge to master and run some cleanup code in the build pipeline.

$> git branch -D kirk-teleport-mission-123
$> beam -s kirk-planet-xyz -D
$> beam -s kirk-temp -D
$> git unsuspend master

Somewhere down on planet XYZ, a clone stepped onto the teleportation device. He saw a beam of light scan his body from head to toe. Then, for a moment, he wondered if the teleportation had worked. But right before he stepped off, the command beam -s kirk-planet-xyz -D ran, and he was pulverized.

Back in the spaceship, a brand-new clone named kirk-temp appeared at the teleportation station. He was quickly sanitized, diff'd, and reviewed. But before he could gather his thoughts, the command beam -s kirk-temp -D ran, and he was pulverized.

Not a second later, the original subject was reanimated, with brand-new information about "his" exploration on planet XYZ.

Teleportation is an achievable technology. We just have to come to terms with the fact that at least two clones are killed for every successful teleportation session. In fact, if we are a bit more daring, we might not even need to suspend the first subject. We can create multiple clones, or agents, and have them all explore different things. When their task is complete, we can wrestle a bit with merge conflict, run a couple beam -D commands, and the original subject is blessed with new knowledge.

OK, I'm getting out of this shower.

You Digg?

Published:

by

digg old logo

For me, being part of an online community started with Digg. Digg was the precursor to Reddit and the place to be on the internet. I never got a MySpace account, I was late to the Facebook game, but I was on Digg.

When Digg redesigned their website (V4), it felt like a slap in the face. We didn't like the new design, but the community had no say in the direction. To make it worse, they removed the bury button. It's interesting how many social websites remove the ability to downvote. There must be a study somewhere that makes a sound argument for it, because it makes no sense to me.

Anyway, when Digg announced they were back in January 2026, I quickly requested an invite. It was nostalgic to log in once more and see an active community building back up right where we left off.

But then, just today, I read that they are shutting down. I had a single post in the technology sub. It was starting to garner some interest and then, boom! Digg is gone once more.

digg is gone

The CEO said that one major reason was that they faced "an unprecedented bot problem."

This is our new reality. Bots are now powered by AI and they are more disruptive than ever. They quickly circumvent bot detection schemes and flood every conversation with senseless text.

It seems like there are very few places left where people can have a real conversation online. This is not the future I was looking for. I'll quietly write on my blog and ignore future communities that form.

Rest in peace, Digg.

The Server Older than my Kids!

Published:

by

This blog runs on two servers. One is the main PHP blog engine that handles the logic and the database, while the other serves all static files. Many years ago, an article I wrote reached the top position on both Hacker News and Reddit. My server couldn't handle the traffic. I literally had a terminal window open, monitoring the CPU and restarting the server every couple of minutes. But I learned a lot from it.

The page receiving all the traffic had a total of 17 assets. So in addition to the database getting hammered, my server was spending most of its time serving images, CSS and JavaScript files. So I decided to set up additional servers to act as a sort of CDN to spread the load. I added multiple servers around the world and used MaxMindDB to determine a user's location to serve files from the closest server. But it was overkill for a small blog like mine. I quickly downgraded back to just one server for the application and one for static files.

Ever since I set up this configuration, my server never failed due to a traffic spike. In fact, in 2018, right after I upgraded the servers to Ubuntu 18.04, one of my articles went viral like nothing I had seen before. Millions of requests hammered my server. The machine handled the traffic just fine.

It's been 7 years now. I've procrastinated long enough. An upgrade was long overdue. What kept me from upgrading to Ubuntu 24.04 LTS was that I had customized the server heavily over the years, and never documented any of it. Provisioning a new server means setting up accounts, dealing with permissions, and transferring files. All of this should have been straightforward with a formal process. Instead, uploading blog post assets has been a very manual affair. I only partially completed the upload interface, so I've been using SFTP and SCP from time to time to upload files.

It's only now that I've finally created a provisioning script for my asset server. I mostly used AI to generate it, then used a configuration file to set values such as email, username, SSH keys, and so on. With the click of a button, and 30 minutes of waiting for DNS to update, I now have a brand new server running Ubuntu 24.04, serving my files via Nginx. Yes, next months Ubuntu 26.04 LTS comes out, and I can migrate it by running the same script.

I also built an interface for uploading content without relying on SFTP or SSH, which I'll be publishing on GitHub soon.

It's been 7 years running this server. It's older than my kids. Somehow, I feel a pang of emotion thinking about turning it off. I'll do it tonight...

But while I'm at it, I need to do something about the 9-year-old and 11-year-old servers that still run some crucial applications.

My older servers need upgrading

I'm Not Lying, I'm Hallucinating

Published:

by

Andrej Karpathy has a gift for coining terms that quickly go mainstream. When I heard "vibe coding," it just made sense. It perfectly captured the experience of programming without really engaging with the code. You just vibe until the application does what you want.

Then there's "hallucination." He didn't exactly invent it. The term has existed since the 1970s. In one early instance, it was used to describe a text summarization program's failure to accurately summarize its source material. But Karpathy's revival of the term brought it back into the mainstream, and subtly shifted its meaning, from "prediction error" to something closer to a dream or a vision.

Now, large language models don't throw errors. They hallucinate. When they invent facts or bend the truth, they're not lying. They're hallucinating. And with every new model that comes out and promises to stay clean off drugs, it still hallucinates.

An LLM can do no wrong when all its failures are framed as neurological disorder. For my part, I hope there's a real effort to teach these models to simply say "I don't know." But in the meantime, I'll adopt the term for myself. If you ever suspect I'm lying, or catch me red-handed, just know that it's not my fault. I'm just hallucinating.

“How old are you?” Asked the OS

Published:

by

A new law passed in California to require every operating system to collect the user's age at account creation time. The law is AB-1043. And it was passed in October of 2025.

How does it work? Does it apply to offline systems? When I set up my Raspberry Pi at home, is this enforced? What if I give an incorrect age, am I breaking the law now? What if I set my account correctly, but then my kids use the device? What happens?

There is no way to enforce this law, but I suspect that's not the point. It's similar to statements you find in IRS documents. The IRS requires you to report all income from illegal activities, such as bribes and scams. Obviously, if you are getting a bribe, you wouldn't report it, but by not reporting it you are breaking additional laws that can be used to get you prosecuted.

When you don't report your age to your OS whether it's a windows device or a Tamagotchi, you are breaking the law. It's not enforced of course, but when you are suspected of any other crime, you can be arrested for the age violation first, then prosecuted for something else.

What a world we live in.

That's it, I'm cancelling my ChatGPT

Published:

by

Just like everyone, I read Sam Altman's tweet about joining the so-called Department of War, to use ChatGPT on DoW classified networks. As others have pointed out, this is the entry point for mass surveillance and using the technology for weapons deployment. I wrote before that we had the infrastructure for mass surveillance in place already, we just needed an enabler. This is the enabler.

This comes right after Anthropic's CEO wrote a public letter stating their refusal to work with the DoW under their current terms. Now Anthropic has been declared a public risk by the President and banned from every government system.

Large language models have become ubiquitous. You can't say you don't use them because they power every tech imaginable. If you search the web, they write a summary for you. If you watch YouTube, one appears right below the video. There's a Gemini button on Chrome, there's Copilot on Edge and every Microsoft product. There it is in your IDE, in Notepad, in MS Paint. You can't escape it.

Switching from one LLM to the next makes minimal to no difference for everyday use. If you have a question you want answered or a document to summarize, your local Llama will do the job just fine. If you want to compose an email or proofread your writing, there's no need to reach for the state of the art, any model will do. For reviewing code, DeepSeek will do as fine a job as any other model.

OpenAI war soldier

A good use of ChatGPT's image generator.

All this to say, ChatGPT doesn't have a moat. If it's your go-to tool, switching away from it wouldn't make much of a difference. At this point, I think the difference is psychological. For example, my wife once told me she only ever uses Google and can't stand any other search engine. What she didn't know was that she had been using Bing on her device for years. She had never noticed, because it was the default.

When I read the news about OpenAI, I was ready to close my account. The only problem is, well, I never use ChatGPT. I haven't used it in years. My personal account lay dormant. My work account has a single test query despite my employer trying its hardest to get us to use it.

But I think none of that matters when OpenAI caters to a government agency with a near-infinite budget. For every public account that gets closed, OpenAI will make up for it with deeper integration into classified networks.

Not even 24 hours later, the US is at war with Iran. So while we're at it, here is a nice little link to help you close your OpenAI account.

Nvidia was only invited to invest

Published:

by

Nvidia was only invited to invest.

That is one reversal of commitment. Remember that graph that has been circling around for some time now? The one that shows the circular investment from AI companies:

OpenAI circular investment

Basically Nvidia will invest $100 billion in OpenAI. OpenAI will then invest $300 billion in Oracle, then Oracle invests back into Nvidia. Now, Jensen Huang, the Nvidia CEO, is back tracking and saying he never made that commitment.

“It was never a commitment. They invited us to invest up to $100 billion and of course, we were, we were very happy and honored that they invited us, but we will invest one step at a time.”

So he never committed? Did we make up all these graphs in our head? Was it a misquote from a journalist somewhere that sparkled all this frenzy? Well, you can take a look in OpenAI press release in September of 2025. They wrote:

NVIDIA intends to invest up to $100 billion in OpenAI as the new NVIDIA systems are deployed.

In fact, Jensen Huang went on to say:

“NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT. This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence.”

It sounds like Jensen is distancing himself from that $100 billion commitment. Did he take a peak inside OpenAI and change his mind? At the same time, OpenAI is experimenting with ads. Sam Altman stated before that they would only ever use ads as a last resort. It sounds like we are in the phase.

Markdown.exe

Published:

by

I've been spending time looking through "skills" for LLMs, and I feel like I'm the only one panicking. Nobody else seems to care.

Agent skills are supposed to be a way to teach your LLM how to handle specific tasks. For example, if you have a particular method for adding tasks to your calendar, you write a skill file with step-by-step instructions on how to retrieve a task from an email and export it. Once the agent reads the file, it knows exactly what to do, rather than guessing.

This can be incredibly useful. But when people download and share skills from the internet, it becomes a massive attack vector. Whether it's a repository or a marketplace, there is ample room for attackers to introduce malicious instructions that users never bother to vet. It is happening.

We are effectively back to the era of downloading .exe files from the internet and running them without a second thought.

Congratulations are in order! While you were busy admiring how nicely this skill formats your bullet points, it quietly rummaged through your digital life, uploaded your browser history to a pastebin, and ordered fifteen pounds of unscented kitty litter to your workplace. You thought you were downloading a productivity tool, but you actually just installed a digital intern with a criminal record and a vendetta. It turns out, treating a text file like a harmless puppy was a mistake. You saw "Markdown" and assumed safety, but you forgot that to an LLM, these words are absolute law. While you were vetting the font choice, the skill was busy sending your crypto keys to a generous prince in a faraway land. You didn't just automate your workflow; you automated your own downfall. So, sit back, relax, and watch as your calendar deletes your meetings and replaces them with "Time to Reflect on My Mistakes." You have officially been pawned. Next time, maybe read the instructions before you let the AI run your life.

I can't upgrade to Windows 11, now leave me alone

Published:

by

Microsoft won't let you dismiss the upgrade notification

So support for Windows 10 has ended. Yes, millions of users are still on it. One of my main laptops runs Windows 10. I can't update to Windows 11 because of the hardware requirements. It's not that I don't have enough RAM, storage, or CPU power. The hardware limitation is specifically TPM 2.0.

What is TPM 2.0, you say? It stands for Trusted Platform Module. It's basically a security chip on the motherboard that enables some security features. It's good and all, but Windows says my laptop doesn't support it. Great! Now leave me alone.

Well, every time I turn on my computer, I get a reminder that I need to update to Windows 11. OK, at this point a Windows machine only belongs to you in name. Microsoft can run arbitrary code on it. They already ran the code to decide that my computer doesn't support Windows 11. So why do they keep bothering me?

Windows 10 end of life announcement

Fine, I'm frustrated. That's why I'm complaining. I've accepted the fact that my powerful, yet 10-year-old laptop won't get the latest update. But if Microsoft's own systems have determined my hardware is incompatible, why are they harassing? I'll just have to dismiss this notification and call it a day.

But wait a minute. How do I dismiss it?

remind me later or learn more

I cannot dismiss it. I can only be reminded later or... I have to learn more. If I click "remind me later," I'm basically telling Microsoft that I consent to being shown the same message again whenever they feel like it. If I click "learn more"? I'm taken to the Windows Store, where I'm shown ads for different laptops I can buy instead. Apparently, I'm also probably giving them consent to show me this ad the next time I log in.

windows laptop buying guide

It's one thing to be at the forefront of enshittification, but Microsoft is now actively hostile to its users. I've written about this passive-aggressive illusion of choice before. They are basically asking "Do you want to buy a new laptop?" And the options they are presenting are "Yes" and "OK."

This isn't a bug. This is intentional design. Microsoft has deliberately removed the ability to decline.

Dear Microsoft

Listen. You said my device doesn't support Windows 11. You're right. Now leave me alone. I have another device running Windows 11. It's festered with ads, and you're trying everything in your power to get me to create a Microsoft account.

I paid for that computer. I also paid for a pro version of the OS. I don't want OneDrive. I don't want to sign up with my Microsoft account. Whether I use my computer online or offline is none of your business. In fact, if you want me to create an account on your servers, you are first required to register your OS on my own website. The terms and conditions are simple. Every time you perform any network access, you have to send a copy of the payload and response back to my server. Either that, or you're in breach of my terms.

Notes:

By the way, the application showing this notification is called Reusable UX Interaction Manager sometimes. Other times it appears as Campaign Manager.