Your AWS Certification Makes You an AWS Salesman

Published:

by

I must have been the last developer still confused by the AWS interface. I knew how to access DynamoDB; that was the only tool I needed for my daily work. But everything else was a mystery. How do I access web hosting? If I needed a small server to host a static website, what service would I use? Searching for "web hosting" inside the AWS console yielded nothing.

After digging through the web, I found the answer: an Elastic Compute Cloud instance, better known as EC2. I learned that I could use it under the "Free Tier." Amazon offers free tiers for many services, but figuring out the actual cost beyond that introductory period requires elaborate calculation tools. In fact, I've often seen independent developers build tools specifically to help people decipher AWS pricing.

If you want to use AWS effectively, it seems the only path is to get certified. Companies send employees to conferences and courses to learn the platform. I took some of those courses, and they taught me how to navigate the interface and build very specific things. But that skill isn't transferable. In the course, I wasn't exactly learning a new engineering skill. Instead, I was learning Amazon.

Amazon has created a complex suite of tools that has become the industry standard. Behind its moat of confusion, we are trained to believe it is the only option. Its complexity justifies the high cost, and the Free Tier lures in new users who settle into the idea that this is just "the way" to do web development.

When you are presented with a simple interface like DigitalOcean or Linode and a much cheaper price tag, you tend to think that something is missing. Surely, a cheaper, simpler service must lack half the features, right? The reality is, you don't need half the stuff AWS offers. Where other companies create tutorials to help you build, Amazon offers certificates. It is a powerful signal for enterprise legitimacy, but for most developers, it is overkill.

This isn't to say AWS is "bad," but it obscures the reality of running a web service. It is much easier than it seems. There are hundreds of alternatives for hosting. You can run your services reliably on a VPS without ever breaking the bank.

Most web programming is free, or at the very least, affordable.

13th Year of Blogging

Of all the days to start a blog, I chose April Fools' Day. It wasn't intentional, maybe more of a reflection of my mindset. When I decide to do something, I shut off my brain and just do it. This was a commitment I made without thinking about the long-term effects.

I knew writing was hard, but I didn't know how hard. I knew that maintaining a server was hard, but I didn't know the stress it would cause. Especially that first time I went viral. Seeing traffic pour in, reading back the article, and realizing it was littered with errors. I was scrambling to fix those errors while users hammered my server. I tried restarting it to relieve the load and update the content, but to no avail. It was a stressful experience. One I wouldn't trade for anything in the world.

13 years later, it feels like the longest debugging session I've ever run. Random people message me pointing out bugs. Some of it is complete nonsense. But others... well, I actually sent payment to a user who sent me a proof of concept showing how to compromise the entire server. I thought he'd done some serious hacking, but when I responded, he pointed me to one of my own articles where I had accidentally revealed a vulnerability in my framework.

The amount you learn from running your own blog can't be replicated by any other means. Unlike other side projects that come and go, the blog has to remain. Part of its value is its longevity. No matter what, I need to make sure it stays online. In the age of AI, it feels like anyone can spin up a blog and fill it with LLM-generated content to rival any established one. But there's something no LLM can replicate: longevity.

No matter what technology we come up with, no tool can create a 50-year-old oak tree. The only way to have one is to plant a seed and give it the time it needs to grow.

Your very first blog post may not be entirely relevant years later, but it's that seed. Over time, you develop a voice, a process, a personality. Even when your blog has an audience of one, it becomes a reflection of every hurdle you cleared. For me, it's the friction in my career, the lessons I learned, the friends I made along the way. And luckily, it's also the audience that keeps me honest and stops me from spewing nonsense.

Nothing brings a barrage of emails faster than being wrong.

Maybe that's why I subconsciously published it on April Fools' Day. Maybe that's the joke. I'm going to keep adding rings to my tree, audience or no audience. I'm building longevity.

Thank you for being part of this journey.

Extra: Some articles I wrote on April Fools' Day.

How we get radicalized in America

Be healthy, be young, fall ill. You have a great job, of course; you have insurance. It would be OK if the worst thing about health insurance in America were that it is hard to navigate. No! The actual problem is that your insurer is incentivized not to cover you at your most vulnerable moment.

You pay them every month. That's money that goes from your paycheck into their pockets. Now, if they cover you, that's money that leaves their pocket and goes into your treatment. There are only two ways they can make money: 1. You continue paying every month and never fall ill. 2. You fall ill, and they deny you care.

Only the second option is one they actively control. Health insurance is a scam that we have normalized in the United States. It helps no one, it makes healthcare unaffordable, and you have to fight tooth and nail to get any sort of care. When Luigi was in the headlines and news anchors were asking how such a young man could get radicalized, I shook my head.

In America, it is our tradition to work two jobs. It is our tradition to live paycheck to paycheck. And it is our tradition to get radicalized the moment we get sick. When you get sick, the healthcare industry tries to charge as much as it can get away with, and the insurance industry tries to deny as much as it can.

Shower Thought: Git Teleportation

In many sci-fi shows, spaceships have a teleportation mechanism on board. They can teleport from inside their ship to somewhere on a planet. This way, the ship can remain in orbit while its crew explores the surface.

But then people started asking: how does the teleportation device actually work? When a subject stands on the device and activates it, does it disassemble all the atoms of the person and reconstruct them at the destination? Or does it scan the person, kill them, and then replicate them at the destination?

This debate has been ongoing for as long as I can remember. Since teleportation machines exist only in fiction, we can never get a true answer. Only the one that resonates the most.

So, that's why I thought of Diff Teleportation. Basically, this is a Git workflow applied to teleportation. When you step onto a device, we run the command:

$> git checkout -b kirk-teleport-mission-123
$> beam -s kirk -d planet-xyz -o kirk-planet-xyz    # beam is a vibe-coded teleportation command

Then, the machine will have to suspend activity on the master branch. This will make merging the branch much simpler in the future.

# sci-fi git command
$> git suspend master

Now, the person that has been teleported can explore the planet and go about mission 123. While they are doing their job, let's see what flags are supported in beam:

$> beam -h
Usage: beam [OPTION]... [FILE]...
Beam a file to a destination

    -s, --subject           subject to beam
    -d, --destination       destination to beam a subject
    -o, --output            name of the file at the destination
    -D, --Destroy           destroy a subject

When the mission is completed, they can be teleported back. Well, not the whole person, otherwise we end up with a clone.

$> beam -s kirk-planet-xyz -d ss-ent-v3 -o kirk-temp

We could analyze the new data and remove any unwanted additions. For example, we could clean up any contamination at this point. But for the sake of time, I'll explore that another day. As an exercise, run git diff for your own curiosity. For now, all we are interested in is the information that the teleportee has gathered from the planet, which we will merge back into master.

$> git add src/neurons
$> git commit -m "add mission 123 exploration"
$> git stash
$> git stash drop   # Hopefully you've analyzed it.
$> git push origin kirk-teleport-mission-123

I imagine in science fiction there is an automated PR review process more reliable than an LLM. Once that process is completed, we can merge to master and run some cleanup code in the build pipeline.

$> git branch -D kirk-teleport-mission-123
$> beam -s kirk-planet-xyz -D
$> beam -s kirk-temp -D
$> git unsuspend master

Somewhere down on planet XYZ, a clone stepped onto the teleportation device. He saw a beam of light scan his body from head to toe. Then, for a moment, he wondered if the teleportation had worked. But right before he stepped off, the command beam -s kirk-planet-xyz -D ran, and he was pulverized.

Back in the spaceship, a brand-new clone named kirk-temp appeared at the teleportation station. He was quickly sanitized, diff'd, and reviewed. But before he could gather his thoughts, the command beam -s kirk-temp -D ran, and he was pulverized.

Not a second later, the original subject was reanimated, with brand-new information about "his" exploration on planet XYZ.

Teleportation is an achievable technology. We just have to come to terms with the fact that at least two clones are killed for every successful teleportation session. In fact, if we are a bit more daring, we might not even need to suspend the first subject. We can create multiple clones, or agents, and have them all explore different things. When their tasks are complete, we can wrestle a bit with merge conflicts, run a couple of beam -D commands, and the original subject is blessed with new knowledge.

OK, I'm getting out of this shower.

You Digg?

digg old logo

For me, being part of an online community started with Digg. Digg was the precursor to Reddit and the place to be on the internet. I never got a MySpace account, I was late to the Facebook game, but I was on Digg.

When Digg redesigned their website (V4), it felt like a slap in the face. We didn't like the new design, but the community had no say in the direction. To make it worse, they removed the bury button. It's interesting how many social websites remove the ability to downvote. There must be a study somewhere that makes a sound argument for it, because it makes no sense to me.

Anyway, when Digg announced they were back in January 2026, I quickly requested an invite. It was nostalgic to log in once more and see an active community building back up right where we left off.

But then, just today, I read that they are shutting down. I had a single post in the technology sub. It was starting to garner some interest and then, boom! Digg is gone once more.

digg is gone

The CEO said that one major reason was that they faced "an unprecedented bot problem."

This is our new reality. Bots are now powered by AI and they are more disruptive than ever. They quickly circumvent bot detection schemes and flood every conversation with senseless text.

It seems like there are very few places left where people can have a real conversation online. This is not the future I was looking for. I'll quietly write on my blog and ignore future communities that form.

Rest in peace, Digg.

The Server Older than my Kids!

This blog runs on two servers. One is the main PHP blog engine that handles the logic and the database, while the other serves all static files. Many years ago, an article I wrote reached the top position on both Hacker News and Reddit. My server couldn't handle the traffic. I literally had a terminal window open, monitoring the CPU and restarting the server every couple of minutes. But I learned a lot from it.

The page receiving all the traffic had a total of 17 assets. In addition to the database getting hammered, my server was spending most of its time serving images, CSS, and JavaScript files. So I decided to set up additional servers to act as a sort of CDN to spread the load. I added multiple servers around the world and used MaxMindDB to determine a user's location and serve files from the closest server. But it was overkill for a small blog like mine. I quickly downgraded back to just one server for the application and one for static files.
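The "closest server" routing above boils down to a distance check. Here is a minimal sketch of that idea; the server names and coordinates are hypothetical, and in practice the user's latitude and longitude would come from a MaxMind GeoIP lookup on their IP address:

```python
import math

# Hypothetical mirror locations as (latitude, longitude) pairs.
SERVERS = {
    "us-east": (40.7, -74.0),   # New York area
    "eu-west": (48.9, 2.4),     # Paris area
    "ap-south": (1.35, 103.8),  # Singapore area
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def closest_server(user_loc):
    """Pick the server nearest to the user's GeoIP-derived location."""
    return min(SERVERS, key=lambda name: haversine_km(user_loc, SERVERS[name]))

# A visitor geolocated to London would be routed to the Paris-area mirror.
print(closest_server((51.5, -0.1)))
```

With only a handful of mirrors, a brute-force minimum over the list is more than fast enough; anything fancier really would be overkill for a blog.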

Ever since I set up this configuration, my server never failed due to a traffic spike. In fact, in 2018, right after I upgraded the servers to Ubuntu 18.04, one of my articles went viral like nothing I had seen before. Millions of requests hammered my server. The machine handled the traffic just fine.

It's been 7 years now. I've procrastinated long enough. An upgrade was long overdue. What kept me from upgrading to Ubuntu 24.04 LTS was that I had customized the server heavily over the years, and never documented any of it. Provisioning a new server means setting up accounts, dealing with permissions, and transferring files. All of this should have been straightforward with a formal process. Instead, uploading blog post assets has been a very manual affair. I only partially completed the upload interface, so I've been using SFTP and SCP from time to time to upload files.

It's only now that I've finally created a provisioning script for my asset server. I mostly used AI to generate it, then used a configuration file to set values such as email, username, SSH keys, and so on. With the click of a button, and 30 minutes of waiting for DNS to update, I now have a brand-new server running Ubuntu 24.04, serving my files via Nginx. Yes, next month Ubuntu 26.04 LTS comes out, and I can migrate by running the same script.
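The shape of that config-driven provisioning is roughly the following. This is a minimal sketch, not my actual script; the config keys, values, and command list are assumptions for illustration, and it only prints the commands as a dry run:

```python
import json
import shlex

# Hypothetical config file contents: account details live here instead of
# being typed by hand during setup.
CONFIG = json.loads("""
{
  "username": "deploy",
  "ssh_key": "ssh-ed25519 AAAA... me@example.com",
  "site_root": "/var/www/static"
}
""")

def provisioning_commands(cfg):
    """Build the shell commands a fresh Ubuntu box would need."""
    user = shlex.quote(cfg["username"])
    return [
        f"adduser --disabled-password --gecos '' {user}",
        f"mkdir -p /home/{user}/.ssh",
        f"echo {shlex.quote(cfg['ssh_key'])} >> /home/{user}/.ssh/authorized_keys",
        "apt-get update && apt-get install -y nginx",
        f"mkdir -p {shlex.quote(cfg['site_root'])}",
    ]

# Dry run: print what would be executed on the new server.
for cmd in provisioning_commands(CONFIG):
    print(cmd)
```

Keeping the command list as data makes the upgrade story simple: point the same script at a new box, and the only thing that changes is the config file.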

I also built an interface for uploading content without relying on SFTP or SSH, which I'll be publishing on GitHub soon.

It's been 7 years running this server. It's older than my kids. Somehow, I feel a pang of emotion thinking about turning it off. I'll do it tonight...

But while I'm at it, I need to do something about the 9-year-old and 11-year-old servers that still run some crucial applications.

My older servers need upgrading

I'm Not Lying, I'm Hallucinating

Andrej Karpathy has a gift for coining terms that quickly go mainstream. When I heard "vibe coding," it just made sense. It perfectly captured the experience of programming without really engaging with the code. You just vibe until the application does what you want.

Then there's "hallucination." He didn't exactly invent it. The term has existed since the 1970s. In one early instance, it was used to describe a text summarization program's failure to accurately summarize its source material. But Karpathy's revival of the term brought it back into the mainstream, and subtly shifted its meaning, from "prediction error" to something closer to a dream or a vision.

Now, large language models don't throw errors. They hallucinate. When they invent facts or bend the truth, they're not lying. They're hallucinating. And every new model that comes out promises to stay clean, yet it still hallucinates.

An LLM can do no wrong when all its failures are framed as a neurological disorder. For my part, I hope there's a real effort to teach these models to simply say "I don't know." But in the meantime, I'll adopt the term for myself. If you ever suspect I'm lying, or catch me red-handed, just know that it's not my fault. I'm just hallucinating.

“How old are you?” asked the OS

California passed a new law requiring every operating system to collect the user's age at account creation. The law, AB-1043, was passed in October 2025.

How does it work? Does it apply to offline systems? When I set up my Raspberry Pi at home, is this enforced? What if I give an incorrect age, am I breaking the law now? What if I set my account correctly, but then my kids use the device? What happens?

There is no way to enforce this law, but I suspect that's not the point. It's similar to statements you find in IRS documents. The IRS requires you to report all income from illegal activities, such as bribes and scams. Obviously, if you are taking a bribe, you wouldn't report it, but by not reporting it you are breaking additional laws that can be used to prosecute you.

When you don't report your age to your OS, whether it's a Windows device or a Tamagotchi, you are breaking the law. It's not enforced, of course, but if you are ever suspected of another crime, you can be arrested for the age violation first, then prosecuted for something else.

What a world we live in.

That's it, I'm cancelling my ChatGPT

Just like everyone, I read Sam Altman's tweet about joining the so-called Department of War, to use ChatGPT on DoW classified networks. As others have pointed out, this is the entry point for mass surveillance and using the technology for weapons deployment. I wrote before that we had the infrastructure for mass surveillance in place already, we just needed an enabler. This is the enabler.

This comes right after Anthropic's CEO wrote a public letter stating their refusal to work with the DoW under their current terms. Now Anthropic has been declared a public risk by the President and banned from every government system.

Large language models have become ubiquitous. You can't say you don't use them because they power every tech imaginable. If you search the web, they write a summary for you. If you watch YouTube, one appears right below the video. There's a Gemini button on Chrome, there's Copilot on Edge and every Microsoft product. There it is in your IDE, in Notepad, in MS Paint. You can't escape it.

Switching from one LLM to the next makes minimal to no difference for everyday use. If you have a question you want answered or a document to summarize, your local Llama will do the job just fine. If you want to compose an email or proofread your writing, there's no need to reach for the state of the art, any model will do. For reviewing code, DeepSeek will do as fine a job as any other model.

OpenAI war soldier

A good use of ChatGPT's image generator.

All this to say, ChatGPT doesn't have a moat. If it's your go-to tool, switching away from it wouldn't make much of a difference. At this point, I think the difference is psychological. For example, my wife once told me she only ever uses Google and can't stand any other search engine. What she didn't know was that she had been using Bing on her device for years. She had never noticed, because it was the default.

When I read the news about OpenAI, I was ready to close my account. The only problem is, well, I never use ChatGPT. I haven't used it in years. My personal account lay dormant. My work account has a single test query despite my employer trying its hardest to get us to use it.

But I think none of that matters when OpenAI caters to a government agency with a near-infinite budget. For every public account that gets closed, OpenAI will make up for it with deeper integration into classified networks.

Not even 24 hours later, the US is at war with Iran. So while we're at it, here is a nice little link to help you close your OpenAI account.

Nvidia was only invited to invest

Nvidia was only invited to invest.

That is one reversal of commitment. Remember that graph that has been circulating for some time now? The one that shows the circular investments among AI companies:

OpenAI circular investment

Basically, Nvidia will invest $100 billion in OpenAI. OpenAI will then invest $300 billion in Oracle, and Oracle invests back into Nvidia. Now Jensen Huang, the Nvidia CEO, is backtracking, saying he never made that commitment.

“It was never a commitment. They invited us to invest up to $100 billion and of course, we were, we were very happy and honored that they invited us, but we will invest one step at a time.”

So he never committed? Did we make up all these graphs in our heads? Was it a misquote from a journalist somewhere that sparked all this frenzy? Well, you can take a look at OpenAI's press release from September 2025. They wrote:

NVIDIA intends to invest up to $100 billion in OpenAI as the new NVIDIA systems are deployed.

In fact, Jensen Huang went on to say:

“NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT. This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence.”

It sounds like Jensen is distancing himself from that $100 billion commitment. Did he take a peek inside OpenAI and change his mind? At the same time, OpenAI is experimenting with ads. Sam Altman stated before that they would only ever use ads as a last resort. It sounds like we are in that phase.