From programming insights to storytelling, it's all here.
At my old job, I built subscription management pages. The kind that should let customers cancel with a few clicks. We were a customer service automation company. Most clients understood this basic courtesy. One did not.
The web development landscape is filled with frameworks, no-code platforms, and AI tools promising to abstract away the "old-school" work of writing HTML and CSS. Yet despite these advancements, the web’s foundation remains unchanged: HTML structures content, and CSS styles it. Here’s why mastering these core technologies isn’t just relevant; it’s essential for building a future-proof skill set.
After months of planning, development, and testing, FamFlix is finally live. Families are uploading their precious memories, streaming home videos, and creating new traditions. But as any seasoned developer knows, launch day isn't the finish line; it's the starting gate.
When I launch personal projects, my usual approach is to walk away. If someone finds it and uses it, great. If not, no harm done. With shotsrv, my URL screenshot tool, I only learned about problems when frustrated users went out of their way to email me, which meant dozens had likely encountered the same issue before one bothered to report it.
The majority of traffic on the web comes from bots. For the most part, these bots exist to discover new content: RSS feed readers, search engines crawling your pages, or, nowadays, AI bots crawling content to power LLMs. But then there are the malicious bots, sent by spammers, content scrapers, or hackers. At my old employer, a bot discovered a WordPress vulnerability and injected a malicious script into our server, turning the machine into part of a botnet used for DDoS attacks. One of my first websites was yanked off Google Search entirely because of bots generating spam. At some point, I had to find a way to protect myself from these bots. That's when I started using zip bombs.
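To give a rough sense of the trick (a minimal sketch of the general idea, not the author's actual implementation): a response made of compressed zeros is tiny on the wire but enormous once a scraper tries to decompress it.

```python
import gzip
import io


def build_gzip_bomb(megabytes: int = 1024) -> bytes:
    """Compress a huge run of zero bytes.

    Zeros compress extremely well, so roughly 1 GB of input shrinks
    to about 1 MB of gzip output that the client must expand.
    """
    chunk = b"\x00" * (1024 * 1024)  # 1 MiB of zeros
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        for _ in range(megabytes):
            gz.write(chunk)
    return buf.getvalue()


# Serve this payload with the "Content-Encoding: gzip" header so the
# bot's HTTP client tries to inflate it in memory. Regular visitors
# never see it, because you only send it to suspected malicious bots.
```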
When I was younger and deploying my first projects, my go-to method was the trusty scp command. I’d SSH into the server, copy over the files, and pray everything worked. Sometimes, I’d even use FTP to upload only the modified files. It felt quick and efficient... until it wasn’t.
I used to agonize over every word I published here, operating under the belief that the internet never forgets. Yet years later, countless links embedded in my old posts now lead to abandoned domains, their content vanished without a trace, not even archived in the Wayback Machine. The irony is stark: the same medium I feared for its permanence has proven so fragile.
Every time I buy a new computer, it ends up under my bed collecting dust within a week. It’s not that I dislike the device; it’s that I can’t work with a machine that doesn’t hold all my files. My workflow revolves around creating, downloading, and sharing text documents, code repositories, family photos, personal videos, and everything in between. The device I use feels like an extension of myself because that’s where my digital life resides.
How do you know your application works? Can you prove it? Does it work when the user has a slow internet connection? When they log in on two different devices at the same time? When 100 people try to upload videos at once?
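That last question is the easiest to poke at directly. Here is a rough sketch (the endpoint URL and dummy payload are made up for illustration): fire 100 concurrent uploads at the application and count how many come back successfully.

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

UPLOAD_URL = "http://localhost:8000/upload"  # hypothetical upload endpoint
FAKE_VIDEO = b"\x00" * (5 * 1024 * 1024)     # 5 MB dummy payload


def upload_once(i: int) -> int:
    """POST one dummy video and return the HTTP status code (0 on failure)."""
    req = urllib.request.Request(
        UPLOAD_URL,
        data=FAKE_VIDEO,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return resp.status
    except Exception:
        return 0


if __name__ == "__main__":
    # Simulate 100 people uploading at the same time.
    with ThreadPoolExecutor(max_workers=100) as pool:
        statuses = list(pool.map(upload_once, range(100)))
    print(f"{statuses.count(200)} of {len(statuses)} uploads succeeded")
```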
A few years ago, a lone programmer named t0st did something extraordinary: he fixed an 8-year-old bug in GTA Online that had been driving players crazy. The bug? Painfully long load times, sometimes up to 20 minutes, while the single-player mode loaded in seconds. His solution was elegant: a 13-line code tweak that cut load times by 70%. Rockstar Games, the studio behind GTA, rewarded him with a $10,000 bounty and patched the game. Problem solved, right?