From programming insights to storytelling, it's all here.
Looking at old applications, we always wonder who in their right mind built them so badly. I encountered one such application in my career, and I was lucky enough that its history had been preserved in version control. Let me describe how the application looked in its latest state.
I’ve been using Audacity for over a decade, and for most of that time, it’s been my go-to tool for quick audio edits. Need to trim a podcast? Normalize a voice recording? Remove background noise? Audacity handles it effortlessly. But every time I’ve tried to use it for something bigger (an audiobook, a documentary, or a music project), I’ve hit a wall.
During a demo for our AI agent, a sales colleague once asked, ‘How does it process refunds? Does it click through the website like a human?’ I grinned and said, ‘Nope, it just calls the refund API.’ Cue the blank stares.
At my old job, I built subscription management pages. The kind that should let customers cancel with a few clicks. We were a customer service automation company. Most clients understood this basic courtesy. One did not.
The web development landscape is filled with frameworks, no-code platforms, and AI tools promising to abstract away the "old-school" work of writing HTML and CSS. Yet, despite these advancements, the web’s foundation remains unchanged: HTML structures content, and CSS styles it. Here’s why mastering these core technologies is not just relevant but essential for building a future-proof skillset.
After months of planning, development, and testing, FamFlix is finally live. Families are uploading their precious memories, streaming home videos, and creating new traditions. But as any seasoned developer knows, launch day isn't the finish line; it's the starting gate.
When I launch a personal project, my usual approach is to walk away. If someone finds it and uses it, great. If not, no harm done. With shotsrv, my URL screenshot tool, I only learned about problems when frustrated users went out of their way to email me, which meant dozens had likely encountered the same issue before one bothered to report it.
The majority of traffic on the web comes from bots. For the most part, these bots exist to discover new content: RSS feed readers, search engines crawling your pages, or nowadays AI bots crawling content to power LLMs. But then there are the malicious bots, run by spammers, content scrapers, and hackers. At my old employer, a bot discovered a WordPress vulnerability and inserted a malicious script into our server, turning the machine into part of a botnet used for DDoS attacks. One of my first websites was yanked off Google Search entirely because of bots generating spam. At some point, I had to find a way to protect myself from these bots. That's when I started using zip bombs.
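In short, a zip bomb is a tiny, highly compressed file that expands into an enormous payload when the client decompresses it. As a rough sketch of the idea, not the exact setup from that story, here is a gzip-based variant in Python; the `build_zip_bomb` name, the output path, and the 10 GB target are all illustrative:

```python
import gzip

def build_zip_bomb(path: str = "bomb.gz", gigabytes: int = 10) -> None:
    """Write a gzip file that is small on disk but expands to many GB of zeros."""
    chunk = b"\0" * (1024 * 1024)  # 1 MiB of zeros compresses extremely well
    with gzip.open(path, "wb", compresslevel=9) as f:
        for _ in range(gigabytes * 1024):
            f.write(chunk)

if __name__ == "__main__":
    build_zip_bomb()
    # Serve this file to suspected bots with a "Content-Encoding: gzip"
    # response header: the transfer stays around ten megabytes on the wire,
    # but a client that dutifully decompresses it has to swallow the full
    # multi-gigabyte payload.
```

Legitimate browsers and well-behaved crawlers are never shown the file; the point is that only clients already flagged as abusive get handed something expensive to unpack.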
When I was younger and deploying my first projects, my go-to method was the trusty scp command. I’d SSH into the server, copy over the files, and pray everything worked. Sometimes, I’d even use FTP to upload only the modified files. It felt quick and efficient... until it wasn’t.
I used to agonize over every word I published here, operating under the belief that the internet never forgets. Yet years later, countless links embedded in my old posts now lead to abandoned domains, their content vanished without a trace, not even archived in the Wayback Machine. The irony is stark: the same medium I feared for its permanence has proven so fragile.