Why does my laptop sound like an airplane?

It's about to take off

I was experimenting with an animation using the HTML Canvas and JavaScript. After drawing a small image and setting up a game loop, I noticed my laptop fans whirring to life. The more I worked on my little animation, the louder the fans became. In fact, before I even finished the experiment, I got a low-battery notification. What was it about my tiny 512x342 animation that made my powerful modern computer choke?
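For reference, the animation itself was nothing special. A stripped-down sketch of the kind of loop I was running looks roughly like this (illustrative, not my exact code):

```javascript
// A small square marching across the canvas, redrawn every frame.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

let x = 0;

function frame() {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // wipe the previous frame
  ctx.fillRect(x, 100, 16, 16);                     // draw the "sprite"
  x = (x + 1) % canvas.width;                       // move it along
  requestAnimationFrame(frame);                     // schedule the next frame (~60 fps)
}

requestAnimationFrame(frame);
```

Nothing in there looks expensive, which is exactly what made the spinning fans so puzzling.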

Recently, I got to use a Macintosh 128K, an Apple machine from 1984. Its compact size and minimalist design intrigued me, but what I found most fascinating was how snappy it was. The graphics, although primitive, rendered instantly, and every button click responded without delay. How could this machine effortlessly drive its 512x342 monochrome display while my modern laptop struggled? The simple answer is: overhead.

To draw graphics on the screen, these 80s machines gave programs incredibly direct access to the hardware. A running program could speak directly to the video hardware. When the CPU wanted to draw a pixel, it would write a value directly to a specific memory address that the video chip was constantly reading to generate the display signal. There were no operating systems with complex graphics APIs, no web browsers, no layers of abstraction. It was a direct line from CPU to memory to video chip.

As computers evolved and benefited from Moore's Law, we added layers of abstraction between our high-level function calls and the low-level hardware actions. When I call ctx.fillRect on a JavaScript canvas, I'm not talking directly to the GPU. Instead, I'm going through:

  1. The JavaScript engine
  2. The browser's rendering engine (often written in C++ or Rust)
  3. The operating system's graphics API (like DirectX on Windows, Metal on macOS, or OpenGL/Vulkan on Linux)
  4. The GPU driver
  5. Finally, the GPU itself

Each of these layers adds overhead. My fans spin up because my CPU is doing a lot of work coordinating all these layers. Plus, the browser is likely doing a lot of behind-the-scenes optimization and potentially even creating a new texture on the GPU for each frame I'm updating pixel by pixel.

Another thing to take into account is the simplicity of the older machine's display. The original Macintosh, for example, had a 512x342 resolution, and it was monochromatic (1-bit, black and white). A specific region of the computer's memory, known as the framebuffer, was reserved for the video display, and the display controller read it continuously to produce the picture on screen. When you drew a pixel at (x, y), the CPU could update that pixel in memory directly. With each pixel represented by a single bit, calculating the memory address was simple arithmetic (e.g., framebuffer_start_address + (y * row_bytes) + (x / 8)), and setting or clearing the bit took the CPU only a few instructions.
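To make that arithmetic concrete, here's a sketch of the same 1-bit framebuffer math in modern JavaScript, with a typed array standing in for the Mac's video memory (the constants match the 128K's display, but the original routines were obviously not written in JavaScript):

```javascript
// 512x342, 1 bit per pixel: 64 bytes per scanline, ~21 KB for the whole screen.
const WIDTH = 512, HEIGHT = 342;
const ROW_BYTES = WIDTH / 8;
const framebuffer = new Uint8Array(ROW_BYTES * HEIGHT);

function setPixel(x, y, black) {
  const offset = y * ROW_BYTES + (x >> 3); // which byte holds this pixel
  const mask = 0x80 >> (x & 7);            // which bit within that byte
  if (black) framebuffer[offset] |= mask;  // on the Mac, 1 meant black
  else       framebuffer[offset] &= ~mask; // and 0 meant white
}

setPixel(10, 20, true); // a handful of arithmetic operations, no layers in between
```

On the real machine there was no simulation: the same few operations ran directly against the memory the display hardware was scanning.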

Furthermore, most low-level graphics routines were written in assembly language. This allowed programmers to have incredibly fine-grained control over the CPU's operations, minimizing wasted cycles and ensuring maximum performance from every clock tick.

There was also the benefit of dedicated resources. When a game or application ran on an 80s machine, it often had almost 100% of the CPU's attention and memory. There were no background processes, no constant checking for network requests, no garbage collection, and no Just-In-Time (JIT) compilation eating up cycles. Every single CPU cycle was dedicated to the task at hand.

My JavaScript code, performing seemingly simple drawing operations like changing pixel values directly through ImageData or drawing shapes with fillRect, arc, and stroke, was accumulating significant overhead. Each of these calls descended through deeper and deeper abstractions, resulting in hundreds if not thousands of underlying operations. It was abstraction layers all the way down!
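For contrast with the framebuffer sketch above, here's roughly what my per-pixel ImageData path looked like (again illustrative, not my exact code). Every putImageData call hands the browser a full buffer of pixels that has to travel down through all those layers before anything reaches the screen:

```javascript
// Fill an ImageData buffer by hand every frame, then ask the browser to show it.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const image = ctx.createImageData(canvas.width, canvas.height);

function frame(t) {
  for (let i = 0; i < image.data.length; i += 4) {
    const v = (i / 4 + t / 16) % 256;                           // animated grayscale pattern
    image.data[i] = image.data[i + 1] = image.data[i + 2] = v;  // R, G, B
    image.data[i + 3] = 255;                                    // opaque alpha
  }
  ctx.putImageData(image, 0, 0); // the browser now has to get these pixels to the GPU
  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);
```

On the 128K, the equivalent of this loop would have been the whole story; here it's only the first step of a much longer journey.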

Note that I'm not complaining or advocating for a return to 80s machines. On the contrary, there are good reasons why these abstraction layers came to be. First of all, you wouldn't want to write assembly code in the browser, would you? These hidden layers make programming more accessible and more secure. While direct access to memory could be faster, it would also open a Pandora's box of security issues. Without these abstractions, it wouldn't be possible to create the isolated "sandboxes" that keep applications separate, preventing them from accessing or interfering with each other.

The Best of Both Worlds

So, what did this little fan-screaming experiment teach me? A profound appreciation for how far computing has come. The Macintosh 128K's direct, no-nonsense approach to graphics was a marvel of efficiency for its time, squeezing every drop of performance from limited hardware. It offered a glimpse into a world where simplicity was key to speed. Today, our powerful machines sacrifice some of that raw, direct control for incredible versatility, robust security, and the ability to build complex, interconnected web applications that would have been unimaginable in the 80s. While I still have much to learn about optimizing my web graphics (perhaps exploring WebGL next!), this journey into retro computing has given me a deeper understanding of, and respect for, the diverse genius that shaped, and continues to shape, the digital world we inhabit.

Both eras, in their own unique ways, represent incredible feats of engineering. Now I wonder, is there a dedicated memory region for Liquid Glass?

