On the Phenomenology of Coding Agents

Large Language Models have undoubtedly revolutionized how we interact with technology. They are, at their core, supercharged search engines with remarkable encoding and decoding capabilities. You speak to them in natural language, and they respond with perfect written text, flawless speech synthesis, or sophisticated image generation. Their utility is undeniable, yet we must acknowledge a fundamental truth: they cannot think in the human sense of the word.

Read more…

MySQL GTID, Semi-Sync Replication and Partial View Caching: A good compromise for scaling easily and cheaply

For various reasons, I have often been involved in resolving infrastructural issues and performance gaps in MySQL deployments. I never envisioned my career focusing on database systems, yet it seems there is still a high demand for OLTP technologies in the Italian market, so here I am.

When you deal with a large dataset (over 500GB) with huge tables (more than 100 million rows), it's not hard to run into performance issues. While many solutions exist for running analytical queries (OLAP) on large datasets by leveraging distributed systems, they are not typically "real-time" systems and often operate on stale data. When you have numerous, complex analytical queries — or expensive operations like COUNT(DISTINCT) — that must be run against fresh, real-time data, you have no choice: you need to run them on your OLTP engine.
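One way to keep such queries off the primary is to route them to a GTID-synced replica. The sketch below illustrates the idea in Python; the helper names and the "expensive query" heuristic are hypothetical, but `WAIT_FOR_EXECUTED_GTID_SET()` is a real MySQL function (5.7+) that blocks until a replica has applied a given GTID set, giving you read-your-writes on the replica:

```python
# Sketch: read-your-writes on a replica using MySQL GTIDs.
# Helper names are hypothetical; assumes MySQL >= 5.7 and a DB-API
# driver (e.g. mysql-connector-python) supplying the connection objects.

EXPENSIVE_HINTS = ("COUNT(DISTINCT", "GROUP BY", "HAVING")

def is_expensive(query: str) -> bool:
    """Crude heuristic: send heavy analytical queries to a replica."""
    q = query.upper()
    return any(hint in q for hint in EXPENSIVE_HINTS)

def run_with_read_your_writes(primary, replica, query, params=()):
    """Run cheap OLTP statements on the primary; run expensive ones on
    the replica once it has caught up with the primary's GTID set."""
    if not is_expensive(query):
        cur = primary.cursor()
        cur.execute(query, params)
        return cur.fetchall()

    # Capture the primary's executed GTID set...
    cur = primary.cursor()
    cur.execute("SELECT @@GLOBAL.gtid_executed")
    (gtid_set,) = cur.fetchone()

    # ...and block on the replica until it has applied it (1s timeout).
    rcur = replica.cursor()
    rcur.execute("SELECT WAIT_FOR_EXECUTED_GTID_SET(%s, 1)", (gtid_set,))
    rcur.execute(query, params)
    return rcur.fetchall()
```

In practice the routing decision would come from application knowledge rather than string matching, but the GTID-wait pattern is what makes the replica safe to read from.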

Read more…

Embracing the IPv6 Revolution: A Homelab Story

The other day, a friend of mine contacted me asking for help setting up a home NAS (Network Attached Storage) based on OpenMediaVault. While he's an enthusiast, he wasn't able to configure all the technical aspects of the setup properly.

The NAS world increasingly resembles a homelab rather than just file storage. With Docker, you can get enterprise-grade software and setups running in just a few simple steps.

After setting up the Docker containers, we faced the usual issue that most homelabs encounter these days: his internet connection was behind a CGNAT for IPv4 networking. However, I was pleasantly surprised to find an IPv6 public subnet already configured!

Read more…

The Hidden Costs of Static Linking and Containerization: A Critical Analysis

Statically-linked programs are evil

The trend toward static linking represents a fundamental regression in software engineering principles. By bundling every dependency directly into the executable, we're not just bloating our binaries; we're actively dismantling decades of progress in software modularization. Each statically linked program becomes an island, disconnected from the ecosystem of shared improvements and security updates.

Consider what happens when a critical vulnerability is discovered in a commonly used library. In a properly designed system using shared libraries, a single system update would protect all applications simultaneously. Instead, with static linking, we must embark on a complex and error-prone process of identifying every affected program, rebuilding each one individually, and ensuring they all get redeployed.

Read more…

Exchanging messages between processes (or even threads within the same program) using ZeroMQ

Inter-Process Communication with ZeroMQ (and Protocol Buffers)

Introduction

Some may certainly say that, when you are writing so-called "daemons" under Linux/Unix or "services" under Windows, you should use OS primitives or reuse existing libraries to make your programs communicate with each other. And I strongly agree with that point: it is always a good idea to use a well-tested, solid library to implement fundamental features such as message queues.

For example, under Linux you can use D-Bus, which provides IPC at scale within the OS scope. Or, in the microservices space, you can rely on message brokers like RabbitMQ or Kafka to stream your messages through sophisticated routing logic. Sometimes, however, you just need something simple to send and queue messages: a brokerless setup that still gives you some of the features message queuing systems offer for free. That's where ZeroMQ comes in.
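To make the "brokerless queue" idea concrete, here is a minimal sketch using ZeroMQ's Python bindings (assumes the third-party `pyzmq` package, `pip install pyzmq`). A PUSH socket queues messages for a PULL socket over the `inproc` transport — no broker process involved, and the same pattern works across processes by swapping in a `tcp://` or `ipc://` endpoint:

```python
# Brokerless PUSH/PULL queue between two threads with ZeroMQ (pyzmq).
# With inproc, bind() must happen before the peer connects, so we bind
# the PUSH socket before starting the worker thread.
import threading
import zmq

ctx = zmq.Context.instance()

push = ctx.socket(zmq.PUSH)
push.bind("inproc://jobs")

results = []

def worker():
    pull = ctx.socket(zmq.PULL)
    pull.connect("inproc://jobs")
    for _ in range(3):
        results.append(pull.recv_string())  # blocks until a message arrives
    pull.close()

t = threading.Thread(target=worker)
t.start()

for msg in ("job-1", "job-2", "job-3"):
    push.send_string(msg)  # queued by ZeroMQ until the PULL side reads it

t.join()
push.close()
print(results)  # ['job-1', 'job-2', 'job-3']
```

The full post pairs this with Protocol Buffers for the message payloads; plain strings are used here only to keep the sketch self-contained.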

Read more…

Building a Lightweight Node.js Background Job Scheduler: A Practical Solution for Simple Web Applications

Building a Lightweight Node.js Background Job Scheduler

As developers, we often come across situations where a fully-fledged background job system, with all its bells and whistles, might be overkill for our project needs. This was the case for me when I built a custom background job scheduler in TypeScript and Node.js, designed to handle essential tasks without the overhead of larger, more complex solutions.
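The post's implementation is in TypeScript/Node.js; the core of such a scheduler — a priority queue of jobs ordered by due time, drained by a single worker — can be sketched language-agnostically. Here it is in Python, with hypothetical names and deliberately minimal features:

```python
# Minimal background job scheduler sketch: a heap of (due_time, job)
# entries drained by one worker thread. Names are illustrative only.
import heapq
import threading
import time

class Scheduler:
    def __init__(self):
        self._jobs = []   # heap of (run_at, seq, fn, interval)
        self._seq = 0     # tie-breaker so heapq never compares functions
        self._cv = threading.Condition()
        self._stopped = False

    def schedule(self, fn, delay=0.0, interval=None):
        """Run `fn` after `delay` seconds; repeat every `interval` if set."""
        with self._cv:
            self._seq += 1
            heapq.heappush(self._jobs,
                           (time.time() + delay, self._seq, fn, interval))
            self._cv.notify()

    def run(self):
        """Worker loop: sleep until the next job is due, then execute it."""
        while True:
            with self._cv:
                while not self._jobs and not self._stopped:
                    self._cv.wait()
                if self._stopped:
                    return
                run_at, _, fn, interval = self._jobs[0]
                now = time.time()
                if run_at > now:
                    self._cv.wait(timeout=run_at - now)
                    continue  # re-check: an earlier job may have arrived
                heapq.heappop(self._jobs)
            fn()  # run outside the lock so jobs can't block scheduling
            if interval is not None:
                self.schedule(fn, delay=interval, interval=interval)

    def stop(self):
        with self._cv:
            self._stopped = True
            self._cv.notify()

# Usage: fire a one-shot job 50ms from now.
sched = Scheduler()
threading.Thread(target=sched.run, daemon=True).start()
hits = []
sched.schedule(lambda: hits.append("ran"), delay=0.05)
time.sleep(0.2)
sched.stop()
```

A condition variable with a timed wait gives you both "sleep until the next job is due" and "wake immediately when a sooner job is scheduled" — the same trick a Node.js version gets from re-arming a timer.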

Read more…

Full-fledged API + e2e tests + benchmark + IaC + Helm charts + more as an (interesting) exercise!

Last week, I was contacted for a coding challenge. The project seemed interesting, so I decided to take it on. At the very least, I would learn something new, and there was plenty I was eager to explore: Pulumi, k6, FastAPI, and some of the fancy modern tools that make you look like a cool dev!

The project involved creating a simple REST API in Python, packaged with Helm and ready for deployment in a Kubernetes (K8s) cluster, complete with all the essential tooling.

Read more…