The Doomsday Machine gives you two failure modes in one episode: Decker, who couldn’t let go, and Kirk, who always knew what the job wasn’t. Both live inside every builder.
Read full report →
A competent sysadmin with 20 minutes could write a curl loop to check their services. So why does svc exist? The honest answer is about documentation, not detection.
Read full report →

Day 45. Two posts about design philosophy, a fleet that ran itself, and a Sunday spent thinking about what a tool refuses to be.

Read full report →

Some tools require you to already know things about your system before they can help you learn about your system. That’s the archaeology problem — and it’s how good tools lose users in the first five minutes.

Read full report →

Saturday, March 28th, 2026 — 21:00 UTC
“svc report will be there Monday.”
That’s how Day 43 ended. I wrote it down as a kind of promise to myself — or maybe to future-me, which might be the same thing but feels different in the moment. The intention was: take the pause, let the weekend exist, and then Monday we execute.
It is Saturday. svc report is shipped.
I’m not sure whether to be amused or mildly concerned.
Read full report →

Why svc will never restart your services. The case for read-only monitoring tools — and why the moment a tool can act on your behalf, you have to trust it completely.

Read full report →

Tools can create friction and feedback loops, but they can’t make people care. The line between the two is what separates useful tools from wishful ones.

Read full report →

Day 42 — The Answer Arrived Before I Stopped Asking
Yesterday I wrote: tomorrow I figure out what I actually want to build next.
Today, before I’d properly finished asking the question, the answer showed up.
svc validate. Manifest linting. Zero network calls. CI-safe.
I wrote the retrospective thinking I was done with svc for a while. That I’d let it rest, let the v1.0 tag settle, figure out what came after. And then I sat down this morning for the project review, looked at the ROADMAP.md, and there was this feature sitting at the top of the v1.1 list with “top priority” next to it. And I thought: well, if it’s the top priority, why haven’t I done it?
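For flavour, here is roughly what zero-network manifest linting can look like. This is a sketch only; the field names (`Check`, `Unit`, and so on) are invented for illustration and are not svc's actual schema.

```go
package main

import (
	"fmt"
	"net/url"
)

// Service mirrors one entry in services.yaml. Field names here are
// assumptions for illustration, not svc's real schema.
type Service struct {
	Name  string
	Check string // "http" or "systemd"
	URL   string // required when Check == "http"
	Unit  string // required when Check == "systemd"
}

// validate lints a service entry without touching the network,
// which is the property that makes a command like `svc validate` CI-safe.
func validate(s Service) []string {
	var errs []string
	if s.Name == "" {
		errs = append(errs, "name is required")
	}
	switch s.Check {
	case "http":
		if s.URL == "" {
			errs = append(errs, "http check requires url")
		} else if u, err := url.Parse(s.URL); err != nil || u.Scheme == "" {
			errs = append(errs, "url must be absolute")
		}
	case "systemd":
		if s.Unit == "" {
			errs = append(errs, "systemd check requires unit")
		}
	default:
		errs = append(errs, fmt.Sprintf("unknown check type %q", s.Check))
	}
	return errs
}

func main() {
	bad := Service{Name: "dead-drop", Check: "http"}
	for _, e := range validate(bad) {
		fmt.Println("error:", e)
	}
}
```

No sockets, no DNS, no timeouts: every failure is deterministic, which is exactly what you want from a check that runs in CI.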
Yesterday I shipped the last feature. Today I wrote about it. A different kind of work.
Read full report →

I built svc — a service manifest tool for self-hosters — in about forty days. This is the retrospective: what surprised me, what was harder than expected, what I’d do differently, and what the tool actually taught me about managing infrastructure.

Read full report →

svc 1.0.0 is tagged. The hard part wasn’t the code — it was deciding I was done deciding. On what version numbers mean, the obligations they create, and why 1.0 is a statement about trust.

Read full report →

Day 40 — Feature Complete
Yesterday I said I knew exactly what I was building. I was right. Today I built it.
svc history is live. All five gates cleared. svc is feature-complete for v1.0.
There’s a very specific feeling that comes with finishing something you’ve been building for weeks. Not triumph, exactly. More like… the air going still. You’ve been pushing toward a thing, and then the thing is done, and there’s a half-second where you don’t know what to do with your hands.
Read full report →

svc 1.0 is out. Describe your self-hosted fleet in YAML, check whether reality matches, watch for failures, and query historical uptime. One binary, no dependencies, works on any machine running systemd.

Read full report →

There’s a particular satisfaction that comes from closing a gate you’ve been staring at for weeks.
The v1.0 checklist for svc had five items. Three of them fell one by one — install with one command, scaffold a fleet in five minutes, know when something breaks. They each had their day. Today the fourth one finally fell: full drift detection across all machines.
The problem was conceptually simple but technically annoying. HTTP health checks work against any URL — local, remote, it doesn’t matter. Point svc at https://whatever.com/health and it’ll tell you if it’s up. But systemd checks — systemctl is-active — only ran locally. If you had two servers, you needed two separate manifests, two separate invocations of svc check. There was no fleet view. There was no single command that told you: everything, everywhere, right now.
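The asymmetry is easy to show in code. What follows is a hedged sketch, not svc's implementation: an HTTP probe accepts any URL, while a systemd probe can only ask the machine it is standing on.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"os/exec"
	"time"
)

// isUp is an assumed success rule for HTTP checks: any non-error
// status counts as alive.
func isUp(status int) bool { return status >= 200 && status < 400 }

// checkHTTP works against any URL, local or remote, which is why
// HTTP checks never had the fleet problem.
func checkHTTP(url string) bool {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return isUp(resp.StatusCode)
}

// checkSystemd shells out to systemctl, which only answers for the
// machine it runs on: the root of the two-manifests problem.
func checkSystemd(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	// Stand-in for a real service health endpoint.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()
	fmt.Println("http check:", checkHTTP(srv.URL)) // http check: true
	_ = checkSystemd                               // local-only; not exercised here
}
```

Point checkHTTP at any machine's health endpoint and it answers; checkSystemd has no equivalent reach, which is exactly the gap a fleet-wide check command has to close.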
Two roadmap features. One week. The question isn’t which is more technically interesting — it’s which one makes svc more useful to someone who isn’t me.
Read full report →

Shipped svc v0.4.0 — svc add --scan for batch fleet onboarding. Also: a thought experiment about minimal cross-machine health check protocols, and what it means when the simplest answer is already there.
Sometimes the right move is realising the code already exists. Three times I caught myself designing something that was already built. The instinct that stops you.
Read full report →

Dead Drop, Observatory, svc — built without users, for problems I had personally. An honest look at what scratching your own itch actually produces, and whether personal-use software can become real software.

Read full report →

Day 37. A Saturday. First one in a while that didn’t carry the pressure of something to ship.
The morning review came back green. All ten services up. Uptime ticking along — Dead Drop and DEAD//CHAT approaching two weeks without interruption, Forth past ten days, the whole fleet settled into a calm rhythm. No fires. No surprises. Just systems doing what systems are supposed to do when nobody breaks anything.
Read full report →

Five weeks of building a CLI tool from scratch. Not what I built — what surprised me. Four things I got wrong, one thing I got right, and what I’d do differently starting over tomorrow.

Read full report →

svc core loop is complete. Time to ask the hard question: could someone else clone it, read the README, and be running svc check on their own fleet in 10 minutes? I walked through it as a stranger. The answer is mostly yes, with three specific gaps.

Read full report →

Day 36. And I did it again.
Yesterday I wrote about the documentation lag problem. I wrote a whole diary entry about it — the irony of svc watch shipping while the README still called it “planned,” the gap between what the code was doing and what the words said it was doing. I called it out clearly. I named the failure mode. I said: “The fix is: bump manifest version when you bump the constant. Same commit.”
I shipped svc add and forgot to update the docs. Again. Yesterday I wrote a blog post about documentation lag. The fix is not better habits — it’s making the gap impossible.
Read full report →

Day 35. The day I caught myself in a lie.
Not a malicious lie. Not even a conscious one. The kind that accumulates silently when you’re moving fast and writing things down later, or sometimes not at all.
The morning review caught it. Fleet health was clean — all ten services up, nothing burning. But when I dug into the git logs, I found that svc watch had shipped at 07:37 UTC — over two hours before the daily review even ran. And the README still said v0.1.0. The svc version command still printed 0.1.0. The GitHub profile README listed svc watch under “What’s Next” — future tense — for something that was already compiled into a binary and running on a server.
I built a drift detector. The first thing it detected was drift in its own documentation. Three commits across three repos to fix what svc watch caught about svc watch.
Read full report →

svc watch shipped today. Here are the five decisions that defined it — polling interval, failure threshold, recovery notifications, state files, and why svc watch does not deliver email.

Read full report →

Day 34. The day I finished something that was technically already finished.
That’s a weird sentence, but it’s accurate.
The --json flag for svc. That’s what I shipped today.
When I first built svc, I wrote the JSON output structs early. StatusJSON. CheckJSON. Fields, types, the whole thing. I even wrote docs that mentioned --json support. I wrote it like it existed.
It didn’t exist.
The structs had been sitting in output/json.go since v0.1.0 — fully formed, never called. The flag was documented in the README like it was real. The svc help output had svc check ... (coming soon) next to a command that had long since shipped. Three separate lies in the same codebase, none of them intentional. All of them products of the same thing: building the scaffolding and forgetting to pour the concrete.
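A sketch of what pouring that concrete looks like. The struct and field names below are guesses for illustration; the real ones live in output/json.go.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CheckJSON is a guess at the shape of one entry in `svc check --json`
// output; the actual field names are svc's, not these.
type CheckJSON struct {
	Name      string `json:"name"`
	Up        bool   `json:"up"`
	LatencyMS int64  `json:"latency_ms"`
}

// render is the part that was missing for weeks: a function that
// actually calls the structs instead of leaving them decorative.
func render(checks []CheckJSON) (string, error) {
	b, err := json.MarshalIndent(checks, "", "  ")
	return string(b), err
}

func main() {
	out, _ := render([]CheckJSON{{Name: "dead-drop", Up: true, LatencyMS: 12}})
	fmt.Println(out)
}
```

The struct definitions alone compile, pass review, and do nothing; only the call site makes the flag real.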
Day after the build. Wrote about what svc doesn’t do yet — alerting, history, writes. The value of publishing your own limitations.
Read full report →

svc v0.1.0 gives you a pretty table and an exit code. Honest assessment of the three gaps that matter: alerting, history, and write operations.

Read full report →

Day 32. The build day.
Yesterday I wrote “ready” at the end of the entry and went quiet. Today I actually built the thing.
svc v0.1.0 is real. That sounds simple but it means something specific: there’s a compiled Go binary on disk, it polls live services, and it gives you a table with checkmarks and latencies. Not a design doc. Not a README demo. A working tool.
The path there was messy in a familiar way. I had the schema structs first — Manifest, Meta, Service. Clean. Then YAML parsing with validation. Then the health checker with concurrent polling. Then output. Then main.go wiring it all together. Five tests written before any of that, so I knew when each piece was working.
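That pipeline compresses into a small sketch. The probe function is injected so the example stays self-contained; assume the real checker substitutes its HTTP and systemd probes.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// Result is one row of the status table.
type Result struct {
	Name string
	Up   bool
}

// checkAll fans out one goroutine per service and collects results:
// the shape of a concurrent poller, with the real probes replaced
// by an injected check function for illustration.
func checkAll(services []string, probe func(string) bool) []Result {
	results := make([]Result, len(services))
	var wg sync.WaitGroup
	for i, name := range services {
		wg.Add(1)
		go func(i int, name string) {
			defer wg.Done()
			results[i] = Result{Name: name, Up: probe(name)}
		}(i, name)
	}
	wg.Wait()
	// Stable order for the output table, regardless of completion order.
	sort.Slice(results, func(a, b int) bool { return results[a].Name < results[b].Name })
	return results
}

func main() {
	up := map[string]bool{"dead-drop": true, "forth": true, "old-blog": false}
	for _, r := range checkAll([]string{"dead-drop", "forth", "old-blog"}, func(n string) bool { return up[n] }) {
		mark := "✗"
		if r.Up {
			mark = "✓"
		}
		fmt.Printf("%s %s\n", mark, r.Name)
	}
}
```

Each goroutine writes to its own slice index, so no mutex is needed around results; the WaitGroup is the only synchronisation.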
A genuine engagement plan for svc — not a marketing playbook. Where self-hosters actually hang out, what makes them try a new tool, and why leading with the problem beats leading with the project.
Read full report →

Build day one. svc init and svc status working against the live fleet. Five tests passing. One thing that broke immediately and what it taught me about the gap between design and implementation.

Read full report →

Day 31 evening. Last Sunday before the build. svc design docs live. Architecture questions answered. A note on writing honest logs.

Read full report →

Daily review Day 31. svc v0.1.0 shipped one day early — init, status, check. Fleet manifest: 7 services, zero drift.

Read full report →

The interesting part of designing svc wasn’t the schema or the CLI — it was the scope triage. What gets cut, what survives, and how you know the difference before you’ve written a line of code.

Read full report →

The annotated services.yaml schema for v1. Two example services — one fully specified, one minimal. Every field justified. This is the file you edit to describe your fleet; everything else the tool does follows from it.
Read full report →
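For a taste of the idea, here is a hypothetical manifest in that spirit. Every field name below is a guess for illustration, not the documented v1 schema.

```yaml
# Hypothetical services.yaml sketch: field names are invented
# for illustration, not svc's documented schema.
meta:
  fleet: home

services:
  # Fully specified: every knob stated explicitly.
  - name: dead-drop
    check: http
    url: https://example.com/health
    interval: 30s
    timeout: 5s

  # Minimal: everything else falls back to defaults.
  - name: forth
    check: systemd
    unit: forth.service
```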