Notes on Systems

Decisions That Aged Badly

May 7, 2026

They were defensible at the time. Reviewed. Discussed. Approved. Made by capable people with the information they had.

The cost arrived years later. Often after the people involved had moved on. Including, in some cases, me.

That gap is the whole problem.

Wrong decisions and decisions that aged badly

A wrong decision is easy to point to. You can name the missing data, the ignored warning, the sloppy reasoning. You can fairly assign blame. Sometimes you should.

A decision that aged badly is harder to find fault with. The reasoning was correct given what was known. The context simply moved past it.

What was once a reasonable trade became a load-bearing constraint nobody chose.

These two failure modes feel similar in retrospect. They are not the same thing.

Most engineering hindsight collapses them. Either everything in the past was a mistake — which is unfair and useless. Or nothing was — which prevents any learning at all.

The honest middle is harder to hold.

Regret without blame

A decision can be defensible at the time and still age badly. The useful posture toward this category is regret without blame.

Regret, because the system would be better if it had gone differently.

Without blame, because the people who made it were paying attention. They simply could not see the ten years that came after.

The risk in this posture

There is a real risk in this posture.

"Regret without blame" can become evasion. It can become the language a team uses to avoid examining actual misjudgments — analyses that were wrong on their own terms, warnings that were available and ignored.

The discipline is to keep the categories separate. Some decisions were wrong. Some aged badly. Some were both.

Treating every bad outcome as a victim of changed context is its own failure of judgment.

Patterns that recur

A few patterns recur, looking back.

Premature abstractions.

The shape of the problem was not yet clear. Someone wrote a generic interface for the cases they could anticipate. The cases that actually arrived were not those. The abstraction now obscures more than it reveals. But it is depended on widely, so it stays.
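A minimal sketch of this pattern, with every name hypothetical: an interface written for the anticipated cases, and the actual case arriving in a shape the interface cannot express.

```python
from abc import ABC, abstractmethod

# Hypothetical illustration. The interface was written before the
# problem's shape was clear: the author anticipated "formats", so
# every exporter must build the whole document in memory and return
# it as one string.
class Exporter(ABC):
    @abstractmethod
    def export(self, rows: list[dict]) -> str: ...

class CsvExporter(Exporter):
    def export(self, rows: list[dict]) -> str:
        header = ",".join(rows[0].keys())
        body = "\n".join(",".join(str(v) for v in r.values()) for r in rows)
        return header + "\n" + body

# The case that actually arrived: incremental export of data too large
# to hold in memory. It cannot be expressed through export(), so it
# bypasses the interface entirely -- yet Exporter stays, because the
# call sites that depend on it already exist.
def stream_csv(rows):
    first = True
    for row in rows:
        if first:
            yield ",".join(row.keys()) + "\n"
            first = False
        yield ",".join(str(v) for v in row.values()) + "\n"
```

The abstraction is not wrong on its own terms; it simply answers a question the system stopped asking.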

Optionality preserved just in case.

A configuration flag. A plugin point. A strategy interface for one strategy. Each was nearly free at the moment it was added. Cumulatively they form the bulk of the system's surface area. Removing them is now a quarter's worth of work.
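The "strategy interface for one strategy" case can be sketched concretely (names hypothetical, assuming a pricing rule for illustration): a plugin seam that was nearly free to write, next to the function the problem actually needed.

```python
from abc import ABC, abstractmethod

# Hypothetical illustration: a pluggable strategy interface added
# "just in case". Only one strategy ever existed, but once callers
# construct their pricer through the seam, the indirection is no
# longer free to remove.
class PricingStrategy(ABC):
    @abstractmethod
    def price(self, base: float) -> float: ...

class StandardPricing(PricingStrategy):
    def price(self, base: float) -> float:
        return round(base * 1.2, 2)

STRATEGIES = {"standard": StandardPricing}  # one entry, indefinitely

def make_pricer(name: str = "standard") -> PricingStrategy:
    return STRATEGIES[name]()

# What the problem actually needed: one function.
def price(base: float) -> float:
    return round(base * 1.2, 2)
```

One such seam costs little. A system whose surface is mostly seams like this is expensive to even read.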

Tooling chosen for early productivity.

The framework that made the first six months fast. The build system that fit the original architecture. None of these were wrong. All of them eventually became things future maintainers had to work around rather than with.

Coupling between things that did not need to know each other.

The shared database two services briefly used. The library that grew to import from the application. The deploy pipeline that assumed one team's release cadence. Each coupling was a small saving. Each is now a constraint.
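The shared-database case can be made concrete in a few lines (a toy sketch with hypothetical names, using an in-memory SQLite table to stand in for the shared store): two services that each save a little by talking through a schema neither of them owns.

```python
import sqlite3

# Hypothetical illustration: two services share one database "briefly".
# The table schema has quietly become a contract between them.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, status TEXT)")

def billing_record_payment(order_id: int) -> None:
    # Service A writes directly to the shared table.
    db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "paid"))

def shipping_pending_orders() -> list[int]:
    # Service B reads the same table, silently depending on the exact
    # column names and status vocabulary that Service A happens to use.
    rows = db.execute("SELECT id FROM orders WHERE status = 'paid'")
    return [r[0] for r in rows]
```

Either service can now break the other with a change that looks, from inside that service, entirely local.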

Best practices imported without context.

The pattern that worked at a previous company. The architecture style that fit a different problem at a different scale. The decision was anchored in authority rather than fit.

Decisions made under deadline that became permanent.

The temporary table. The hardcoded value. The "we will fix this next sprint." There is no next sprint. The shortcut becomes the shape.

The shared property is reversibility

The shared property of all of these is not effort or skill.

It is reversibility.

Each one reduced the cost of the present by spending the future's optionality. Each made the system easier to write and harder to change.

The asymmetry was invisible because the future maintainers were not in the room.

Reversibility is contextual

Reversibility is itself contextual.

What is reversible for a small team can be effectively permanent for a large one, because the cost of coordination scales faster than the cost of the change.

A two-line config swap in a startup is a multi-quarter migration in a public company.

The honest test is not "could this be undone in principle." It is "is the team that would have to undo it likely to be able to."

The strongest filter

This is why the strongest single filter on long-lived decisions is not "is this correct."

It is "is this reversible by the people who will inherit it."

A reversible decision can be wrong and still survive. The team adjusts as understanding improves.

An irreversible decision must be defended against every change in context that comes later. Usually by people who do not know why it was made.

Most decisions that aged badly were locally optimal and globally irreversible.

That is the shape to watch for.

Commit later and lighter

Looking back, the change I would make is not a particular technical choice.

It is to commit later and lighter. To treat decisions as hypotheses. To assume I will be partly wrong, and to build so that being wrong is survivable.

The decisions that aged best in the systems I have known were rarely the cleverest. They were the smallest commitments that addressed the actual problem.

Smaller surfaces.

Fewer dependencies.

Shorter assumptions.

Things that could be undone in an afternoon.

That is not a guarantee against aging. Nothing is.

It is the best available defense.

The lesson

Regret without blame ends in the same lesson.

The judgment is not "I should have known."

It is "I should have made it cheaper to be wrong."

That posture is the one I try to carry into the next decision.
