Notes on Systems
Most architecture conversations focus on what a system can do. The harder, more useful conversation is about what it won't.
A system without explicit constraints is not flexible. It is undecided. And undecided systems force every contributor to re-make the same decisions, in private, every week.
This is how complexity arrives. Not through bad code. Through deferred choice.
When a team says "we don't run background jobs in this service," a hundred future debates disappear. When it says "we never call the database directly from controllers," code review becomes shorter and clearer. When it says "we don't accept new dependencies without a documented reason," dependency drift slows.
None of these are restrictions on capability. They are restrictions on ambiguity.
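Rules like these can be made mechanical rather than left to code review. A minimal sketch in Python of the "never call the database directly from controllers" rule, assuming a codebase with a `controllers/` directory and a database layer under `app.db` (both names are invented for illustration):

```python
# Sketch: enforce "no direct database access from controllers" by scanning
# controller modules for imports of the database layer. The directory name
# and the app.db module are assumptions, not a real project's layout.
import ast
import pathlib

FORBIDDEN_PREFIX = "app.db"  # assumed name of the database layer


def forbidden_imports(source: str) -> list[str]:
    """Return any imports of the database layer found in the given source."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            hits += [a.name for a in node.names if a.name.startswith(FORBIDDEN_PREFIX)]
        elif isinstance(node, ast.ImportFrom) and (node.module or "").startswith(FORBIDDEN_PREFIX):
            hits.append(node.module)
    return hits


def check_controllers(root: str = "controllers") -> list[tuple[str, str]]:
    """Collect (file, forbidden import) pairs across the controllers directory."""
    violations = []
    for path in pathlib.Path(root).rglob("*.py"):
        for name in forbidden_imports(path.read_text()):
            violations.append((str(path), name))
    return violations
```

Run in CI, a check like this turns the constraint from folklore into a failing build, which is the cheapest form of documentation.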
Capability without constraint becomes a request to negotiate every decision twice — once when the code is written, and again when someone tries to understand it.
Every option you preserve is something a teammate must consider, document, test, and eventually maintain.
When the original author moves on — and they will — the option remains. No one can quite say whether it's holding something up or just sitting there. So nobody touches it. So it stays.
A team can lose a year to options nobody ever chose to use.
The cost of optionality is rarely visible at the moment it is added. It compounds in the form of caution, hesitation, and the slow accumulation of "I'm not sure if we can change this."
This is the part most people underestimate.
A large share of engineering energy goes not to writing code but to adjudicating what is acceptable. The smaller the allowed set, the less energy is consumed there.
A clear constraint says: don't think about this. Don't relitigate this. Spend your judgment elsewhere.
Teams that operate inside well-defined bounds tend to ship more, argue less, and produce systems that age predictably. Not because they have better engineers. Because they have fewer open questions per square meter.
The most useful constraints are the ones a team chooses before they are forced to.
A budget for latency. A ceiling on service count. A rule that no service owns more than one database. A list of dependencies that require written justification.
These are not best practices. They are local agreements with future-you.
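One lightweight way to keep such agreements explicit is to check them against measured values in CI. A sketch, where the budget names and numbers are invented for illustration:

```python
# Sketch: explicit, versioned constraints checked in CI.
# The metric names and limits below are hypothetical examples.
BUDGETS = {
    "p99_latency_ms": 250,        # latency budget
    "max_services": 12,           # ceiling on service count
    "max_direct_dependencies": 40 # dependency ceiling
}


def check_budgets(measured: dict) -> list[str]:
    """Return one message for each budget the measured values exceed."""
    return [
        f"{key}: {measured[key]} exceeds budget {limit}"
        for key, limit in BUDGETS.items()
        if measured.get(key, 0) > limit
    ]
```

Changing a number in this file is a reviewable diff, which is exactly what makes the trade-off visible when someone wants to relax it.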
When the constraint is explicit, the trade-off becomes visible. You can see what you are giving up. You can argue against it on its actual terms.
When the constraint is implicit — "we just don't tend to do that here" — it operates as folklore. Folklore decays with turnover. Documents survive turnover better.
Without a stated limit, a team optimizes for a moving target. With one, the trade-off is legible: we accept slower iteration here because we want fewer surprises later. We accept higher memory use because we want simpler code paths. We accept fewer features because we want smaller error surfaces.
These are not heroic decisions. They are ordinary ones, made legible.
The legibility is the point.
A system designed under explicit constraints tends to look smaller, slower to start, and disappointing on the day it ships.
It also tends to look reasonable five years later.
That is the trade — and it is the trade most teams are reluctant to take, because the cost of restraint is paid up front, and the benefit accrues to people who may not be in the room when it is felt.
This is also why constraints are often added under duress, after a system has grown beyond comprehension. By then, the constraints feel punitive instead of clarifying. They are perceived as taking something away, rather than preserving something.
It is much cheaper to start with constraints than to retrofit them.
A constraint chosen before the team understands the problem can ossify a misreading. Writing down "we never do X" too early locks in a decision the team does not yet have the evidence to make.
The defense is modest: prefer constraints that can be relaxed, and treat any rule older than the system itself with suspicion.
A team can relax a rule in an afternoon if it proves wrong. Reintroducing a rule once a hundred files quietly violate it is a quarter's work, sometimes more.
The asymmetry runs one direction. When in doubt: start narrower than feels natural. You can always widen. You will rarely need to.
Constraints are not scaffolding to be removed once the system stands. They are its load-bearing structure.
The systems that age well are not the most powerful or the most general. They are the ones whose authors were willing, early on, to write down what the system would not be.
That decision — to refuse, in writing, on the record — is the part that carries.