Good hierarchical documentation
A laptop computer is extremely complex, yet it is actively developed and maintained by a small number of people. It is built from parts themselves developed by small teams, many of which are in turn built from parts developed by small teams, and so on and so forth.
This works well in electronics design because everything is documented and tested to comply with the documentation. You'd think this would slow things down, but developing a new generation of a laptop takes fewer man-hours and less calendar time than developing a new generation of any software of similar complexity running on it, despite the laptop pushing up against the limits of physics. Technical debt in software adds up really fast.
The top-level designers only have access to what the component manufacturers have published, not to their internal designs, but that doesn't matter, because the publications include correct and relevant data. When a component manufacturer comes out with something new, they in turn use documentation from their suppliers to design the new product.
As long as each component's documentation is complete and accurate, it will meet all of the needs of anyone using that component. Diving deeper is only necessary when something is incomplete or inaccurate.
1. You make it declarative. The system definition should be checked in to a repository, or multiple repos. If you're not using infrastructure as code, you should be. This is table stakes.
2. Systems should be explicit, not implicit. Configuration should be explicit wherever possible. Implicit behavior should be documented.
3. Living documentation adjacent to your systems. Write markdown files next to your code. If you keep systems documentation somewhere else (like some wysiwyg knowledge system bullshit), then you must build a markdown-to-whatever sync job where the results are immutable (a sketch of such a job follows this list); otherwise the documentation is immediately out of date, and out-of-date documentation is just harmful noise.
4. If it's dead, delete it. You have version control for a reason. Don't keep cruft around. If there's a subnet that isn't being used, delete it.
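To make point 3 above concrete, here is a minimal sketch of what such a one-way sync job could look like. It's Python, and the ingest endpoint, token, and payload shape are all hypothetical placeholders for whatever your knowledge-base API actually offers; the only property that matters is that the repo's markdown stays the source of truth and the synced copies are read-only.

```python
# Minimal sketch of a markdown-to-knowledge-base sync job (point 3).
# The endpoint, token, and payload shape are hypothetical; swap in your
# wiki/knowledge-base API. The key property is that it is one-way:
# markdown in the repo is the source of truth, the synced copy is read-only.
import json
import os
import pathlib
import urllib.request

REPO_ROOT = pathlib.Path(".")
SYNC_URL = os.environ["DOCS_SYNC_URL"]      # hypothetical ingest endpoint
SYNC_TOKEN = os.environ["DOCS_SYNC_TOKEN"]  # hypothetical auth token

def push_page(relative_path: str, body: str) -> None:
    """Upload one markdown file as an immutable, read-only page."""
    payload = json.dumps({
        "path": relative_path,
        "content": body,
        "read_only": True,   # results are immutable on the far side
    }).encode("utf-8")
    req = urllib.request.Request(
        SYNC_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {SYNC_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)

def main() -> None:
    # Push every markdown file in the repo; run this in CI on every merge.
    for md_file in REPO_ROOT.rglob("*.md"):
        push_page(str(md_file), md_file.read_text(encoding="utf-8"))

if __name__ == "__main__":
    main()
```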
Lastly, if you find yourself in this situation and have none of the above, ask yourself whether you really have the agency to fix it -- and I mean really fix it, no half measures. If you do, then do so. If you don't, then your options are to stop caring or to find a new job. The alternative is a recipe for burnout.
I've been working on this problem specifically in the context of autonomous coding agents, and you hit the nail on the head with 'implicit context'.
The biggest issue isn't just that documentation gets outdated; it's that the 'mental model' of the system only exists accurately in a few engineers' heads at any given moment. When they leave or rotate, that model degrades.
We found the only way to really fight this is to make the system self-documenting in a semantic way—not just auto-generated docs, but maintaining a live graph of dependencies and logic that can be queried. If the 'map' of the territory isn't generated from the territory automatically, it will always drift. Manual updates are a losing battle.
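Not claiming this is what anyone's product does, but as an illustration of "map generated from the territory": you can regenerate at least the code-level dependency graph from the source tree on every run, using nothing but the standard library. A toy Python sketch:

```python
# Toy sketch: derive a module-level dependency graph straight from the
# source tree, so the "map" is regenerated from the "territory" on every
# run instead of being maintained by hand. Illustration only; a real
# system would also cover services, queues, databases, etc.
import ast
import pathlib
from collections import defaultdict

def import_graph(root: str) -> dict[str, set[str]]:
    graph: dict[str, set[str]] = defaultdict(set)
    for path in pathlib.Path(root).rglob("*.py"):
        module = ".".join(path.with_suffix("").parts)
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                graph[module].update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                graph[module].add(node.module)
    return graph

if __name__ == "__main__":
    # Print each module and what it depends on; pipe into whatever
    # graph/query tool you like.
    for module, deps in sorted(import_graph(".").items()):
        print(module, "->", ", ".join(sorted(deps)))
```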
I think you should have added a disclaimer that you are the founder of a company that provides "Reliability and context for complex environments."
It feels a bit dishonest to be asking for advice on how to tackle the complexity problem for SREs when you're actually providing a paid solution for the very same problem.
I'm seeing this pattern pop up more and more all over the place now. It's pervasive throughout Reddit too for example: pick a sub in the area that you built your app in, pose some problem, and then have another account also controlled by you present the solution that you built. All the writing styles in these posts are similar too; it's all likely written by AI, including the post we're commenting on.
More humans. Seriously. Keep more humans in the loop. Everything else does and will fail. Humans add resilience to systems; demand and complexity reduce resilience.
You’re describing the infrastructure of a large system — it’s a custom-built machine designed to serve a custom purpose. There are no examples in the world of things like that working without a lot of human intervention.
This is compounded, as you say, by increasing demands placed on the system: “Now it must react to AIs committing code,” or “Our customer base is growing but our Ops budget is decreasing.” This means the system needs more humans, not fewer.
This is not what he asked.
Adding more humans seems like an immediate fix, but systems of systems exist without humans.
Observability, automation, infrastructure as code, audits: all of these address the “wtf happened?” scenario, and all of them are systems, not humans.
The SRE needs signal from noise.
Every company I've worked with has started with an ER diagram for their primary database (and insisted on it, in fact), only to give up when it became too complex. You quickly hit the point where no one can understand it.
The same pattern eventually plays out with services too, where people give up on mapping the full thing out as well.
What I've done for my current team is to list the "downstream" services, what we use them for, who to contact, etc. It only goes one level deep, but it's something that someone can read quickly during an incident.
Sorry what is an ER diagram?
First hits on DDG, anonymous Google, Bing
ERD / Entity Relationship Diagram https://www.lucidchart.com/pages/er-diagrams
ERM / Entity-Relationship Model https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_mo...
(same-same, ERD is the more common acronym)
That is what I figured it would be, but you never know anymore with the amount of acronyms thrown around nowadays.
I don't think OP is looking for context from the AI model perspective but rather a process for maintaining a mental picture of the system architecture and managing complexity.
I'm not sure I've seen any good vendors, but I remember seeing a reverse-devops tool posted a few days ago that would reverse engineer your VMs into Ansible code. If that got extended to your entire environment, it would almost be an auto-documenting process.
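For what it's worth, the core idea of such a tool isn't magic. Here's a toy Python sketch (assuming a Debian-style host with dpkg-query and systemctl, and only capturing packages and enabled services) of what "reverse engineering a VM into Ansible code" boils down to; a real tool obviously covers far more:

```python
# Toy sketch of "reverse engineering" a running VM into declarative
# configuration: capture installed packages and enabled services and emit
# an Ansible-flavoured playbook. Assumes a Debian-style host with
# dpkg-query and systemctl available.
import subprocess

def installed_packages() -> list[str]:
    out = subprocess.run(
        ["dpkg-query", "-f", "${Package}\n", "-W"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(line for line in out.splitlines() if line)

def enabled_services() -> list[str]:
    out = subprocess.run(
        ["systemctl", "list-unit-files", "--state=enabled",
         "--type=service", "--no-legend"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(line.split()[0] for line in out.splitlines() if line)

def emit_playbook() -> str:
    # Build the playbook as plain text to avoid extra dependencies.
    lines = ["- hosts: all",
             "  tasks:",
             "    - name: install captured packages",
             "      ansible.builtin.apt:",
             "        name:"]
    lines += [f"          - {pkg}" for pkg in installed_packages()]
    for svc in enabled_services():
        lines += [f"    - name: enable {svc}",
                  "      ansible.builtin.service:",
                  f"        name: {svc}",
                  "        enabled: true"]
    return "\n".join(lines)

if __name__ == "__main__":
    print(emit_playbook())
```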
Context rots when it stays implicit. Make the system model an explicit artifact with fixed inputs and checkpoints, then update it on purpose. Otherwise you keep rebuilding the same picture from scratch.
I'm honestly looking for both. I haven't found a vendor that does this well for just humans, nor am I seeing something that can expose this context, read-only, to all of the AI agent coding models.
I will check that tool out.
Monitoring tools (APM) will show dependencies (web calls, databases, etc.) and should contain things like deployment markers and trend lines.
All of those endpoints should be documented in an environment variable or similar as well.
The breakdown is when you don't instrument the same tooling everywhere.
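The "instrument the same tooling everywhere" part is usually only a few lines per service. A minimal sketch, assuming Python services and OpenTelemetry, with the exporter pointed at whatever collector or APM endpoint you run (the service name and endpoint below are made up):

```python
# Minimal sketch of uniform instrumentation so the APM's dependency map
# actually covers every service. Assumes Python services with the
# opentelemetry-sdk and OTLP exporter packages installed; the endpoint is
# whatever your collector or APM vendor exposes.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Same boilerplate in every service; only service.name changes.
provider = TracerProvider(resource=Resource.create({"service.name": "billing-api"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def charge_customer(customer_id: str) -> None:
    # Every call wrapped in a span shows up as a node/edge in the
    # dependency map your APM draws.
    with tracer.start_as_current_span("charge_customer"):
        ...
```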
Documentation is generally out of date by the time you finish writing it, so I don't really bother with much detail there.
This has been my experience as well. IMO documentation feels like one of the few areas that AI can be good at today.
It's okay, but it often lies. At an SRE level you need a pretty zoomed-out view of the world until you are trying to zoom in on a problem component.
Always start at the head (what a customer sees -- actually load the website) and work down into each layer.
If something is breaking way downstream and customers don't see it, then it doesn't actually matter right now.
One thing that’s evidently helped: using CLAUDE.md / agent instructions as de facto architecture docs. If the agent needs to understand system boundaries to work effectively, those docs actually get maintained.
But how do you ensure the .md file is able to see all of the details of the infra?
You don't; it's a map of intent, not infra state. What exists, why, what talks to what. Live state still needs IaC and observability. The .md captures the 'why' that Terraform can't.
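For illustration only (every name below is made up), the kind of "map of intent" that tends to end up in such a file:

```markdown
# Architecture notes for agents (illustrative example, names are made up)

## What exists and why
- `checkout-api`: takes orders from the storefront. Exists because billing
  and inventory must be updated atomically.
- `inventory-worker`: consumes events from the `orders` queue; the only
  writer to the inventory DB.

## What talks to what
- storefront -> checkout-api -> orders queue -> inventory-worker
- Nothing else may write to the inventory DB.

## What is intentionally NOT here
- Live state (instance counts, IPs, versions): see the IaC repo and the
  observability stack, not this file.
```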
I use Nix (NixOS) with AI agents. It's everything I ever dreamed of and a bit more. Makes all other distros and build systems look old and outdated :D
Woah what are you doing?
Yeah, I'm curious too. Is this because most of your system can be explained by the NixOS configuration, so the LLM can easily fetch context?
If the system is so good, why constantly change the context?
I think it is because of the continuous improvement mindset.
Continuous improvement is essential, but we must distinguish between progress and mere decoration. If an old car runs perfectly and a new one offers the same speed but with a different shell, why replace the entire vehicle? It’s a waste of time and resources. Why not focus on upgrading the 'shell' instead of reinventing the wheel?
but think about the shareholders!