Designing codebases for AI memory loss: why information architecture beats prompting every time
Every AI coding session starts with amnesia.
The model doesn’t know your team’s conventions. It doesn’t know why you wrapped that third-party client in an abstraction layer, or why the auth module is structured the way it is, or what you decided six months ago after three hours of argument in a design review. Every. Single. Session. Cold start. You can re-explain it, or you can build a codebase that carries its own memory. Most engineers are still choosing the first option, and they’re paying the tax every day.
I’d argue this is the most underappreciated engineering discipline in AI-assisted development right now. Not prompt engineering. Not model selection. Information architecture.
Why Prompting Is the Wrong Lever
Prompts are ephemeral. You write a careful, well-structured prompt, get a good response, and the next session you start over. This is fine for one-off tasks. For sustained software development on a real codebase, it’s a productivity hole.
The mental model most engineers operate under is that better models or better prompts will solve context problems. They won’t. A frontier model is still just a very capable reader with no persistent memory. If your repository doesn’t carry its own context, you are the memory system, and you will get tired of doing that job.
The fix isn’t clever prompting. It’s designing your codebase so the AI can reconstruct context from structure alone.
What This Actually Looks Like
Nainsi Dwivedi put it cleanly in a recent breakdown of Claude Code workflows: “If your repo is messy, Claude behaves like a chatbot. If your repo is structured, Claude behaves like a developer living inside your codebase.” That’s not hype. That’s a straightforward consequence of how context windows work.
The pattern she describes has four components: a clear statement of why the system exists, a map of where things live, explicit rules about what’s allowed or forbidden, and documented workflows. These don’t need to be elaborate. A CLAUDE.md file with those four sections, kept deliberately short, functions as the north star for every session. Clarity beats comprehensiveness. If the file becomes too long, the model starts missing the signals buried inside it.
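As a concrete sketch, a minimal CLAUDE.md covering those four sections might look like this. Every name, path, and rule below is invented for illustration, not a prescription:

```markdown
# CLAUDE.md

## Why this exists
Payments service for the storefront. Correctness over speed: money moves here.

## Where things live
- `src/api/` — HTTP handlers, kept thin; no business logic
- `src/domain/` — business rules; pure functions, no I/O
- `src/adapters/` — third-party clients wrapped behind our own interfaces

## Rules
- Never edit `migrations/` by hand; use the generator
- All money amounts are integer cents, never floats

## Workflows
- New endpoint: handler → domain function → adapter, with tests at each layer
```

The whole file fits on one screen. That brevity is the point: it gets read in full at the start of every session instead of being skimmed.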
The part most teams skip is the rules layer. Hooks and guardrails that run automatically, regardless of what you put in a prompt, are how you prevent the model from touching auth, billing, or migration files when it has no business being there. Models forget. Hooks don’t. That asymmetry matters.
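To make that concrete, here is a sketch of such a guardrail as a Claude Code PreToolUse hook, assuming the documented hook contract (the tool call arrives as JSON on stdin; exit code 2 blocks the call and surfaces stderr to the model). The protected paths are placeholders for your own danger zones:

```python
# pre_edit_guard.py — sketch of a Claude Code PreToolUse hook.
# Assumes the documented contract: tool input arrives as JSON on stdin,
# and exit code 2 blocks the tool call, showing stderr to the model.
import json
import sys

# Danger zones the model must never touch without a human in the loop.
PROTECTED = ("src/auth/", "src/billing/", "migrations/")

def is_protected(path: str) -> bool:
    """True if the path falls inside a protected directory."""
    return any(zone in path for zone in PROTECTED)

def main() -> int:
    payload = json.load(sys.stdin)
    path = payload.get("tool_input", {}).get("file_path", "")
    if is_protected(path):
        print(f"Blocked: {path} is a protected zone; edit it manually.",
              file=sys.stderr)
        return 2  # exit code 2 = block the tool call
    return 0

# In the installed hook script: sys.exit(main())
```

Registered against the Edit and Write tools in your settings, this runs on every attempted file change, whatever the prompt said or didn’t say.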
Architecture Decision Records Are Not Optional
This is where I’ll push harder than most writeups do. ADRs (architecture decision records) are the single most valuable thing you can add to a codebase for AI-assisted development, and almost nobody is writing them well.
An ADR that only records the outcome is nearly useless. If the record says “we use event sourcing for the order domain” but doesn’t explain that you tried CQRS without event sourcing first and hit specific consistency problems under load, the model will confidently suggest reverting your decision. It has no reason not to. From its perspective, there’s no recorded reason the current approach was chosen.
The why is the memory. Document the rejected alternatives. Document the constraints that existed at the time. A model reading a well-written ADR will not suggest undoing a deliberate decision. A model reading only code will absolutely try.
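Using the event-sourcing example above, a why-carrying ADR might look like this. The structure is the standard context/decision/alternatives shape; the specifics are illustrative:

```markdown
# ADR-014: Event sourcing for the order domain

## Status
Accepted

## Context
Read models in the order domain were drifting from write-side state under
load, and we needed a full audit trail of state transitions for disputes.

## Decision
Persist order state as an append-only event stream; derive read models
by projection.

## Rejected alternatives
- CQRS without event sourcing: tried first; hit consistency problems
  under load because projections had no replayable source of truth.
- Plain CRUD with audit columns: could not reconstruct intermediate states.

## Consequences
Replays make projections rebuildable, at the cost of event-schema
versioning discipline.
```

The “Rejected alternatives” section is the part that stops a model from proposing CQRS-without-events as a simplification. Without it, that suggestion looks reasonable.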
Local Context Files for Danger Zones
One pattern I’ve started using consistently is local context files inside complex or sensitive modules. A CLAUDE.md inside src/auth/ that explains the security model, why the session handling is structured the way it is, and what the non-obvious invariants are, will prevent a category of mistakes that no top-level prompt will catch reliably.
The reason this works is proximity. When the model is working inside that directory, the local context file is immediately relevant. It doesn’t have to navigate to find it. The danger zone explains itself.
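A local context file for an auth module might look like the following. All details here, including the ADR number it points to, are hypothetical:

```markdown
# src/auth/CLAUDE.md

## Security model
Sessions are opaque server-side tokens. JWTs were rejected (see ADR-007)
because we need instant revocation.

## Non-obvious invariants
- `Session.rotate()` must be called on privilege escalation, never before
- Token comparison uses `hmac.compare_digest`; do not "simplify" it to `==`
- Rate limiting lives in middleware, not in handlers; don't duplicate it
```

Note that it documents the invariants a reader cannot infer from the code, not what the code does.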
This isn’t novel architecture. It’s the same principle as inline documentation, applied at the module level, for a reader that processes everything as text.
The Shift That Changes How You Work
The teams that are getting real productivity out of AI coding tools are not the ones with the best prompt libraries. They’re the ones who’ve accepted that maintaining an AI-readable repository is now part of the engineering job. It’s not a one-time setup. It’s ongoing work, the same way keeping tests green is ongoing work.
When a decision gets made, write the ADR with the why. When a module grows complex enough to have hidden invariants, add a local context file. When a workflow becomes standard, encode it somewhere the model can find it rather than re-explaining it in every session.
Prompting is temporary. Structure is permanent. And permanent is what you want when you’re trying to build something that lasts.
#AIEngineering #SoftwareArchitecture #ClaudeCode #AIWorkflows #EngineeringPractices
