AI & Software Engineering

Institutional Memory Without the Institution

Carlos Aggio · February 13, 2026 · 3 min read

Every experienced engineer carries a mental model of their system that took months or years to build. Why that particular database technology was chosen. Why the API follows one convention for customer-facing endpoints and a different convention for internal ones. Why the retry logic has specific timing parameters. This institutional knowledge is enormously valuable, and in traditional organizations, it walks out the door whenever someone changes roles or leaves the company.

Agents face a more acute version of this problem. They have zero accumulated context. Every session starts fresh. Even with large context windows and retrieval-augmented approaches, the agent's working memory is a fraction of what a tenured engineer carries implicitly.

The naive solution ('just give the agent all the documentation') hits practical limits fast. Retrieved context consumes the same finite token budget that the agent needs for its actual work. And retrieval quality is uneven: sometimes the right context surfaces, sometimes noise does, and the agent has limited ability to distinguish between the two.

The Knowledge Service Pattern

The approach that's proving most effective in production is a dedicated knowledge service that other agents invoke through structured calls. When a requirements agent encounters an unanswered question ('Which message broker does this project use?' or 'Do we support tenant isolation?'), it doesn't guess. It calls the knowledge service with a specific, structured query.

The knowledge service searches the project's accumulated documentation (architecture records, convention files, previous decision logs) and returns one of two responses:

A grounded answer: the documentation covers this topic. Here's the relevant information. The calling agent proceeds with a decision anchored in established fact.

A logged gap: the documentation doesn't cover this. The knowledge service records the question, the calling agent states the assumption it will make in order to proceed, and both the question and assumption are captured as structured data in the output.
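The two response types can be sketched in code. This is a minimal illustration, not a real implementation: the class and field names (`GroundedAnswer`, `LoggedGap`, `KnowledgeService`) are hypothetical, and the dictionary lookup stands in for actual search over architecture records and decision logs.

```python
from dataclasses import dataclass


@dataclass
class GroundedAnswer:
    """The documentation covers this topic; the agent proceeds on fact."""
    question: str
    answer: str
    sources: list  # e.g. paths to architecture records or convention files


@dataclass
class LoggedGap:
    """The documentation is silent; the question and the agent's
    stated assumption are recorded as structured output."""
    question: str
    assumption: str


class KnowledgeService:
    def __init__(self, docs):
        # docs: mapping of topic -> (answer, source path); a stand-in
        # for real retrieval over the project's accumulated documentation
        self.docs = docs
        self.gaps = []

    def ask(self, question, topic, assumption):
        if topic in self.docs:
            answer, source = self.docs[topic]
            return GroundedAnswer(question, answer, [source])
        # No coverage: log the gap so it surfaces in the pull request
        gap = LoggedGap(question, assumption)
        self.gaps.append(gap)
        return gap
```

The key design point is that the calling agent never guesses silently: an unanswered query always produces a `LoggedGap` that travels with the work product.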

This structured approach means every question asked and every assumption made appears as a discrete, reviewable item when the pull request opens. The reviewer doesn't need to read between the lines of code to understand what the agent assumed. The assumptions are listed explicitly, with their originating questions, as part of the PR.
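One way to surface those items is to render the logged gaps into the pull request description as a checklist. A small sketch, under the assumption that each gap is a (question, assumption) pair; the function name and output format are illustrative, not prescribed by any tooling.

```python
def render_assumptions(gaps):
    """Render logged (question, assumption) pairs as a reviewable
    checklist suitable for a pull request description."""
    if not gaps:
        return "No undocumented assumptions were made."
    lines = ["## Assumptions made by the agent (please confirm or correct)"]
    for question, assumption in gaps:
        lines.append(f"- [ ] Q: {question}")
        lines.append(f"      Assumed: {assumption}")
    return "\n".join(lines)
```

Because each assumption appears as its own checklist item, the reviewer can confirm or correct them individually instead of inferring them from the diff.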

The Flywheel Effect

This is where the design gets genuinely elegant. When a reviewer approves a pull request, they're implicitly validating the assumptions the agent made. When a reviewer corrects an assumption, that correction flows back into the project's knowledge base as a new documented decision. Over time, the knowledge base grows organically from the actual questions that agents encounter during real work.
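The write-back step can be as simple as appending a dated entry to the project's decision log, so the next query on the same topic returns a grounded answer instead of a gap. A minimal sketch; the entry format and function name are assumptions for illustration.

```python
import datetime


def record_decision(question, resolution):
    """Format a reviewer-confirmed (or corrected) assumption as a
    decision-log entry; the caller appends it to the knowledge base
    (e.g. a decisions file that the knowledge service searches)."""
    stamp = datetime.date.today().isoformat()
    return f"## {stamp}: {question}\n\n{resolution}\n"
```

Each approved or corrected assumption thus becomes a durable, searchable decision, which is what closes the flywheel.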

Nobody had to sit down and write comprehensive documentation. The documentation emerged as a natural consequence of agents doing their jobs and encountering the same gaps that any new team member would. Except unlike a human new-hire who absorbs context through osmosis and hallway conversations, the agent's learning process produces written artifacts that benefit every future agent (and human) who works on the project.

I find this particularly compelling because it solves the perennial documentation problem through a mechanism that requires zero additional effort from the development team. You don't create documentation as a separate activity. You create it by doing your normal work and reviewing what agents produce.


This article is from The Agentic SDLC by Carlos Aggio.