One of the more elegant patterns in spec-driven development is how it uses the repository's file and folder structure as a machine-readable workflow contract. Ruby on Rails popularized the idea of 'convention over configuration' for web applications twenty years ago. The same principle, applied to agent-driven development, turns your repository into a self-describing workflow engine.
The structure that's emerged across leading implementations (Spec Kit, Kiro, and numerous custom setups) follows a consistent template. A dedicated directory in the repository (commonly .sdlc/ or .specs/) holds two categories of content:
Project-wide context: an overview of the system's purpose, technology stack, and boundaries; architecture decision records documenting patterns and conventions; coding standards and naming rules; and reusable templates for each type of artifact the agents will produce. This context applies globally to every feature, every agent session, every decision. It's the codified equivalent of what a senior engineer carries in their head after years on a project.
Feature-specific artifacts: each feature or requirement gets its own subdirectory containing the requirement specification, any architectural proposals specific to that feature, and a subdirectory of implementation task specifications. Everything related to a single feature lives in a single location.
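A minimal sketch of such a layout (directory and file names here are illustrative; Spec Kit, Kiro, and custom setups each use their own conventions):

```
.sdlc/
├── context/
│   ├── overview.md              # system purpose, stack, boundaries
│   ├── adr/                     # architecture decision records
│   ├── standards.md             # coding standards and naming rules
│   └── templates/               # reusable templates for agent-produced artifacts
└── features/
    └── user-auth/               # one subdirectory per feature
        ├── requirement.md       # the requirement specification
        ├── proposal.md          # feature-specific architecture proposal
        └── tasks/
            ├── 001-login-endpoint.md
            └── 002-session-store.md
```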
What elevates this from mere organizational tidiness to genuine workflow engineering is that the naming patterns and directory hierarchy encode information the control layer uses to operate. The engine knows that files in the context directory apply globally while files in feature directories are scoped. It can identify parent-child relationships between requirements and their implementation tasks by parsing the directory tree. It validates completeness automatically: are all acceptance criteria covered by at least one task? It computes what can run in parallel versus what must be sequential. And it prevents premature transitions: a requirement can't be marked complete if any of its constituent tasks are still in progress.
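Two of those checks can be sketched in a few lines of Python. This is a minimal illustration, assuming task files carry `status` and `covers` fields in a simple YAML front-matter block; the field names and format are assumptions, not any specific tool's schema.

```python
# Sketch of two control-layer checks: acceptance-criteria coverage and
# completion gating. Field names ("status", "covers") are illustrative.

def parse_front_matter(text: str) -> dict:
    """Parse a simple 'key: value' front-matter block from a markdown file."""
    meta = {}
    lines = text.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def coverage_gaps(acceptance_criteria: list[str], tasks: list[dict]) -> list[str]:
    """Return acceptance criteria not covered by at least one task."""
    covered = set()
    for task in tasks:
        covered.update(c.strip() for c in task.get("covers", "").split(",") if c.strip())
    return [c for c in acceptance_criteria if c not in covered]

def can_complete(tasks: list[dict]) -> bool:
    """A requirement may only be marked complete when every task is done."""
    return all(task.get("status") == "done" for task in tasks)

tasks = [
    parse_front_matter("---\nstatus: done\ncovers: AC-1, AC-2\n---\n# Task 001"),
    parse_front_matter("---\nstatus: in-progress\ncovers: AC-3\n---\n# Task 002"),
]
print(coverage_gaps(["AC-1", "AC-2", "AC-3", "AC-4"], tasks))  # ['AC-4']
print(can_complete(tasks))  # False
```

The point is not the code itself but that both checks read nothing except the structure and metadata the workflow already produces.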
Traceability as a Structural Property
Each artifact carries explicit linkage in its metadata (typically YAML headers embedded in markdown files). A task specification identifies which requirement it implements. A code commit identifies which task it fulfills. An architecture proposal references the requirement that motivated it.
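For illustration, a task specification's front matter might look like this (the IDs and field names are hypothetical; each tool defines its own schema):

```yaml
# .sdlc/features/user-auth/tasks/001-login-endpoint.md (front matter)
---
id: TASK-001
implements: REQ-042    # the requirement this task fulfills
proposal: ADR-007      # the architecture proposal that motivated the design
status: in-progress
covers: AC-1, AC-2     # acceptance criteria addressed by this task
---
```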
This creates what I call 'traceability by construction' rather than 'traceability by documentation.' The audit chain exists because the structure requires it, not because someone remembered to write it down. In the industries I work in (mining, energy, financial services), regulators want to understand not just what was built but the chain of reasoning that led to each decision. When your development workflow produces that chain as a natural byproduct, compliance stops being a separate workstream and becomes embedded in how you build.
I saw the value of this most clearly during a governance review for a resources client. When auditors asked why a particular data pipeline used one processing approach versus another, the team could trace from the deployed code to the task spec, from the task to the requirement, and from the requirement to the architecture proposal where the agent had documented its rationale and the alternatives it considered. That traceability existed not because anyone sat down and created it for the audit. It existed because the development workflow produced it automatically.
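The audit walk the team performed amounts to following those metadata links backward. A minimal sketch, with invented artifact IDs and the same hypothetical `implements`/`proposal` fields as above:

```python
# Follow metadata links from a task (identified by a code commit) back to
# the requirement and the architecture proposal. IDs are illustrative.
artifacts = {
    "TASK-001": {"implements": "REQ-042"},
    "REQ-042": {"proposal": "ADR-007"},
    "ADR-007": {},
}

def trace(artifact_id: str) -> list[str]:
    """Return the audit chain: task -> requirement -> proposal."""
    chain = [artifact_id]
    current = artifact_id
    while True:
        meta = artifacts.get(current, {})
        next_id = meta.get("implements") or meta.get("proposal")
        if next_id is None:
            break
        chain.append(next_id)
        current = next_id
    return chain

print(" -> ".join(trace("TASK-001")))  # TASK-001 -> REQ-042 -> ADR-007
```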
There's a deeper principle at work here that connects to how I've always thought about data platforms and analytics architecture. When I was building enterprise data platforms in Australia a decade ago, the hardest lesson was that the platform's value wasn't in the technology stack. It was in the metadata layer: the data dictionaries, lineage tracking, quality rules, and governance policies that made the data trustworthy and reusable. Strip away that layer and you have a very expensive data lake that nobody trusts. The same principle applies to agent-driven development. The specifications, conventions, and architecture records are the metadata layer of your software factory. Without them, you have agents producing code that nobody can audit, maintain, or trust at scale.
The Qodo 2025 AI Code Quality report provides quantitative support for this. Teams using structured AI code review processes reported quality improvements at 81 percent, compared with 55 percent for teams without that structure. A separate Atlassian study showed that nearly 39 percent of comments left by AI agents in code reviews led to actual code fixes. These aren't perfect numbers, but they demonstrate that structured, specification-driven agent interactions produce meaningfully better outcomes than unstructured ones.
This article is from The Agentic SDLC by Carlos Aggio.