A Proper Memory Stack for AI-Assisted Engineering

Source asciidoc: `docs/article/proper-memory-stack-triad.adoc`

Modern engineering teams keep trying to build a single giant “knowledge repository” and then wonder why it turns into a blur of scripts, notes, prompts, articles, and half-finished decisions. The problem is not lack of storage. The problem is category collapse.

A healthy memory architecture does not force every kind of knowledge into one container. It separates knowledge by how it is used, how authoritative it is, and who or what consumes it.

That is why a three-layer stack makes more sense than a monolith:

  • Canonization — executable rules, checks, policies, and machine-enforced constraints.

  • Documentation — stable, human-readable knowledge intended to explain, transfer, and preserve understanding.

  • Capturing — raw notes, prompts, fragments, working observations, and operational memory before they are polished.

The repository for the third layer may well be branded Capchewing; that is acceptable as a workspace name. Conceptually, however, its function is still capturing: it is the intake and retention layer for knowledge that is useful but not yet fully normalized.

The authoritative pattern behind the triad

This three-part model is not a private preference dressed up as architecture. It aligns with several established patterns from software governance, documentation practice, and knowledge management.

First, the executable layer is real and well-established. Open Policy Agent describes policy-as-code as a way to specify policies in a declarative language and separate policy decision-making from policy enforcement. That is exactly the logic behind a script-backed canon layer: rules are not merely described, they are evaluated and enforced. [1]
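That separation of decision from enforcement can be sketched in a few lines. OPA itself expresses policies in Rego; the Python below is only an illustration of the same structure, and the manifest shape, the size-limit rule, and the function names are hypothetical:

```python
# Hypothetical policy-as-code sketch: the policy evaluates input and returns
# a decision; a separate enforcement step acts on that decision.

def decide_max_file_size(manifest: dict, limit_kb: int = 512) -> dict:
    """Policy decision: evaluate the input, return a verdict, enforce nothing."""
    violations = [
        f["path"] for f in manifest.get("files", [])
        if f.get("size_kb", 0) > limit_kb
    ]
    return {"allow": not violations, "violations": violations}

def enforce(decision: dict) -> int:
    """Enforcement: act on the decision, e.g. by failing a CI step."""
    for path in decision["violations"]:
        print(f"policy violation: {path} exceeds size limit")
    return 0 if decision["allow"] else 1

manifest = {"files": [{"path": "docs/a.adoc", "size_kb": 40},
                      {"path": "assets/dump.bin", "size_kb": 2048}]}
exit_code = enforce(decide_max_file_size(manifest))
```

The point of the split is that the decision function can be tested, versioned, and reused independently of whichever pipeline enforces it.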

Second, stable documentation is also a distinct authoritative layer. Microsoft’s .NET documentation guidance explicitly states that the ultimate goal is for XML comments in source code to be the “source of truth” for API documentation. [2] In other words, some knowledge is best preserved as maintained documentation, not as raw notes and not only as executable checks.

Third, mature knowledge management practice treats capture and transfer as a separate discipline. APQC describes knowledge flow as a cycle including creating, identifying, collecting, reviewing, sharing, accessing, and using knowledge. [3] APQC also explicitly frames knowledge transfer around the need to capture and transfer critical knowledge using intentional practices. [4] That is almost a direct justification for a dedicated capture layer.

Taken together, these sources point to a strong conclusion: good memory architecture separates executable knowledge, documented knowledge, and captured working knowledge because each has different operating rules.

Why one “source of truth” is often the wrong abstraction

Teams often repeat the phrase “single source of truth” as if it must mean one repository, one format, and one layer. Authoritative sources suggest something more precise.

Microsoft’s Azure Synapse guidance states that in Git-enabled development, the Git repository is the source of truth for code editing, while the service is the source of truth for execution. [5] Google’s own Search documentation makes the same point in another domain: Search Console is the source of truth for search performance, while Google Analytics is the source of truth for on-site behavior. [6]

That matters because it shows that serious systems often have multiple authoritative loci, each scoped to a different concern. The mistake is not having more than one authoritative layer. The mistake is letting layers overlap without boundaries.

For an AI-assisted engineering workflow, the right question is not “What is the one source of truth?” The right question is:

What kind of truth belongs in which layer?

The triad in practical terms

1. Canonization

This layer holds what must be enforced, validated, or computed.

Examples:

  • repository rules

  • structural constraints

  • lints

  • CI checks

  • policy engines

  • validation scripts

  • machine-readable manifests

If a rule must fail a build, flag a violation, or produce a deterministic evaluation, it belongs here.
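A canon-layer check is deterministic by construction: same input, same verdict, nonzero exit on violation. As a sketch, here is a hypothetical check that documents carry required header fields; the field names and file conventions are illustrative, not a real standard:

```python
# Hypothetical canon-layer validation script: a deterministic check whose
# nonzero exit code fails the build.
import sys

REQUIRED_FIELDS = {"title", "status", "last-reviewed"}

def check_front_matter(text: str) -> list[str]:
    """Return the required fields missing from a document's header block."""
    present = {line.split(":", 1)[0].strip()
               for line in text.splitlines() if ":" in line}
    return sorted(REQUIRED_FIELDS - present)

def run(docs: dict[str, str]) -> int:
    """Evaluate every document; report violations; 1 means the build fails."""
    failed = False
    for path, text in docs.items():
        missing = check_front_matter(text)
        if missing:
            failed = True
            print(f"{path}: missing {', '.join(missing)}")
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(run({}))  # in practice, docs would be read from the repository
```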

This layer is not optimized for narrative clarity. It is optimized for repeatability.

2. Documentation

This layer holds what must be explained clearly to humans and retained in stable form.

Examples:

  • architectural articles

  • public or internal explanations

  • design rationale

  • operational guides

  • onboarding content

  • polished repository documents

If the goal is clarity, transfer, and durable understanding, the content belongs here.

This layer is not optimized for raw intake. It is optimized for readability and maintenance.

3. Capturing

This layer holds what should not be lost, even before it is refined.

Examples:

  • notes from chats

  • working prompts

  • technical fragments

  • partial conclusions

  • loose research snippets

  • rough operating ideas

  • temporary patterns worth keeping

This is where Capchewing fits.

Capchewing should not pretend to be final documentation and should not pretend to be the executable canon. Its purpose is to preserve useful intermediate knowledge so that it can later be promoted upward when needed.

This layer is not optimized for polish. It is optimized for capture, recall, indexing, and promotion.
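What "optimized for capture, recall, indexing, and promotion" might mean concretely: a note needs almost no structure at intake, just enough metadata to find it again later. The record shape below is a hypothetical convention, not a Capchewing specification:

```python
# Sketch of a low-friction capture record: raw text plus minimal metadata
# for later recall. The schema is a hypothetical convention.
import datetime

def capture(store: list, text: str, tags: list[str]) -> dict:
    """Append a raw note; polish is deliberately not required at intake."""
    note = {
        "captured_at": datetime.date.today().isoformat(),
        "text": text,
        "tags": sorted(tags),
        "promoted": False,  # flipped if the note is later moved up the stack
    }
    store.append(note)
    return note

def recall(store: list, tag: str) -> list[dict]:
    """Recall is cheap tag lookup, not a demand for finished documents."""
    return [n for n in store if tag in n["tags"]]
```

The design choice is that capture cost stays near zero; any heavier structure is added at promotion time, not at intake.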

Why the capture layer matters more in the AI era

Before AI-assisted workflows, a lot of useful knowledge simply died in private chats, notebooks, terminal history, and half-remembered experiments. That waste is much more expensive now.

In an AI-heavy workflow, the number of valuable partial artifacts explodes:

  • prompt formulations

  • model instructions

  • debugging trails

  • selection criteria

  • synthesis notes

  • intermediate findings

  • rejected but still informative branches

If these artifacts are forced to become polished documentation too early, capture friction rises and people stop saving them. If they are never captured at all, the team falls back to rediscovering the same insights repeatedly.

That is why the capture repository should exist as a first-class workspace. Not because it is the final authority, but because it is the memory intake system.

A better statement than “single source of truth”

For this architecture, the strongest statement is not “everything must live in one source of truth.”

A better and more defensible statement is this:

A proper memory stack assigns an authoritative home to each class of knowledge: executable rules belong in canon, stable explanations belong in documentation, and volatile working knowledge belongs in a capture layer.

That formulation is stronger because it is operational. It prevents confusion between enforcement, explanation, and recall.

If this stack is formalized, the message can be simple:

  • Canonization is the home of executable truth.

  • Documentation is the home of stable written truth.

  • Capchewing is the home of captured working memory.

That is not redundancy. That is division of labor.

And in practice, the lifecycle becomes straightforward:

  • Capture first in Capchewing.

  • Promote enduring knowledge to Documentation.

  • Promote enforceable rules to Canonization.

Some notes will never be promoted. That is fine. Their value may be retrieval rather than publication.
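The lifecycle above can be sketched as a single promotion step, assuming mature notes carry a target layer; the routing rule and field names are illustrative:

```python
# Sketch of the promotion step in the capture -> documentation/canon lifecycle.
# Notes without a target simply remain captured, which is a valid end state.

def promote(note: dict, layers: dict[str, list]) -> str:
    """Move a mature note to its target layer; return where it ended up."""
    target = note.get("promote_to")  # e.g. "documentation" or "canonization"
    if target not in layers:
        return "capture"  # never promoted: its value is retrieval, not publication
    layers[target].append(note)
    note["promoted"] = True
    return target

layers = {"documentation": [], "canonization": []}
note = {"text": "retry policy rationale", "promote_to": "documentation"}
stayed = {"text": "loose prompt fragment"}
```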

Final position

The triad is sound.

Not because it sounds elegant, but because it matches how serious systems already separate concerns: executable policy from prose, maintained documentation from transient working knowledge, and authoritative truth from recoverable memory.

In other words, the correct memory stack is not a pile. It is a pipeline.

References