
Claude Code Persistent Memory with Evolving Lite

Robin · 5 min read
Tags: claude-code, memory, evolving-lite, kairn, open-source

Claude Code Has No Persistent Memory. Here is How to Add It.

Eighteen months of building with Claude Code without persistent memory taught me one thing: manual systems compound slowly. You set up a _memory/ folder, you write the bootup ritual, you log progress at session end. It works. I described exactly how in the memory system post.

But manual means you. Every session end, you have to remember to log. Every architecture decision, you have to save it yourself. Every hard-won lesson stays in your head until you explicitly transfer it to a file.

Turns out, you can automate all of that - and add persistent memory with a knowledge graph on top. That is what Evolving Lite does.

The Problem: Claude Code Memory Files Do Not Scale

The _memory/ folder approach solved the stateless session problem. Claude Code stops starting from zero. Real progress. But it introduced a different friction: maintenance cost.

After 60+ sessions running the manual system, I noticed a pattern. The memory files were only as good as my discipline at session end. When I was tired, I'd skip logging. When I was in flow, I'd forget entirely. The system degraded exactly when I was most productive.

Three failure modes showed up repeatedly:

  • Architecture decisions made in session 12 were invisible by session 40
  • The same code mistakes resurfaced because lessons lived in flat JSON, not a queryable graph
  • Self-correction - learning from a code fix - required me to write a log entry manually, immediately, or the learning was lost

The system needed to get smarter on its own, not just remember what I told it.

The Solution: Evolving Lite + Kairn

Evolving Lite is an open-source Claude Code plugin. Kairn is a persistent knowledge graph MCP server. Together they replace the manual memory ritual with a self-evolving system.

The core shift: instead of Claude Code reading flat files I maintain, it reads and writes a structured knowledge graph that grows automatically as you work.

Manual _memory/ Approach
  • You log architecture decisions manually
  • Session-end ritual requires discipline
  • Lessons stored as flat JSON, not queryable
  • Self-correction only if you remember to log it
  • Knowledge degrades when you skip updates

Evolving Lite + Kairn
  • Architecture decisions saved automatically
  • Bootup loads your full knowledge graph in seconds
  • Knowledge graph queryable by concept and tag
  • Code fixes trigger automatic learning events
  • System gets smarter whether you log or not
[Diagram: Evolving Lite full architecture - persistent memory layers with 15 commands, 10 hooks, 5 agents, knowledge graph, and Kairn MCP integration]

When Kairn boots up in a session, it loads the full project context: active projects, knowledge nodes, past experiences. In the demo video, that is 3 projects, 37 knowledge nodes, and 25 experiences - all available before Claude writes a single line of code.

Persistent Memory in Action (Demo)

[Video: Evolving Lite + Kairn Demo (2:18)]

The demo walks through six stages. Here is what each one means in practice.

Stage 1 - Bootup. Claude queries Kairn at session start and gets back a structured context payload. Not a flat file. A graph of concepts, relationships, and experiences that were built up over previous sessions. The session starts at full knowledge in under 10 seconds.
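
A hedged sketch of what such a bootup payload might look like, using the demo's numbers (3 projects, 37 nodes, 25 experiences). Field names here are illustrative, not Kairn's actual response format:

```python
# Illustrative shape of a session-start context payload; the real
# Kairn response format may differ.
bootup_context = {
    "projects": [
        {"name": "api-gateway", "status": "active"},
        {"name": "auth-service", "status": "active"},
        {"name": "cli-tool", "status": "active"},
    ],
    "knowledge_nodes": [
        {
            "concept": "token-hashing",
            "content": "Use SHA-256 for token hashing, not base64url",
            "tags": ["security"],
        },
        # ... 36 more nodes in the demo session
    ],
    "experiences": [
        {"summary": "Fixed insecure token encoding", "session": 30},
        # ... 24 more in the demo session
    ],
}

def context_summary(ctx):
    """One-line summary Claude can surface at session start."""
    return (f"{len(ctx['projects'])} projects, "
            f"{len(ctx['knowledge_nodes'])} knowledge nodes, "
            f"{len(ctx['experiences'])} experiences")
```

The point is the shape: structured collections with concepts and relationships, not a flat file Claude has to re-parse every session.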

Stage 2 - Knowledge Recall Before Codebase. This is the part that changed how I work most. Before searching the file system, Claude queries the knowledge graph. If an architecture decision exists - "we use SHA-256 for token hashing, not base64url" - Claude retrieves it before touching a single file. The codebase search confirms; the knowledge graph informs.
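
That recall-first ordering can be sketched as follows. Both the node lookup and the codebase search here are hypothetical stand-ins, not the plugin's real interfaces:

```python
def answer_architecture_question(query, graph, codebase_search):
    """Query the knowledge graph first; only then confirm in the codebase."""
    # 1. The knowledge graph informs: stored decisions matching the query.
    decisions = [n for n in graph
                 if query in n["concept"] or query in n["tags"]]
    if decisions:
        # 2. The codebase search confirms the decision is actually in effect.
        evidence = codebase_search(decisions[0]["concept"])
        return decisions[0]["content"], evidence
    # No stored decision: fall back to a plain codebase search.
    return None, codebase_search(query)

graph = [{"concept": "token-hashing",
          "content": "Use SHA-256 for token hashing, not base64url",
          "tags": ["security"]}]

decision, evidence = answer_architecture_question(
    "token-hashing", graph, lambda q: [f"src/auth.py: matches '{q}'"])
```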

Stage 3 - Live Learning. During the session, when a new architecture decision is made, it gets saved as a permanent knowledge node. Not a log entry. A node with relationships to other concepts in the graph. Next session, it is part of the bootup context. No manual step required.
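
The difference between a log entry and a graph node is that the node carries edges. A toy in-memory sketch (the `add_node` API is hypothetical, standing in for Kairn's persistent store):

```python
class KnowledgeGraph:
    """Toy in-memory graph; stands in for Kairn's persistent store."""
    def __init__(self):
        self.nodes = {}   # concept -> decision text
        self.edges = []   # (from_concept, to_concept) pairs

    def add_node(self, concept, content, related=()):
        """Save a decision with explicit relationships to other concepts."""
        self.nodes[concept] = content
        for other in related:
            self.edges.append((concept, other))

graph = KnowledgeGraph()
# A decision made mid-session, saved with relationships - not appended to a log:
graph.add_node(
    "token-hashing",
    "Use SHA-256 for token hashing, not base64url",
    related=["token-generation", "security-review"],
)
```

Because the edges are stored, the next session's bootup can pull this decision in via any of its neighbors, not just by its own name.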

Stage 4 - Contextual Code Review. Claude reviews a code segment using stored architecture decisions as the reference frame - not just the code itself. The review is grounded in decisions the system has actually tracked, not abstract best practices.

Stage 5 - Code Fix. The demo shows replacing insecure base64url encoding with SHA-256 hashing. Standard fix. But what happens next is not standard.
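
The demo's before/after, sketched in Python (assuming tokens derived from random bytes; the demo's exact code is not shown):

```python
import base64
import hashlib
import secrets

raw = secrets.token_bytes(32)

# Before: base64url encoding is reversible - the stored value leaks
# the original bytes if the token store is ever compromised.
insecure_token = base64.urlsafe_b64encode(raw).decode()

# After: SHA-256 hashing is one-way - the stored digest cannot be
# reversed to recover the original bytes.
secure_token = hashlib.sha256(raw).hexdigest()
```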

Stage 6 - Self-Correction Without Intervention. After the code change, the system automatically logs the fix as a learning event. No manual entry. No prompt. The pattern - "base64url encoding is insecure for token generation" - becomes a permanent knowledge node. Next time Claude encounters base64url in a security context, it flags it from memory, not from inference.
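
That self-correction step could be wired up roughly like this. The hook name, event shape, and field names are illustrative; Evolving Lite's actual hook contract may differ:

```python
learning_log = []

def on_post_edit(event):
    """Hypothetical post-edit hook: turn a security fix into a learning event."""
    if event.get("category") == "security-fix":
        learning_log.append({
            "pattern": event["lesson"],
            "source_file": event["file"],
        })

# Simulate the demo's fix landing - no manual log entry by the user:
on_post_edit({
    "category": "security-fix",
    "file": "src/auth.py",
    "lesson": "base64url encoding is insecure for token generation",
})
```

The key property is that the hook fires on the code change itself, so the lesson is captured even when you are tired or in flow.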

Self-Evolving Memory Loop

Session Start: Kairn loads project context
  ↓
Knowledge Recall: query graph before codebase
  ↓
Work: Claude uses stored decisions as reference
  ↓
Live Learning: new decisions saved as graph nodes
  ↓
Self-Correction: code fixes trigger automatic learning
  ↓
Next Session: richer context, smarter starting point

What Changes After Using It

The difference persistent memory makes is not just convenience. It is the compounding effect.

Every session adds to the knowledge graph. Architecture decisions made in week one are still active context in week eight - not because you remembered to maintain a file, but because the graph is append-only and queryable. The system does not forget unless you tell it to.

The self-correction loop is the piece I underestimated. In self-correcting Claude Code workflows, I wrote about hooks that detect context drift mid-session. Evolving Lite extends that idea across sessions. A lesson learned from a code fix in session 30 is available in session 31 without any manual transfer.

For teams exploring knowledge architecture patterns, Kairn's graph structure also means knowledge is organized by concept relationships - not just chronological logs. You can query "what decisions relate to authentication?" and get a structured answer, not a file search.
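
Conceptually, "what decisions relate to authentication?" is a graph traversal rather than a text search. A toy sketch, not Kairn's query API (the sample decisions here are invented examples):

```python
# Edges link decision concepts to broader topics.
edges = [
    ("token-hashing", "authentication"),
    ("session-expiry", "authentication"),
    ("retry-backoff", "networking"),
]
decisions = {
    "token-hashing": "Use SHA-256 for token hashing, not base64url",
    "session-expiry": "Sessions expire after 24h of inactivity",
    "retry-backoff": "Exponential backoff with jitter on 5xx",
}

def related_decisions(concept):
    """All decisions with an edge to the given concept."""
    return [decisions[a] for a, b in edges if b == concept]

auth_decisions = related_decisions("authentication")
```

A chronological log would force you to re-read everything; the edge structure answers the question directly.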

The numbers from the demo: 37 knowledge nodes, 25 experiences, 3 active projects - all loaded in the bootup phase. That is persistent memory that travels with your workflow, not a document you have to maintain.

Why Open Source Matters Here

Both Evolving Lite and Kairn are open source. That is a deliberate choice.

Memory systems are intimate. They know your architecture decisions, your past mistakes, your in-progress work. Trusting that to a closed system means trusting a vendor with your intellectual context. Open source means you own the graph, you own the data, you can inspect every read and write.

It also means the system is transparent. When Claude queries Kairn and retrieves a knowledge node, you can see exactly what was stored and why. No black box. The hooks automation patterns that power the self-correction loop are all visible and modifiable.

The Honest Part

The first version of this system was rough. The bootup loaded too much context - everything in the graph, regardless of relevance. Sessions started slowly, and Claude spent its first few exchanges processing context rather than working.

The current version uses relevance scoring. Bootup loads nodes above a confidence threshold, not the full graph. The session starts fast because the context is filtered, not just fetched.
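
The filtered bootup amounts to a threshold over per-node relevance scores. The field names and the 0.6 cutoff below are illustrative, not the plugin's actual values:

```python
CONFIDENCE_THRESHOLD = 0.6  # illustrative cutoff, not the plugin's actual value

nodes = [
    {"concept": "token-hashing",  "relevance": 0.92},
    {"concept": "old-logo-color", "relevance": 0.12},
    {"concept": "session-expiry", "relevance": 0.71},
]

def bootup_nodes(all_nodes, threshold=CONFIDENCE_THRESHOLD):
    """Load only nodes scored above the threshold, most relevant first."""
    kept = [n for n in all_nodes if n["relevance"] >= threshold]
    return sorted(kept, key=lambda n: n["relevance"], reverse=True)

loaded = bootup_nodes(nodes)
```

Filtering before loading is why the session starts fast: the graph can keep growing without the bootup payload growing with it.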

That iteration took about two weeks. If you set this up yourself, expect the same calibration period. The system works on day one, but it gets meaningfully better once you have 10-15 sessions worth of knowledge nodes built up.

Get Started

Both tools are free and open source:

  • Evolving Lite - Claude Code plugin. git clone + bash setup.sh + add to pluginDirectories. Done.
  • Kairn - Knowledge graph MCP server. pip install kairn-ai. Optional but recommended for semantic search.

Want the full system blueprint? Get the free 3-pattern guide.

FAQ

How is Evolving Lite different from the _memory/ folder approach?
The _memory/ folder requires you to manually log decisions, lessons, and progress at session end. Evolving Lite automates that - architecture decisions are saved as graph nodes during the session, and code fixes trigger automatic learning events. The knowledge graph also makes stored information queryable by concept, not just readable as flat files.
What is Kairn and do I need it to use Evolving Lite?
Kairn is a persistent knowledge graph MCP server that stores and retrieves knowledge nodes across sessions. Evolving Lite works fully standalone with its own file-based memory (~60% recall). Adding Kairn upgrades that to semantic search (~90% recall) and cross-project knowledge. Both are free and open source.
Does this replace my CLAUDE.md file?
No. CLAUDE.md still holds your static preferences and conventions - how you want Claude to behave. Evolving Lite and Kairn handle dynamic, evolving project knowledge - decisions made, lessons learned, architecture patterns established. They work in parallel, not in competition.
How long does it take for the knowledge graph to become useful?
Immediately for bootup context, but meaningfully better after 10-15 sessions. The first session loads whatever you have seeded manually. By session 15, the graph has architecture decisions, lessons from code fixes, and recalled experiences that make every subsequent session start from a stronger baseline.
