llm-wiki: the reference implementation of Karpathy's self-building AI memory pattern
A plain-English guide to llm-wiki — the open-source reference implementation of the Karpathy LLM-wiki pattern. Your AI agent compiles its own Markdown wiki as you work. Read this to understand the pattern before picking a production tool.
Short version: llm-wiki is the open-source reference implementation of the Karpathy LLM-wiki pattern — your AI agent compiles and maintains its own Markdown wiki as it works, cross-linked and git-versionable. It's deliberately small and opinionated. Read the code (or the README) before picking a production-grade version (gbrain, claude-memory-compiler, obsidian-wiki). Understanding the pattern beats adopting any single tool.
What is llm-wiki?
In April 2026, Andrej Karpathy publicly described a pattern: instead of vector-DB RAG, have the LLM build and maintain its own structured wiki of interlinked Markdown pages. Facts become pages. Relationships become links. The agent rereads its own wiki at the start of each session and updates it at the end.
llm-wiki is the reference implementation of that pattern — a small, opinionated codebase that shows exactly how the loop works: ingest session data → LLM compiles structured pages → writes Markdown with cross-links → agent reads the wiki next time.
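The loop above can be sketched in a few lines of Python. This is a minimal illustration of the shape, not llm-wiki's actual API: `compile_session`, the stub `llm_compile`, and the page format are all assumptions.

```python
import re

def llm_compile(session_text):
    """Stand-in for the LLM call: extract 'Topic: fact' lines into pages.
    A real implementation would prompt a model to do this compilation."""
    pages = {}
    for line in session_text.splitlines():
        match = re.match(r"(\w[\w ]*): (.+)", line.strip())
        if match:
            topic, fact = match.groups()
            pages.setdefault(topic, []).append(fact)
    return pages

def compile_session(session_text, wiki):
    """One iteration of the loop: ingest session text, merge new facts
    into existing pages, then cross-link mentions of other page titles."""
    for topic, facts in llm_compile(session_text).items():
        wiki.setdefault(topic, []).extend(facts)
    linked = {}
    for topic, facts in wiki.items():
        body = "\n".join(f"- {f}" for f in facts)
        # Relationships become links: wrap mentions of other titles in [[...]].
        for other in (t for t in wiki if t != topic):
            body = body.replace(other, f"[[{other}]]")
        linked[topic] = f"# {topic}\n\n{body}\n"
    return linked

wiki = {}
pages = compile_session("Acme: shipped v2\nRoadmap: Acme targets Q3", wiki)
print(pages["Roadmap"])  # the Roadmap page now links to [[Acme]]
```

Run it across several sessions with the same `wiki` dict and you can watch the graph of pages grow, which is exactly the behavior the reference implementation exists to demonstrate.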
It's not the production tool you'll deploy. It's the teaching tool that makes the pattern click.
Who this is for
- Developers who want to understand the Karpathy pattern before picking a production tool.
- Anyone planning to build or adapt their own LLM-wiki implementation.
- Researchers exploring self-building knowledge bases.
- Founders evaluating gbrain, claude-memory-compiler, or obsidian-wiki who want to know the shared foundation.
Skip this if
You're ready to ship. Go straight to one of the production-grade implementations:
- gbrain — company-brain flavored, popular with founders.
- claude-memory-compiler — Claude Code-native, from Cole Medin.
- obsidian-wiki — Obsidian-native Python framework.
What problem it solves
The LLM-wiki pattern is a big shift. Without a small reference implementation, the ideas stay abstract. You read Karpathy's post and nod — but you can't actually tell how the compile step works, what the prompt looks like, how cross-links are generated, how the agent reads the wiki back in.
llm-wiki closes that gap. A few hundred lines of code. Read it end to end and you'll understand the shape: what gets passed to the compiler, what the output looks like, how the graph of pages evolves. Every production implementation is a variation on this shape.
How to install it (plain English)
- Clone the reference implementation. git clone the llm-wiki repo.
- Set your LLM API key. OpenAI or Anthropic.
- Point it at a directory. llm-wiki writes Markdown pages into a target folder — we recommend a git-versioned directory so you have history.
- Feed it session data. Chat transcripts, meeting notes, whatever text you want compiled. llm-wiki runs the compile step and emits pages.
- Inspect the output. Read the pages. Read the links. Understand the shape.
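The "point it at a directory" and "inspect the output" steps amount to writing compiled pages into a folder of Markdown files. A hypothetical sketch, assuming a slugged-filename layout; llm-wiki's actual output format may differ:

```python
from pathlib import Path
import tempfile

def write_wiki(pages, target_dir):
    """Write each compiled page as a Markdown file into the target folder.
    Filenames are slugged titles so [[Title]] links resolve predictably."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    for title, body in pages.items():
        slug = title.lower().replace(" ", "-")
        (target / f"{slug}.md").write_text(body, encoding="utf-8")
    return sorted(p.name for p in target.glob("*.md"))

pages = {
    "Acme": "# Acme\n\n- shipped v2\n",
    "Roadmap": "# Roadmap\n\n- [[Acme]] targets Q3\n",
}
with tempfile.TemporaryDirectory() as d:
    names = write_wiki(pages, d)
    print(names)  # ['acme.md', 'roadmap.md']
```

Because the output is just files, pointing the target at a git-versioned directory gives you a diffable history of every compile step for free.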
Full walkthrough: /memory/tools/llm-wiki.
What you can do with it (for a non-technical founder)
Run llm-wiki to see the pattern, not to run production memory. Once you've seen how the compile step works, pick a production tool (gbrain, claude-memory-compiler, or obsidian-wiki) that's actively maintained and fits your workflow. The reference teaches; the production tool ships.
For the deeper context on the pattern itself and all four implementations, read: /blog/what-is-the-karpathy-llm-wiki-pattern.
What CLO adds on top
Every LLM-wiki variant (llm-wiki, gbrain, claude-memory-compiler, obsidian-wiki) produces a growing Markdown brain for your agent. Cognition CLO is what tracks whether your team is actually retaining the knowledge those pages contain. The wiki is content. CLO is pedagogy. You need both if you want humans to keep up with the agent.
FAQ
Is this production-ready?
No, and that's intentional. llm-wiki is a reference. Use it to understand; deploy a production tool.
Is it tied to a specific LLM?
No — the compile step works with OpenAI, Anthropic, or local models. Swap the provider in the config.
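Provider-agnosticism typically means the compile step depends only on a text-in/text-out callable, so any client can be swapped in. A minimal sketch of that design, under the assumption that this is how the config swap works (the names here are illustrative, not llm-wiki's actual interface):

```python
def compile_pages(session_text, call_llm):
    """The compile step depends only on a prompt-in/text-out callable,
    so OpenAI, Anthropic, or a local model can be swapped in via config."""
    prompt = f"Compile these notes into Markdown wiki pages:\n\n{session_text}"
    return call_llm(prompt)

def fake_local_model(prompt):
    # Stand-in for a real API client; any provider fits the same interface.
    return "# Notes\n\n- compiled\n"

page = compile_pages("raw session notes", fake_local_model)
print(page)
```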
Why Markdown specifically?
Markdown is the only memory format that survives changes of vendor, model, and tool. Vector databases lock you in. A folder of Markdown files is portable, human-readable, and git-versionable.
How is this different from just using CLAUDE.md?
CLAUDE.md is hand-written. llm-wiki (and its production descendants) have the agent write and maintain the wiki automatically. Different labor model, different scale.
Who originated this pattern?
Andrej Karpathy published the gist describing the pattern in April 2026. Farza at F Inc built the first serious implementation. Garry Tan at YC popularized the company-brain flavor as gbrain. Cole Medin built the Claude Code-native version. Credit to all.
Can I contribute or extend it?
Yes. The repo is open to contributions. If you build a variant, it may be eligible for inclusion in the Cognition Memory registry — reach out.
Ready to read the reference? Full walkthrough at /memory/tools/llm-wiki. Deep dive on the pattern: /blog/what-is-the-karpathy-llm-wiki-pattern. Credit to @karpathy for describing the pattern, and the reference implementation maintainers — star the repo if it clarifies the pattern for you.