The missing piece is organizational memory — the shared context that makes the agent actually useful across a company. Today, founders piece it together from six different OSS projects with six different install paths.
The Memory Registry is a curated on-ramp. Every entry is open source. Every entry links to its real repo. We test the installs, write the MCP glue, and keep one clean page pointed at the stack that works.
We don't sell memory. We sell the pedagogy layer that sits on top of it. The registry is a public good — and it's how CLO earns the right to be the next thing you install.
benchmarks · in progress
No one in this category has shipped a real benchmark. We're running every engine in our marketplace through five standard workloads and publishing the leaderboard publicly. First results land 2026-05-15.
Retrieval accuracy: given a 50-document corpus and 100 follow-up questions, what fraction returns the correct entry as the top result?
Write throughput: sustained writes per second under a 10k-entry stress run. Includes embedding + index update latency.
Read latency: P50 / P95 latency on a single retrieval call against a warmed-up 10k-entry store.
Footprint: disk + RAM usage per 10k indexed documents. Measured against a stable workload.
Temporal decay: how well does the engine model 'I haven't touched this in N days' vs 'I touched it yesterday'? Cognition's primitive — testing whether the substrate even tries. (The sketch after this list shows how we'd compute two of these.)
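As a minimal sketch of the measurement itself, assuming a hypothetical engine object exposing a query(question) method that returns a ranked list of doc ids (the interface is our illustration; every real engine has its own API):

```python
import time
import statistics

def top1_accuracy(engine, qa_pairs):
    """Fraction of questions whose correct doc id comes back as the top result."""
    hits = sum(1 for question, doc_id in qa_pairs
               if engine.query(question)[0] == doc_id)
    return hits / len(qa_pairs)

def retrieval_latency_ms(engine, questions):
    """P50 / P95 latency (ms) for single retrieval calls against a warm store."""
    samples = []
    for q in questions:
        start = time.perf_counter()
        engine.query(q)
        samples.append((time.perf_counter() - start) * 1000)
    p50 = statistics.median(samples)
    p95 = statistics.quantiles(samples, n=20)[18]  # 19 cut points; index 18 = 95th
    return p50, p95
```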
under test: mem0 · letta · zep · supermemory · gbrain · basic-memory
Notify me when the leaderboard ships →
We read the repos, ran the installs, and wrote plain-English setup guides for every tool — so non-technical founders don't have to decode a GitHub README to get started. Every entry links to our walkthrough; GitHub is one click away if you'd rather skip ours.
If you or your team uses Obsidian (or is willing to — it's free and takes 5 minutes to set up), these plugins turn your vault into a live memory layer that Claude, Cursor, and ChatGPT can read, write, and reason over. No terminal required.
In April 2026 Karpathy published a gist describing a new pattern: instead of RAG, have the LLM build and maintain its own structured wiki of interlinked markdown. Your agent never rediscovers what it already figured out. This is the frontier right now.
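A minimal sketch of the pattern, assuming one markdown file per concept and Obsidian-style [[wiki-links]] (the layout and helper names are ours, not Karpathy's):

```python
from pathlib import Path
import re

WIKI = Path("wiki")  # illustrative layout: one markdown note per concept

def upsert_note(title: str, body: str) -> None:
    """The agent writes down what it just figured out instead of re-deriving it."""
    WIKI.mkdir(exist_ok=True)
    (WIKI / f"{title}.md").write_text(body)

def load_with_links(title: str, depth: int = 1) -> str:
    """Pull a note plus everything it [[links]] to, for the next prompt."""
    path = WIKI / f"{title}.md"
    if not path.exists():
        return ""
    text = path.read_text()
    if depth > 0:
        for linked in re.findall(r"\[\[(.+?)\]\]", text):
            text += "\n\n" + load_with_links(linked, depth - 1)
    return text
```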
These are the tools built specifically for the Claude-Code-era of agent tooling. One line in your .mcp.json and your agent remembers across sessions. Good for small teams standing up a shared brain in an afternoon.
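For instance, registering a memory server in .mcp.json might look like the sketch below (shown with basic-memory's uvx invocation as an illustration; confirm the exact command against each tool's README):

```json
{
  "mcpServers": {
    "memory": {
      "command": "uvx",
      "args": ["basic-memory", "mcp"]
    }
  }
}
```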
These are the frameworks the big agent builds run on. Self-hostable, multi-tenant, production-ready. If you're deploying agents at company scale — especially with compliance, audit, or temporal requirements — start here.
When a founder lands on /stack and describes their company, our recommender matches them to tools in this registry — by use case, not marketing spend. Getting listed is free. Getting matched is free. Email Vedant with your repo, a 2-sentence tagline, and the kind of team it's built for. We run the install, write the plain-English explanation, and ship you into the funnel within a week. Featured tools also get a deep-dive thread on X.
Memory stores what happened. CLO runs a Weibull survival curve per employee, per concept, on top of whatever memory stack you picked — and nudges the right person on the right surface before the incident.
Sitting on top of the memory stack you just installed, we mathematically model your brain — and every employee's brain — and, after a few days of signal, start simulating the decisions you make and why you make them.
Flight simulators build a model of the aircraft, the weather, and the pilot's reaction time. They run that model a thousand times; then the pilot steps onto a real flight with the hard parts already rehearsed.
Cognition is the flight simulator for your company's brain. Same principle, different organ. We fit a computational model to every person on your team, run it forward under realistic conditions, and surface the moments that will go wrong — before anyone gets hurt.
We don't ask your team to do anything new. Every signal they already produce in the memory tools you just installed — docs touched, meetings attended, SOPs opened — streams into our model.
For each employee, for each concept they've encountered, we fit a Weibull survival function. After 3–5 days of signal, we have a computational model of how they think and what they hold.
We project who drops below 60% retention on which concept, by when. The model tells us who is about to make the decision they'll regret — before they make it.
A refresher is drafted and routed to whichever surface that person already uses — Slack DM, SMS, handheld, Teams — timed to hit them before the shift, the meeting, or the ticket.
Retention follows R(t) = 0.5^((t/S)^β), a Weibull survival curve parameterized so that S is the half-life of a concept for that specific person; β is how sharp the drop-off is. Both parameters are re-fit every time a new signal lands. No black box — the forgetting curve is an actual function you can inspect, not a vibe.
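A minimal sketch of that curve and of the inversion behind the "below 60%, by when" projection (parameter values are invented for illustration):

```python
import math

def retention(t_days: float, S: float, beta: float) -> float:
    """Weibull forgetting curve, parameterized so retention(S) == 0.5:
    S is this person's half-life for the concept, beta the drop-off sharpness."""
    return 0.5 ** ((t_days / S) ** beta)

def days_until(threshold: float, S: float, beta: float) -> float:
    """Invert the curve: how many days until retention falls to `threshold`."""
    return S * math.log2(1.0 / threshold) ** (1.0 / beta)

# Illustrative only: a 9-day half-life with a fairly sharp drop-off.
S, beta = 9.0, 1.4
print(retention(3, S, beta))       # retention three days after the last touch
print(days_until(0.60, S, beta))   # ~7.2 days until this concept crosses 60%
```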
The CEO stops micromanaging. The refresher lands on the employee's surface before the shift they would have walked into unprepared. If you want, we can take the whole decision layer off your plate.
Same memory. Different pedagogy. That gap is where every nudge-tool, every LMS, and every generic agent has failed — they all act on what people know, not how people process what they know. A generic AI can have the exact same context as the person it's supposed to stand in for and still make the wrong call, because it doesn't reason the way that specific person reasons.
Our self-learning brain models do. We fit a computational model to how you and every one of your employees actually reason — and then we can run that reasoning forward, at scale, without you in the loop.
Every tool above is a great substrate. None of them model forgetting. CLO sits on top of your memory stack and runs a Weibull survival curve per employee, per concept — so you know who's about to drop below the safety line and nudge them before it costs you.