supermemory: the second-brain-as-an-API that auto-ingests your browser, Slack, and Docs
A plain-English guide to supermemory — the open-source memory engine that auto-ingests from your browser, Google Docs, Slack, and Notion. Fastest way for a solo founder to stand up a personal AI brain. 15-minute install.
Short version: supermemory is the "second brain as an API" approach — it auto-ingests from your browser history, Google Docs, Slack, Notion, and other sources, then exposes the consolidated brain as an API your agent queries. Fastest way for a solo founder to stand up a personal brain that already contains all their scattered context. MIT licensed. 15-minute install. By Dhravya Shah and team.
What is supermemory?
Most memory tools require you to write into them. supermemory flips the model — it auto-ingests from the tools you already use. Browser history, Gmail, Slack DMs, Google Docs, Notion — wire a source once, and supermemory quietly ingests everything into a consolidated brain. You query it from your agent; you never think about writing.
It's open source (self-host) and also has a managed tier at supermemory.ai for founders who don't want to run Docker. Either way, the value prop is the same: your context, already collected, queryable as an API.
Who this is for
- Solo founders whose context lives in 10 SaaS apps (Slack, Notion, Docs, Gmail, browser).
- Anyone who wants their AI agent to feel like it already knows them on day one.
- Builders who value "it just works" over configurability.
- Teams who want a managed option (supermemory.ai) or self-host (OSS).
Skip this if
You want memory fully under your control — supermemory's auto-ingest trades some control for convenience. Also skip if you're building a multi-user product where every end-user gets scoped memory — that's mem0's job.
What problem it solves
Every founder has the same context problem: your brain's context lives in your browser tabs, your Slack DMs, your Google Docs, your Notion pages. When you ask your AI for help, it has none of that. You explain. It misses something you already researched. You re-explain.
supermemory solves that by ingesting everything. Your agent asks "what did I read about AI memory tools this week?" and gets real pages you visited. "What did Amara and I discuss about Q3?" pulls from your Slack thread. It feels like magic because the AI has context it shouldn't have — except the context was in your tools all along; supermemory just connects them.
How to install it (plain English)
- Install the SDK.

```shell
npm install supermemory
```

- Set your API key. Either from supermemory.ai (managed) or set your own LLM key and self-host.
- Add a source. Connect browser history, Slack, Google Docs, or Notion — each has a 3-click setup.
- Query from your agent.

```typescript
import Supermemory from "supermemory";

const sm = new Supermemory({ apiKey: "..." });

const context = await sm.search({
  userId: "vedant",
  query: "What did I read about AI memory tools this week?",
});
```

- (Optional) Add the MCP bridge. Point Claude Code or Cursor at supermemory as an MCP server.
Full walkthrough: /memory/tools/supermemory.
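In practice, the glue between a search call and your agent is a small formatter that turns hits into a context block for the prompt. A minimal sketch, assuming each hit carries `content`, `source`, and `score` fields — the real SDK response shape may differ, so check the docs:

```typescript
// Hypothetical result shape — supermemory's actual response may differ.
interface MemoryHit {
  content: string;
  source: string; // e.g. "slack", "browser", "notion"
  score: number;  // relevance, higher is better
}

// Build a context block to prepend to an agent's system prompt,
// keeping the most relevant hits under a rough character budget.
function buildContext(hits: MemoryHit[], maxChars = 2000): string {
  const lines: string[] = [];
  let used = 0;
  for (const hit of [...hits].sort((a, b) => b.score - a.score)) {
    const line = `[${hit.source}] ${hit.content}`;
    if (used + line.length > maxChars) break;
    lines.push(line);
    used += line.length;
  }
  return lines.join("\n");
}
```

The budget cap matters: auto-ingest means search can return far more context than your model's prompt window comfortably holds, so trim by relevance before stuffing the prompt.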
What you can do with it (for a non-technical founder)
- "What did I read about X this week?" — query your browser history semantically.
- "What did Amara and I decide about pricing?" — pulls from Slack DMs and Docs.
- "Summarize last week's meetings" — if you use a transcript source, supermemory has them.
- Give a new AI tool instant context — hand over your supermemory API key and any new agent starts with all your context.
- Stop losing good ideas — if you read it, it's in the brain. No manual note-taking.
What CLO adds on top
supermemory is your personal second brain — auto-collected from your tools. Cognition CLO is your team's retention layer — modeling which employees are retaining which institutional concepts. They're orthogonal. Run supermemory for your personal brain. Run CLO for your team's retention.
FAQ
Does supermemory send my data to their cloud?
Managed tier (supermemory.ai): your content flows through their cloud during ingest. Self-hosted OSS: everything stays on your infra. Pick based on your comfort level.
Can I control what gets ingested?
Yes — URL-pattern filters, source-level toggles. Tune filters aggressively to cut noise.
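The filtering idea is simple allow/deny composition. This is not supermemory's actual config format (check the docs for that) — just a sketch of how URL-pattern rules typically combine, with deny winning over allow:

```typescript
// Illustrative only: supermemory's real filter config may look different.
interface IngestFilters {
  deny: RegExp[];  // drop anything matching one of these
  allow: RegExp[]; // if non-empty, keep only URLs matching one of these
}

function shouldIngest(url: string, filters: IngestFilters): boolean {
  if (filters.deny.some((p) => p.test(url))) return false;     // deny wins
  if (filters.allow.length > 0) {
    return filters.allow.some((p) => p.test(url));             // allowlist mode
  }
  return true;                                                 // default: ingest
}
```

A practical starting point is a short denylist (webmail, banking, anything with tokens in the URL) and no allowlist, then tighten as you see what lands in the brain.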
How does it compare to mem0?
mem0 is explicit-write, multi-user-scoped, built for products. supermemory is auto-ingest, personal-first, built for founders' own second brain. Different use cases. See /blog/supermemory-vs-mem0.
What's the managed pricing like?
Free tier is generous; paid tiers scale with ingest volume. Check supermemory.ai for current numbers.
Can I export my data?
Yes. supermemory exports to JSON. Self-hosted users have raw database access.
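Once you have the export, downstream processing is plain JSON work. A sketch, assuming a hypothetical export shape of `{ memories: [{ source, content }] }` — inspect your actual export file first, since the real schema may differ:

```typescript
// The export schema here is an assumption for illustration.
interface ExportedMemory {
  source: string;
  content: string;
}

// Count exported memories per source — a quick sanity check that
// each connected integration is actually feeding the brain.
function countBySource(exportJson: string): Record<string, number> {
  const data = JSON.parse(exportJson) as { memories: ExportedMemory[] };
  const counts: Record<string, number> = {};
  for (const m of data.memories) {
    counts[m.source] = (counts[m.source] ?? 0) + 1;
  }
  return counts;
}
```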
Does it work with Claude Code?
Yes, via the MCP bridge. Claude Code queries supermemory as a standard MCP server.
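MCP servers are registered in the client's config file. The server name, command, and env var below are assumptions for illustration — check supermemory's docs for the real values; this only shows the general shape of an MCP server entry:

```json
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["-y", "supermemory-mcp"],
      "env": { "SUPERMEMORY_API_KEY": "sm_..." }
    }
  }
}
```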
Is the OSS version full-featured?
The core ingest + query is fully open. Some integrations (particular enterprise sources) may be managed-only. Check the repo for the current feature matrix.
Ready to install? Full walkthrough at /memory/tools/supermemory. Comparison with mem0: /blog/supermemory-vs-mem0. Credit to @dhravyashah and the supermemoryai team — star the repo if supermemory becomes your personal second brain.