cognee
Cognee is a lightweight, graph-first memory library with a famously simple API — you can get useful memory working in about 5 lines of code. It sits between mem0 (LLM-managed, vector-first) and graphiti (heavier, Neo4j-based). If you want graph semantics without the infrastructure cost, cognee is the right fit.
Who it's for
- Solo developers who want graph memory without running Neo4j
- Python teams prototyping agent memory
- Anyone who finds mem0 too opaque and graphiti too heavy
What you'll do
cognee is a pip install and a handful of imports. It can use a local SQLite/DuckDB backend, so no external database is required to start. Budget 10 minutes.
Before you start
- Python 3.10+
- An OpenAI or compatible API key
Step-by-step install
1. Install
pip install cognee
2. Minimal script
```python
import asyncio
import cognee

async def main():
    # Add some knowledge
    await cognee.add("Amara is the CX lead at Linea, effective April 15 2026.")
    await cognee.cognify()

    # Query
    results = await cognee.search("who is the CX lead at Linea?")
    print(results)

asyncio.run(main())
```

Tip: The `cognify` step is cognee's graph construction phase; it builds entity/relation structure from the raw text. Run it after batches of adds.

3. Configure storage (optional)
By default cognee stores locally. For production, point it at Postgres, Qdrant, or another backend via environment variables. See the docs.
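As a sketch, backend selection is typically done through environment variables set before cognee is imported. The variable names below are assumptions based on a reading of cognee's docs and may have changed; verify them against the current documentation before relying on them:

```shell
# Assumed variable names -- check cognee's current docs before using.
export LLM_API_KEY="sk-..."          # OpenAI (or compatible) API key
export DB_PROVIDER="postgres"        # relational store for metadata
export VECTOR_DB_PROVIDER="qdrant"   # vector store for embeddings
```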
4. Wrap it as an MCP server
cognee has a community MCP wrapper. Check the repo for the current install path if you want to call it from Claude Code.
Your first 10 minutes
1. Add and cognify 10 facts about your company.
2. Search them. Observe the graph semantics (results include linked entities, not just the matched text).
3. Try adding a document; cognee can ingest and structure larger text blobs.
4. Swap the backend to Postgres when you outgrow local.
5. Add Cognition CLO for retention modeling on top.
Troubleshooting
Cognify is slow or expensive.
Batch your adds before cognifying. Each cognify pass costs LLM tokens; one pass per ~10 adds is a reasonable cadence.
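That cadence can be sketched as a small helper. `cognee.add` and `cognee.cognify` are the real calls, but the batching logic itself is generic, so the sketch below uses stub coroutines (`fake_add`, `fake_cognify` are hypothetical stand-ins, not cognee APIs) to make the pattern runnable:

```python
import asyncio

BATCH_SIZE = 10  # run cognify once per ~10 adds

async def add_in_batches(facts, add, cognify, batch_size=BATCH_SIZE):
    """Add facts one at a time, but run the (expensive) cognify
    pass only once per batch rather than after every add."""
    passes = 0
    for i, fact in enumerate(facts, start=1):
        await add(fact)
        if i % batch_size == 0:
            await cognify()
            passes += 1
    if len(facts) % batch_size:  # flush the final partial batch
        await cognify()
        passes += 1
    return passes

# Stub coroutines standing in for cognee.add / cognee.cognify:
async def fake_add(fact): pass
async def fake_cognify(): pass

passes = asyncio.run(add_in_batches([f"fact {n}" for n in range(25)],
                                    fake_add, fake_cognify))
print(passes)  # 25 facts at batch size 10 -> 3 cognify passes
```

In real use you would pass `cognee.add` and `cognee.cognify` in place of the stubs.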
cognee holds the knowledge. Cognition CLO models retention per employee per concept using a Weibull forgetting curve — so you see decay before it becomes a missed SOP or a failed audit.
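The Weibull forgetting curve mentioned above has the standard form R(t) = exp(-(t/λ)^k). A minimal sketch follows; the shape and scale values are illustrative assumptions, not CLO's actual per-employee, per-concept parameters:

```python
import math

def weibull_retention(t_days: float, scale: float = 30.0, shape: float = 1.2) -> float:
    """Estimated probability a concept is still retained t_days after learning.

    scale (lambda): time at which retention falls to 1/e (~37%)
    shape (k): > 1 means decay accelerates over time; < 1 means it slows
    Both defaults here are illustrative, not CLO's real parameters.
    """
    return math.exp(-((t_days / scale) ** shape))

print(round(weibull_retention(0), 2))   # 1.0 -- fully retained at t=0
print(round(weibull_retention(30), 2))  # 0.37 -- 1/e at the scale time
```

Evaluating this per employee per concept is what turns a static knowledge graph into a decay forecast.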