gbrain
The open-source version of the knowledge brain that the CEO of Y Combinator actually uses. Your team's notes, people, companies, and meetings go into a Git repo of plain Markdown files; an AI agent reads them like you would. If you want a CEO-grade second brain for your whole org and don't mind someone technical standing it up once, this is the one to fork.
Who it's for
- Founder-led companies that want a single shared brain
- Teams tracking lots of people/companies (investors, customers, partners)
- Orgs where the CEO/CoS/ops lead is the primary operator
- Anyone who wants a Git-versioned audit trail of what the org knows
What you'll do
gbrain is the most ambitious option in the registry — it's the same system Garry Tan (CEO of YC) built for himself and open-sourced. It's not hard, but it needs someone on your team who's comfortable with a terminal and a Postgres database. Budget 45–60 minutes the first time.
Before you start
- A machine with git, Node.js 20+, and Docker installed
- About 500MB of disk space for Postgres + pgvector
- An OpenAI API key (or compatible — gbrain uses embeddings for hybrid search)
- Familiarity with a terminal (not programming — just running commands)
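A quick preflight from your terminal confirms the prerequisites are in place before you start (this loop is just a convenience sketch, not part of gbrain):

```shell
# Preflight check: confirm the tools gbrain needs are on your PATH.
for tool in git node docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found ($("$tool" --version 2>/dev/null | head -n 1))"
  else
    echo "$tool: missing - install it before continuing"
  fi
done
```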
Step-by-step install
1. Fork or clone the repo

Go to github.com/garrytan/gbrain and either fork it (if you want your customizations version-controlled separately) or clone it directly. We recommend forking — your content will live here.

```bash
git clone https://github.com/garrytan/gbrain.git
cd gbrain
```
2. Spin up Postgres with pgvector
gbrain uses Postgres with the pgvector extension for hybrid keyword + embedding search. The repo ships a docker-compose.yml that handles this for you.
```bash
docker compose up -d
```
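If you're curious what that file does, a pgvector-enabled Postgres service in Compose generally looks like the sketch below. The repo's own docker-compose.yml is authoritative; the image tag, credentials, and volume name here are illustrative assumptions:

```yaml
# Sketch of a pgvector-enabled Postgres service; names are illustrative.
services:
  db:
    image: pgvector/pgvector:pg16   # Postgres with the vector extension preinstalled
    environment:
      POSTGRES_USER: gbrain
      POSTGRES_PASSWORD: gbrain
      POSTGRES_DB: gbrain
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```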
Tip: If you already have a Postgres you prefer to use, check the .env.example for the connection strings you'd need to set instead.

3. Configure your environment
Copy .env.example to .env and fill in your OPENAI_API_KEY. If you changed any Postgres defaults, set those too. Save.
```bash
cp .env.example .env
# edit .env — at minimum set OPENAI_API_KEY
```
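A forgotten or empty key is the most common misconfiguration at this step. This one-liner confirms the key is set without echoing the secret itself (the grep pattern is just a sketch of the check):

```shell
# Verify OPENAI_API_KEY has a non-empty value in .env, without printing it.
if grep -q '^OPENAI_API_KEY=..*' .env 2>/dev/null; then
  echo "OPENAI_API_KEY is set"
else
  echo "OPENAI_API_KEY is missing or empty"
fi
```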
4. Install dependencies and run migrations
Install Node packages and run the database migrations that set up gbrain's schema.
```bash
npm install
npm run migrate
```
5. Add some content
gbrain expects Markdown files in the content/ directory. You can either dump existing org notes there or start typing. Each file can describe a person, company, meeting, project, or concept. The first line of each file is its title; the rest is its body.
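For example, a person file might look like this. The filename and fields are illustrative; the only convention the text above requires is title first, body after:

```markdown
Jane Doe
Partner at Example Ventures. Met at the March off-site.
Focus areas: AI agents, developer tools.
Last touch: 2025-03-12, intro call; she asked for our Q1 metrics.
```

You might save this as content/people/jane-doe.md (a hypothetical path) so people, companies, and meetings stay in separate subfolders.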
Tip: A great first move is to export the last 30 days of meeting notes from your existing tool (Notion, Obsidian, Granola) into content/meetings/ and let gbrain index them.

6. Index the content
Run the indexer. It reads every Markdown file, splits it into chunks, computes embeddings, and writes everything to Postgres. Re-run this whenever you add or change content.
```bash
npm run index
```
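The chunking half of that pipeline can be pictured as the sketch below. The function name, paragraph-based splitting, and the 1,000-character budget are illustrative assumptions, not gbrain's actual internals:

```javascript
// Split a Markdown file into roughly fixed-size chunks on paragraph
// boundaries, so each embedding covers a coherent span of text.
// maxChars = 1000 is an assumed budget for illustration.
function chunkMarkdown(text, maxChars = 1000) {
  const paragraphs = text.split(/\n{2,}/);
  const chunks = [];
  let current = "";
  for (const p of paragraphs) {
    // Start a new chunk when adding this paragraph would blow the budget.
    if (current && current.length + p.length + 2 > maxChars) {
      chunks.push(current);
      current = p;
    } else {
      current = current ? current + "\n\n" + p : p;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

// A long note splits into multiple chunks:
const note = Array.from({ length: 10 }, (_, i) => `Paragraph ${i} ` + "x".repeat(200)).join("\n\n");
console.log(chunkMarkdown(note).length); // → 3
```

Each chunk would then be sent to the embeddings API and written to Postgres alongside its source file path, which is why re-running the indexer after edits is enough to refresh search.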
7. Connect to Claude Code via MCP
gbrain exposes an MCP server. Add it to your Claude config:
```json
{
  "mcpServers": {
    "gbrain": {
      "command": "node",
      "args": ["/absolute/path/to/gbrain/mcp-server.js"],
      "env": { "DATABASE_URL": "postgres://..." }
    }
  }
}
```

Tip: Use absolute paths, not relative. The MCP runtime may not resolve the working directory you expect.

8. Test it
Restart Claude. Ask it: 'Using gbrain, find all the people we've met with in the last 30 days who work on AI agents.' If it returns a sensible list, you're live.
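If the test prompt fails, one quick terminal check is whether your config entry is even valid JSON. A sketch, using a scratch copy; the script path and connection string are placeholders, not real values:

```shell
# Write a standalone copy of the gbrain MCP entry and confirm it parses.
cat > /tmp/gbrain-mcp.json <<'EOF'
{
  "mcpServers": {
    "gbrain": {
      "command": "node",
      "args": ["/absolute/path/to/gbrain/mcp-server.js"],
      "env": { "DATABASE_URL": "postgres://..." }
    }
  }
}
EOF
# JSON.parse throws on any syntax error, so this prints only if the file is valid.
node -e 'const c = JSON.parse(require("fs").readFileSync("/tmp/gbrain-mcp.json", "utf8")); console.log("servers:", Object.keys(c.mcpServers).join(", "))'
```

Run the same `node -e` one-liner against your real Claude config file to catch a stray comma before restarting.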
Your first 10 minutes
1. Import your last 100 meeting notes. This is the highest-ROI content for a CEO brain.
2. Add a file for each direct report: their goals, projects, last 1:1 notes. Your agent can now do perfect prep for every 1:1.
3. Add entries for your top 20 customers / investors / partners. Each should list status, last touch, open asks.
4. Run `npm run index` after every content push. Automate this via a git hook if you're technical.
5. Wire up Cognition CLO on top — gbrain holds the knowledge, CLO models which pieces your team is about to forget.
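The git-hook automation mentioned above can be as simple as the sketch below, run once from the root of your gbrain clone (a post-commit hook is one reasonable choice; post-merge would also work for pulled changes):

```shell
# Install a post-commit hook that re-indexes after every commit.
mkdir -p .git/hooks
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
npm run index
EOF
chmod +x .git/hooks/post-commit
```

Note that hooks in .git/hooks are local to each clone, so every teammate who pushes content needs to install it themselves.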
Troubleshooting
`npm run index` fails with a pgvector error.
Make sure you ran `docker compose up -d` and the container is healthy (`docker ps`). The pgvector extension needs to be enabled in the database — the migration step handles this, so re-run `npm run migrate`.
Claude can't connect to the gbrain MCP.
Check that (a) the DATABASE_URL in your Claude config is correct, (b) Postgres is actually running, (c) you restarted Claude after editing the config, and (d) you used absolute paths in the config.
Searches return garbage or nothing relevant.
You probably haven't re-indexed after adding content. Run `npm run index` and wait for it to finish.
gbrain holds the knowledge. Cognition CLO models retention per employee per concept using a Weibull forgetting curve — so you see decay before it becomes a missed SOP or a failed audit.