hermes: Nous Research's personal AI agent that grows with you across every app
A plain-English guide to hermes — Nous Research's open-source personal agent with built-in cross-session memory and support for CLI, Telegram, Discord, Slack, and WhatsApp. MIT licensed. 30-minute install.
Short version: hermes is Nous Research's swing at a complete personal AI agent — not a memory layer for Claude, but a full agent runtime with memory, autonomous skill creation, scheduled tasks, and multi-surface support (CLI, Telegram, Discord, Slack, WhatsApp, email). MIT licensed. If you want to replace Claude Desktop with an OSS agent that follows you across every app you use, this is the one. 30-minute install. By Nous Research.
What is hermes?
Most of the tools in the Cognition Memory registry are memory layers — add-ons that give an existing agent (Claude Code, Cursor) persistence. hermes is different. It's a full agent runtime. You install hermes, configure it, and it is your agent. Memory is a first-class feature, but so is multi-surface support, scheduled tasks, autonomous skill creation, and MIT-licensed portability.
Think of it as the OSS response to Claude Desktop — one place where your AI assistant lives, reachable from every app you use, with memory built in.
Who this is for
- Technical users who want a fully-featured personal agent without relying on a single vendor.
- People who interact with AI across many surfaces (not just a desktop app — also on their phone, in Telegram, via email).
- Builders who want to see what a "complete" agent stack looks like in open source.
- Teams standardizing on Nous Research models who want the ecosystem to match.
Skip this if
You've already invested heavily in Claude Code or Cursor and just want to add memory to them. hermes is a replacement, not an add-on. If you want an add-on, look at agentmemory, mcp-memory-service, or memory-mcp.
What problem it solves
Most personal AI setups are locked to a single app. Claude Desktop works on your Mac. ChatGPT works in its own app. Your agent can't reach you in Telegram while you're away from your desk. It can't schedule a reminder. It can't talk to your WhatsApp.
hermes attacks that from a different angle: instead of patching each app, one agent with one memory lives on your computer (or a server) and connects to every surface you care about. You chat with it in Telegram. You chat with it from the terminal. You chat with it in Discord. It remembers everything. It's the same assistant.
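To make "one agent, one memory, many surfaces" concrete, here is a minimal sketch of the routing pattern. This is an illustration of the idea, not hermes's actual internals: every surface feeds the same agent object and the same memory, so context carries over from Telegram to the terminal.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One agent with one shared memory, reachable from any surface.
    Illustrative sketch only; not hermes's real implementation."""
    memory: list = field(default_factory=list)

    def handle(self, surface: str, text: str) -> str:
        # Messages from every surface land in the same memory,
        # so the agent is the same assistant everywhere.
        self.memory.append((surface, text))
        return f"[{surface}] seen {len(self.memory)} messages total"

agent = Agent()
print(agent.handle("telegram", "remind me about the demo"))  # [telegram] seen 1 messages total
print(agent.handle("cli", "what did I ask you earlier?"))    # [cli] seen 2 messages total
```

The key design point is that the surfaces are thin adapters; the state lives in one place.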
How to install it (plain English)
- Install. pip install hermes-agent (needs Python 3.11+).
- Initialize. hermes init — an interactive walkthrough that sets up your LLM provider and first surface.
- Start with the CLI. The easiest surface to verify the agent works before wiring up Telegram or Slack.
- Run it. hermes run.
- Layer on surfaces. Once the CLI works, add one at a time — Telegram, Discord, Slack, WhatsApp, email. Each needs its own bot token or API key.
Full walkthrough: /memory/tools/hermes.
What you can do with it (for a non-technical founder)
If your team is standardizing on OSS AI infrastructure:
- One agent, many surfaces — the same assistant reachable from Slack, Telegram, email, and CLI.
- Scheduled tasks — "Every Monday at 9am, summarize last week's Slack threads."
- Autonomous skill creation — the agent can add new capabilities to itself, persisted across sessions.
- Full data sovereignty — MIT-licensed, self-hostable. Your conversations don't leave your infrastructure.
- Nous model ecosystem — if you're using Hermes LLMs (also from Nous Research), the agent is tuned for them.
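To show what a scheduled task like "every Monday at 9am" actually resolves to, here is a small sketch of the recurrence math. The function name and signature are hypothetical; hermes's own scheduler syntax may differ.

```python
from datetime import datetime, timedelta

def next_weekly_run(now: datetime, weekday: int, hour: int) -> datetime:
    """Next occurrence of the given weekday/hour (0 = Monday).
    Hypothetical helper illustrating 'every Monday at 9am'."""
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    # Roll forward to the requested weekday...
    target += timedelta(days=(weekday - now.weekday()) % 7)
    # ...and if that slot has already passed today, jump a full week.
    if target <= now:
        target += timedelta(days=7)
    return target

now = datetime(2024, 5, 8, 12, 0)   # a Wednesday at noon
print(next_weekly_run(now, 0, 9))   # prints 2024-05-13 09:00:00 (next Monday)
```

Once a slot like this is computed, the runtime only needs a loop that sleeps until the next run and fires the task.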
What CLO adds on top
hermes owns your personal agent + memory. Cognition CLO owns your team's retention layer. Both fit together: hermes handles "what does my personal assistant remember about me?"; CLO handles "what are the people in my org actually retaining?". Different scope, complementary roles.
FAQ
Is hermes the same as the Hermes LLMs?
Related but different. The Hermes LLMs are Nous Research's open-weight models. hermes-agent is the open-source agent runtime that can use Hermes LLMs (or OpenAI, Claude, local models). You don't need to use Hermes LLMs to use the agent.
Can I self-host it on a home server?
Yes. The typical setup is a VPS or a Raspberry Pi — it runs anywhere Python 3.11 runs. Keep the machine on for 24/7 agent reachability.
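For 24/7 reachability on a Linux VPS or Raspberry Pi, a process supervisor keeps the agent alive across crashes and reboots. Here is a hypothetical systemd unit as a sketch; the binary path, service user, and file location are assumptions for your setup, not hermes documentation.

```ini
# /etc/systemd/system/hermes.service  (hypothetical path, user, and binary location)
[Unit]
Description=hermes personal agent
After=network-online.target
Wants=network-online.target

[Service]
User=hermes
ExecStart=/usr/local/bin/hermes run
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With a unit like this, systemctl enable --now hermes starts the agent immediately and again on every boot.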
How is it different from hiring a virtual assistant?
hermes is an AI agent, not a human. It scales trivially, costs pennies per day, and never sleeps. The tradeoff: for anything that requires real human judgement, you still need a person.
What LLM should I use with it?
Any. hermes speaks OpenAI API format and supports Claude, local models via Ollama, and Nous Research's own Hermes LLMs. Pick based on price/latency/quality.
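"Speaks OpenAI API format" means the agent sends chat-completion requests in the shape below, so any backend exposing that shape is interchangeable. A sketch under assumptions: the base URL points at Ollama's local OpenAI-compatible endpoint (its default port is 11434), and the model name is an example, not a hermes default.

```python
import json

def chat_request(base_url: str, model: str, user_text: str) -> dict:
    """Build an OpenAI-format chat completion request.
    Any backend speaking this shape (OpenAI, Ollama's /v1
    endpoint, a local proxy) can be swapped in."""
    return {
        "url": f"{base_url}/chat/completions",
        "payload": {
            "model": model,
            "messages": [{"role": "user", "content": user_text}],
        },
    }

# A local Ollama server is one zero-outbound-traffic option;
# "hermes3" here is an illustrative model name.
req = chat_request("http://localhost:11434/v1", "hermes3", "hello")
print(json.dumps(req["payload"], indent=2))
```

Switching providers then comes down to changing the base URL and model string, which is why the choice reduces to price, latency, and quality.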
Does it integrate with Claude Code?
Not directly — hermes is a standalone agent. If you want Claude Code-specific memory, use agentmemory or mcp-memory-service instead.
What about privacy?
MIT licensed, self-hostable. Your data goes only where you send it. Configure a local LLM if you want zero outbound traffic for inference.
Ready to install? Full walkthrough at /memory/tools/hermes. Credit to @NousResearch for the project — star the repo if hermes shapes how you think about personal agents.