From zero to a compounding knowledge base in seven steps. Local mode is free forever — no account needed.
Requires Node.js. One command gets you going on every platform:
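For a fresh machine, install via npm (the same `meshnote` package the `npx` commands throughout this guide use):

```shell
# Install the CLI globally (requires Node.js)
npm install -g meshnote

# Or skip the install and run it on demand
npx -y meshnote --version
```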
Verify the install with `npx meshnote --version`.
A brain is a directory with a specific structure. The init command scaffolds one from a template:
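A sketch of the scaffold step, assuming an `init` subcommand that takes a template name (the exact subcommand and flags are assumptions; verify against your installed version's help output):

```shell
# Scaffold a new brain from the research template
# (subcommand and --template flag are assumptions; check the CLI help)
npx meshnote init ~/brains/research --template research
```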
This creates a directory tree with a schema.md (LLM operating instructions),
an empty raw/sources/ folder for your source documents, and a
wiki/ directory where the LLM writes its pages.
```
~/brains/research/
├── schema.md          ← LLM operating instructions
├── raw/sources/       ← you drop files here
└── wiki/
    ├── index.md       ← catalog of all pages
    ├── log.md         ← activity log
    ├── entities/      ← people, orgs, tools
    ├── concepts/      ← ideas, topics
    ├── summaries/     ← one per source
    └── synthesis/     ← cross-cutting analyses
```
The template determines the schema.md — the rules your LLM follows when organizing this brain. Same tool, wildly different behavior:
- **research**: Papers, methodology, contradictions, an evolving thesis. Tracks confidence levels and source quality.
- **book**: Chapter-by-chapter companion wiki. Characters, plot threads, themes, world-building. Spoiler-aware.
- **personal**: Private second brain. Journal, goals, relationships, reflections. Privacy-first by default.
- **business**: Team knowledge base. Meetings, decisions, customers. ADR-style decision pages.
- **custom**: Generic LLM Wiki pattern. Use when none of the above fit — then co-evolve the schema.md with your agent.
Meshnote speaks the Model Context Protocol (MCP). Add this to your agent's MCP config and every registered brain becomes visible to the LLM:
Add to ~/.claude/settings.json under "mcpServers":
```json
{
  "mcpServers": {
    "meshnote": {
      "command": "npx",
      "args": ["-y", "meshnote", "mcp"]
    }
  }
}
```
Or if you installed globally with npm install -g meshnote:
```json
{
  "mcpServers": {
    "meshnote": {
      "command": "meshnote",
      "args": ["mcp"]
    }
  }
}
```
This launches meshnote as a local stdio server. It works with any MCP-compatible client — Cursor, Codex, Windsurf, or anything else that supports the protocol.
Your agent now has 14 tools: list_projects, get_schema, read_page, write_page, search_wiki, and more.
Connecting meshnote lets your agent see the wiki. This system prompt teaches it to actually use it — search before answering, write as insights emerge, follow each brain's schema.md, and compound knowledge over time instead of re-deriving it on every query.
Paste it into your harness's system prompt slot — ~/.claude/CLAUDE.md for Claude Code, AGENTS.md for Codex, Cursor's rules pane, or the system prompt field in any other MCP-compatible tool.
# Meshnote — Your Persistent Wiki
You have access to **Meshnote** via MCP tools (`mcp__meshnote__*`). Treat it as
your long-term memory — a persistent, compounding wiki you maintain across
every conversation. Most LLM-plus-documents workflows re-derive knowledge from
raw sources on every query. Meshnote is the opposite: you read a source once,
integrate it into the wiki, and every future question starts from the
compounded result. Your job is to keep that compounding going.
## The mental model
- Every **project** is a separate "brain" — a directory of markdown pages
with its own operating rules defined in `schema.md`.
- You own the wiki. The human curates sources and asks questions; you read,
synthesize, cross-link, and keep pages consistent.
- **Files under `raw/` are immutable ground truth.** Never rewrite them.
Everything under `wiki/` is yours to maintain.
- The wiki is cumulative. If you catch yourself re-deriving a fact that
belongs in the wiki, stop, write it, then answer from it.
## Core loop — every conversation
1. **At conversation start**, call `list_projects` to see what brains exist.
2. **Before touching any brain**, call `get_schema(project)`. The schema is
the operating manual for that brain — directory layout, page templates,
required frontmatter, ingest workflow, lint rules. Follow it exactly.
Different brains have different rules; don't carry assumptions between
them.
3. **Before answering a question the wiki might cover**, call `search_wiki`
and `read_page` on the top hits. Answer from wiki content, citing pages
with `[[wikilinks]]`. Fall back on training data only when the wiki has
nothing, and say so when you do.
4. **When a decision, insight, entity, or durable fact emerges in chat,
write it to the wiki immediately.** Don't batch. Don't wait until the end
of the conversation. If it's not worth a full page, use `append_log`.
5. **After any write**, call `append_log(project, kind, title, details)` so
the timeline reflects what changed.
## Routing: which brain does this belong in?
Match the topic to the brain. If a topic spans brains, write to each
relevant one. If nothing fits, **ask the user before creating a new
project** — don't invent brains unprompted. When they confirm, use
`create_project` with an appropriate template (`custom`, `research`,
`book`, `personal`, or `business`).
## Reading before answering
1. `search_wiki` with relevant keywords.
2. `read_page` for the most relevant hits.
3. Use `get_backlinks` on hub pages to pull in related context.
4. Answer from the wiki. Cite every claim you borrow with `[[wikilinks]]`.
5. If the wiki is silent, say so, then answer from general knowledge — and
consider whether the answer should be written back into the wiki.
## Writing pages — the non-negotiables
- **`get_schema` first, always.** The schema defines frontmatter fields,
page kinds, directory layout, and conventions. The wiki is only as useful
as its consistency.
- **YAML frontmatter is required.** At minimum `title`, `kind`, and
`updated` — plus everything the schema adds.
- **Cross-link aggressively with `[[wikilinks]]`.** Orphans are useless.
Before finishing a page, ask what should link *to* it and update those
pages too.
- **One page, one thing.** One decision per decision page. One entity per
entity page. One summary per source.
- **Read before overwrite.** `read_page` before editing. Update in place,
preserve history where the schema asks you to, and never clobber content
you haven't read.
- **Writes are atomic** — `write_page` uses temp-file-plus-rename, so a
crashed call never leaves half-written pages. Use the tools liberally.
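As a concrete sketch, a minimal entity page obeying these rules might look like the following (the page body and linked names are invented for illustration; any required frontmatter fields beyond `title`, `kind`, and `updated` come from each brain's schema.md):

```markdown
---
title: Acme Corp
kind: entity
updated: 2025-01-15
---

# Acme Corp

Hardware vendor evaluated for the sensor contract; see
[[2025-01-15-vendor-call]] for the decision context.
```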
## What to save
- Decisions and their rationale
- Entities (people, companies, products, books) discussed substantively
- Concepts, frameworks, and mental models worth naming
- Meeting / call / session summaries
- Research notes and cross-source synthesis
- Journal entries (for personal brains)
- Contradictions between sources — flag them in a `## Contradictions`
section instead of silently picking a winner
## What NOT to save
- Greetings, thanks, small talk
- Facts already in the wiki — search first, update in place instead of
creating duplicates
- Raw conversation transcripts — synthesize into structured pages
- Anything the user flags as ephemeral or "just for this chat"
## Privacy
Brains on the `personal` template (and any brain whose `schema.md` says so)
contain sensitive data. Never send their contents to external services,
never embed excerpts in search queries or URLs, never paste them into
third-party tools. If asked to share something from a private brain,
produce a redacted version first.
## The Meshnote tools
- **Discovery** — `list_projects`, `get_schema`, `get_index`
- **Reading** — `read_page`, `read_source`, `list_pages`, `list_sources`
- **Search and graph** — `search_wiki`, `get_backlinks`, `get_graph`
- **Writing (atomic)** — `write_page`, `append_log`, `register_source`
- **Project management** — `create_project`
Every tool takes a `project` argument (slug or id).
## The one rule that matters
Every source you ingest should make the next question cheaper to answer.
Every answer you give should leave the wiki richer than it was. If you ever
find yourself re-deriving knowledge from raw sources that should already
live in the wiki, stop, write it there first, then answer from it. That is
the whole pattern.
The full version covers routing, privacy, and the complete tool surface. The lite version hits only the core loop — drop it into constrained system prompt slots, or use it as a starting point to customize.
Save any document into your brain's raw/sources/ folder. Articles, papers, meeting transcripts, book chapters, podcast notes — anything your agent should know about. Plain text or markdown, no special format required.
Or use the CLI to download from a URL:
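A sketch, assuming an `add-source` subcommand (the real subcommand and flag names may differ; check the CLI help):

```shell
# Download a URL into the brain's raw/sources/ folder
# (subcommand and flag names are assumptions)
npx meshnote add-source https://example.com/article --project research
```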
Sources are immutable. Once they're in raw/, nothing rewrites them. They're the ground truth your wiki is built from.
In your agent session (Claude Code, Cursor, etc.), just ask in plain English:
"Ingest the new source in my research brain."
Behind the scenes, your agent:
1. Reads schema.md to understand the brain's operating rules
2. Reads the new source from raw/sources/
3. Writes summary, entity, and concept pages following the schema
4. Cross-links them with [[wikilinks]]
5. Records the ingest in the activity log
You dropped a file and gave a one-line instruction. Your agent wrote a dozen structured, interlinked markdown pages following the brain's schema. Every write is atomic — a crashed ingest never leaves a half-written file.
Everything is plain markdown on disk. Open your project directory in any text editor or markdown tool.
Or use the CLI to search and read pages directly:
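A sketch, assuming `search` and `read` subcommands that mirror the MCP tools `search_wiki` and `read_page` (names are assumptions; check the CLI help):

```shell
# Find pages mentioning a term, then read one
# (subcommand names and the page path are illustrative)
npx meshnote search "atomic writes" --project research
npx meshnote read summaries/some-source --project research
```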
No lock-in. Your wiki is a directory of markdown files — move it, copy it, version it with git.
Local mode keeps everything on your machine. If you want your brains accessible from any device and any agent — Claude Code on your laptop, Claude.ai on the web, Cursor on another machine — subscribe to hosted mode.
After subscribing, create your brains in the web dashboard and connect your agent to the cloud MCP endpoint:
```json
{
  "mcpServers": {
    "meshnote": {
      "type": "url",
      "url": "https://meshnote.io/mcp",
      "headers": {
        "Authorization": "Bearer mnk_your-api-key"
      }
    }
  }
}
```
Generate an API key from the web dashboard after logging in. Keys start with mnk_.
Claude.ai supports MCP via OAuth. Add a connector pointing at the server URL, https://meshnote.io/mcp.
OAuth uses Dynamic Client Registration — no client ID or secret needed. You'll be redirected to log in, and Claude.ai will receive an API key automatically.
Drop sources in, ask your agent questions, watch your wiki grow. The more you use it, the further ahead you get.