llmwiki documentation
A local, stdlib-only Python knowledge base built from your AI-coding-agent session transcripts. Install in five minutes, then keep every session searchable, interlinked, and offline. No database, no account, no cloud.
Pick your mode
llmwiki runs in two interchangeable modes. Pick one and start — you can switch later.
| | API mode | Agent mode |
|---|---|---|
| Who calls the LLM | Python + Anthropic API | Your running Claude Code / Codex CLI |
| API key | Yes (`ANTHROPIC_API_KEY`) | No |
| Cost | Per token (with cache) | Included in your agent subscription |
| Concurrency | Batch + parallel | Serial |
| Best for | Large corpora, cron, CI | Interactive, exploratory |
→ Read the full comparison before picking.
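Because the only hard requirement for API mode is an `ANTHROPIC_API_KEY` in the environment, a wrapper script can decide between the two modes at runtime. A minimal sketch of that check — the `choose_mode` helper is hypothetical, for illustration only, not part of llmwiki:

```python
import os

def choose_mode(env=os.environ):
    """Pick 'api' when an Anthropic key is configured, else 'agent'.

    Hypothetical helper: llmwiki itself does not ship this function;
    it just illustrates the one environment difference between modes.
    """
    return "api" if env.get("ANTHROPIC_API_KEY") else "agent"

# No key set: fall back to agent mode (serial, no per-token cost).
print(choose_mode({}))
# Key present: API mode (batch + parallel).
print(choose_mode({"ANTHROPIC_API_KEY": "sk-..."}))
```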
Getting started — 5 minutes
| # | Tutorial | Time |
|---|---|---|
| 01 | Installation — macOS / Linux / Windows / Docker | 5 min |
| 02 | First sync — from install to a browsable site | 5 min |
If it's not working in 10 minutes, open an issue — that's a bug in the docs.
Use with your agent
- Claude Code — slash commands, session metadata, `/wiki-ingest`, `/wiki-sync`, `/wiki-query`.
- Codex CLI — sync from `~/.codex/sessions/`, live-session filtering.
- Adapter reference: Claude Code · Codex CLI · Cursor · Gemini CLI · Copilot · Obsidian · OpenCode / OpenClaw · ChatGPT.
Use it locally
- Query your wiki — `/wiki-query`, `/wiki-graph`, `/wiki-lint`, `/wiki-candidates`, `/wiki-serve`.
- Bring your existing Obsidian / Logseq vault — `llmwiki sync --vault <path>`, non-destructive by default.
- Example workflows — four real, end-to-end workflows.
Deploy
| Target | Guide |
|---|---|
| GitHub Pages | deploy/github-pages.md |
| GitLab Pages | deploy/gitlab-pages.md |
| Docker / GHCR | deploy/docker.md |
| Vercel / Netlify | deploy/vercel-netlify.md |
| PyPI publishing | deploy/pypi-publishing.md |
| Homebrew tap | deploy/homebrew-setup.md |
Reference
- CLI reference — every `python3 -m llmwiki <subcommand>` with every flag and realistic examples.
- Slash commands reference — every `/wiki-*` command used from Claude Code / Codex.
- UI reference — every screen on the compiled site, how to reach it, what it shows.
- Architecture — three layers (`raw/` / `wiki/` / `site/`).
- Configuration · Full configuration reference.
- Cache tiers — L1 / L2 / L3 / L4 frontmatter.
- Prompt caching + batch API.
- Reader API contract — stable shapes of every file `llmwiki build` writes.
- Reader-first article shell — opt-in Wikipedia-style layout.
- Entity schema — structured model-profile frontmatter.
- Adapter authoring — write an adapter for a new agent.
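The three-layer architecture in the list above can be pictured as a directory pipeline: transcripts land in `raw/`, compiled markdown in `wiki/`, and the rendered static site in `site/`. A stdlib-only sketch of that layout — the layer names come from the Architecture entry, but the file names inside each layer are made up for illustration:

```python
import tempfile
from pathlib import Path

# Sketch of the three-layer layout: raw/ -> wiki/ -> site/.
root = Path(tempfile.mkdtemp())
(root / "raw").mkdir()    # ingested JSONL transcripts
(root / "wiki").mkdir()   # compiled markdown articles
(root / "site").mkdir()   # rendered static site

# Hypothetical file names, only to show which layer holds what.
(root / "raw" / "session-001.jsonl").write_text('{"role": "user"}\n')
(root / "wiki" / "session-001.md").write_text("# Session 001\n")
(root / "site" / "index.html").write_text("<html></html>\n")

layers = sorted(p.name for p in root.iterdir())
print(layers)  # ['raw', 'site', 'wiki']
```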
Operate
- Command cheatsheet — every slash + CLI command on one page.
- Upgrade guide — what changes between releases, migrations, opt-ins.
- FAQ · Troubleshooting · Privacy.
- Accessibility (WCAG 2.1 AA).
- Benchmarks · Competitor landscape.
- Maintainers — governance docs at `docs/maintainers/`.
Contributing
- Style guide — how to write docs that match this site's voice.
- Adapter authoring — ship a new agent adapter.
- Architecture — understand the three-layer model before changing code.
- Roadmap · Public roadmap.
What llmwiki is not
It's not a vector database, not a RAG framework, not a hosted service. It
compiles markdown from JSONL transcripts, writes a static site, and stays
out of the way. The only third-party runtime dependency is markdown.
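The compile step described above — JSONL transcripts in, markdown out — can be sketched with nothing but the stdlib. This is an illustrative reimplementation of the idea, not llmwiki's actual code; the record fields (`role`, `content`) are assumptions about a typical transcript shape:

```python
import json

def transcript_to_markdown(jsonl_text: str) -> str:
    """Turn a JSONL session transcript into a markdown article.

    Illustrative sketch only: field names are assumed, and llmwiki's
    real compiler does far more (linking, frontmatter, caching).
    """
    lines = ["# Session\n"]
    for raw in jsonl_text.splitlines():
        if not raw.strip():
            continue  # skip blank lines between records
        record = json.loads(raw)
        role = record.get("role", "unknown")
        content = record.get("content", "")
        lines.append(f"**{role}:** {content}\n")
    return "\n".join(lines)

sample = (
    '{"role": "user", "content": "How do I sync?"}\n'
    '{"role": "assistant", "content": "Run /wiki-sync."}'
)
print(transcript_to_markdown(sample))
```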
What's new
See the CHANGELOG. Latest tagged release: v1.3.82.
The version above is substituted from `llmwiki/__init__.py:__version__` at build time, so this hub stays current on every release without a manual edit (#457).