llmwiki documentation


A local, stdlib-only Python knowledge base built from your AI-coding-agent session transcripts. Install in five minutes, then keep every session searchable, interlinked, and offline. No database, no account, no cloud.


Pick your mode

llmwiki runs in two interchangeable modes. Pick one and start; you can switch later.

|                   | API mode               | Agent mode                            |
| ----------------- | ---------------------- | ------------------------------------- |
| Who calls the LLM | Python + Anthropic API | Your running Claude Code / Codex CLI  |
| API key           | Yes (`ANTHROPIC_API_KEY`) | No                                 |
| Cost              | Per token (with cache) | Included in your agent subscription   |
| Concurrency       | Batch + parallel       | Serial                                |
| Best for          | Large corpora, cron, CI | Interactive, exploratory             |
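The practical difference at startup is just the key check: API mode needs `ANTHROPIC_API_KEY` in the environment, Agent mode does not. A minimal sketch of that decision (the `pick_mode` helper is illustrative, not part of llmwiki's API):

```python
import os

def pick_mode(env):
    # API mode requires ANTHROPIC_API_KEY; otherwise fall back to Agent mode,
    # where your running Claude Code / Codex CLI session does the LLM calls.
    return "api" if env.get("ANTHROPIC_API_KEY") else "agent"

print(pick_mode({"ANTHROPIC_API_KEY": "sk-ant-example"}))  # api
print(pick_mode(dict(os.environ)))  # whatever your shell is set up for
```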

Read the full comparison before picking.


Getting started — 5 minutes

| #  | Tutorial                                        | Time  |
| -- | ----------------------------------------------- | ----- |
| 01 | Installation — macOS / Linux / Windows / Docker | 5 min |
| 02 | First sync — from install to a browsable site   | 5 min |

If it's not working in 10 minutes, open an issue — that's a bug in the docs.


Use with your agent


Use it locally


Deploy

| Target           | Guide                      |
| ---------------- | -------------------------- |
| GitHub Pages     | `deploy/github-pages.md`   |
| GitLab Pages     | `deploy/gitlab-pages.md`   |
| Docker / GHCR    | `deploy/docker.md`         |
| Vercel / Netlify | `deploy/vercel-netlify.md` |
| PyPI publishing  | `deploy/pypi-publishing.md` |
| Homebrew tap     | `deploy/homebrew-setup.md` |

Reference


Operate


Contributing


What llmwiki is not

It's not a vector database, not a RAG framework, not a hosted service. It compiles markdown from JSONL transcripts, writes a static site, and stays out of the way. The only third-party runtime dependency is `markdown`.
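The "compiles markdown from JSONL transcripts" step can be pictured as a plain stdlib loop. This is an illustrative sketch only: real transcript schemas vary by agent, and the `role`/`content` field names plus the `transcript_to_markdown` helper are assumptions, not llmwiki's actual internals.

```python
import json

def transcript_to_markdown(jsonl_text):
    # One JSON object per line; each becomes a small markdown section.
    sections = []
    for raw in jsonl_text.splitlines():
        if not raw.strip():
            continue  # tolerate blank lines in the transcript
        event = json.loads(raw)
        sections.append(f"### {event['role']}\n\n{event['content']}\n")
    return "\n".join(sections)

sample = (
    '{"role": "user", "content": "Fix the failing test."}\n'
    '{"role": "assistant", "content": "Done - patched test_sync."}'
)
print(transcript_to_markdown(sample))
```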

What's new

See the CHANGELOG. Latest tagged release: v1.3.82.

The version above is substituted from `llmwiki/__init__.py:__version__` at build time, so this hub stays current on every release without a manual edit (#457).
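The single-source idea behind that substitution can be sketched as a regex read of `__version__` from the module source; the real build script may work differently, and the variable names here are just for illustration.

```python
import re

# Stand-in for the contents of llmwiki/__init__.py.
init_source = '__version__ = "1.3.82"\n'

# Pull the version string out without importing the package.
match = re.search(r'__version__\s*=\s*"([^"]+)"', init_source)
version = match.group(1)

# Substitute it into the docs hub text at build time.
print(f"Latest tagged release: v{version}.")  # Latest tagged release: v1.3.82.
```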