Getting started

5-minute quickstart. By the end you'll have a browsable wiki of every coding-agent session you've ever run.

Prerequisites

A working Python 3 install. Beyond that, nothing: llmwiki auto-detects whichever agents you have installed, with no configuration needed.

That's it. No npm, no brew, no database, no account.

Install

macOS / Linux

git clone https://github.com/Pratiyush/llm-wiki.git
cd llm-wiki
./setup.sh

Windows

git clone https://github.com/Pratiyush/llm-wiki.git
cd llm-wiki
setup.bat

setup.sh / setup.bat does the following, idempotently:

  1. Installs markdown (the only runtime dependency) via pip install --user. Syntax highlighting runs in the browser via highlight.js loaded from a CDN, so the build itself needs nothing beyond markdown and the standard library.
  2. Scaffolds raw/, wiki/, site/ directories
  3. Runs llmwiki adapters to show which agents are detected
  4. Does a dry-run of the first sync so you see what would be converted
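
Idempotent means re-running setup is always safe. The scaffolding step (step 2), for instance, boils down to something like this sketch (a simplified illustration, not the real setup.sh):

```shell
# mkdir -p creates directories only if they are missing, so a second
# run leaves an existing tree untouched instead of failing.
mkdir -p raw/sessions wiki site
mkdir -p raw/sessions wiki site   # second run: no error, no change
```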

Checking detected agents

After install, run llmwiki adapters to see which session stores were found:

python3 -m llmwiki adapters

Example output:

Registered adapters:
  claude_code       available: yes  (Claude Code — reads ~/.claude/projects/*/*.jsonl)
  codex_cli         available: yes  (Codex CLI — reads ~/.codex/sessions/**/*.jsonl)
  copilot_chat      available: no   (GitHub Copilot Chat — reads VS Code workspaceStorage chatSessions)
  copilot_cli       available: no   (GitHub Copilot CLI — reads ~/.copilot/session-state/*/events.jsonl)
  cursor            available: yes  (Cursor IDE — reads chat history)
  gemini_cli        available: no   (Gemini CLI — reads ~/.gemini/ session history)
  obsidian          available: no   (Obsidian vault)

The PDF adapter was removed in the simplification sweep — llmwiki adapters no longer lists it.

Any adapter marked available: yes will be included when you run llmwiki sync. See multi-agent-setup.md for details on configuring individual agents.
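
Under the hood, "available" is essentially a question of whether the agent's session store exists on disk. A minimal sketch of that check (the check_store helper is hypothetical, not llmwiki's actual code; paths as in the listing above):

```shell
# Hypothetical helper: an adapter counts as "available" when its
# session store directory exists.
check_store() {
  if [ -d "$1" ]; then echo "available: yes"; else echo "available: no"; fi
}

check_store "$HOME/.claude/projects"   # yes on machines with Claude Code
check_store "$HOME/.codex/sessions"    # yes on machines with Codex CLI
```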

Three commands after install

./sync.sh        # pull new sessions from your agent store → raw/sessions/<project>/*.md
./build.sh       # compile raw/ + wiki/ → site/
./serve.sh       # serve site/ at http://127.0.0.1:8765/

Open http://127.0.0.1:8765/ and click around.

Where your data ends up

llm-wiki/
├── raw/sessions/             # [gitignored] converted transcripts
│   ├── ai-newsletter/
│   │   ├── 2026-04-04-<slug>.md
│   │   └── ...
│   └── <other-project>/
├── wiki/                     # [gitignored] LLM-maintained wiki pages
│   ├── index.md
│   ├── log.md
│   ├── overview.md
│   ├── sources/
│   ├── entities/
│   └── concepts/
└── site/                     # [gitignored] generated static HTML
    ├── index.html
    ├── style.css
    ├── script.js
    ├── search-index.json
    ├── projects/
    └── sessions/

Everything under raw/, wiki/, and site/ stays local. It is never committed and never sent anywhere.
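
The 2026-04-04-<slug>.md names follow a date-plus-slug convention. A hypothetical sketch of how such a name could be derived (llmwiki's actual slug rules may differ):

```shell
# Lowercase the session title, collapse non-alphanumeric runs to
# hyphens, and trim stray hyphens from the ends.
title="Fix RSS parser!"
slug=$(printf '%s' "$title" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g')
echo "2026-04-04-$slug.md"   # → 2026-04-04-fix-rss-parser.md
```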

Building the wiki (Karpathy layer 2)

The sync step populates raw/sessions/ with markdown. To build the actual wiki on top of that — wiki/sources/, wiki/entities/, wiki/concepts/, linked by [[wikilinks]] — you need an LLM in the loop. That's where Claude Code (or any supported agent) comes in.

Inside a Claude Code session at the llm-wiki repo root:

/wiki-ingest raw/sessions/ai-newsletter/

The agent reads the source markdowns, writes summary pages, cross-links entities, and updates wiki/index.md. See CLAUDE.md for the full Ingest Workflow.

Then re-run ./build.sh to get the compiled wiki into the HTML site.

Auto-sync on session start (optional)

To make sync happen automatically every time you start Claude Code, add a SessionStart hook to ~/.claude/settings.json:

{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "(python3 /absolute/path/to/llm-wiki/llmwiki/convert.py > /tmp/llmwiki-sync.log 2>&1 &) ; exit 0"
          }
        ]
      }
    ]
  }
}

The ( ... &) ; exit 0 pattern backgrounds the sync and ensures it never blocks Claude Code from starting.
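
You can see the idiom's effect in isolation with a stand-in for the sync command:

```shell
# A five-second "sync" stand-in. Backgrounding it inside a subshell
# means the outer shell returns immediately rather than waiting.
start=$(date +%s)
(sleep 5 > /tmp/llmwiki-sync.log 2>&1 &)
end=$(date +%s)
echo "blocked for $((end - start))s"
```

The echo reports (near-)zero elapsed seconds even though the sleep is still running in the background; the trailing ; exit 0 in the hook then guarantees a clean exit code on top of that.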

Next steps