
CLI reference

Every python3 -m llmwiki <subcommand> — with every flag, realistic examples, and expected output. If a command isn't listed here it isn't shipping. This page is generated against the live argparse tree, so adding a flag without documenting it will fail the guardrail test.

Global flags: -h / --help on every command, --version at the root.


Top-level

python3 -m llmwiki --version    # → llmwiki <version>
python3 -m llmwiki --help       # list every subcommand
python3 -m llmwiki              # same as --help

The shorter alias llmwiki works too once the package is installed (pip install llm-notebook or via Homebrew — see deploy/pypi-publishing.md / deploy/homebrew-setup.md).


init — scaffold raw/, wiki/, and site/

Creates the three data directories + seeds nine navigation files inside wiki/.

python3 -m llmwiki init

Flags: none.

Expected output:

  raw/sessions/
  wiki/sources/
  wiki/entities/
  wiki/concepts/
  wiki/syntheses/
  site/
  seeded wiki/dashboard.md
  seeded wiki/index.md
  ...

Idempotent. Safe to re-run — it never overwrites files that exist.
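
The never-overwrite behaviour can be sketched as follows (a hypothetical re-implementation for illustration, not the shipped code; directory and seed-file names are taken from the output above):

```python
from pathlib import Path

DIRS = ["raw/sessions", "wiki/sources", "wiki/entities",
        "wiki/concepts", "wiki/syntheses", "site"]

def init(root: Path) -> list[str]:
    """Create the scaffold; report what was seeded. Safe to re-run."""
    created = []
    for d in DIRS:
        (root / d).mkdir(parents=True, exist_ok=True)   # no-op if present
    for name in ["dashboard.md", "index.md"]:
        page = root / "wiki" / name
        if not page.exists():                           # never overwrite
            page.write_text(f"# {name.removesuffix('.md')}\n")
            created.append(f"seeded wiki/{name}")
    return created
```

Running it a second time creates nothing and touches nothing, which is the property the real command guarantees.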


sync — convert .jsonl sessions to markdown

The workhorse. Walks every configured adapter, converts new sessions into raw/sessions/, then (by default) auto-builds and auto-lints.

python3 -m llmwiki sync
python3 -m llmwiki sync --since 2026-04-01 --project llm-wiki
python3 -m llmwiki sync --adapter claude_code codex_cli
python3 -m llmwiki sync --no-auto-build --no-auto-lint
python3 -m llmwiki sync --vault "~/Documents/Obsidian Vault"
python3 -m llmwiki sync --vault ~/my-vault --allow-overwrite
python3 -m llmwiki sync --force

Flags

Flag What
--adapter NAME [NAME ...] Limit to specific adapters. Default: every adapter with a session store on disk.
--since YYYY-MM-DD Only sessions on/after this date (e.g. --since 2026-04-01).
--project SUBSTRING Filter by project-slug substring.
--include-current Include sessions < 60 min old (default skips live ones).
--force Ignore the mtime state file, reconvert everything.
--auto-build / --no-auto-build Rebuild site/ after sync (default: on).
--auto-lint / --no-auto-lint Run lint after sync (default: on).
--vault PATH Vault-overlay mode — write new pages inside the given Obsidian / Logseq vault instead of wiki/. See guides/existing-vault.md.
--allow-overwrite With --vault: allow clobbering existing vault pages (default: refuse, append under ## Connections instead).
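
The interaction between the mtime state file and --force can be illustrated with a minimal sketch (hypothetical; the real state-file format is not documented here). The idea: convert a session only when its mtime is newer than the recorded one, unless forced.

```python
import json
from pathlib import Path

def sessions_to_convert(sessions: dict[str, float],
                        state_file: Path, force: bool = False) -> list[str]:
    """sessions maps path -> mtime; return paths needing (re)conversion."""
    state = {}
    if state_file.exists() and not force:
        state = json.loads(state_file.read_text())
    todo = [p for p, mtime in sessions.items()
            if force or mtime > state.get(p, 0.0)]
    # record the latest mtimes so the next sync skips these sessions
    state_file.write_text(json.dumps({**state, **sessions}))
    return todo
```

With --force the recorded mtimes are ignored and everything is reconverted, matching the flag description above.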

Expected output (typical)

==> claude_code: 3 new sessions since last sync
✓ wrote 3 pages under raw/sessions/
✓ ingested into wiki/sources/ (2 new entities, 1 new concept)
✓ auto-build: site/ rebuilt (690 HTML files)
✓ auto-lint: 28 issues: 0 errors, 22 warnings, 6 info



build — compile the static HTML site

Turns wiki/ markdown into site/ HTML.

python3 -m llmwiki build
python3 -m llmwiki build --out ~/public_html
python3 -m llmwiki build --search-mode tree
python3 -m llmwiki build --synthesize --claude /usr/local/bin/claude
python3 -m llmwiki build --vault ~/my-vault --out ~/site

Flags

Flag What
--out PATH Output directory. Default: ./site/.
--synthesize Call the claude CLI for overview synthesis (experimental).
--claude PATH Path to the claude binary. Default: /usr/local/bin/claude.
--search-mode {auto,tree,flat} Search routing mode (#53). auto picks tree vs flat from heading depth; tree / flat force the mode. Default: auto.
--vault PATH Vault-overlay mode — build from an existing Obsidian / Logseq vault. Output still lands at --out.

Expected output (final lines)

  wrote search-index.json (7 KB meta) + 30 chunks (904 KB total) · tree mode · 64% deep pages
  wrote 7 AI-consumable exports: ai-readme.md, graph.jsonld, llms-full.txt, llms.txt, robots.txt, rss.xml, sitemap.xml
  wrote site/graph.html (interactive graph viewer)
  wrote site/prototypes/index.html (6 prototype states)
  wrote site/docs/ (94 editorial pages: hub + tutorials + style guide)
==> build complete: 703 HTML files, 61 MB

serve — start a local HTTP server

python3 -m llmwiki serve
python3 -m llmwiki serve --port 9000
python3 -m llmwiki serve --dir ~/public_html
python3 -m llmwiki serve --open

Flags

Flag What
--dir PATH Directory to serve. Default: ./site/.
--port N Port. Default: 8765.
--host ADDR Bind address. Default: 127.0.0.1. Use 0.0.0.0 to share on LAN.
--open Open the browser at the root URL after starting.

Stdlib only — it's http.server underneath. Safe for local use; don't expose to the public internet.
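
Under that assumption (a thin wrapper over the stdlib), the core is roughly:

```python
import functools
import http.server

def make_server(directory: str, host: str = "127.0.0.1", port: int = 8765):
    """Serve `directory` over HTTP using only the standard library."""
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    return http.server.ThreadingHTTPServer((host, port), handler)

# make_server("site").serve_forever()
```

`SimpleHTTPRequestHandler` does no authentication and follows no access policy beyond the filesystem, which is why binding beyond 127.0.0.1 is discouraged above.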


adapters — list every adapter + its status

python3 -m llmwiki adapters

Flags: none.

Expected output:

Registered adapters:
  name              default   configured    description
  ----------------  --------  ------------  ----------------------------------------
  chatgpt           no        -             ChatGPT — parses conversations.json …
  claude_code       yes       ✓             Claude Code — reads ~/.claude/projects/
  codex_cli         no        ✓             Codex CLI — reads ~/.codex/sessions/
  copilot           no        -             GitHub Copilot — reads VS Code …
  cursor            no        -             Cursor — reads VS Code workspaceStorage
  gemini_cli        no        -             Gemini CLI — reads ~/.gemini/
  jira              no        -             Jira — reads via REST API
  meeting           no        -             Meeting transcripts (VTT/SRT)
  obsidian          no        -             Obsidian — reads a vault
  opencode          no        -             OpenCode / OpenClaw sessions
  web_clipper       no        -             Obsidian Web Clipper intake

Columns: default (runs when you don't pass --adapter), configured (adapter sees a valid session store on this machine).


graph — build the knowledge graph

python3 -m llmwiki graph                              # builtin wikilink graph
python3 -m llmwiki graph --engine graphify             # AI-powered graph (requires graphifyy)
python3 -m llmwiki graph --format json
python3 -m llmwiki graph --format html

Flags

Flag What
--format {json,html,both} Output format(s). Default: both.
--engine {builtin,graphify} Graph engine. builtin = stdlib wikilink graph. graphify = AI-powered with community detection, confidence-scored edges, god nodes. Requires pip install graphifyy. Default: builtin.

Builtin engine: Emits graph/graph.json (nodes + edges) and/or graph/graph.html (vis-network interactive viewer). The interactive version is also auto-copied into site/graph.html on every build.

Graphify engine: Runs the Graphify pipeline: tree-sitter AST extraction for code, semantic analysis for docs, Leiden community detection, god-node analysis. Outputs to graphify-out/ (graph.json, graph.html, GRAPH_REPORT.md) and copies to graph/ for build compatibility. Install: pip install llm-notebook[graph] or pip install graphifyy.
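
The builtin engine's node/edge extraction can be approximated as below (a sketch; the shipped graph.json schema may differ):

```python
import re

WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # capture target, drop |alias / #anchor

def build_graph(pages: dict[str, str]) -> dict:
    """pages maps slug -> markdown body; edges follow [[wikilinks]]."""
    nodes = [{"id": slug} for slug in pages]
    edges = [{"source": slug, "target": m.strip()}
             for slug, body in pages.items()
             for m in WIKILINK.findall(body)]
    return {"nodes": nodes, "edges": edges}
```

The resulting dict serialises directly to the nodes + edges JSON a vis-network viewer consumes.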


export — AI-consumable site exports

Single positional argument picks the format.

python3 -m llmwiki export llms-txt
python3 -m llmwiki export llms-full-txt
python3 -m llmwiki export jsonld
python3 -m llmwiki export sitemap
python3 -m llmwiki export rss
python3 -m llmwiki export robots
python3 -m llmwiki export ai-readme
python3 -m llmwiki export all --out ~/custom-site

Positional

Value Writes
llms-txt site/llms.txt — llmstxt.org spec
llms-full-txt site/llms-full.txt — flattened plain-text corpus (≤ 5 MB)
jsonld site/graph.jsonld — schema.org entity graph
sitemap site/sitemap.xml
rss site/rss.xml
robots site/robots.txt
ai-readme site/ai-readme.md
all all of the above

Flags

Flag What
--out PATH Output directory. Default: ./site/.
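
For orientation, an llms.txt per the llmstxt.org spec is a small markdown file: an H1 project name, a one-line blockquote summary, then sections of links. A hypothetical generator (not the shipped exporter):

```python
def render_llms_txt(name: str, summary: str,
                    sections: dict[str, list[tuple[str, str]]]) -> str:
    """sections maps heading -> [(title, url), ...], per llmstxt.org."""
    out = [f"# {name}", "", f"> {summary}", ""]
    for heading, links in sections.items():
        out.append(f"## {heading}")
        out += [f"- [{title}]({url})" for title, url in links]
        out.append("")
    return "\n".join(out)
```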

lint — run 13 wiki-quality rules

python3 -m llmwiki lint
python3 -m llmwiki lint --json
python3 -m llmwiki lint --fail-on-errors
python3 -m llmwiki lint --rules link_integrity,orphan_detection
python3 -m llmwiki lint --include-llm
python3 -m llmwiki lint --wiki-dir ~/another-wiki

Flags

Flag What
--wiki-dir PATH Wiki dir. Default: ./wiki.
--rules NAMES Comma-separated rule names. Default: all applicable.
--include-llm Also run the three LLM-powered rules (requires a callback wired in).
--json JSON output.
--fail-on-errors Exit 1 if any error-severity issues.

Rules

8 structural (frontmatter_completeness, frontmatter_validity, link_integrity, orphan_detection, content_freshness, entity_consistency, duplicate_detection, index_sync) + 3 LLM-powered (contradiction_detection, claim_verification, summary_accuracy) + 2 v1.1+ (stale_candidates, cache_tier_consistency).

Expected output

  scanned 31 pages
  28 issues: 0 errors, 22 warnings, 6 info

## link_integrity (22)
  [warning] entities/GPT5.md: broken wikilink [[MultimodalModels]]
  ...
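
A link_integrity check like the one above can be sketched as follows (hypothetical; the real rule engine's issue objects carry more metadata):

```python
import re

def link_integrity(pages: dict[str, str]) -> list[str]:
    """pages maps slug -> body; warn on [[wikilinks]] with no target page."""
    known = set(pages)
    issues = []
    for slug, body in pages.items():
        for target in re.findall(r"\[\[([^\]|#]+)", body):
            if target.strip() not in known:
                issues.append(
                    f"[warning] {slug}: broken wikilink [[{target}]]")
    return issues
```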

candidates — approval workflow

Positional action picks list / promote / merge / discard.

python3 -m llmwiki candidates list
python3 -m llmwiki candidates list --stale --stale-days 60
python3 -m llmwiki candidates list --json
python3 -m llmwiki candidates promote --slug NewEntity
python3 -m llmwiki candidates promote --slug NewEntity --kind concepts
python3 -m llmwiki candidates merge --slug DuplicateFoo --into Foo
python3 -m llmwiki candidates discard --slug BogusEntity --reason "LLM hallucinated"

Flags

Flag What
--slug NAME Candidate slug. Required for promote / merge / discard.
--into NAME For merge: target slug.
--reason TEXT For discard: why (written to archive's .reason.txt).
--kind {entities,concepts,sources,syntheses} Subtree. Auto-detected if omitted.
--wiki-dir PATH Wiki dir. Default: ./wiki.
--stale With list: only stale candidates.
--stale-days N Staleness threshold. Default: 30.
--json JSON output for list.

See guides/existing-vault.md for the round-trip semantics when a candidate lives inside a vault.


synthesize — LLM-backed source-page synthesis

python3 -m llmwiki synthesize --check            # probe the backend
python3 -m llmwiki synthesize --estimate         # cost preview, no API calls
python3 -m llmwiki synthesize --force            # re-synth everything
python3 -m llmwiki synthesize                    # real run

Flags

Flag What
--check Probe backend availability + exit (0 if reachable).
--force Ignore state, re-synth every source.
--estimate Print cached-vs-fresh token + dollar estimate (#50).

Backend is picked from synthesis.backend in sessions_config.json (dummy by default, ollama for local, future anthropic). See reference/prompt-caching.md.

Auto-tagging (#351)

Every synthesize call now produces topical tags alongside the deterministic baseline. The synthesizer emits a <!-- suggested-tags: prompt-caching, rag, github-actions --> block as the first line of its response; the pipeline parses it, strips it from the body, and merges the tags into the page's frontmatter.

No extra API round-trip — rides the existing synthesis call, so cost estimates from --estimate are unchanged. If the backend returns no suggested-tags block (dummy backend, malformed output), the page still ships with baseline tags.
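
The parse-strip step can be sketched as a small helper (hypothetical; the marker format is the one shown above):

```python
import re

TAG_RE = re.compile(r"^<!--\s*suggested-tags:\s*(.*?)\s*-->\s*\n?")

def split_suggested_tags(response: str) -> tuple[list[str], str]:
    """Pull the first-line tag block off an LLM response, if present."""
    m = TAG_RE.match(response)
    if not m:
        return [], response        # dummy/malformed backend: no tags
    tags = [t.strip() for t in m.group(1).split(",") if t.strip()]
    return tags, response[m.end():]
```

The empty-list fallback mirrors the behaviour described above: pages still ship with baseline tags when no block is found.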


version — print the installed version

python3 -m llmwiki version
python3 -m llmwiki --version

Both print llmwiki <version>.


query — search the knowledge graph

python3 -m llmwiki query "what projects is Pratiyush working on"
python3 -m llmwiki query "Flutter mobile" --depth 2 --budget 1000

Flags

Flag What
--depth N BFS traversal depth. Default: 3.
--budget N Max output tokens. Default: 2000.

Requires Graphify (pip install llm-notebook[graph]). Run llmwiki graph first to build the graph.
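
Conceptually, depth-limited BFS under an output budget looks like this (a sketch, not the Graphify implementation; token cost is crudely approximated as one token per word of node name):

```python
from collections import deque

def bfs_context(adj: dict[str, list[str]], start: str,
                depth: int = 3, budget: int = 2000) -> list[str]:
    """Collect nodes within `depth` hops, stopping at the token budget."""
    seen, order, spent = {start}, [], 0
    queue = deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        cost = len(node.split())
        if spent + cost > budget:
            break                      # --budget: cap the output
        spent += cost
        order.append(node)
        if d < depth:                  # --depth: cap the traversal
            for nxt in adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return order
```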


all — run the full pipeline

Convenience entry point that runs build → graph → export all → lint in order. This is the one command to run after sync to produce a CI-ready site.

python3 -m llmwiki all
python3 -m llmwiki all --graph-engine builtin   # skip optional graphify
python3 -m llmwiki all --skip-graph --strict    # fail CI on any lint issue

Flags

Flag What
--out DIR Output dir for build + export. Default: site/.
--search-mode {auto,tree,flat} Forwarded to build. Default: auto.
--graph-engine {builtin,graphify} Forwarded to graph. Default: graphify.
--skip-graph Skip the graph step entirely (useful when graphify is not installed).
--fail-fast Stop at the first non-zero step. Default: continue, report the worst exit code.
--strict Exit 2 if lint reports any errors/warnings.
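
The fail-fast vs worst-exit-code behaviour can be sketched as a tiny runner (hypothetical; step names follow the pipeline above):

```python
def run_pipeline(steps: list, fail_fast: bool = False) -> int:
    """Run callables that return exit codes; report the worst one.
    With fail_fast, stop at the first non-zero step."""
    worst = 0
    for step in steps:
        code = step()
        worst = max(worst, code)
        if fail_fast and code != 0:
            break
    return worst
```

Default mode runs every step and surfaces the worst code; --fail-fast trades completeness for a quicker signal.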



Exit codes (conventions)

Code Meaning
0 Success
1 Operation failed (user-visible error)
2 Usage error (bad flags, missing file, etc.)

Subcommands document their own non-zero exit conditions where relevant (lint --fail-on-errors).