A plugin for AI coding agents
Your agent fixed the bug.
lesson makes sure you learn why.
lesson turns a live debugging session into a grounded textbook lesson built from the actual files, commands, errors, and wrong turns that got you there. It doesn’t invent a tutorial — it reconstructs the real path.
How it works
From the moment you type /lesson to the moment the lesson is written, here is what actually happens.
No magic, no cloud, no hidden calls to a model. Every step is a small, plain thing the plugin does to your filesystem. Read it end to end and you will know exactly what this tool is doing to your computer.
Step 01 — You start a session
You type /lesson with a short note about what you’re working on.
Example: /lesson useEffect infinite loop in React. Your agent runs the /lesson slash command, which is just a prompt that tells it to set up a folder for this session. No AI thinking is needed here — it’s mostly bookkeeping.
The agent creates a new session folder and a few tiny files inside it:
```
.claude/lessons/
├── active-session                      ← one line: the session's slug
└── sessions/20260419-1430-useeffect/
    ├── meta.json                       ← your goal, start time, working dir
    ├── arc.jsonl                       ← empty — will fill up as you work
    └── counter                         ← the number "0"
```

The active-session file is the switch. If it exists, tracking is on. If it doesn’t, the plugin does nothing. That one file is how the whole system stays silent until you want it.
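The bookkeeping above is simple enough to sketch in a few lines of Python. This is a hypothetical reconstruction, not the plugin's actual code: the `start_session` name and the exact slug format are assumptions, but the files it writes match the layout shown.

```python
import json
import time
from pathlib import Path

def start_session(goal: str, base: Path = Path(".claude/lessons")) -> Path:
    # Slug format is an assumption: timestamp plus the first word of the goal.
    slug = time.strftime("%Y%m%d-%H%M") + "-" + goal.split()[0].lower()
    session = base / "sessions" / slug
    session.mkdir(parents=True, exist_ok=True)
    # meta.json: your goal, start time, working dir.
    (session / "meta.json").write_text(json.dumps({
        "goal": goal,
        "started": time.time(),
        "cwd": str(Path.cwd()),
    }))
    (session / "arc.jsonl").write_text("")   # empty, fills up as you work
    (session / "counter").write_text("0")    # event count starts at zero
    # The switch: while this file exists, tracking is on.
    (base / "active-session").write_text(slug)
    return session
```

No AI involved, exactly as the step says: four small writes to disk.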
Step 02 — You work, silently watched
Every tool call the agent makes fires a hook. The hook is the ears.
Claude Code has a feature called hooks — small scripts that run after every tool use (reading a file, running a command, editing code). The lesson plugin registers one of these hooks. It’s a ~200 line Python script called post_tool_use.py.
Here is what the hook does, in order, every single time you use a tool:
- Check if `.claude/lessons/active-session` exists. If not, exit silently. No session, no work.
- Score the tool call for significance. Was it an error? An edit? Did the output mention a version number? Simple heuristics, no AI.
- Append one line to `arc.jsonl` describing what happened: tool name, arguments, result snippet, whether it errored.
- Increment the `counter` file. One more event logged.
- Exit with status 0. The hook never prints to your conversation.
A line in arc.jsonl looks like this:
```
{"ts": 1713542400.1, "tool": "Bash",
 "args": "{\"command\":\"npm test\"}",
 "result_head": "FAIL src/App.test.tsx ...",
 "is_error": true, "significant": true}
```

Notice the `significant: true` flag. That’s the signal that will later decide: does this moment make it into the lesson, or does it get filtered out as noise? Reading a dozen files to warm up the context is noise. A test failing with a specific error is signal.
Step 03 — Every 25 events, the graph is rebuilt
When the counter hits 25, the hook quietly launches a second program that turns the raw log into a structured knowledge graph.
The hook doesn’t do the heavy work itself. It launches a detached subprocess called lesson compress and immediately returns. The subprocess runs in the background, takes about 50 milliseconds, and never talks to your conversation. It’s a pure Python pipeline — zero LLM tokens used.
Here is what lesson compress does:
- Read every line of `arc.jsonl`.
- Score each event with TF-IDF + error signal + edit signal. Novel content scores high. Repeated noise scores low.
- Promote the top-scoring events into typed graph nodes: `goal`, `observation`, `hypothesis`, `attempt`, `concept`, `resolution`.
- Deduplicate using cosine similarity on sentence embeddings — two events that say the same thing collapse into one node.
- Wire edges between nodes to encode causality, not just order: `motivated`, `produced`, `revealed`, `contradicted`.
- Find the root cause by running betweenness centrality on the graph. The concept node with the highest centrality is tagged `root_cause: true`.
- Save the graph to `session_graph.json` and reset the counter to `0`.
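The scoring step can be approximated with a toy TF-IDF scorer in pure Python, zero LLM tokens, just as the text says. The exact weights and boosts here are assumptions, not the plugin's real numbers; the point is that rare terms plus error and edit signals push an event's score up.

```python
import math
from collections import Counter

def score_events(events: list[dict]) -> list[float]:
    # Tokenize each event's result snippet.
    docs = [e["result_head"].lower().split() for e in events]
    # Document frequency: in how many events does each term appear?
    df = Counter(t for doc in docs for t in set(doc))
    n = len(docs)
    scores = []
    for e, doc in zip(events, docs):
        # Novel terms (low df) score high; repeated noise scores low.
        tfidf = sum(math.log(n / df[t]) for t in set(doc)) / (len(doc) or 1)
        boost = 2.0 if e.get("is_error") else 0.0          # error signal
        boost += 1.0 if e.get("tool") in ("Edit", "Write") else 0.0  # edit signal
        scores.append(tfidf + boost)
    return scores
```

A failing test with a unique error message outranks the tenth routine file read, which is exactly the signal-versus-noise split the pipeline needs.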
The graph is incremental. The next time compression runs, it reads the existing graph, folds in the new events, and saves the updated version. Node IDs never change — o1 is o1 forever.
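The root-cause step relies on betweenness centrality: the node that the most shortest paths flow through. Here is a minimal pure-Python version (Brandes' algorithm, unweighted) on a toy causal chain. The adjacency-list representation is an assumption for illustration, not the plugin's actual graph format.

```python
from collections import defaultdict, deque

def betweenness(adj: dict) -> dict:
    # Brandes' algorithm over an unweighted directed graph given as
    # adjacency lists: {node: [successors]}.
    bc = defaultdict(float)
    for s in adj:
        stack, preds = [], defaultdict(list)
        sigma = defaultdict(int)   # number of shortest paths from s
        sigma[s] = 1
        dist = {s: 0}
        q = deque([s])
        while q:                   # BFS phase
            v = q.popleft()
            stack.append(v)
            for w in adj.get(v, []):
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)
        while stack:               # dependency accumulation phase
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

On a chain like goal → hypothesis → concept → resolution, the middle nodes carry all the traffic; the concept node with the highest score is the one the session was really about.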
Step 04 — You ask for the lesson
You type /lesson-done. The agent reads the graph and writes a textbook.
This is the one step where the AI actually does significant work. The /lesson-done slash command is a long, detailed prompt that walks the agent through generating a grounded lesson.
The agent:
- Reads `session_graph.json` — the distilled story of your session.
- Reads `~/.claude/lessons/profile.json` — your learner history. If this is a misconception you’ve had before, the lesson will say so.
- Identifies the root cause concept, the misconception that tripped you up, the pivotal moments, and the prerequisites you need to understand it.
- Optionally does web research if the concept needs external sources (like a specific kernel version or a library’s release notes).
- Fills in a markdown template with: narrative, foundations, explanation of the concept, mermaid diagrams of the session, quiz questions, and citations.
- Writes `.claude/lessons/output/<slug>.md`, runs a script to render a PDF, updates your profile, deletes `active-session`, and reports back.
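A sketch of how the prompt's inputs might be pulled out of `session_graph.json`. The field names here (`nodes`, `type`, `root_cause`) are assumed from the node types and tags described above; the real schema may differ.

```python
import json
from pathlib import Path

def lesson_inputs(session: Path) -> dict:
    graph = json.loads((session / "session_graph.json").read_text())
    nodes = graph["nodes"]
    return {
        # The concept tagged by the centrality pass in Step 03.
        "root_cause": next(n for n in nodes if n.get("root_cause")),
        "concepts": [n for n in nodes if n["type"] == "concept"],
        # Hypotheses record the wrong turns worth teaching from.
        "wrong_turns": [n for n in nodes if n["type"] == "hypothesis"],
        "resolution": [n for n in nodes if n["type"] == "resolution"],
    }
```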
What you end up with: a real lesson. Grounded in your session. Keyed to your mistakes. Something you can re-read a year from now and actually learn from.
What you get
Six things, and no more than that.
1. Silent by default — no nagging, no blocked exits, no interruptions.
2. Grounded output — lessons come from real commands, files, errors, and fixes.
3. Root-cause extraction — graph centrality surfaces what the session was really about.
4. Cross-session memory — recurring misconceptions get flagged next time.
5. Portable artifacts — markdown, PDF, concept map, and index from the same source.
6. Standalone CLI — inspect, compress, and render graphs outside any agent.
Platform support
Ten agent environments. One session format.
Hook behaviour differs per host, but the graph, the lesson output, and the learner profile are identical everywhere. Works with:
Claude Code, Codex, Cursor, Gemini CLI, GitHub Copilot CLI, OpenCode, OpenClaw, Factory Droid, Trae, Google Antigravity.
Install
Two commands. You’re done.
Install the Python package, then run the bundled installer for your target agent. Everything runs locally; nothing phones home.
```
pip install lesson-ai
python3 scripts/install.py --platform claude-code
```

Claude Code users can also install from the plugin marketplace: `/plugin marketplace add OussemaBenAmeur/lesson` followed by `/plugin install lesson`. Requires Python 3.10+. MIT licensed.