I used to run a mental checklist before every commit: did I check for secrets? Are the tests passing? Is the context window about to explode? It was exhausting, and I forgot steps constantly. Six months ago I built my first hook - a security checker that blocks commits containing API keys. It saved me once on day one. Now I have 20 hooks running automatically, and I haven't manually checked a single thing in weeks.
The breaking point was a production incident where I pushed credentials to a private repo. Nothing catastrophic, but enough to make me rethink my workflow. I realized I was treating Claude Code like a human assistant - expecting it to remember things I should automate. Hooks became my forcing function. If I catch myself repeating a check three times, it becomes a hook.
New to hooks? Start with the complete guide - it covers setup, patterns, and real-world examples in 15 minutes.
The Manual Repetition Problem
Every Claude Code session used to start the same way: read the memory files, check git status, scan for uncommitted changes, verify no sensitive data is staged. Then during the session I'd manually check context usage, decide when to delegate, and validate outputs before committing. Each check took 10-30 seconds. Across a dozen sessions per day, I was burning an hour on mechanical validation.
The worst part was inconsistency. The checks that mattered most happened least reliably when I was deep in a problem. I needed a system that ran independently of my attention span.
Hook System Architecture
Hooks in Claude Code are event-triggered scripts that run at specific points in your workflow. Unlike traditional git hooks (which only fire on git events), Claude Code hooks can fire on tool use, session lifecycle, or custom conditions you define. The scripts live in your .claude/hooks/ directory and are registered in settings.json.
Four hook types cover most workflows (a few others, like UserPromptSubmit, follow the same pattern):
PreToolUse hooks validate before an action executes. If a PreToolUse hook fails, the action is blocked. This is where security checks and validation logic live.
PostToolUse hooks run after an action completes. They handle cleanup, logging, and triggering downstream actions. Think of them as the "and then..." part of your workflow.
Notification hooks alert you when conditions are met without blocking execution. Context warnings, deployment notifications, and metric thresholds use this pattern.
Session hooks (SessionStart, SessionEnd) handle setup and teardown. Memory bootup and session archival are classic use cases.
The execution model is simple: when an event fires, Claude Code checks for matching hooks, runs them in priority order, and either proceeds or halts based on the results. Total execution time for all 20 of my hooks averages 0.8 seconds - fast enough that I never notice them running.
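The execution model can be sketched as a toy dispatcher. This is illustrative only, not Claude Code's actual implementation; `Hook` and `dispatch` are hypothetical names for the contract described above:

```python
# Toy model of hook dispatch: run matching hooks in priority order,
# halt on the first failure. Not Claude Code's real source.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hook:
    name: str
    priority: int                 # lower number runs first
    run: Callable[[dict], bool]   # True = allow, False = block

def dispatch(event_hooks: list[Hook], payload: dict) -> bool:
    """Run hooks in priority order; a failing hook blocks the action."""
    for hook in sorted(event_hooks, key=lambda h: h.priority):
        if not hook.run(payload):
            return False          # halted: the action does not execute
    return True                   # all hooks passed: proceed

# Example: a blocking secret check plus a logger that always passes
hooks = [
    Hook("log", 10, lambda p: True),
    Hook("secret-scan", 1, lambda p: "API_KEY" not in p.get("command", "")),
]
print(dispatch(hooks, {"command": "export API_KEY=abc"}))  # False: blocked
print(dispatch(hooks, {"command": "git status"}))          # True: allowed
```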
Three Essential Hooks
Let me show you three hooks that handle the most common automation needs. These are simplified to show the concept - the full implementations handle edge cases and configuration options.
1. Security Tier Check (PreToolUse)
This hook blocks dangerous commands in production environments. It checks for destructive operations like rm -rf, drop database, or git push --force and requires explicit confirmation before proceeding.
The matcher looks at the command being executed and the current git branch. If you're on main or production and trying to run a tier-8+ dangerous command, it blocks and prompts for confirmation. On development branches, it logs a warning but allows execution. My first version blocked everything, which was annoying. Now it's context-aware and only interferes when risk is real.
Since adding this hook, I've been blocked from dangerous operations 23 times. Every single one was a mistake I would have regretted.
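A simplified sketch of the tier check might look like the following. The pattern list and tier numbers are illustrative, not the production set, and branch detection is stubbed out (the real hook would shell out to git):

```python
# Simplified sketch of the security tier check. Patterns and tiers are
# illustrative; branch detection is stubbed with a hardcoded payload.
import json, re

DANGEROUS = {                    # regex -> danger tier
    r"\brm\s+-rf\b": 9,
    r"\bdrop\s+database\b": 9,
    r"git\s+push\s+--force": 8,
}
PROTECTED_BRANCHES = {"main", "production"}
BLOCK_TIER = 8                   # tier-8+ commands require confirmation

def assess(command: str, branch: str) -> str:
    tier = max((t for pattern, t in DANGEROUS.items()
                if re.search(pattern, command, re.IGNORECASE)), default=0)
    if tier >= BLOCK_TIER and branch in PROTECTED_BRANCHES:
        return "block"           # destructive command on a protected branch
    if tier >= BLOCK_TIER:
        return "warn"            # risky, but allowed on a development branch
    return "allow"

# The real hook reads this same JSON shape from stdin.
payload = {"tool_input": {"command": "git push --force"}, "branch": "main"}
verdict = assess(payload["tool_input"]["command"], payload["branch"])
print(json.dumps({"decision": verdict}))  # {"decision": "block"}
```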
2. Delegation Enforcer (UserPromptSubmit)
This hook calculates a delegation score for every prompt and warns when the score meets or exceeds the threshold. Here is the actual settings.json config entry:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "python3 .claude/hooks/delegation-enforcer.py"
          }
        ]
      }
    ]
  }
}
```
The hook script reads the user prompt, scores it against factors like scope and task type, and outputs a JSON response:
```python
#!/usr/bin/env python3
import json, sys

THRESHOLD = 3
SCORE_FACTORS = {
    "explore": 3, "search": 3, "find": 3,
    "review": 2, "refactor": 2, "research": 2,
}
DEDUCTIONS = ["production", "deploy", "password", "secret"]

def score_prompt(prompt: str) -> int:
    score = 0
    lower = prompt.lower()
    for keyword, points in SCORE_FACTORS.items():
        if keyword in lower:
            score += points
    for danger in DEDUCTIONS:
        if danger in lower:
            score -= 10
    return score

prompt = json.loads(sys.stdin.read())["prompt"]
score = score_prompt(prompt)
if score >= THRESHOLD:
    output = {
        "decision": "allow",
        "hookSpecificOutput": {
            "hookEventName": "UserPromptSubmit",
            "additionalContext": f"Delegation score: {score}. Consider delegating this task."
        }
    }
else:
    output = {"decision": "allow"}
print(json.dumps(output))
```
The production version adds inline hints ([explore], [debug]), trait injection, model routing, and session-end summaries of delegation gaps. But this simplified version already enforces the core principle: score 3 or above, delegate.
3. Context Warning (Notification)
When your context window hits 70% usage, this hook alerts you and suggests clearing non-essential files from memory. At 90% it gets aggressive - blocking new file reads until you clear context or explicitly override.
The implementation tracks token usage across the session and maintains a running count. When thresholds are crossed, it injects a warning into the next response. At critical levels (>90%), it switches from notification to PreToolUse mode and blocks expensive operations.
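A minimal sketch of the threshold logic, assuming a 200k-token window and an already-computed token count (the real tracking across a session is more involved):

```python
# Threshold logic for the context warning. The window size and the
# token estimate are assumptions; the real hook keeps a running count.
WARN_AT, BLOCK_AT = 0.70, 0.90

def context_action(used_tokens: int, window: int = 200_000) -> str:
    usage = used_tokens / window
    if usage >= BLOCK_AT:
        return "block"   # critical: refuse new file reads until cleared
    if usage >= WARN_AT:
        return "warn"    # inject a cleanup warning into the next response
    return "ok"

print(context_action(150_000))  # warn (75% of the window used)
```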
Before this hook existed, I'd regularly hit context limits mid-task and lose work. Now I get early warnings and can plan accordingly. My sessions stay lean because the hook forces me to clean up as I go.
My session checklist, before and after hooks:

```diff
- Check git status for secrets
- Estimate task complexity for delegation
- Monitor context window usage
- Remember to run tests before commit
- Validate no sensitive data staged
- Check if memory files need updating
+ security-tier-check blocks bad commits
+ delegation-enforcer auto-routes tasks
+ context-warning alerts at 70% usage
+ pre-commit hook runs test suite
+ secret-scanner validates staging area
+ session-stop hook updates memory
```
Production Results
After six months running 20 production hooks across 81 agents and 47 workflows, here's what changed:
- Manual checks per commit: 6 → 0
- Context limit hits: 12/month → 1/month
- Accidental dangerous commands: 4 → 0
- Delegation decisions: Manual → Automatic (83% of tasks)
- Average hook execution time: 0.8 seconds
- Sessions lost to context overflow: 8 → 0
The biggest surprise wasn't time saved - it was consistency. Hooks don't forget. They don't skip steps when you're tired. That reliability compounds.
Getting Started
You don't need all 20 hooks on day one. Start with one high-value automation: context warnings if you hit limits often, security checks if you've pushed secrets before, or delegation enforcement if you handle too much manually.
The pattern is always the same: identify a manual check, write a hook that codifies it, test it, then enable in production. Most hooks are 20-50 lines of logic. Hooks work best when combined with a solid planning framework - you plan the automation before building it.
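That pattern can be captured in a reusable skeleton: stdin JSON in, JSON verdict out, with one function holding your actual logic. The names here are placeholders, not an official API:

```python
# Reusable hook skeleton. `check` is where your 20-50 lines of logic
# go; everything else is boilerplate shared by every hook.
import io, json, sys

def check(payload: dict) -> dict:
    # Replace with your validation; this placeholder always allows.
    return {"decision": "allow"}

def main(stream) -> None:
    payload = json.loads(stream.read())
    print(json.dumps(check(payload)))

# In production: main(sys.stdin). Demonstrated with a canned payload:
main(io.StringIO('{"prompt": "refactor the parser"}'))  # {"decision": "allow"}
```

Testing locally is the same idea: pipe a sample JSON payload into the script and inspect the JSON it prints before enabling it in settings.json.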
Want the full system blueprint? Get the free 3-pattern guide.
You now have a working hook pattern: settings.json config, Python script reading stdin, JSON response controlling behavior. Start with delegation enforcement, then add security and context warnings using the same structure.



