# Daemon Mode
CKB daemon mode provides an always-on service for continuous code intelligence. The daemon runs in the background, automatically refreshing indexes, watching for file changes, and delivering webhook notifications.
## Overview
The daemon provides:
- HTTP API - REST endpoints for IDE and CI/CD integration
- Job Queue - Async operations with progress tracking
- Scheduler - Automated refresh with cron/interval expressions
- File Watcher - Git change detection with debouncing
- Webhooks - Outbound notifications to Slack, PagerDuty, Discord
## Quick Start

```bash
# Start the daemon
ckb daemon start

# Check status
ckb daemon status

# View logs
ckb daemon logs --follow

# Stop the daemon
ckb daemon stop
```
## CLI Commands

### ckb daemon start

Start the daemon process.

```bash
ckb daemon start [flags]
```

| Flag | Default | Description |
|---|---|---|
| `--port` | `9120` | HTTP server port |
| `--bind` | `localhost` | Bind address |
| `--foreground` | `false` | Run in foreground (don't daemonize) |
The daemon stores its state in `~/.ckb/daemon/`:

- `daemon.pid` - Process ID file
- `daemon.log` - Log output
- `daemon.db` - SQLite database for jobs, schedules, webhooks
### ckb daemon stop

Gracefully stop the daemon. Waits for running jobs to complete (30s timeout).

```bash
ckb daemon stop
```
### ckb daemon restart

Stop and start the daemon.

```bash
ckb daemon restart [flags]
```

Accepts the same flags as `start`.
### ckb daemon status

Show daemon status and statistics.

```bash
ckb daemon status
```
Output includes:
- Running state and PID
- Uptime
- HTTP server address
- Jobs running/queued
- Repositories being watched
### ckb daemon logs

View daemon logs.

```bash
ckb daemon logs [flags]
```

| Flag | Default | Description |
|---|---|---|
| `--follow`, `-f` | `false` | Follow log output |
| `--lines`, `-n` | `100` | Number of lines to show |
## Configuration

Add daemon settings to `.ckb/config.json`:

```json
{
  "daemon": {
    "port": 9120,
    "bind": "localhost",
    "logLevel": "info",
    "logFile": "~/.ckb/daemon/daemon.log",
    "auth": {
      "enabled": true,
      "token": "${CKB_DAEMON_TOKEN}"
    },
    "watch": {
      "enabled": true,
      "debounceMs": 5000,
      "ignorePatterns": ["*.log", "node_modules/**", ".git/objects/**"]
    }
  }
}
```
### Authentication

When `auth.enabled` is true, all API requests require a bearer token:

```bash
curl -H "Authorization: Bearer $CKB_DAEMON_TOKEN" http://localhost:9120/daemon/status
```

Set the token via an environment variable or directly in the config. See Authentication for index server authentication with scoped tokens.
## Scheduler
The scheduler runs tasks automatically on a schedule.
### Schedule Expressions

Cron syntax:

```
*/5 * * * *    # Every 5 minutes
0 */4 * * *    # Every 4 hours
0 2 * * *      # Daily at 2 AM
0 0 * * 0      # Weekly on Sunday
```

Interval syntax:

```
every 30m        # Every 30 minutes
every 4h         # Every 4 hours
every 1d         # Every day
daily at 02:00   # Daily at 2 AM
```
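The `every N<unit>` interval form maps directly onto a duration. A minimal sketch of such a parser (a hypothetical helper for illustration, not the daemon's actual grammar, and covering only the `every` form):

```python
import re
from datetime import timedelta

# Units accepted by the "every N<unit>" interval form.
UNITS = {"m": "minutes", "h": "hours", "d": "days"}

def parse_interval(expr):
    """Parse 'every 30m' / 'every 4h' / 'every 1d' into a timedelta."""
    match = re.fullmatch(r"every\s+(\d+)([mhd])", expr.strip())
    if not match:
        raise ValueError(f"not an interval expression: {expr!r}")
    value, unit = int(match.group(1)), match.group(2)
    return timedelta(**{UNITS[unit]: value})
```

The next run time is then simply the last run time plus the parsed duration.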
### Task Types

| Type | Description |
|---|---|
| `refresh` | Re-index a repository |
| `federation_sync` | Sync federation members |
| `cleanup` | Clean old cache/logs |
| `health_check` | Check system health |
| `compaction` | Database compaction and snapshot cleanup (v7.3) |
### Compaction Scheduler (v7.3)
The compaction scheduler automatically maintains database health by:
- Snapshot cleanup - Delete old snapshot files (keeps last N)
- Journal pruning - Remove old change journal entries
- FTS optimization - VACUUM FTS5 tables for better performance
Configuration:

```json
{
  "daemon": {
    "compaction": {
      "enabled": true,
      "keepSnapshots": 5,
      "keepDays": 30,
      "compactJournalAfterDays": 7,
      "schedule": "0 3 * * *",
      "vacuumFTS": true,
      "dryRun": false
    }
  }
}
```
| Setting | Default | Description |
|---|---|---|
| `enabled` | `true` | Enable compaction |
| `keepSnapshots` | `5` | Keep last N snapshots |
| `keepDays` | `30` | Delete snapshots older than N days |
| `compactJournalAfterDays` | `7` | Prune journal entries older than N days |
| `schedule` | `0 3 * * *` | Cron schedule (default: 3 AM daily) |
| `vacuumFTS` | `true` | VACUUM FTS5 tables during compaction |
| `dryRun` | `false` | Preview what would be deleted without deleting |
Manual compaction:

```bash
# Run compaction immediately (dry run)
ckb daemon compact --dry-run

# Run compaction
ckb daemon compact
```
Compaction result:

```json
{
  "startedAt": "2024-12-22T03:00:00Z",
  "completedAt": "2024-12-22T03:00:45Z",
  "durationMs": 45000,
  "snapshotsDeleted": 3,
  "journalEntriesPurged": 1250,
  "bytesReclaimed": 52428800,
  "ftsVacuumed": true,
  "dryRun": false,
  "deletedSnapshots": ["snapshot_old1.db", "snapshot_old2.db", "snapshot_old3.db"]
}
```
### CLI Commands

```bash
# List schedules
ckb daemon schedule list

# Run a schedule immediately
ckb daemon schedule run <schedule-id>

# Enable/disable a schedule
ckb daemon schedule enable <schedule-id>
ckb daemon schedule disable <schedule-id>
```
## File Watcher
The file watcher monitors repositories for git changes and triggers automatic refresh.
### How It Works

The watcher uses polling (not fsnotify) for simplicity and cross-platform compatibility:

- Polls `.git/HEAD` and `.git/index` every 2 seconds
- Detects changes by comparing content (HEAD) or mtime (index)
- Debounces changes for 5 seconds before triggering refresh
- Worst-case latency: ~7 seconds (2s poll + 5s debounce)
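The change-detection half of this loop amounts to remembering the last-seen HEAD content and index mtime between polls. A simplified sketch of that idea (an illustration, not the daemon's actual implementation):

```python
import os

class GitPollState:
    """Tracks the last-seen state of .git/HEAD and .git/index."""

    def __init__(self, git_dir):
        self.head_path = os.path.join(git_dir, "HEAD")
        self.index_path = os.path.join(git_dir, "index")
        self.last_head = self._read_head()
        self.last_index_mtime = self._index_mtime()

    def _read_head(self):
        # HEAD is compared by content, so branch switches are detected.
        try:
            with open(self.head_path, "rb") as f:
                return f.read()
        except OSError:
            return None

    def _index_mtime(self):
        # The index is compared by modification time only.
        try:
            return os.stat(self.index_path).st_mtime
        except OSError:
            return None

    def poll(self):
        """Return True if HEAD content or index mtime changed since last poll."""
        head, mtime = self._read_head(), self._index_mtime()
        changed = head != self.last_head or mtime != self.last_index_mtime
        self.last_head, self.last_index_mtime = head, mtime
        return changed
```

A caller would invoke `poll()` every 2 seconds and feed positive results into the debouncer described below.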
### What It Watches

- `.git/HEAD` - Branch changes (content comparison)
- `.git/index` - Staged file changes (modification time)
### Debouncing

Multiple rapid changes are batched together. The `debounceMs` setting (default 5000ms) controls how long to wait after the last change before triggering a refresh.
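Debouncing boils down to recording the time of the last change and firing only once a quiet period has elapsed. A minimal sketch (for illustration only; the daemon's internals may differ):

```python
import time

class Debouncer:
    """Fires at most once per quiet period of `debounce_ms` after the last event."""

    def __init__(self, debounce_ms=5000):
        self.debounce_s = debounce_ms / 1000.0
        self.last_event = None

    def record_event(self, now=None):
        """Note that a change occurred; restarts the quiet-period timer."""
        self.last_event = time.monotonic() if now is None else now

    def should_fire(self, now=None):
        """True once the quiet period has elapsed since the last recorded event."""
        if self.last_event is None:
            return False
        now = time.monotonic() if now is None else now
        if now - self.last_event >= self.debounce_s:
            self.last_event = None  # reset so we fire only once per burst
            return True
        return False
```

Each new change within the window pushes the refresh back, so a burst of commits produces a single re-index rather than one per change.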
### Configuration

```json
{
  "daemon": {
    "watch": {
      "enabled": true,
      "debounceMs": 5000,
      "ignorePatterns": [
        "*.log",
        "node_modules/**",
        ".git/objects/**",
        "dist/**"
      ]
    }
  }
}
```
## Webhooks
Webhooks deliver event notifications to external services.
### Event Types

| Event | Description |
|---|---|
| `refresh_completed` | Repository refresh finished |
| `refresh_failed` | Repository refresh failed |
| `hotspot_alert` | High-churn code detected |
| `federation_sync` | Federation sync completed |
| `schedule_run` | Scheduled task ran |
| `test` | Test webhook delivery |
### Payload Formats

JSON (default):

```json
{
  "event": "refresh_completed",
  "timestamp": "2025-01-15T10:30:00Z",
  "data": {
    "repoId": "my-project",
    "duration": "45s",
    "symbolCount": 1234
  }
}
```

Slack:

```json
{
  "text": "CKB: Refresh completed for my-project",
  "attachments": [...]
}
```

PagerDuty:

```json
{
  "routing_key": "...",
  "event_action": "trigger",
  "payload": {...}
}
```

Discord:

```json
{
  "content": "CKB Event",
  "embeds": [...]
}
```
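To illustrate how the generic JSON payload maps onto a service-specific format, here is a hypothetical formatter that turns an event into a minimal Slack message body (the daemon's actual formatter also builds attachments, which are elided here):

```python
def to_slack_message(event):
    """Convert a generic CKB event payload into a minimal Slack message body.

    Hypothetical helper for illustration; field names follow the JSON
    payload format shown above.
    """
    data = event.get("data", {})
    summary = ", ".join(f"{k}={v}" for k, v in sorted(data.items()))
    return {"text": f"CKB: {event['event']} ({summary})"}
```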
### Security

Webhooks are signed with HMAC-SHA256. The signature is included in the `X-CKB-Signature-256` header:

```
X-CKB-Signature-256: sha256=<hex-encoded-signature>
```
Verify the signature in your webhook handler:

```python
import hmac
import hashlib

def verify_signature(payload, signature, secret):
    expected = 'sha256=' + hmac.new(
        secret.encode(),
        payload.encode(),
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(signature, expected)
```
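The sending side computes the same digest over the raw payload. A self-contained sketch of producing the header value (the secret shown is an example, not a real token):

```python
import hashlib
import hmac

def sign_payload(payload: str, secret: str) -> str:
    """Produce the X-CKB-Signature-256 header value for a payload."""
    digest = hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()
    return "sha256=" + digest

# Example: the header value the daemon would attach to this delivery.
header = sign_payload('{"event": "test"}', "example-secret")
```

Note that verification must run against the raw request body exactly as received; re-serializing the JSON first can change whitespace or key order and break the digest.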
### Retry Logic
Failed deliveries are retried with exponential backoff:
- Attempt 1: Immediate
- Attempt 2: 1 minute
- Attempt 3: 5 minutes
- Attempt 4: 30 minutes
- Attempt 5: 2 hours
After 5 failed attempts, the delivery moves to the dead letter queue.
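The documented schedule can be encoded as a simple lookup (an illustration of the delays listed above, not the daemon's internals):

```python
from datetime import timedelta

# Delay before each retry attempt, per the documented schedule.
RETRY_DELAYS = [
    timedelta(0),            # attempt 1: immediate
    timedelta(minutes=1),    # attempt 2
    timedelta(minutes=5),    # attempt 3
    timedelta(minutes=30),   # attempt 4
    timedelta(hours=2),      # attempt 5
]

def next_retry_delay(attempt):
    """Delay before the given attempt (1-based); None means dead-letter."""
    if attempt > len(RETRY_DELAYS):
        return None
    return RETRY_DELAYS[attempt - 1]
```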
### CLI Commands

```bash
# List webhooks
ckb webhooks list

# Test a webhook
ckb webhooks test <webhook-id>

# View delivery history
ckb webhooks deliveries <webhook-id> [--status=failed]

# Retry failed deliveries
ckb webhooks retry <webhook-id>

# View dead letter queue
ckb webhooks dead-letter <webhook-id>
```
## HTTP API
When the daemon is running, these endpoints are available:
### Health Check

```
GET /health
```
No authentication required. Returns 200 OK if healthy.
### Daemon Status

```
GET /daemon/status
```
Returns daemon state, uptime, and statistics.
### Jobs

```
GET  /daemon/jobs                  # List jobs
GET  /daemon/jobs/:jobId           # Get job details
POST /daemon/jobs/:jobId/cancel    # Cancel a job
```
### Schedules

```
GET /daemon/schedule    # List schedules
```
### Repository Operations

```
GET  /repos                    # List repositories
GET  /repos/:repoId/status     # Repository status
POST /repos/:repoId/refresh    # Trigger refresh
```
### Index Refresh API (v7.5)
Trigger index refresh via HTTP for CI/CD integration:
```bash
# Trigger incremental refresh
curl -X POST http://localhost:9120/api/v1/refresh

# Force full reindex
curl -X POST http://localhost:9120/api/v1/refresh -d '{"full": true}'

# Specify repository path
curl -X POST http://localhost:9120/api/v1/refresh -d '{"repo": "/path/to/repo"}'
```
Response:

```json
{
  "status": "queued",
  "repo": "/path/to/repo",
  "type": "incremental"
}
```
This is the recommended way to integrate CKB with CI/CD pipelines. See CI-CD-Integration for complete examples.
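From a CI script, the refresh endpoint can be called with nothing but the standard library. A sketch under the assumptions above (the bearer token is only needed when `auth.enabled` is true; the helper names are illustrative):

```python
import json
import urllib.request

def build_refresh_request(base_url, repo=None, full=False, token=None):
    """Build the POST request for the daemon's /api/v1/refresh endpoint."""
    body = {}
    if repo:
        body["repo"] = repo
    if full:
        body["full"] = True
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/v1/refresh",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    return req

def trigger_refresh(base_url, **kwargs):
    """Send the request and return the parsed JSON response."""
    req = build_refresh_request(base_url, **kwargs)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

A CI job would typically call `trigger_refresh("http://localhost:9120", full=True)` after a merge to the main branch.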
## MCP Tools
The daemon exposes these tools via MCP for AI assistant integration:
| Tool | Description |
|---|---|
| `daemonStatus` | Get daemon health and stats |
| `listSchedules` | List scheduled tasks |
| `runSchedule` | Run a schedule immediately |
| `listWebhooks` | List configured webhooks |
| `testWebhook` | Send test webhook |
| `webhookDeliveries` | Get delivery history |
See MCP-Tools for full tool documentation.
## Troubleshooting

### Daemon won't start

- Check if already running: `ckb daemon status`
- Check port availability: `lsof -i :9120`
- View logs: `cat ~/.ckb/daemon/daemon.log`
- Remove stale PID file: `rm ~/.ckb/daemon/daemon.pid`
### Webhooks not delivering

- Test the webhook: `ckb webhooks test <id>`
- Check delivery history: `ckb webhooks deliveries <id>`
- Verify the URL is accessible from the daemon host
- Check the dead letter queue: `ckb webhooks dead-letter <id>`
### File watcher not detecting changes

- Verify the watcher is enabled in config
- Check that the repo is being watched: `ckb daemon status`
- Ensure changes are in git (committed or staged)
- Check that ignore patterns aren't too broad
## See Also
- User-Guide - General CLI usage
- MCP-Integration - AI assistant integration
- Configuration - Full configuration reference