Plugin System
Last updated April 7, 2026
What a plugin is, what it can do, and where it sits in the platform.
Plugins are the primary way to extend KB Labs. Everything user-facing — the commit assistant, AI code review, the QA runner, the Mind RAG indexer, the release manager — ships as a plugin. The platform itself is small: a CLI runtime, a REST API host, a workflow engine, Studio, a gateway, a marketplace. The features that make it useful all live in plugins.
This page explains the model. For the practical "how do I write one" see Plugins → Overview. For the full manifest schema see Manifest Reference.
What a plugin is
A plugin is a package that declares, in a manifest, what it contributes to the platform. It can contribute any combination of:
- CLI commands invoked via `pnpm kb <command>`.
- REST routes mounted under `/v1/plugins/<name>` on the REST API service.
- WebSocket channels for real-time bidirectional communication.
- Workflow step handlers callable from workflow specs.
- Webhook handlers triggered by external events (GitHub, Slack, ...).
- Jobs (on-demand background tasks) and cron schedules (recurring jobs).
- Studio pages and menus — full React applications mounted via Module Federation.
Each of these is a named entity (cli-command, rest-route, studio-page, ...) attached to handler code that the platform will execute on demand. A single plugin typically ships several entities from the same package.
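A manifest along these lines would declare several entities from one package. This is an illustrative sketch only: the field names, entity kinds, and handler paths are assumptions, not the real schema (see Manifest Reference for the authoritative one).

```typescript
// Hypothetical manifest shape -- illustrative, not the real schema.
interface PluginManifest {
  name: string;
  version: string;
  entities: Array<
    | { kind: "cli-command"; name: string; handler: string }
    | { kind: "rest-route"; method: "GET" | "POST"; path: string; handler: string }
    | { kind: "studio-page"; route: string; module: string }
  >;
}

// One package contributing several entities, as the text describes.
const manifest: PluginManifest = {
  name: "commit-assistant",
  version: "1.2.0",
  entities: [
    { kind: "cli-command", name: "commit", handler: "./handlers/commit.js" },
    { kind: "rest-route", method: "POST", path: "/suggest", handler: "./handlers/suggest.js" },
    { kind: "studio-page", route: "/commit-assistant", module: "./studio/remoteEntry.js" },
  ],
};
```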
What a plugin is not
- Not a daemon. Plugins don't run continuously. Their code is loaded lazily when a handler is invoked and unloaded after execution.
- Not a service. A plugin can expose REST routes, but it doesn't own a port or a process — the REST API service hosts routes from all plugins together.
- Not a framework. There's no `onInit` hook that runs on every CLI startup. The `lifecycle` section exists for rare cases; 99% of plugins never use it.
- Not trusted. Every plugin is sandboxed. Whatever it declares in `permissions` is the upper bound on what it can do; anything outside is refused by the runtime.
Where plugins sit
pnpm kb <cmd>       REST request         workflow step
      │                   │                    │
      ▼                   ▼                    ▼
 CLI runtime      REST API service      Workflow daemon
      │                   │                    │
      └───────────────────┼────────────────────┘
                          ▼
               ┌─────────────────────┐
               │    Plugin Runtime   │
               │   (sandbox + APIs)  │
               └─────────────────────┘
                          │
          ┌───────────────┴───────────────┐
          ▼                               ▼
   Plugin handler                 Platform services
 (execute function) ◀──── useLLM, (LLM, cache, storage,
                          useCache, vector store, analytics,
                          useStorage, event bus, ...)
                          ...

The CLI, REST, and workflow hosts are the three entry points that invoke plugin code. They all funnel through the same plugin runtime, which:
- Builds a `PluginContextV3` (identity, trace, `ctx.runtime` shims, `ctx.api` helpers).
- Imports the handler module.
- Calls `handler.execute(ctx, input)`.
- Enforces permissions via the sandbox shims.
- Returns results plus execution metadata.
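Put together, the pipeline can be sketched like this. All names here (`PluginContext`, `Handler`, `invoke`, the result shape) are illustrative stand-ins, not the real runtime API; the sketch only mirrors the steps listed above.

```typescript
// Illustrative sketch of the invocation pipeline, not the real runtime API.
interface PluginContext {
  plugin: string;   // identity
  traceId: string;  // trace
  runtime: { fs: { read(path: string): string } }; // sandbox shims (subset)
}

interface Handler<I, O> {
  execute(ctx: PluginContext, input: I): Promise<O> | O;
}

interface ExecutionResult<O> {
  output: O;
  meta: { plugin: string; durationMs: number };
}

async function invoke<I, O>(
  plugin: string,
  handler: Handler<I, O>,
  input: I,
): Promise<ExecutionResult<O>> {
  // 1. Build the context (identity, trace, permission-checked shims).
  const ctx: PluginContext = {
    plugin,
    traceId: Math.random().toString(36).slice(2),
    // A shim that refuses everything stands in for the permission checks here.
    runtime: { fs: { read: () => { throw new Error("no fs permission"); } } },
  };
  // 2-3. Call the handler (the module import step is elided in this sketch).
  const start = Date.now();
  const output = await handler.execute(ctx, input);
  // 5. Return the result plus execution metadata.
  return { output, meta: { plugin, durationMs: Date.now() - start } };
}
```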
Plugin handlers themselves never import Node's `fs` or `child_process`, or call the global `fetch`, directly — they go through `ctx.runtime.*` (or the equivalent SDK hooks), which check the permission allow-lists on every call.
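A permission-gated shim along those lines might look like this. The allow-list shape (`read` as a list of path prefixes) is an assumption for illustration; the real runtime derives its checks from the manifest's `permissions` block.

```typescript
// Minimal sketch of a permission-gated fs shim. The allow-list shape is
// illustrative; the real runtime builds it from the plugin's manifest.
import { readFileSync } from "node:fs";

interface FsPermissions {
  read: string[]; // path prefixes the plugin declared it may read
}

function makeFsShim(perms: FsPermissions) {
  return {
    read(path: string): string {
      // Every call is checked against the declared allow-list.
      // There is no deny list: anything not explicitly allowed is refused.
      const allowed = perms.read.some((prefix) => path.startsWith(prefix));
      if (!allowed) throw new Error(`EPERM: fs.read not allowed for ${path}`);
      return readFileSync(path, "utf8");
    },
  };
}
```

A handler would reach this as `ctx.runtime.fs.read(path)`; the shim, not the handler, is what touches Node's `fs`.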
Discovery is marketplace-driven
There is no filesystem scanning. A package becomes a plugin only after it's registered in `.kb/marketplace.lock` via `kb marketplace install` or `kb marketplace link`. The lock is the single source of truth: remove an entry and the plugin disappears on the next startup; add one and it shows up.
This design has two consequences worth internalizing:
- Install is explicit. There's no surprise code loading from random `node_modules` entries. You see every plugin you have, and you know how it got there (`marketplace` or `local`).
- Dev-mode and prod-mode are the same code path. A local-linked plugin and a marketplace-installed one go through the same discovery, same manifest loading, same sandbox. The only difference is that local entries auto-refresh their integrity hash instead of failing on mismatch, because their `package.json` changes every build.
See DiscoveryManager for the canonical flow.
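In sketch form, discovery is just a read of the lock. The entry shape below is hypothetical; the point it illustrates is that the loaded plugin set is exactly the lock's contents, nothing more.

```typescript
// Sketch of marketplace-driven discovery: the lock file is the single
// source of truth; no filesystem scanning. Entry shape is illustrative.
interface LockEntry {
  name: string;
  source: "marketplace" | "local";
  version: string;
  integrity: string; // sha256 of package.json
}

interface MarketplaceLock {
  plugins: LockEntry[];
}

// The platform loads exactly the lock's entries -- remove one and the
// plugin disappears on the next startup; add one and it shows up.
function discover(lock: MarketplaceLock): string[] {
  return lock.plugins.map((p) => p.name);
}
```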
Execution is pluggable
When a handler is invoked, the platform picks one of several execution backends:
- In-process — import the handler into the host's own Node process. Fastest, used for trusted first-party plugins in dev.
- Worker pool — run in a worker thread with fault isolation.
- Subprocess — fork a child process with IPC; the child talks to the platform over a Unix socket.
- Container — run inside a Docker environment the platform provisions on demand.
Which backend is used is a deployment choice, configured in `kb.config.json` — plugins don't choose. The sandbox shims behave identically across all of them: `ctx.runtime.fs.read(path)` either returns the file (if `permissions.fs.read` allows it) or throws, regardless of whether you're in-process or in a container.
See Execution Model for details.
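As a sketch, the deployment-side choice might be modeled like this. The key names are assumptions for illustration, not the actual `kb.config.json` schema.

```typescript
// Illustrative model of the per-deployment backend choice. Plugins never
// see or set this; the plugin code is identical under every backend.
type ExecutionBackend = "in-process" | "worker-pool" | "subprocess" | "container";

interface ExecutionConfig {
  backend: ExecutionBackend;
  quotas?: { timeoutMs?: number; memoryMb?: number };
}

// A dev deployment might favor speed; prod might favor isolation.
const devConfig: ExecutionConfig = { backend: "in-process" };
const prodConfig: ExecutionConfig = {
  backend: "container",
  quotas: { timeoutMs: 30_000, memoryMb: 512 },
};
```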
Permissions are allow-list only
A plugin declares, in its manifest, everything it wants to touch:
- Files — which paths it can read and write.
- Env vars — which names (or prefixes) it can read.
- Network — which hosts it can `fetch`.
- Platform services — whether it can use the LLM, cache, vector store, storage, event bus, etc., and with what scoping (cache namespaces, LLM models, vector collections).
- Shell — a whitelist of commands it can exec.
- Cross-plugin invocation — a whitelist of other plugin IDs it can call.
- Quotas — timeout, memory, CPU ceilings.
Nothing is implicit, and there is no deny list. The platform separately enforces hardcoded security rules (never read `.env`, never write to `.git`, and so on), and the plugin gets the intersection of what it asked for and what the platform permits at all.
The authoring surface is `combinePermissions()` plus eight reusable presets (`minimal`, `gitWorkflow`, `kbPlatform`, `llmAccess`, ...). Presets compose cleanly: string arrays union, scalars override, fs upgrades one-way from read to readWrite. See Plugins → Permissions for the full story.
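A minimal sketch of those composition rules follows, with an assumed permission shape (the real schema is richer). It shows all three behaviors: arrays union, scalars take the last value, and fs access only ever upgrades.

```typescript
// Sketch of the composition rules: string arrays union, scalars override
// (last wins), fs upgrades one-way from "read" to "readWrite".
// The Permissions shape here is illustrative, not the real schema.
type FsMode = "read" | "readWrite";

interface Permissions {
  net?: string[];                        // allowed hosts
  env?: string[];                        // allowed env var names/prefixes
  fs?: { mode: FsMode; paths: string[] };
  timeoutMs?: number;                    // scalar quota
}

function combinePermissions(...parts: Permissions[]): Permissions {
  const out: Permissions = {};
  for (const p of parts) {
    if (p.net) out.net = [...new Set([...(out.net ?? []), ...p.net])];
    if (p.env) out.env = [...new Set([...(out.env ?? []), ...p.env])];
    if (p.fs) {
      // One-way upgrade: once any part grants readWrite, it never downgrades.
      const mode: FsMode =
        out.fs?.mode === "readWrite" || p.fs.mode === "readWrite"
          ? "readWrite"
          : "read";
      out.fs = { mode, paths: [...new Set([...(out.fs?.paths ?? []), ...p.fs.paths])] };
    }
    if (p.timeoutMs !== undefined) out.timeoutMs = p.timeoutMs; // scalar overrides
  }
  return out;
}
```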
Versioning and integrity
Every installed plugin is pinned in the lock file by version and by content hash (SHA-256 of `package.json`). If the content changes unexpectedly, discovery refuses to load the plugin with `INTEGRITY_MISMATCH` — unless it's a local-linked development copy, in which case the hash is silently refreshed.
Marketplace-installed plugins can additionally carry a platform signature (`ed25519` or `sha256-rsa`) — a cryptographic attestation that the package passed platform checks (integrity, types, lint, tests). Unsigned plugins still work; they just produce an info diagnostic at load time.
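The integrity check can be sketched as follows, assuming a hypothetical lock-entry shape; `INTEGRITY_MISMATCH` is surfaced here as a thrown error purely for illustration.

```typescript
// Sketch of the integrity check: hash package.json, compare to the pin in
// the lock, and refresh instead of failing for local-linked entries.
import { createHash } from "node:crypto";

type Source = "marketplace" | "local";

function sha256(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

// Mirrors the behavior described above; shapes are illustrative.
function checkIntegrity(
  entry: { source: Source; integrity: string },
  packageJson: string,
): "ok" | "refreshed" {
  const actual = sha256(packageJson);
  if (actual === entry.integrity) return "ok";
  if (entry.source === "local") {
    entry.integrity = actual; // local dev copies silently refresh the pin
    return "refreshed";
  }
  throw new Error("INTEGRITY_MISMATCH");
}
```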
Relationship to adapters
Plugins and adapters are both extension points, but they sit at different layers:
- Adapters implement the contracts the platform runtime depends on: `ILLM`, `ICache`, `IStorage`, `IVectorStore`, `ILogger`, and so on. You swap adapters in `kb.config.json` to change what backend the platform uses.
- Plugins build on top of those contracts. A plugin calls `useLLM()` and gets whichever LLM adapter is configured. The plugin doesn't know or care whether it's talking to OpenAI, Anthropic, or a local model.
If you want to change which LLM the platform uses, that's an adapter. If you want to add a new AI-powered command, that's a plugin. See Adapters → Overview for the other side of this distinction.
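The layering can be sketched in a few lines. `ILLM` is named in the text; `EchoLLM`, the module-level `configured` slot, and this `useLLM` body are illustrative stand-ins for the real wiring, which resolves the adapter from `kb.config.json`.

```typescript
// Sketch of the layering: the plugin codes against the contract, the
// adapter implements it, and configuration decides which adapter is wired in.
interface ILLM {
  complete(prompt: string): Promise<string>;
}

// An adapter: swapped via configuration, invisible to plugins.
class EchoLLM implements ILLM {
  async complete(prompt: string) {
    return `echo: ${prompt}`;
  }
}

// What a useLLM()-style hook resolves to at runtime (stand-in wiring).
const configured: ILLM = new EchoLLM();
function useLLM(): ILLM {
  return configured;
}

// Plugin code: depends only on the contract, not the backend behind it.
async function summarize(text: string): Promise<string> {
  return useLLM().complete(`Summarize: ${text}`);
}
```

Swapping `EchoLLM` for another `ILLM` implementation changes the backend without touching `summarize` at all, which is the adapter/plugin split in miniature.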
What to read next
- Plugins → Overview — the hands-on "anatomy of a plugin" page.
- Manifest Reference — every field in the schema.
- Permissions — how the sandbox works in practice.
- SDK → Hooks — the idiomatic API surface plugin handlers use.
- Execution Model — the backend layer beneath the runtime.