Hooks
Last updated April 7, 2026
Platform composables: usePlatform, useLLM, useCache, useStorage, and friends.
Runtime hooks are how plugin handlers talk to platform services. They all share the same pattern: read from a global platform singleton, return the adapter if configured, return undefined otherwise. No context drilling, no DI, no wiring — just call the hook and check the result.
Every hook in this page is exported from @kb-labs/sdk. The actual implementations live in @kb-labs/shared-command-kit and read from platform in @kb-labs/core-runtime.
The pattern
```typescript
import { useCache } from '@kb-labs/sdk';

async function execute(ctx, input) {
  const cache = useCache();
  if (cache) {
    const hit = await cache.get<Result>('key');
    if (hit !== null) { return hit; }
  }
  const result = await doExpensiveThing();
  if (cache) {
    await cache.set('key', result, 60_000);
  }
  return result;
}
```

Three things to internalize:
- **Check the return value.** Every service hook except `useLogger` can return `undefined`. Degrade gracefully — don't crash if cache or LLM isn't configured.
- **No throw on missing adapter.** The hook itself never throws. It either returns the adapter or returns `undefined`. If you hit an error, it's because you tried to call a method on `undefined`.
- **Singleton, not per-request.** All hooks resolve the same global `platform` object. Today this means single-tenant; multi-tenant scoping via `AsyncLocalStorage` is noted as a future direction in the source but isn't wired up yet.
The one hook that doesn't return undefined is useLogger(). The platform always provides a logger — a no-op or console fallback when nothing else is configured — so handler code can log unconditionally.
usePlatform()
The root hook. Returns the full platform singleton with every adapter attached.
```typescript
import { usePlatform } from '@kb-labs/sdk';

const platform = usePlatform();
if (platform.llm) {
  await platform.llm.complete('…');
}
platform.logger.info('done');
```

What's on the singleton
As documented in use-platform.ts, the platform exposes:
- `platform.llm` — LLM adapter
- `platform.embeddings` — Embeddings adapter
- `platform.vectorStore` — Vector storage
- `platform.storage` — File/blob storage
- `platform.cache` — Caching layer
- `platform.analytics` — Analytics/telemetry
- `platform.logger` — Structured logging
- `platform.eventBus` — Event system
- `platform.workflows` — Workflow engine
- `platform.jobs` — Background jobs
- `platform.cron` — Scheduled tasks
- `platform.resources` — Resource management
- `platform.invoke` — Plugin invocation
- `platform.artifacts` — Build artifacts
In practice you'll rarely call usePlatform() directly — the specific hooks (useLLM, useCache, ...) are more ergonomic. Reach for usePlatform() when you need several services in the same scope, or when you want to check configuration at runtime without calling them:
```typescript
const platform = usePlatform();
logger.debug('platform state', {
  hasLLM: !!platform.llm,
  hasCache: !!platform.cache,
  hasVectorStore: !!platform.vectorStore,
});
```

isPlatformConfigured(adapterName)
Returns true if a named adapter is registered and isn't a no-op/fallback implementation. Use it for conditional branches where you actually need the real thing.
```typescript
import { isPlatformConfigured } from '@kb-labs/sdk';

if (isPlatformConfigured('llm')) {
  return await llmPoweredPath();
}
return deterministicFallbackPath();
```

Internally, this checks `platform.hasAdapter(name)` if the method exists, and falls back to a constructor-name heuristic (`noop` / `fallback` in the name → treat as unconfigured) otherwise.
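The heuristic can be pictured in a few lines. This is an illustrative sketch of the behavior described above, not the SDK's actual implementation; `looksConfigured` is a hypothetical name:

```typescript
// Sketch of the constructor-name fallback heuristic (illustrative, not SDK code).
function looksConfigured(adapter: object | undefined): boolean {
  if (!adapter) return false; // nothing registered at all
  const name = adapter.constructor?.name?.toLowerCase() ?? '';
  // 'noop' or 'fallback' in the constructor name means unconfigured
  return !name.includes('noop') && !name.includes('fallback');
}
```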
useLogger()
Returns the platform logger. Always defined — no nullable check needed.
```typescript
import { useLogger } from '@kb-labs/sdk';

const logger = useLogger();
logger.trace('verbose');
logger.debug('debug', { userId: 123 });
logger.info('processing', { taskId });
logger.warn('slow operation', { durationMs: 5000 });
logger.error('failed', err, { taskId });
logger.fatal('unrecoverable', err, { taskId });
```

Method signatures
All log methods are synchronous (return void, not Promise<void>). error and fatal take an optional Error object as their second argument, separate from the metadata bag:
```typescript
interface ILogger {
  trace(message: string, meta?: Record<string, unknown>): void;
  debug(message: string, meta?: Record<string, unknown>): void;
  info(message: string, meta?: Record<string, unknown>): void;
  warn(message: string, meta?: Record<string, unknown>): void;
  error(message: string, error?: Error, meta?: Record<string, unknown>): void;
  fatal(message: string, error?: Error, meta?: Record<string, unknown>): void;
  child(bindings: Record<string, unknown>): ILogger;
}
```

There's no await — logging is fire-and-forget. Adapters that persist logs (Pino, SQLite) flush asynchronously in the background.
Child loggers
Both ILogger#child(context) and the convenience useLoggerWithContext(context) return a new logger that attaches a persistent context bag to every entry:
```typescript
import { useLoggerWithContext } from '@kb-labs/sdk';

const logger = useLoggerWithContext({ operation: 'release', version: '1.0.0' });
logger.info('started');   // { operation: 'release', version: '1.0.0', ... }
logger.info('step-1');    // same context
logger.info('completed'); // same context
```

Prefer child loggers over passing context to every call — cheaper to write, harder to forget.
useLLM(options?)
Returns the LLM adapter if configured, undefined otherwise.
```typescript
import { useLLM } from '@kb-labs/sdk';

const llm = useLLM();
if (llm) {
  const response = await llm.complete('Write a haiku about kittens.');
  console.log(response.content);
}
```

Tier-based selection
When you pass { tier }, the hook returns a lazily-bound LLM that resolves to the appropriate adapter on first call:
```typescript
const llm = useLLM({ tier: 'small' });  // fast and cheap
const llm = useLLM({ tier: 'medium' }); // balanced
const llm = useLLM({ tier: 'large' });  // maximum quality
```

The three tiers are user-defined slots, not model names. Your plugin says "this task is simple" by asking for small; the user decides in kb.config.json which actual model that maps to. If the configured model doesn't match your request, the router adapts — asking for small when only medium is configured gets you medium, and asking for large when only medium is available gets you medium with a warning.
Capabilities
You can also constrain by capability:
```typescript
const llm = useLLM({ tier: 'medium', capabilities: ['coding'] });
const llm = useLLM({ capabilities: ['vision'] });
```

Immutability
A critical detail from the source: useLLM({ tier: '…' }) returns a new immutable binding (LazyBoundLLM). It does not mutate the global router's state. This fixes a real race where one handler calling useLLM({ tier: 'large' }) would have been clobbered by a concurrent useLLM({ tier: 'small' }) from somewhere else in the same process.
The binding resolves lazily — the actual tier selection happens on the first complete() / stream() / chatWithTools() call, not on the useLLM() call itself.
Methods
The returned ILLM exposes:
```typescript
complete(prompt: string, options?: LLMOptions): Promise<LLMResponse>
stream(prompt: string, options?: LLMOptions): AsyncIterable<string>
chatWithTools(messages: LLMMessage[], options: LLMToolCallOptions): Promise<LLMToolCallResponse>
```

`stream()` falls back to `complete()` if the adapter doesn't support streaming and `execution.stream.mode` isn't `'require'`. `chatWithTools()` throws `'Current adapter does not support chatWithTools'` if the adapter doesn't implement it — there's no fallback for tool-calling.
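Since `stream()` yields plain string chunks, consuming it is a `for await` loop. A minimal sketch, using a hand-rolled `StreamingLLM` slice to stay self-contained (the real type is `ILLM`):

```typescript
// Minimal slice of the ILLM surface this helper needs (illustrative).
interface StreamingLLM {
  stream(prompt: string): AsyncIterable<string>;
}

// Accumulate a streamed completion into one string.
async function collectStream(llm: StreamingLLM, prompt: string): Promise<string> {
  let out = '';
  for await (const chunk of llm.stream(prompt)) {
    out += chunk; // each chunk is a plain string delta
  }
  return out;
}
```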
isLLMAvailable() and getLLMTier()
```typescript
import { isLLMAvailable, getLLMTier } from '@kb-labs/sdk';

if (isLLMAvailable()) {
  const tier = getLLMTier(); // 'small' | 'medium' | 'large' | undefined
  logger.debug(`LLM tier: ${tier ?? 'default'}`);
}
```

`getLLMTier()` returns `undefined` if the configured LLM isn't a router (i.e. it's a direct adapter without tier support).
useCache()
Returns the cache adapter (Redis, InMemory, or custom) if configured.
```typescript
import { useCache } from '@kb-labs/sdk';

const cache = useCache();
if (cache) {
  const hit = await cache.get<MyType>('query:123');
  if (hit !== null) { return hit; }
  const fresh = await computeExpensiveThing();
  await cache.set('query:123', fresh, 60_000); // 60s TTL
  return fresh;
}
return await computeExpensiveThing();
```

Methods
```typescript
get<T>(key: string): Promise<T | null>
set<T>(key: string, value: T, ttlMs?: number): Promise<void>
delete(key: string): Promise<void>
clear(pattern?: string): Promise<void>

// Sorted sets (for scheduling, queues, time-series)
zadd(key: string, score: number, member: string): Promise<void>
zrangebyscore(key: string, min: number, max: number): Promise<string[]>
zrem(key: string, member: string): Promise<void>

// Atomic (for distributed locking)
setIfNotExists<T>(key: string, value: T, ttlMs?: number): Promise<boolean>
```

`get()` returns `null` (not `undefined`) when a key is missing or expired. `clear()` accepts an optional glob pattern (`'user:*'`) and wipes everything matching it. The sorted-set and atomic operations are there because Redis and the in-memory cache both implement them — plugins that need scheduling primitives or distributed locks don't need a separate backend.
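As a sketch of the distributed-lock use case: a best-effort mutex on top of `setIfNotExists`. The `withLock` helper and the `LockCache` slice are hypothetical, assuming the semantics above (`true` only for the caller that created the key):

```typescript
// Minimal slice of ICache for this sketch (illustrative).
interface LockCache {
  setIfNotExists<T>(key: string, value: T, ttlMs?: number): Promise<boolean>;
  delete(key: string): Promise<void>;
}

// Run fn only if we win the lock; returns undefined when someone else holds it.
async function withLock<T>(
  cache: LockCache,
  key: string,
  ttlMs: number,
  fn: () => Promise<T>,
): Promise<T | undefined> {
  const acquired = await cache.setIfNotExists(key, Date.now(), ttlMs);
  if (!acquired) return undefined; // lock held elsewhere
  try {
    return await fn();
  } finally {
    await cache.delete(key); // explicit release; the TTL covers crashes
  }
}
```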
Namespaces
Declare the namespace prefixes your plugin writes to in permissions.platform.cache:
```typescript
.withPlatform({ cache: ['mine:'] })
```

Then scope your keys:

```typescript
await cache.set('mine:query-abc', result, 60_000);
```

The runtime enforces the prefix — writes outside declared namespaces are refused.
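One way to make the prefix impossible to forget is a thin wrapper over the cache. This is a hypothetical helper (the `KV` slice and `namespacedCache` are not SDK APIs); `'mine:'` matches the permissions example above:

```typescript
// Minimal slice of ICache for this sketch (illustrative).
interface KV {
  get<T>(key: string): Promise<T | null>;
  set<T>(key: string, value: T, ttlMs?: number): Promise<void>;
}

// Return a cache view that forces every key under the declared prefix.
function namespacedCache(cache: KV, prefix: string): KV {
  const scoped = (key: string) => (key.startsWith(prefix) ? key : prefix + key);
  return {
    get<T>(key: string) { return cache.get<T>(scoped(key)); },
    set<T>(key: string, value: T, ttlMs?: number) {
      return cache.set(scoped(key), value, ttlMs);
    },
  };
}
```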
isCacheAvailable()
```typescript
import { isCacheAvailable } from '@kb-labs/sdk';

if (isCacheAvailable()) {
  // use caching path
}
```

useStorage()
Returns the file/blob storage adapter if configured.
```typescript
import { useStorage } from '@kb-labs/sdk';

const storage = useStorage();
if (storage) {
  await storage.write('releases/v1.0.0.json', Buffer.from(JSON.stringify(data)));
  const buf = await storage.read('releases/v1.0.0.json');
  if (buf) {
    const data = JSON.parse(buf.toString('utf-8'));
  }
  const exists = await storage.exists('releases/v1.0.0.json');
  const files = await storage.list('releases/');
}
```

Core methods (required by every adapter)
```typescript
read(path: string): Promise<Buffer | null>
write(path: string, data: Buffer): Promise<void>
delete(path: string): Promise<void>
list(prefix: string): Promise<string[]>
exists(path: string): Promise<boolean>
```

`read()` returns `Buffer | null`, not a string — handle text encoding yourself (`buf.toString('utf-8')`). `write()` takes a `Buffer`, so wrap strings with `Buffer.from(...)`. `list()` requires a prefix; pass an empty string if you want everything under the root.
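Since the interface is `Buffer`-only, a small JSON layer is a common wrapper. A sketch against the two core methods (`BlobStore`, `writeJson`, and `readJson` are hypothetical names, not SDK APIs):

```typescript
// Minimal slice of IStorage for this sketch (illustrative).
interface BlobStore {
  read(path: string): Promise<Buffer | null>;
  write(path: string, data: Buffer): Promise<void>;
}

async function writeJson(store: BlobStore, path: string, value: unknown): Promise<void> {
  // write() takes a Buffer, so serialize and encode explicitly
  await store.write(path, Buffer.from(JSON.stringify(value), 'utf-8'));
}

async function readJson<T>(store: BlobStore, path: string): Promise<T | null> {
  const buf = await store.read(path); // Buffer | null, never a string
  return buf ? (JSON.parse(buf.toString('utf-8')) as T) : null;
}
```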
Extended methods (optional)
Adapters can optionally implement the following for better performance. When absent, the runtime falls back to composing the core methods:
```typescript
readStream?(path: string): Promise<NodeJS.ReadableStream | null>
writeStream?(path: string, stream: NodeJS.ReadableStream): Promise<void>
copy?(sourcePath: string, destPath: string): Promise<void>
move?(sourcePath: string, destPath: string): Promise<void>
listWithMetadata?(prefix: string): Promise<StorageMetadata[]>
stat?(path: string): Promise<StorageMetadata | null>
```

`StorageMetadata` has `{ path, size, lastModified, contentType?, etag? }`. `stat()` lets you check size without reading the file — a real win on large blobs.
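Because the extended methods are optional on the interface, feature-detect before calling. A sketch (the `sizeOf` helper and trimmed interfaces are illustrative) that prefers `stat()` and falls back to a full read:

```typescript
// Minimal slice of IStorage for this sketch; stat() is optional, as above.
interface MaybeStatStore {
  read(path: string): Promise<Buffer | null>;
  stat?(path: string): Promise<{ path: string; size: number; lastModified: Date } | null>;
}

async function sizeOf(store: MaybeStatStore, path: string): Promise<number | null> {
  if (store.stat) {
    const meta = await store.stat(path); // cheap: no blob transfer
    return meta ? meta.size : null;
  }
  const buf = await store.read(path); // fallback: read the whole blob
  return buf ? buf.byteLength : null;
}
```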
The path space is controlled by permissions.platform.storage in your manifest — use the { read: […], write: […] } form for granular control.
Prefer useStorage() over ctx.runtime.fs for anything durable. The storage adapter can be backed by an object store, a snapshot layer, or the local disk depending on deployment, whereas ctx.runtime.fs is always local files with permission gating.
useVectorStore()
Returns the vector database adapter (Qdrant, local, ...) if configured.
```typescript
import { useVectorStore } from '@kb-labs/sdk';

const vs = useVectorStore();
if (vs) {
  await vs.upsert([
    { id: '1', vector: [0.1, 0.2, /* … */], metadata: { source: 'docs' } },
  ]);
  const hits = await vs.search([0.1, 0.2, /* … */], 10);
}
```

isVectorStoreAvailable()
Same shape as the other availability checks.
useEmbeddings()
Returns the embeddings adapter if configured.
```typescript
import { useEmbeddings } from '@kb-labs/sdk';

const embeddings = useEmbeddings();
if (embeddings) {
  const vector = await embeddings.embed('Hello, world!');
  // vector.length === 1536 for OpenAI text-embedding-3-small, etc.
}
```

isEmbeddingsAvailable()
```typescript
import { isEmbeddingsAvailable } from '@kb-labs/sdk';

if (isEmbeddingsAvailable()) {
  // embeddings path
}
```

useAnalytics()
Returns the analytics adapter if configured.
```typescript
import { useAnalytics } from '@kb-labs/sdk';

const analytics = useAnalytics();
if (analytics) {
  await analytics.track('command_executed', {
    command: 'release:run',
    duration_ms: 1234,
    success: true,
  });
  await analytics.identify('user-123', { email: 'alice@example.com' });
}
```

Core methods
```typescript
track(event: string, properties?: Record<string, unknown>): Promise<void>
identify(userId: string, traits?: Record<string, unknown>): Promise<void>
flush(): Promise<void>
```

`track()` records a named event with a properties bag. `identify()` associates a user ID with traits for attribution. `flush()` forces the adapter to drain any buffered events — call it before process exit in long-running scripts.
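The flush-before-exit rule is easy to encode once. A sketch (the `runAndFlush` helper and the `EventSink` slice are hypothetical) that guarantees buffered events drain even when the work throws:

```typescript
// Minimal slice of IAnalytics for this sketch (illustrative).
interface EventSink {
  track(event: string, properties?: Record<string, unknown>): Promise<void>;
  flush(): Promise<void>;
}

// Drain the buffer whether work succeeds or throws.
async function runAndFlush<T>(sink: EventSink, work: () => Promise<T>): Promise<T> {
  try {
    return await work();
  } finally {
    await sink.flush(); // events must hit the backend before the process exits
  }
}
```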
Optional query methods
Some adapters (SQLite, Postgres) also expose read-side APIs for dashboards — getEvents(), getStats(), getDailyStats() with time bucketing and breakdowns. These are optional on the interface; check for presence before calling:
```typescript
if (analytics?.getDailyStats) {
  const daily = await analytics.getDailyStats({
    type: 'llm.completion.completed',
    from: '2026-01-01T00:00:00Z',
    to: '2026-01-31T23:59:59Z',
    groupBy: 'day',
    breakdownBy: 'payload.model',
    metrics: ['totalCost', 'totalTokens'],
  });
}
```

See the Analytics adapter interface page for the full surface.
trackAnalyticsEvent(event, properties?)
Convenience wrapper that safely no-ops when analytics isn't configured — use it when you don't want a null check:
```typescript
import { trackAnalyticsEvent } from '@kb-labs/sdk';

await trackAnalyticsEvent('release_completed', {
  version: '1.0.0',
  packages: 5,
});
```

No `if (analytics)` needed — the wrapper handles the missing case.
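The same no-op-safe shape is easy to build for any optional service. A sketch of the idea (`safeTrack` is a hypothetical helper, not the SDK's implementation of `trackAnalyticsEvent`):

```typescript
// Minimal slice of IAnalytics for this sketch (illustrative).
interface Sink {
  track(event: string, properties?: Record<string, unknown>): Promise<void>;
}

// Swallow the missing-adapter case so call sites need no null check.
async function safeTrack(
  sink: Sink | undefined,
  event: string,
  properties?: Record<string, unknown>,
): Promise<void> {
  if (!sink) return; // analytics not configured: silently no-op
  await sink.track(event, properties);
}
```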
useConfig<T>(productId?, profileId?)
Async hook that returns the product-specific config from kb.config.json. Unlike every other hook, it returns a Promise.
```typescript
import { useConfig } from '@kb-labs/sdk';

interface MyConfig {
  model: string;
  maxRetries: number;
}

const config = await useConfig<MyConfig>();
if (config) {
  logger.info(`using model ${config.model}`);
}
```

Auto-detection
If you omit productId, the hook reads manifest.configSection from the execution context (via globalThis.__KB_CONFIG_SECTION__, set by the runner). Most plugins never pass a product ID — the auto-detection does the right thing:
```typescript
// In a plugin whose manifest has `configSection: 'commit'`
const config = await useConfig<CommitConfig>();
// ↑ Reads `profiles[].products.commit` from kb.config.json
```

Explicit product and profile
```typescript
const mindConfig = await useConfig<MindConfig>('mind');
const prodConfig = await useConfig<WorkflowConfig>('workflow', 'production');
```

What you get back
useConfig returns only the product-specific slice, never the entire kb.config.json. This is a security boundary: one plugin can't read another plugin's config.
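Since the hook can resolve to `undefined` (and individual fields may be omitted in kb.config.json), pairing it with defaults is a common move. A sketch; `CommitConfig`, `withDefaults`, and the default values are illustrative:

```typescript
interface CommitConfig {
  model: string;
  maxRetries: number;
}

// Fill in defaults for anything the user's config slice doesn't set.
function withDefaults(config: Partial<CommitConfig> | undefined): CommitConfig {
  return {
    model: config?.model ?? 'small',
    maxRetries: config?.maxRetries ?? 3,
  };
}
```

Then something like `withDefaults(await useConfig<Partial<CommitConfig>>())` gives handler code a fully populated object to work with.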
Both the v2 profile-based structure and the legacy flat structure are supported:
```jsonc
// Profiles v2
{
  "profiles": [
    {
      "id": "default",
      "products": {
        "mind": { "scopes": [...] },
        "workflow": { "maxConcurrency": 10 }
      }
    }
  ]
}

// Legacy
{
  "knowledge": { "scopes": [...] }, // reads as `mind` product
  "workflow": { "maxConcurrency": 10 }
}
```

Availability quick reference
| Hook | Return type | Missing adapter |
|---|---|---|
| `usePlatform()` | `PlatformServices` | Always defined |
| `useLogger()` | `ILogger` | Always defined (fallback provided) |
| `useLLM(options?)` | `ILLM \| undefined` | `undefined` |
| `useCache()` | `ICache \| undefined` | `undefined` |
| `useStorage()` | `IStorage \| undefined` | `undefined` |
| `useVectorStore()` | `IVectorStore \| undefined` | `undefined` |
| `useEmbeddings()` | `IEmbeddings \| undefined` | `undefined` |
| `useAnalytics()` | `IAnalytics \| undefined` | `undefined` |
| `useConfig<T>(...)` | `Promise<T \| undefined>` | `undefined` |
Patterns
Cache-first, compute-fallback
```typescript
async function getOrCompute<T>(key: string, compute: () => Promise<T>, ttlMs = 60_000): Promise<T> {
  const cache = useCache();
  if (cache) {
    const hit = await cache.get<T>(key);
    if (hit !== null) { return hit; }
  }
  const value = await compute();
  await cache?.set(key, value, ttlMs);
  return value;
}
```

LLM-powered, deterministic fallback
```typescript
async function summarize(text: string): Promise<string> {
  const llm = useLLM({ tier: 'small' });
  if (!llm) {
    return text.slice(0, 200) + '…';
  }
  const response = await llm.complete(`Summarize in 1 sentence:\n\n${text}`);
  return response.content;
}
```

Scoped logging around an operation
```typescript
async function runRelease(version: string): Promise<void> {
  const logger = useLoggerWithContext({ operation: 'release', version });
  logger.info('started');
  try {
    await doRelease();
    logger.info('completed');
  } catch (err) {
    logger.error('failed', err instanceof Error ? err : undefined);
    throw err;
  }
}
```

Analytics without null checks
```typescript
import { trackAnalyticsEvent } from '@kb-labs/sdk';

async function execute(ctx, input) {
  const start = Date.now();
  try {
    const result = await doWork();
    await trackAnalyticsEvent('job_succeeded', {
      durationMs: Date.now() - start,
    });
    return result;
  } catch (err) {
    await trackAnalyticsEvent('job_failed', {
      durationMs: Date.now() - start,
      error: String(err),
    });
    throw err;
  }
}
```

Things that are not hooks (yet)
The SDK doesn't ship dedicated hooks for every platform service. The following are reachable only through usePlatform():
- `platform.eventBus` — publish/subscribe to platform events.
- `platform.workflows` — start/list/cancel workflow runs from inside a handler.
- `platform.jobs` — submit background jobs.
- `platform.cron` — register cron schedules at runtime.
- `platform.invoke` — call another plugin's handler.
- `platform.artifacts` — platform-level artifact storage (distinct from `ctx.api.artifacts`).
- `platform.resources` — resource management.
These may get dedicated hooks in a later SDK minor. Until then, reach through the singleton:
```typescript
const platform = usePlatform();
if (platform.eventBus) {
  await platform.eventBus.publish('my-event', { … });
}
```