Your First Workflow
Last updated April 7, 2026
Write a two-job workflow, run it via the CLI, inspect results in Studio.
Workflows are declarative specs that the workflow engine runs as jobs and steps. This guide walks through writing a minimal workflow with two jobs that depend on each other, running it, and seeing the results. By the end you'll understand the core model: trigger → jobs → steps → artifacts.
Reference material lives at Workflows → Overview, Workflows → Spec Reference, and Workflows → Patterns. This page is the hands-on walkthrough.
Prerequisites
- A running KB Labs workspace with the workflow daemon up (`kb-dev start workflow` or `kb-dev start`).
- The REST API also running (workflows are triggered through the gateway → workflow daemon path).
Check:

```shell
kb-dev status
# workflow should be 'running' and healthy
```

What we're building
A two-job workflow:
- `fetch` — runs `curl` to download a JSON file and saves it as an artifact.
- `process` — depends on `fetch`, reads the artifact, and prints a summary.
It's trivial work-wise, but it covers every core concept: trigger definition, inputs, jobs, steps, artifacts, dependencies, and the artifact produce/consume relationship.
Step 1 — Write the workflow spec
Create `workflows/hello-workflow.json` at the workspace root:

```json
{
  "name": "hello-workflow",
  "version": "1",
  "description": "Fetch a file and summarize it",
  "on": {
    "manual": true
  },
  "inputs": {
    "url": {
      "type": "string",
      "description": "URL to fetch",
      "required": true
    }
  },
  "jobs": {
    "fetch": {
      "runsOn": "sandbox",
      "timeoutMs": 60000,
      "steps": [
        {
          "name": "Download",
          "id": "download",
          "uses": "builtin:shell",
          "with": {
            "command": "curl -sf -o .kb/out/payload.json '${{ trigger.payload.url }}' && echo '::kb-output::{\"size\":'$(wc -c < .kb/out/payload.json)'}'"
          }
        }
      ],
      "artifacts": {
        "produce": ["payload"]
      }
    },
    "process": {
      "runsOn": "sandbox",
      "needs": ["fetch"],
      "timeoutMs": 30000,
      "artifacts": {
        "consume": ["payload"]
      },
      "steps": [
        {
          "name": "Summarize",
          "uses": "builtin:shell",
          "with": {
            "command": "cat .kb/out/payload.json | head -c 200"
          }
        }
      ]
    }
  }
}
```

Three things happen here:
- `on.manual: true` declares the workflow can be triggered from the CLI or Studio.
- `inputs.url` is a typed, required string. Every run must provide one.
- `fetch` runs first (no dependencies); `process` waits for it (`"needs": ["fetch"]`).
The `${{ trigger.payload.url }}` interpolation inside the shell command resolves to the URL the run was triggered with. The `::kb-output::{"size":N}` marker makes the file size available as `steps.download.outputs.size` for any later step that wants it.
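To make the marker protocol concrete, here is a minimal Python sketch of how structured outputs could be pulled out of shell stdout. This is an illustration only, not the engine's actual parser; the `::kb-output::` prefix is the only detail taken from this guide:

```python
import json

MARKER = "::kb-output::"

def extract_outputs(stdout: str) -> dict:
    """Collect JSON payloads from marker lines; later keys override earlier ones."""
    outputs = {}
    for line in stdout.splitlines():
        line = line.strip()
        if line.startswith(MARKER):
            outputs.update(json.loads(line[len(MARKER):]))
    return outputs

stdout = 'downloading...\n::kb-output::{"size": 12456}\n'
print(extract_outputs(stdout))  # {'size': 12456}
```

Ordinary stdout lines pass through untouched; only lines starting with the marker contribute to the step's outputs.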
See Workflows → Spec Reference for every field.
Step 2 — Trigger the workflow
```shell
pnpm kb workflow:run --workflow-id=hello-workflow \
  --inputs='{"url":"https://raw.githubusercontent.com/github/explore/main/topics/nodejs/index.md"}'
```

This POSTs to the workflow daemon, which validates the inputs against the schema, creates a run, and schedules the first job.
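The validation step can be pictured with a short sketch: check the trigger payload against the `inputs` schema from Step 1. This is illustrative Python, not the daemon's real implementation, and it covers only a subset of plausible input types:

```python
def validate_inputs(schema: dict, payload: dict) -> list[str]:
    """Return a list of validation errors (empty means the payload is valid)."""
    errors = []
    for name, spec in schema.items():
        if name not in payload:
            if spec.get("required"):
                errors.append(f"missing required input: {name}")
            continue
        # Map the spec's type names onto Python types (illustrative subset).
        expected = {"string": str, "number": (int, float), "boolean": bool}[spec["type"]]
        if not isinstance(payload[name], expected):
            errors.append(f"input {name!r} should be a {spec['type']}")
    return errors

schema = {"url": {"type": "string", "required": True}}
print(validate_inputs(schema, {"url": "https://example.com/a.json"}))  # []
print(validate_inputs(schema, {}))  # ['missing required input: url']
```

If the error list is non-empty, the daemon would reject the trigger before creating a run.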
You'll see the run ID printed:
```
✓ Run created: run_abc123
```

Step 3 — Watch the run
Option A — CLI polling:
```shell
pnpm kb workflow:status --run-id=run_abc123
```

Shows the current state of the run, its jobs, and its steps. Keep running it to watch state transitions.
Option B — Studio UI:
Open http://localhost:3000 (default Studio port), navigate to Workflows, and click on your run. You'll see:
- Run header — name, version, trigger type, status, duration.
- Job graph — `fetch` → `process` with live status indicators.
- Step list — per-step status, outputs, duration.
- Live logs — stdout/stderr from each shell step, streamed in real time.
- Artifacts — any files produced by the run.
Option C — tail the workflow daemon logs:
```shell
kb-dev logs workflow -f
```

The correlation IDs (`runId`, `jobId`, `stepId`) appear in every log line, making it easy to grep for a specific run.
Step 4 — Inspect the result
Once the run finishes (should take a few seconds):
```shell
pnpm kb workflow:get --run-id=run_abc123
```

Output:
```json
{
  "id": "run_abc123",
  "status": "success",
  "jobs": [
    {
      "jobName": "fetch",
      "status": "success",
      "durationMs": 342,
      "steps": [
        {
          "name": "Download",
          "id": "download",
          "status": "success",
          "outputs": { "size": 12456, "stdout": "...", "exitCode": 0, "ok": true }
        }
      ]
    },
    {
      "jobName": "process",
      "status": "success",
      "durationMs": 87,
      "steps": [
        {
          "name": "Summarize",
          "status": "success",
          "outputs": { "stdout": "# Node.js...\n...", "exitCode": 0, "ok": true }
        }
      ]
    }
  ]
}
```

Step 5 — Make it fail
Pass a URL that doesn't exist:
```shell
pnpm kb workflow:run --workflow-id=hello-workflow \
  --inputs='{"url":"https://does-not-exist.example.com/missing.json"}'
```

The run will fail at the `fetch` job because `curl` returns a non-zero exit code. The `process` job is marked skipped because its dependency failed.
Inspect the failure:
```shell
pnpm kb workflow:get --run-id=<run-id>
```

You'll see:

- `status: "failed"` on the run.
- `status: "failed"` on the `fetch` job, with a non-zero exit code in the step outputs.
- `status: "skipped"` on `process`, with a pending-dependency reason.
This is the default behavior — dependents are skipped when a dependency fails. For retry or recovery patterns, see Workflows → Retries & Error Handling.
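The skip rule is easy to sketch: once a job fails, every job reachable from it through `needs` edges is skipped. A minimal Python illustration (not the scheduler's real code; the `report` job is hypothetical, added to show transitive skipping):

```python
def propagate_skips(needs: dict[str, list[str]], failed: str) -> set[str]:
    """Return the jobs to skip after `failed` fails: all transitive dependents."""
    skipped = set()
    changed = True
    while changed:  # iterate until no new dependents are discovered
        changed = False
        for job, deps in needs.items():
            if job in skipped:
                continue
            if failed in deps or skipped & set(deps):
                skipped.add(job)
                changed = True
    return skipped

# The two-job graph from this guide, plus a hypothetical `report` job.
needs = {"fetch": [], "process": ["fetch"], "report": ["process"]}
print(sorted(propagate_skips(needs, "fetch")))  # ['process', 'report']
```

Note that `report` is skipped even though it does not depend on `fetch` directly; the skip propagates through `process`.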
Step 6 — Add a retry policy
Edit the spec to add retries to the `fetch` job:

```json
{
  "jobs": {
    "fetch": {
      "runsOn": "sandbox",
      "timeoutMs": 60000,
      "retries": {
        "max": 3,
        "backoff": "exp",
        "initialIntervalMs": 2000
      },
      "steps": [
        {
          "name": "Download",
          "uses": "builtin:shell",
          "with": {
            "command": "curl -sf -o .kb/out/payload.json '${{ trigger.payload.url }}'"
          }
        }
      ],
      "artifacts": {
        "produce": ["payload"]
      }
    }
  }
}
```

Re-run with a flaky URL and watch the retries in the daemon logs. Exponential backoff: 2s → 4s → 8s between attempts.
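The retry delays follow directly from the policy fields. A sketch of how `initialIntervalMs` and `"backoff": "exp"` would translate into wait times (illustrative only; the real engine may cap or jitter these):

```python
def backoff_delays_ms(max_retries: int, initial_ms: int, backoff: str = "exp") -> list[int]:
    """Delay before each retry attempt: 'exp' doubles each time, else fixed."""
    if backoff == "exp":
        return [initial_ms * 2 ** i for i in range(max_retries)]
    return [initial_ms] * max_retries

print(backoff_delays_ms(3, 2000))  # [2000, 4000, 8000]
```

With `"max": 3` and `"initialIntervalMs": 2000`, that reproduces the 2s → 4s → 8s schedule described above.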
Step 7 — Use workflow outputs
Step outputs from earlier steps are available via `${{ steps.<id>.outputs.<key> }}` interpolation:

```json
{
  "jobs": {
    "fetch": {
      "steps": [
        {
          "name": "Download",
          "id": "download",
          "uses": "builtin:shell",
          "with": {
            "command": "curl -sf -o .kb/out/payload.json '${{ trigger.payload.url }}' && echo '::kb-output::{\"size\":'$(wc -c < .kb/out/payload.json)'}'"
          }
        },
        {
          "name": "Check size",
          "if": "${{ steps.download.outputs.size > 1000 }}",
          "uses": "builtin:shell",
          "with": {
            "command": "echo File is large: ${{ steps.download.outputs.size }} bytes"
          }
        }
      ]
    }
  }
}
```

The `::kb-output::` marker extracts structured data from shell stdout into step outputs. The second step uses it in both an `if` condition and a string interpolation.
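A rough sketch of how `${{ … }}` expressions could be resolved against run state. This simplified Python illustration handles plain dotted-path lookups only; an `if` comparison like `size > 1000` would additionally need an expression evaluator, which the real engine presumably has:

```python
import re

def interpolate(template: str, context: dict) -> str:
    """Replace each ${{ dotted.path }} with the matching value from `context`."""
    def resolve(match: re.Match) -> str:
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]  # walk the context one key at a time
        return str(value)
    return re.sub(r"\$\{\{([^}]+)\}\}", resolve, template)

context = {"steps": {"download": {"outputs": {"size": 12456}}}}
print(interpolate("File is large: ${{ steps.download.outputs.size }} bytes", context))
# File is large: 12456 bytes
```

The same mechanism explains the earlier `${{ trigger.payload.url }}` substitution: the trigger payload is just another branch of the context.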
What's next
Try these extensions to understand more of the workflow engine:
- Add a third job that runs in parallel with `process`, also depending on `fetch`. Both should start at the same time once `fetch` completes.
- Add a conditional job with `if: "${{ trigger.payload.dryRun == 'true' }}"` that only runs when a `dryRun` input is true.
- Add an approval step using `builtin:approval` before the `process` job. The run will pause waiting for someone to approve via Studio.
- Add a cron trigger (`on: { schedule: { cron: '0 * * * *' } }`) so the workflow runs every hour automatically.
- Call a plugin handler from a step by changing `uses` from `builtin:shell` to `plugin:<plugin-id>:<handler-id>`.
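For instance, the cron extension could look like this spec fragment. It is a sketch that assumes `manual` and `schedule` triggers can coexist in one `on` block; check the Spec Reference for the exact shape:

```json
{
  "name": "hello-workflow",
  "version": "1",
  "on": {
    "manual": true,
    "schedule": { "cron": "0 * * * *" }
  }
}
```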
Each of these is documented in the references linked below.
What to read next
- Workflows → Overview — the complete conceptual picture.
- Workflows → Spec Reference — every field in `WorkflowSpec`.
- Workflows → Jobs and Steps — deeper reference on each.
- Workflows → Artifacts — produce, consume, merge.
- Workflows → Gates & Approvals — human-in-the-loop patterns.
- Workflows → Patterns — end-to-end recipes for common shapes.
- Services → Workflow Daemon — the service running everything.