# How Workflows Work
A workflow is a directed acyclic graph (DAG) of nodes connected by edges, with a single entry point. Each node gives Claude an instruction and a set of skills (tools). Edges define the flow between nodes, optionally with natural-language conditions that Claude evaluates at runtime.
This is SWEny’s core abstraction. Instead of writing procedural automation scripts, you declare what should happen at each step and when to move between steps. Claude handles the execution.
## Anatomy of a workflow

Every workflow has five parts:
| Field | Purpose |
|---|---|
| id | Unique identifier (used in CLI commands and exports) |
| name | Human-readable display name |
| entry | The node where execution begins |
| nodes | A map of node definitions, each with an instruction and skills |
| edges | Connections between nodes, optionally with when conditions |
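Concretely, a minimal workflow value might look like the sketch below. The ids, instructions, and skill names are hypothetical, not taken from the built-in workflows:

```typescript
// A minimal, hypothetical workflow value matching the fields above:
// one entry node, one downstream node, one conditional edge.
const helloTriage = {
  id: "hello-triage",
  name: "Hello Triage",
  description: "Look at an alert and decide whether to file an issue.",
  entry: "investigate",
  nodes: {
    investigate: {
      name: "Investigate",
      instruction: "Determine whether the alert is actionable.",
      skills: ["github"],
    },
    create_issue: {
      name: "Create issue",
      instruction: "Open a GitHub issue summarizing the findings.",
      skills: ["github"],
    },
  },
  edges: [
    { from: "investigate", to: "create_issue", when: "The alert is actionable" },
  ],
};
```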
Here are the TypeScript interfaces from @sweny-ai/core:
```ts
interface Workflow {
  id: string;
  name: string;
  description: string;
  nodes: Record<string, Node>;
  edges: Edge[];
  entry: string;
}

interface Node {
  name: string;
  instruction: string;
  skills: string[];
  output?: JSONSchema;
}

interface Edge {
  from: string;
  to: string;
  when?: string;
}
```

## Execution model
The executor walks the graph node-by-node, starting at the entry node, until it reaches a terminal node (one with no outgoing edges).
Step-by-step:
1. **Start at the entry node.** The executor looks up `workflow.entry` in the node map.
2. **Build context.** Claude receives the node’s `instruction`, the workflow input, and the accumulated results from all prior nodes.
3. **Resolve tools.** The executor gathers tools from every skill listed in the node’s `skills` array.
4. **Claude executes.** Claude runs the instruction, calling tools as needed (querying APIs, reading files, creating issues, etc.).
5. **Collect the result.** Claude returns a `NodeResult` with a status (`success`, `skipped`, or `failed`), arbitrary data, and a record of all tool calls made.
6. **Route to the next node.** The executor evaluates outgoing edges. If there is a single unconditional edge, it follows it. If there are conditional edges, Claude evaluates the `when` clauses against the current results and picks a path.
7. **Repeat** until reaching a terminal node.
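The loop above can be sketched as follows. `runNode` and `chooseEdge` stand in for the Claude calls and are hypothetical names, not the real executor API; `NodeResult` is simplified here to keep the sketch self-contained:

```typescript
interface Edge { from: string; to: string; when?: string }
interface NodeResult {
  status: "success" | "skipped" | "failed";
  data: Record<string, unknown>;
}

// Walk the graph from `entry` until a node with no outgoing edges.
// `runNode` executes one node; `chooseEdge` picks among outgoing edges
// (in the real executor, Claude evaluates the `when` clauses).
function execute(
  entry: string,
  edges: Edge[],
  runNode: (node: string) => NodeResult,
  chooseEdge: (candidates: Edge[], result: NodeResult) => Edge,
): Record<string, NodeResult> {
  const results: Record<string, NodeResult> = {};
  let current = entry;
  for (;;) {
    const result = runNode(current);
    results[current] = result;
    const outgoing = edges.filter((e) => e.from === current);
    if (outgoing.length === 0) break; // terminal node: stop
    const next =
      outgoing.length === 1 && !outgoing[0].when
        ? outgoing[0] // single unconditional edge: follow it directly
        : chooseEdge(outgoing, result); // otherwise, a routing decision
    current = next.to;
  }
  return results;
}
```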
```ts
interface NodeResult {
  status: "success" | "skipped" | "failed";
  data: Record<string, unknown>;
  toolCalls: ToolCall[];
}
```

## Conditional routing
Edges can have a `when` clause written in natural language. At runtime, Claude evaluates each condition against the current node’s result and picks the matching path.
```yaml
edges:
  - from: investigate
    to: create_issue
    when: "The issue is novel (not a duplicate) and severity is medium or higher"
  - from: investigate
    to: skip
    when: "The issue is a duplicate of an existing ticket, or severity is low"
```

The executor presents all outgoing conditions as choices and asks Claude to pick one. If an edge has no `when` clause, it acts as a default/unconditional path.
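One way to picture that presentation step: the executor turns the outgoing edges into a numbered choice list for Claude. The exact prompt wording here is an illustration, not the real executor's format:

```typescript
interface Edge { from: string; to: string; when?: string }

// Build the choice list the executor might hand to Claude.
// Edges with a `when` clause show their condition; a clause-less
// edge is labeled as the default path.
function describeChoices(outgoing: Edge[]): string[] {
  return outgoing.map((e, i) =>
    e.when
      ? `${i}: go to "${e.to}" when: ${e.when}`
      : `${i}: go to "${e.to}" (default path)`,
  );
}
```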
## Structured output
Nodes can declare an output schema (JSON Schema). When present, Claude’s response is validated against the schema, producing structured data that downstream nodes and routing conditions can reference.
```yaml
investigate:
  name: Root Cause Analysis
  instruction: "Classify every distinct issue as novel or duplicate..."
  skills: [github, linear]
  output:
    type: object
    properties:
      findings:
        type: array
        items:
          type: object
          properties:
            title: { type: string }
            root_cause: { type: string }
            severity: { type: string, enum: [critical, high, medium, low] }
            is_duplicate: { type: boolean }
            fix_complexity: { type: string, enum: [simple, moderate, complex] }
          required: [title, root_cause, severity, is_duplicate]
      novel_count: { type: number }
      highest_severity: { type: string, enum: [critical, high, medium, low] }
      recommendation: { type: string }
    required: [findings, novel_count, highest_severity, recommendation]
```

This is how the triage workflow’s conditional routing works: the `investigate` node outputs a `findings` array where each item is classified as novel or duplicate. The `novel_count` and `highest_severity` fields drive edge conditions.
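A real executor would use a full JSON Schema validator; as a rough sketch of the idea, checking just the schema's top-level `required` keys against a node's output might look like this (the function name is hypothetical):

```typescript
// Minimal sketch: report which top-level keys marked `required` by the
// schema are missing from a node's structured output. A production
// implementation would also validate types, enums, and nested objects.
function missingRequiredKeys(
  output: Record<string, unknown>,
  required: string[],
): string[] {
  return required.filter((key) => !(key in output));
}
```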
## Dry run
Pass `dryRun: true` (CLI: `--dry-run`, Action: `dry-run: true`) to run a workflow in analysis-only mode. The executor processes nodes normally — Claude queries logs, searches code, analyzes errors — but stops before any action that requires a routing decision.
Specifically: after each node completes, the executor checks outgoing edges. If any edge has a when condition (a conditional branch), execution stops and returns the results so far. Unconditional edges are followed normally because they represent analysis flow, not action decisions.
This is a hard gate enforced by the executor, not a prompt instruction. Claude cannot bypass it. The routing check is in the executor code itself — if dryRun is true and a conditional edge exists, the executor halts regardless of what Claude returns.
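Inside the executor loop, that gate reduces to a check along these lines (a sketch; the function name is illustrative, not the real code):

```typescript
interface Edge { from: string; to: string; when?: string }

// After a node completes, decide whether a dry run must halt here:
// any conditional outgoing edge means the next step is an action
// decision, so the executor stops and returns the results so far.
// Unconditional edges are analysis flow and are followed normally.
function shouldHaltDryRun(dryRun: boolean, outgoing: Edge[]): boolean {
  return dryRun && outgoing.some((e) => e.when !== undefined);
}
```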
In practice:
- **Triage workflow:** runs `prepare` → `gather` → `investigate`, then stops. You get the full investigation report but no issues are created, no PRs opened, no notifications sent.
- **Implement workflow:** runs `analyze`, then stops. You get the analysis and fix plan but no code changes are made.
## Execution events
The executor emits events at every stage. Pass an observer function to receive them in real time:
```ts
type ExecutionEvent =
  | { type: "workflow:start"; workflow: string }
  | { type: "node:enter"; node: string; instruction: string }
  | { type: "node:progress"; node: string; message: string }
  | { type: "tool:call"; node: string; tool: string; input: unknown }
  | { type: "tool:result"; node: string; tool: string; output: unknown }
  | { type: "node:exit"; node: string; result: NodeResult }
  | { type: "route"; from: string; to: string; reason: string }
  | { type: "workflow:end"; results: Record<string, NodeResult> };

type Observer = (event: ExecutionEvent) => void;
```

The CLI uses these events to render a live DAG visualization in your terminal. Studio uses them for live mode. You can also use them for logging, metrics, or custom integrations.
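As a sketch of a custom integration, an observer that collects tool calls and routing decisions into a trace might look like this (only the two relevant event variants are declared, so the snippet stands alone):

```typescript
// Subset of the event union, enough for a logging observer.
type ExecutionEvent =
  | { type: "tool:call"; node: string; tool: string; input: unknown }
  | { type: "route"; from: string; to: string; reason: string };

type Observer = (event: ExecutionEvent) => void;

// Append a human-readable line to `lines` for each event received.
function makeLoggingObserver(lines: string[]): Observer {
  return (event) => {
    if (event.type === "tool:call") {
      lines.push(`[${event.node}] tool: ${event.tool}`);
    } else {
      lines.push(`route ${event.from} -> ${event.to}: ${event.reason}`);
    }
  };
}
```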
## Validation
SWEny validates every workflow before execution. The `validateWorkflow()` function checks:
| Rule | Error code |
|---|---|
| Entry node must exist in the node map | MISSING_ENTRY |
| All edge `from` values must reference existing nodes | UNKNOWN_EDGE_SOURCE |
| All edge `to` values must reference existing nodes | UNKNOWN_EDGE_TARGET |
| No edge may reference the same node as both source and target | SELF_LOOP |
| All nodes must be reachable from the entry node (BFS) | UNREACHABLE_NODE |
| Referenced skills must exist in the skill catalog (optional) | UNKNOWN_SKILL |
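The UNREACHABLE_NODE rule is a plain breadth-first search from the entry node. A sketch of that check (the function name is illustrative, not the real validator API):

```typescript
interface Edge { from: string; to: string }

// Return the ids of nodes that BFS from `entry` never reaches.
function unreachableNodes(
  entry: string,
  nodeIds: string[],
  edges: Edge[],
): string[] {
  const seen = new Set<string>([entry]);
  const queue = [entry];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const e of edges) {
      if (e.from === current && !seen.has(e.to)) {
        seen.add(e.to);
        queue.push(e.to);
      }
    }
  }
  return nodeIds.filter((id) => !seen.has(id));
}
```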
You can validate a workflow file without running it:
```sh
sweny workflow validate my-workflow.yml
```

## Built-in workflows
SWEny ships with two built-in workflows:
- Triage — investigate a production alert, determine root cause, create an issue, and notify the team.
- Implement — analyze an issue, implement a fix, open a PR, and notify the team.
You can export either as YAML and use it as a starting point for customization:
```sh
sweny workflow export triage > my-triage.yml
sweny workflow export implement > my-implement.yml
```

## Visualize a workflow
Any workflow can be rendered as a Mermaid diagram — great for PR descriptions, runbooks, or wherever GitHub/GitLab render Mermaid natively:
```sh
# Fenced markdown, paste-ready
sweny workflow diagram my-workflow.yml

# Raw .mmd file for mmdc or the Mermaid Live Editor
sweny workflow diagram my-workflow.yml -o diagram.mmd
```

See `sweny workflow diagram` for all options.
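Under the hood, turning a workflow's edges into Mermaid is mostly string assembly. A rough sketch of the idea (not the actual diagram command's output format):

```typescript
interface Edge { from: string; to: string; when?: string }

// Emit a Mermaid flowchart where conditional edges carry their
// `when` clause as an edge label.
function toMermaid(edges: Edge[]): string {
  const lines = ["flowchart TD"];
  for (const e of edges) {
    lines.push(
      e.when
        ? `  ${e.from} -->|"${e.when}"| ${e.to}`
        : `  ${e.from} --> ${e.to}`,
    );
  }
  return lines.join("\n");
}
```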
## What’s next?
- Custom Workflows — build workflows from natural language, YAML, or Studio
- Triage Workflow — the built-in alert investigation workflow
- Implement Workflow — the built-in issue-to-PR workflow
- YAML Reference — full schema reference