Build agents at the speed of spec
Define workflows in YAML. Wire tools in any language. Run from one command.
See it in action
Three steps to your first agent
Define
Describe your workflow in YAML or JSON using the Open Agent Specification.
# flow.yaml
nodes:
  - name: start
    type: StartNode
  - name: agent
    type: AgentNode
    agent:
      tools: [web_search]
      llm:
        type: OpenAiConfig
        model: gpt-4o
  - name: end
    type: EndNode

Wire Tools
Tools are standalone executables. Write them in any language — no SDK required.
#!/usr/bin/env python3
import sys, json

args = json.load(sys.stdin)
query = args.get("query", "")
results = search(query)  # search() is your own implementation
json.dump({"results": results}, sys.stdout)

Run
Compile, validate, and execute — all from one command.
$ specrun run flow.yaml \
--tools-dir ./tools \
--input '{"query": "quantum computing"}'
▸ Starting flow: research-assistant
▸ Agent calling: web_search
▸ Agent calling: web_search
▸ Flow complete
{"result": "Quantum computing uses..."}

Everything you need
Declarative Workflows
Define multi-step agent flows in YAML/JSON. No boilerplate code.
Graph-Based Execution
Flows compile into directed graphs with control and data flow edges, validated before running.
LLM Tool-Calling Loop
Agents autonomously call tools in a loop until the task is done. Up to 10 rounds per node.
Any-Language Tools
Tools are subprocesses that speak JSON over stdin/stdout. Use Bash, Python, Go — anything.
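For instance, a complete tool can be a few lines of Bash. This is a sketch under the stated contract (JSON arguments on stdin, a JSON result on stdout); the tool itself — a toy word counter — is made up for illustration, and a real tool would parse its arguments with something like jq:

```shell
#!/usr/bin/env bash
# Toy tool: count the words in whatever JSON the runner pipes in.
# Contract: JSON arguments on stdin, JSON result on stdout.
args=$(cat)                                        # read the piped-in arguments
count=$(printf '%s' "$args" | wc -w | tr -d ' ')   # naive word count
printf '{"word_count": %s}\n' "$count"             # emit the JSON result
```

Drop the script into your tools directory, mark it executable, and it behaves like any other tool — no SDK, no wrapper.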
Provider Agnostic
Works with OpenAI, vLLM, Ollama, or any OpenAI-compatible endpoint.
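Switching providers is a config change, not a code change. As a sketch, pointing the agent node's `llm` block at a local Ollama server might look like the following — the `base_url` field name is an assumption here; check the Open Agent Specification for the exact key:

```yaml
# Hypothetical fragment — field names may differ in the Open Agent Spec.
llm:
  type: OpenAiConfig
  model: llama3.1
  base_url: http://localhost:11434/v1   # any OpenAI-compatible endpoint
```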
Interactive Chat Mode
Debug and explore flows with persistent multi-turn conversations via --chat.
From zero to agent in 30 seconds
# Install via npm
$ npm install -g @specrun/cli
# Or via Homebrew
$ brew install spichen/tap/specrun
# Create and run your first agent
$ specrun init my-agent
$ specrun run my-agent/flow.json --tools-dir my-agent/tools