Agent Orchestration with NoETL
NoETL can serve as both a registry and orchestrator for AI agents, where each agent is defined as a NoETL playbook. More broadly, NoETL is evolving toward a distributed business operating system: a shell, catalog, scheduler, execution fabric, event log, and observability plane for business tasks described in the NoETL DSL.
The analogy is intentional. A distributed operating system presents many networked nodes as one coherent system. NoETL applies the same idea to business execution: many playbooks, workers, credentials, AI tools, APIs, and Kubernetes resources appear to users as one programmable workspace for running work.
NoETL as a Distributed Business Operating System
In NoETL, the "process" is a playbook execution. The "program" is a DSL playbook. The "shell" is the CLI, GUI prompt, API, or scheduler that launches work. The "kernel services" are catalog registration, dispatch, credentials, event sourcing, execution projections, and worker coordination.
| Distributed OS Concept | NoETL Business OS Equivalent |
|---|---|
| Single system image | One catalog and prompt surface across server, workers, schedules, and APIs |
| Process management | Playbook execution, status, cancellation, rerun, and supervision |
| Inter-process communication | NATS subjects, nested playbooks, event streams, and result references |
| Resource management | Kubernetes workers, credentials, external systems, storage, and AI models |
| Transparency | Users run noetl tasks without manually choosing every worker or service |
| Reliability | Event sourcing in noetl.event, command projection in noetl.command, and execution projection in noetl.execution |
| Scheduling | Cron, event-driven triggers, external API calls, and user prompt commands |
| Observability | Execution dashboard, event detail, diagnostics, reports, and AI-assisted analysis |
NoETL does not replace Kubernetes. Kubernetes is the substrate for container scheduling and cluster operations. NoETL sits above it as the domain operating layer for business work: discovering tasks, binding credentials, invoking tools, coordinating AI agents, and explaining what happened.
The NoETL Prompt
The NoETL prompt is the interactive shell for that workspace:
```
noetl@kind:/execution$ run fixtures/playbooks/hello_world
noetl@kind:/execution$ executions failed
noetl@kind:/execution$ report 612955956145554347
noetl@kind:/execution$ fix 612955956145554347
```
The prompt should always show the active context, such as local kind, a server API, or a gateway-backed environment. Over time it becomes the primary operating interface for users and agents to run playbooks, inspect status, retrieve reports, schedule work, and repair failed executions.
Why NoETL Maps to Agent Orchestration
NoETL already has the primitives that multi-agent orchestration requires:
| NoETL Primitive | Agent Orchestration Equivalent |
|---|---|
| Playbook | Agent definition (instructions, tools, constraints) |
| Steps with tools (http, python, container) | Agent tool calls (Bash, Read, Write, API calls) |
| Iterator / loops | Multi-turn agent reasoning loops |
| NATS messaging | Inter-agent communication |
| Server dispatch / Worker execution | Registry (server knows agents) / Runtime (worker runs them) |
| render_context / variables | Agent context and memory injection |
| Credential caching / keychain | Agent authentication for external services |
| Nested playbooks | Agent-to-agent delegation |
| Conditionals | Branching based on agent results |
What an Agent Registry Needs
A minimal agent registry provides five capabilities:
1. Registration — declare that an agent exists, what it does, and what tools it has
2. Discovery — find agents by capability or name
3. Invocation — start an agent with a goal and context
4. State tracking — know whether an agent is running, idle, succeeded, or failed
5. Communication — let agents exchange results or hand off work
NoETL's server already handles capabilities 1 through 4 for playbooks. NATS provides capability 5.
How NoETL Covers Each
Registration: Every playbook deployed to the server is registered. Agent playbooks keep `kind: Playbook` in their YAML for DSL execution, but can be stored in `noetl.catalog` with catalog kind `agent`. Adding `metadata.agent: true` and `metadata.capabilities` makes them discoverable specifically as agents.
Discovery: The server tracks versioned resources through noetl.catalog.kind and noetl.resource.name. Resource kinds include playbook, agent, mcp, memory, and credential, with capability filters for agent discovery.
Invocation: noetl exec <playbook> --set key=value already starts a playbook with input parameters. An agent invocation is the same call, with resource_kind: "agent" when the catalog entry is registered as an agent.
State tracking: noetl status shows running, completed, and failed executions. Agent status is playbook execution status.
Communication: NATS subjects allow playbooks to publish results and subscribe to each other's outputs. A parent playbook can aggregate results from child agent playbooks.
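The discovery and invocation calls above can be sketched as plain request builders. The `/catalog/agents/list` path is the one named later in this document; the payload field names (`path`, `params`, `playbook`, `input`) are illustrative assumptions, not the NoETL API.

```python
# Sketch of agent discovery and invocation requests against the NoETL
# server API. Field names are assumptions for illustration; only
# /catalog/agents/list and resource_kind: "agent" come from this doc.

def discovery_request(capability: str) -> dict:
    """Request for listing agents that declare a given capability."""
    return {"path": "/catalog/agents/list", "params": {"capability": capability}}

def invocation_request(agent_path: str, payload: dict) -> dict:
    """Body for an execute call targeting a catalog entry registered
    as an agent (resource_kind: "agent")."""
    return {
        "path": "/execute",
        "body": {
            "playbook": agent_path,
            "resource_kind": "agent",
            "input": payload,
        },
    }

disc = discovery_request("code-review")
req = invocation_request("code-reviewer", {"repo": "acme/app", "pr_number": 42})
```

The point of the sketch is that agent invocation needs no new machinery: it is the existing playbook execute call plus a catalog kind.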
Agent-as-Playbook: What It Looks Like
A Single Agent Playbook
An agent playbook wraps AI reasoning (via the Claude Agent SDK) into a NoETL step:
```yaml
kind: Playbook
name: code-reviewer
description: Reviews pull requests using Claude
metadata:
  agent: true
  capabilities:
    - code-review
    - security-audit
  model: claude-sonnet
input:
  repo: string
  pr_number: integer
steps:
  - name: fetch_pr_diff
    tool: http
    action: GET
    url: "https://api.github.com/repos/{{ input.repo }}/pulls/{{ input.pr_number }}"
    headers:
      Authorization: "Bearer {{ credentials.github_token }}"
  - name: analyze_code
    tool: python
    script: |
      from anthropic_agent_sdk import query
      review = await query(
          prompt=f"Review this PR diff for bugs and security issues:\n{fetch_pr_diff.output.diff}",
          model="claude-sonnet-4-20250514",
          system="You are a senior code reviewer. Be concise and actionable."
      )
      return {"approved": review.approved, "comments": review.comments}
  - name: post_review
    tool: http
    action: POST
    url: "https://api.github.com/repos/{{ input.repo }}/pulls/{{ input.pr_number }}/reviews"
    headers:
      Authorization: "Bearer {{ credentials.github_token }}"
    body:
      event: "{{ 'APPROVE' if analyze_code.output.approved else 'REQUEST_CHANGES' }}"
      body: "{{ analyze_code.output.comments }}"
```
The agent is a regular playbook. The AI reasoning happens in a python step that calls the Claude Agent SDK. Everything else (HTTP calls, credential management, error handling) uses existing NoETL tools.
Multi-Agent Orchestration via Parent Playbooks
A parent playbook coordinates multiple agent-playbooks, the same way NoETL already coordinates nested playbooks:
```yaml
kind: Playbook
name: release-coordinator
description: Orchestrates a full release cycle using specialized agents
metadata:
  agent: true
  capabilities:
    - release-management
input:
  repo: string
  pr_number: integer
  target_env: string
steps:
  - name: review_code
    playbook: code-reviewer
    input:
      repo: "{{ input.repo }}"
      pr_number: "{{ input.pr_number }}"
  - name: run_tests
    playbook: test-runner
    input:
      repo: "{{ input.repo }}"
      branch: main
  - name: check_results
    tool: python
    script: |
      approved = review_code.output.approved
      tests_passed = run_tests.output.all_passed
      return {"ready": approved and tests_passed}
  - name: deploy
    playbook: deployer
    condition: "{{ check_results.output.ready }}"
    input:
      repo: "{{ input.repo }}"
      target: "{{ input.target_env }}"
  - name: notify
    tool: http
    action: POST
    url: "{{ credentials.slack_webhook }}"
    body:
      text: "Release {{ input.repo }} PR #{{ input.pr_number }} → {{ input.target_env }}: {{ 'deployed' if check_results.output.ready else 'blocked' }}"
```
The agents do not need to know about each other. The parent playbook handles sequencing, conditionals, and result aggregation. This is standard NoETL orchestration.
Architecture
```
┌─────────────────────────┐
│      NoETL Server       │
│    (Agent Registry)     │
│                         │
│  Deployed playbooks:    │
│ - code-reviewer [agent] │
│ - test-runner   [agent] │
│ - deployer      [agent] │
│ - release-coord [agent] │
└──────────┬──────────────┘
           │ dispatch via NATS
┌──────────▼──────────────┐
│      NoETL Workers      │
│     (Agent Runtime)     │
│                         │
│ Execute agent playbooks │
│ Call ADK/LangChain SDKs │
│ Run tools (http, py..)  │
└──────────┬──────────────┘
           │ results via NATS
┌──────────▼──────────────┐
│          NATS           │
│   (Agent Message Bus)   │
│                         │
│ Inter-agent results     │
│ Status updates          │
│ Event streaming         │
└─────────────────────────┘
```
How It Maps to Existing NoETL Components
| Component | Current Role | Agent Role |
|---|---|---|
| Server | Receives playbook executions, dispatches to workers | Agent registry: knows all agents, their capabilities, dispatches invocations |
| Worker | Executes playbook steps using tools | Agent runtime: executes agent logic including LLM calls |
| NATS | Message transport between server and workers | Agent bus: inter-agent communication, status, results |
| Gateway | External API (GraphQL/REST) | Agent API: external systems invoke agents via HTTP |
| PostgreSQL | Execution state, history | Agent state: execution history, audit trail |
| Credentials/Keychain | API keys for tools | Agent auth: LLM API keys, service credentials |
What Needs to Be Built
| Component | Exists Today | What Is Needed |
|---|---|---|
| Playbook execution engine | Yes | No change needed |
| Agent metadata in playbooks | Yes | Implemented via metadata.agent and metadata.capabilities extraction on catalog registration |
| Discovery by capability | Yes | Implemented via /catalog/list filters and /catalog/agents/list endpoint |
| Catalog resource kinds | Yes | noetl.resource defines playbook, agent, mcp, memory, and credential; playbook and agent are executable |
| ADK/LangChain bridge tool | Yes (initial) | Implemented as `tool.kind: agent` with `framework` set to `adk`, `langchain`, or `custom` |
| MCP bridge tool | Yes (initial) | Implemented as tool.kind: mcp so agent playbooks can call MCP servers while retaining execution/event traceability |
| Agent memory read/write | No | Steps that read/write to ai-meta memory/ or NATS KV store |
| Inter-agent messaging | Partially (NATS exists) | Standardize a message schema for agent-to-agent results |
| Agent status in dashboard | Partially (noetl status) | Extend to show agent-specific metadata and capabilities |
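One possible shape for the inter-agent message schema that the table above calls out as a remaining gap. This is a minimal sketch: the field names and the `agent.result.*` subject convention are assumptions, not an existing NoETL contract.

```python
# Sketch of a standardized agent-to-agent result message for NATS.
# All names here are proposed, not part of the current NoETL schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AgentResult:
    execution_id: str                 # NoETL execution that produced the result
    agent: str                        # catalog path/name of the producing agent
    status: str                       # "succeeded" or "failed"
    output: dict = field(default_factory=dict)

    def subject(self) -> str:
        # Assumed subject convention, e.g. "agent.result.code-reviewer"
        return f"agent.result.{self.agent}"

    def to_bytes(self) -> bytes:
        # Serialized payload a worker would publish on NATS
        return json.dumps(asdict(self)).encode()

msg = AgentResult("612955956145554347", "code-reviewer", "succeeded",
                  {"approved": True})
```

A fixed envelope like this would let a parent playbook aggregate child results without knowing each agent's internal output shape.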
The Critical New Piece: tool.kind: agent
NoETL now has a first-class agent runtime bridge. Instead of writing raw Python to call SDKs directly, an agent step can be declared like:
```yaml
- step: analyze
  tool:
    kind: agent
    framework: langchain
    entrypoint: "agents.reviewer:build_chain"
    entrypoint_mode: factory
    payload:
      goal: "Review this diff: {{ fetch_pr.output.diff }}"
```
For ADK-style runners, pass keyword payload fields such as `user_id`, `session_id`, and `new_message`; the runtime bridge maps payload keys to the callable signature and materializes async generator event streams into step output.
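The payload-to-signature mapping described above can be sketched with `inspect.signature`. This is a minimal illustration, assuming the bridge drops payload keys the entrypoint does not accept; the helper and entrypoint names are hypothetical.

```python
# Sketch: map a step's payload dict onto an entrypoint's keyword
# parameters, as an agent runtime bridge might do. Filtering rule
# (drop unknown keys unless **kwargs is accepted) is an assumption.
import inspect

def bind_payload(func, payload: dict) -> dict:
    """Return the subset of payload the callable can accept."""
    params = inspect.signature(func).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(payload)          # callable takes **kwargs: pass everything
    return {k: v for k, v in payload.items() if k in params}

# Hypothetical ADK-style entrypoint
def run_agent(user_id: str, new_message: str, session_id: str = "default"):
    return f"{user_id}/{session_id}: {new_message}"

kwargs = bind_payload(run_agent, {"user_id": "u1", "new_message": "hi",
                                  "extra": "ignored"})
result = run_agent(**kwargs)   # → "u1/default: hi"
```

The same binding step is where an async generator entrypoint would instead be iterated to completion and its events collected into the step output.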
MCP Servers as Agent Tools
MCP servers should be reached through playbook execution rather than direct GUI calls. In this pattern, the playbook is the agent and kind: mcp is one of its tools:
```yaml
metadata:
  name: kubernetes_runtime_agent
  path: automation/agents/kubernetes/runtime
  agent: true
  capabilities:
    - kubernetes-observability
    - mcp:kubernetes
workflow:
  - step: call_mcp
    tool:
      kind: mcp
      server: kubernetes
      endpoint: "{{ workload.endpoint }}"
      method: "{{ workload.method }}"
      tool: "{{ workload.tool }}"
      arguments: "{{ workload.arguments }}"
```
The GUI terminal can translate `k8s pods noetl` into a normal /execute request for this playbook with `resource_kind: "agent"`. The Kubernetes MCP server is then called by the worker, and every activity is tracked as NoETL execution state: `noetl.event` is the event log, `noetl.command` is the worker command projection, and `noetl.execution` is the execution-state projection.
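As a sketch, the terminal-command-to-execute-request translation might look like the following. The command grammar, the `pods_list` tool name, and all field names except `resource_kind` are illustrative assumptions.

```python
# Hypothetical sketch: turn a GUI terminal command like "pods noetl"
# into an /execute request for the kubernetes_runtime_agent playbook.
def mcp_execute_request(command: str) -> dict:
    tool_name, *args = command.split()
    return {
        "path": "automation/agents/kubernetes/runtime",  # agent catalog path
        "resource_kind": "agent",
        "input": {
            "endpoint": "tools/call",          # assumed MCP endpoint
            "method": "POST",
            "tool": f"{tool_name}_list",       # hypothetical MCP tool name
            "arguments": {"namespace": args[0] if args else "default"},
        },
    }

req = mcp_execute_request("pods noetl")
```

Because the GUI emits an ordinary execute request, the MCP call inherits the full event log and execution projections for free.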
For local deployment, the Kubernetes MCP server is installed by the ops playbook automation/development/mcp_kubernetes.yaml, and the runtime agent is registered from automation/agents/kubernetes/runtime.yaml. The GUI then exposes it as a scoped workspace such as /mcp/kubernetes, where commands like pods noetl still become auditable NoETL executions. See Kubernetes MCP Local Kind Setup for the end-to-end runbook.
How This Connects to ai-meta
The ai-meta repository provides the team coordination layer that sits above the agent runtime:
- Agent playbook definitions live in their respective repos (e.g., `repos/noetl`, `repos/ops`)
- Cross-agent orchestration docs live in `ai-meta/playbooks/` and `ai-meta/sync/`
- Agent memory is shared through `ai-meta/memory/` (Git-tracked, cross-session)
- Submodule pointers pin the exact version of each agent playbook deployed
```
ai-meta (coordination + memory)
│
├── memory/      shared agent knowledge
├── sync/        cross-agent change tracking
├── playbooks/   orchestration runbooks
│
└── repos/
    ├── noetl/   agent execution engine + core agent playbooks
    ├── ops/     deployment agent playbooks
    ├── server/  registry (server component)
    └── worker/  runtime (worker component)
```
Getting Started
To experiment with the agent-as-playbook pattern today:
- Write an agent playbook with `tool.kind: agent` and `framework: adk|langchain|custom`
- Deploy it to your NoETL server like any other playbook
- Invoke it with `noetl exec <agent-playbook> --set goal="..."`
- Orchestrate multiple agents by writing a parent playbook that calls agent playbooks as nested steps
- Track results with `noetl status` and persist decisions to `ai-meta/memory/`
The remaining gaps are memory/state conventions and a standardized inter-agent message schema; agent execution and discovery primitives are now available in the runtime/server.