Part Three: Deep Dive
Block 37

Agentic Orchestrators — The Governance Landscape

LIVE DEMO CONTEXT — This page provides background for the live demo segment. Not linked from the main site navigation.

Five leading agentic orchestrators, each with a different governance posture. Understanding what each provides — and what none of them provides — frames why Web4 governance matters.

A critical distinction first: In every tool below, what runs on your machine is the orchestrator — heuristic code that manages sessions, files, permissions, and tool execution. The cognition — and therefore the agency, the actual decisions about what actions to take — lives in the LLM, which is typically remote. Your machine runs the steering wheel. The driver is in a data center. Governance of the orchestrator is necessary but not sufficient. Governance of the agent — the decision-maker behind the API — is what's missing.

Note: This page is a snapshot. All five orchestrators are evolving at AI speed — features ship weekly, architectures shift monthly. What's described here reflects the state as of late March 2026.

| Orchestrator | Maker | Posture | License |
|---|---|---|---|
| Claude Code | Anthropic | Strongest orchestrator governance | Proprietary |
| Claude Cowork | Anthropic | Broader surface, less documentation | Proprietary |
| OpenClaw | Peter Steinberger | Minimal — broad defaults, no hierarchy | MIT |
| Claude Flow | Ruvnet | Multi-agent coordination, inherited trust | MIT |
| Hermes Agent | Nous Research | Procedural learning, regex approval | Apache 2.0 |

Claude Code (Anthropic)

The dominant agentic coding tool. Terminal-native, with VS Code and JetBrains integrations. Reads your codebase, edits files, runs commands, manages git. The tool your developers are probably running right now.

What it governs well:

  • 4-tier permission system with 6 modes (default → plan → auto → bypass)
  • 25+ lifecycle hooks — PreToolUse, PostToolUse, PermissionRequest, SessionStart/End, SubagentStop, ConfigChange
  • Enterprise managed settings via MDM/plist/registry — IT can lock down permissions, hooks, MCP servers, and marketplace access
  • OS-level sandboxing for Bash commands (filesystem + network isolation)
  • Auto mode with prose-based classifier rules for trusted infrastructure
  • Recently adding: agent orchestration and hierarchy (sub-agents, agent teams), skills (context scripts), and powers — the governance surface area is expanding rapidly
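The managed-settings and hooks surface can be pictured with a minimal configuration sketch. Field names follow Claude Code's published settings schema (permission allow/deny rules, defaultMode, PreToolUse hooks); the audit script path is a placeholder, and exact keys should be verified against current documentation:

```json
{
  "permissions": {
    "defaultMode": "plan",
    "allow": ["Bash(git diff:*)", "Read(./src/**)"],
    "deny": ["Bash(curl:*)", "Read(./.env)"]
  },
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "/usr/local/bin/audit-log.sh" }
        ]
      }
    ]
  }
}
```

Deployed as a managed settings file via MDM, rules like these override user and project settings — the control surface the other orchestrators below lack.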

What it doesn't govern:

  • Trust is binary — allow or deny. No trust evolution from behavior. A tool permitted today has the same access as one permitted six months ago with a perfect track record.
  • No identity binding — sessions are OAuth-authenticated but not hardware-attested. No cryptographic agent identity.
  • No audit chain integrity — session transcripts are plain JSONL files. No hash-linking, no tamper detection, no signing.
  • No cross-agent trust — when agents spawn sub-agents, each inherits the parent's permissions wholesale. No trust propagation or decay.
  • Governance config is a JSON file the agent can write — the governed entity can modify its own governance.
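The audit-integrity gap is concrete: tamper evidence requires each transcript entry to commit to its predecessor. A minimal sketch of hash-linking — not Claude Code's transcript format; the entry fields here are invented for illustration:

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(chain: list[dict], event: dict) -> dict:
    """Append an event, committing to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = {"event": event, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    chain.append(entry)
    return entry

def verify(chain: list[dict]) -> bool:
    """Recompute every link; editing any past entry breaks the chain."""
    prev = GENESIS
    for entry in chain:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"tool": "Bash", "input": "git status"})
append_entry(log, {"tool": "Edit", "file": "app.py"})
assert verify(log)
log[0]["event"]["input"] = "rm -rf /"  # retroactive tampering
assert not verify(log)
```

Plain JSONL has none of this: any line can be rewritten after the fact and nothing downstream notices.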

License: Proprietary (public repo, commercial terms). 84K+ stars.

Claude Cowork (Anthropic)

Brings Claude Code's agentic capabilities to the desktop for non-technical knowledge work. File access, task scheduling, computer use (screen interaction), multi-device continuity. Targets knowledge workers, not developers.

Governance delta from Claude Code:

  • Broader attack surface — computer use means screen access, mouse/keyboard control, application interaction
  • Likely shares Claude Code's managed settings infrastructure, but Anthropic has not published detailed governance documentation for Cowork specifically
  • Same fundamental gaps: binary trust, no identity binding, no audit integrity, no cross-agent governance

License: Proprietary. Research preview.

OpenClaw (Peter Steinberger)

An independent open-source personal AI assistant. Not a fork of Claude Code — a completely separate codebase. Connects to messaging platforms (WhatsApp, Telegram, Discord, Signal, Slack). Supports multiple LLM backends (Claude, GPT, DeepSeek). Runs on your own devices 24/7.

History: Published as “Clawdbot” (Nov 2025) → renamed “MoltBot” after Anthropic trademark complaint (Jan 2026) → renamed “OpenClaw” (Jan 2026). Creator joined OpenAI Feb 2026; project moving to a foundation.

Governance posture: minimal.

  • Broad permissions by default — email, calendar, messaging, file access, browser automation
  • No enterprise managed settings. No permission hierarchy.
  • Hooks exist but were non-functional at launch — we submitted an early PR to fix them. Our governance plugin builds on those hooks; upstream maintainers rejected it as “not a priority.”
  • Skills-based extensibility (SKILL.md files) — but a Cisco researcher found a third-party skill performing data exfiltration
  • Marketed as “runs on your computer” while intelligence is in remote LLM API calls. Your machine runs the orchestrator. The decisions are made by a model in a data center.
  • Always on, 24/7. Calendar tasks take seconds. What happens during the rest of that time? No logging, no witnessing, no audit trail.

License: MIT. 247K stars, 47K forks. Chinese authorities restricted it on government computers (March 2026).

Claude Flow / Ruflo (Ruvnet)

A multi-agent orchestration layer that sits on top of Claude Code. Coordinates multiple Claude Code instances working together. 60+ agent types, swarm topologies (hierarchical, mesh, ring, adaptive), consensus algorithms (Raft, Byzantine, Gossip, CRDT), persistent vector memory.

What it adds:

  • Multi-agent coordination — queen-led hierarchies, anti-drift checkpoints, shared memory namespaces
  • 3-tier model routing — WASM agent booster (<1ms, $0) → Haiku (~500ms) → Sonnet/Opus (2-5s) based on task complexity
  • Self-learning pattern recognition across sessions

What it doesn't govern:

  • Trust between agents is not modeled — coordination assumes all agents are equally trusted
  • No trust propagation when agents delegate to other agents
  • Inherits Claude Code's permission model without extending it
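What trust propagation could look like is simple to state: trust should attenuate with each delegation hop instead of being copied wholesale. A sketch — the decay model is illustrative, not any tool's implementation:

```python
def delegated_trust(parent_trust: float, depth: int, decay: float = 0.8) -> float:
    """Trust inherited by a sub-agent, attenuated per delegation hop.

    Today's orchestrators effectively run with decay = 1.0: every
    sub-agent receives the parent's permissions wholesale.
    """
    return parent_trust * decay ** depth

assert delegated_trust(1.0, 0) == 1.0                      # the root agent itself
assert delegated_trust(1.0, 3) < delegated_trust(1.0, 1)   # deeper delegates trusted less
```

Even this one-line model changes behavior: a grandchild agent spawned three hops down no longer carries the root's full authority.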

License: MIT. 6K+ commits. dp-web4 fork includes a Web4 governance plugin (T3 trust tensors, policy entities, witnessing chains, R6 audit chain) — PR closed by upstream with no comment.

Hermes Agent (Nous Research)

A self-improving personal AI agent with a built-in learning loop. CLI terminal interface plus messaging platform gateway (Telegram, Discord, Slack, WhatsApp, Signal, Email, Matrix, Home Assistant). Model-agnostic — supports 200+ models via OpenRouter, plus direct OpenAI, Anthropic, and custom endpoints. 60+ tools including terminal execution, file management, browser automation, code sandbox, vision, and subagent delegation.

History: Absorbing the OpenClaw community — includes built-in migration tooling from OpenClaw to Hermes. Marketed as running on a “$5/month VPS” — but that's the orchestrator only. The LLM inference (the actual intelligence) is a separate, typically much larger, per-token bill to whichever provider you choose.

What it governs well:

  • Destructive command detection — regex heuristics flag rm, sed -i, output redirects for user approval before execution
  • Prompt injection scanning on context files, memory content, and skills — detects invisible unicode, exfiltration patterns, shell injection attempts
  • Skill guard validates new skills before creation (execution loops, injection payloads, tool reference checks)
  • DM pairing for messaging platform authentication
  • No recursive delegation — subagents cannot spawn grandchildren, preventing runaway branching
  • Procedural learning loop — creates skills from experience, persists as markdown files, injects into future sessions. The closest any orchestrator comes to behavioral memory.
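Regex-based destructive-command detection of this kind is straightforward to sketch. The patterns below are illustrative stand-ins, not Hermes' actual rule set:

```python
import re

# Illustrative patterns only — not Hermes' actual rules.
DESTRUCTIVE = [
    re.compile(r"\brm\s+(-\w*[rf]\w*\s+)+"),  # recursive or forced delete
    re.compile(r"\bsed\s+(-\w+\s+)*-i\b"),    # in-place file edit
    re.compile(r"(?<!>)>\s*\S"),              # output redirect
    re.compile(r"\bmkfs\b|\bdd\s+.*\bof="),   # disk-level writes
]

def needs_approval(command: str) -> bool:
    """Flag commands matching any destructive pattern for user review."""
    return any(p.search(command) for p in DESTRUCTIVE)

assert needs_approval("rm -rf build/")
assert needs_approval("sed -i 's/v1/v2/' config.yml")
assert needs_approval("echo token > ~/.secrets")
assert not needs_approval("git log --oneline")
```

The strength and the weakness are the same thing: the list is legible and auditable, but anything not anticipated by a pattern sails through.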

What it doesn't govern:

  • No permission hierarchy — approval is per-command regex matching, not role-based or contextual
  • No enterprise managed settings — config is a YAML file the agent can modify
  • Skills (the learning mechanism) are markdown files on disk — any process can read, write, or inject them. A compromised skill is a persistent backdoor.
  • Memory files (MEMORY.md, USER.md) are plain text — no integrity verification, no signing, no tamper detection
  • Session history in SQLite with FTS5 — rich and queryable, but no hash-linking or audit chain
  • Multi-platform gateway means the agent has presence across Telegram, Discord, Slack, Email simultaneously — broad attack surface with uniform trust across all platforms
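The missing integrity layer for skills and memory files amounts to a signed manifest checked before load. A sketch — nothing like this ships in Hermes, and the key would need to live somewhere the agent cannot read, such as an OS keychain:

```python
import hashlib
import hmac
import json
from pathlib import Path

# Hypothetical key; in practice this must live outside the agent's reach.
SECRET = b"stored-in-os-keychain-not-on-disk"

def make_manifest(paths: list[Path]) -> str:
    """Hash each skill/memory file and sign the snapshot."""
    digests = {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
               for p in sorted(paths)}
    payload = json.dumps(digests, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"files": digests, "sig": sig})

def check_manifest(paths: list[Path], stored: str) -> bool:
    """Verify the manifest signature, then re-hash the files."""
    prior = json.loads(stored)
    payload = json.dumps(prior["files"], sort_keys=True)
    want = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(want, prior["sig"]):
        return False  # the manifest itself was edited
    current = {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
               for p in sorted(paths)}
    return current == prior["files"]
```

With a check like this at session start, an injected SKILL.md stops being a silent persistent backdoor and becomes a detectable event.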

License: Apache 2.0. ~3K commits, ~270K lines. Built by NousResearch, the team behind the Hermes fine-tuned models.

The Common Gap

| Capability | Claude Code | Cowork | OpenClaw | Claude Flow | Hermes |
|---|---|---|---|---|---|
| Permission system | ✅ 4-tier | ⚠️ Shared | ❌ | ⚠️ Inherited | ⚠️ Regex |
| Enterprise managed settings | ✅ MDM | ❌ | ❌ | ❌ | ❌ |
| Lifecycle hooks | ✅ 25+ | ❌ | ⚠️ Fixed by PR | ⚠️ 17 | ⚠️ Callbacks |
| Sandbox | ✅ OS-level | ❌ | ❌ | ❌ | ⚠️ Code only |
| Trust evolution | ❌ | ❌ | ❌ | ❌ | ❌ |
| Hardware identity | ❌ | ❌ | ❌ | ❌ | ❌ |
| Tamper-evident audit | ❌ | ❌ | ❌ | ❌ | ❌ |
| Cross-agent trust | ❌ | ❌ | ❌ | ❌ | ❌ |
| Policy as entity | ❌ | ❌ | ❌ | ❌ | ❌ |
| Config integrity | ❌ | ❌ | ❌ | ❌ | ❌ |

The divide in the table, between the Sandbox row and the Trust evolution row, is the governance gap. Above it: access control, permission management, sandboxing — things the industry has built and that Claude Code does well. Below it: trust, identity, accountability, witnessing — things no tool in this landscape provides.

Every tool in this space constrains the tool. None governs the agent. The rows of ❌ below the divide are what Web4 fills.

What We Demonstrate

The live demo takes Claude Code — the tool with the strongest existing governance — and adds the missing layer. Not a different tool. The same tool, with witnessed trust evolution, tamper-evident audit chains, role-contextual permissions, and governance configuration that the agent cannot modify.

You leave with a link to install it on your setup. Monday morning, your agents have witnessed provenance and trust evolution. Same Claude Code. Different accountability.

Current: Vercel as Ungoverned Orchestrator

As of this week, Vercel — the deployment platform hosting thousands of production applications — disclosed a breach originating from an OAuth trust chain with a third-party AIArtificial IntelligenceSystems that learn, adapt, and act with real-world impact tool. The OAuth connection became an unmonitored lateral movement path into Vercel's infrastructure. Vercel is an orchestrator. The third-party tool was a governance subject nobody was governing. See Block 30 for the full analysis.