The Agency Problem and the Markov Relevancy Horizon
To hold something accountable for an action, we need to determine where agency actually lies. Where did the action originate? Who — or what — is the accountable entity?
Consider a traffic accident. The car hit a pedestrian. But we don't hold the car accountable. The steering wheel turned — but we don't hold the steering wheel accountable. A hand turned it — but the hand was responding to neural impulses from the brain. And the neurons... we can't hold individual neurons accountable for a traffic accident.
Instead, through observation, we decide that the human, a complex collection of cells of which neurons are a key but small part, is the agentic entity in the context of driving a car. And we design governance around what's meaningful to that entity. Not to the neurons, but to the person. Fines. License suspension. Criminal liability. These are consequences that matter at the scale where coherent agency exists.
This is putting a scope on context.
In Web4, we call this the Markov Relevancy Horizon (MRH). The name comes from a formal principle: a subsystem becomes a meaningful unit when its internal state transitions are more relevant to each other than to the external environment. When internal coherence exceeds external coupling, you've found a natural boundary for governance.
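One way to state that condition formally (this is our notation for illustration, not a canonical Web4 formula): let $X_S$ be the state of a candidate subsystem $S$ and $X_E$ the state of its environment. $S$ is a natural governance unit when

$$ I\big(X_S^{t+1};\, X_S^{t}\big) \;>\; I\big(X_S^{t+1};\, X_E^{t}\big) $$

where $I(\cdot\,;\cdot)$ is mutual information: the subsystem's next state is better predicted by its own current state than by the environment's. The inequality is close in spirit to a Markov blanket condition from probabilistic modeling, which is presumably where the "Markov" in the name comes from.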
This isn't an arbitrary line. It's measurable.
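To make "measurable" concrete, here's a minimal sketch of how that test could run on observed interaction data. Everything in it is an illustrative assumption rather than a Web4 API: the `coupling_ratio` function, the transition-rate matrix, and the threshold of 1 are placeholders for whatever instrumentation a real deployment would use.

```python
import numpy as np

def coupling_ratio(transitions: np.ndarray, subsystem: set[int]) -> float:
    """Ratio of internal to external coupling for a candidate subsystem.

    transitions[i, j] is the observed rate at which state changes in
    element i drive state changes in element j. A ratio above 1 means
    the subsystem's elements are more relevant to each other than to
    the outside world -- the MRH boundary condition described above.
    """
    n = transitions.shape[0]
    inside = sorted(subsystem)
    outside = [i for i in range(n) if i not in subsystem]

    internal = transitions[np.ix_(inside, inside)].sum()
    external = (transitions[np.ix_(inside, outside)].sum()
                + transitions[np.ix_(outside, inside)].sum())
    return internal / external if external > 0 else float("inf")

# Toy data: elements 0-2 interact tightly; element 3 is weakly coupled.
T = np.array([
    [0.0,  0.4,  0.4,  0.05],
    [0.4,  0.0,  0.4,  0.05],
    [0.4,  0.4,  0.0,  0.05],
    [0.05, 0.05, 0.05, 0.0 ],
])
print(coupling_ratio(T, {0, 1, 2}))  # 8.0 -> a natural governance boundary
```

Sweeping this ratio over candidate subsets (or running any standard community-detection method on the interaction graph) turns boundary-finding into an optimization problem rather than a judgment call.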
And it's fractal. All entities are composed of elements that combine according to governance rules. MRH defines the scale at which a group of elements becomes a functional unit. Enterprises already do this routinely:
- The business unit is accountable for its function
- Within the unit, departments are accountable
- Within departments, teams are accountable
- Within teams, individuals are accountable
None of these layers are wrong. They're different MRH scopes applied to the same fractal structure. And a mature governance system operates at all of them simultaneously.
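To show what "operating at all of them simultaneously" might look like in software, here's a toy nested-scope structure. The `Scope` class and `accountable_chain` method are hypothetical illustrations, not part of any Web4 specification:

```python
from dataclasses import dataclass, field

@dataclass
class Scope:
    """One MRH scope; children are the finer-grained scopes nested in it."""
    name: str
    children: list["Scope"] = field(default_factory=list)

    def accountable_chain(self, leaf: str) -> list[str]:
        """Every scope on the path down to `leaf`. Each entry is a valid
        MRH at its own scale, and governance applies to all of them."""
        if self.name == leaf:
            return [self.name]
        for child in self.children:
            chain = child.accountable_chain(leaf)
            if chain:
                return [self.name] + chain
        return []

org = Scope("business unit", [
    Scope("department", [
        Scope("team", [Scope("alice")]),
    ]),
])
print(org.accountable_chain("alice"))
# ['business unit', 'department', 'team', 'alice'] -- every layer is in scope
```

The point of the structure is that resolving accountability doesn't pick one level; it returns the whole chain, and different consequences attach at different links.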
This resolves a deadlock in the current AI governance conversation. People argue about whether to hold the model accountable, or the developer, or the deploying company, or the user who prompted it, as if there's one correct answer. There isn't. All of those are valid MRH scopes. A mature governance system operates at all of them, just like an enterprise already does with its human employees.
We didn't invent MRH. We observed how existing successful systems operate, from biology to organizational design, and gave formal structure to what they already do.
The principle: Agency is scale-dependent and fractal. Governance requires scoping accountability to the level where coherent action emerges. MRH provides a formal, measurable boundary, not an arbitrary judgment call.