The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence

Biplab Pal, Santanu Bhattacharya

Abstract

Agentic artificial intelligence (AI) in organizations is a sequential decision problem constrained by reliability and oversight cost. When deterministic workflows are replaced by stochastic policies over actions and tool calls, the key question is not whether a next step appears plausible, but whether the resulting trajectory remains statistically supported, locally unambiguous, and economically governable. We develop a measure-theoretic Markov framework for this setting. The core quantities are the state blind-spot mass $B_n(\tau)$, the state-action blind mass $B^{SA}_{\pi,n}(\tau)$, an entropy-based human-in-the-loop escalation gate, and an expected oversight-cost identity over the workflow visitation measure. We instantiate the framework on the Business Process Intelligence Challenge 2019 purchase-to-pay log (251,734 cases, 1,595,923 events, 42 distinct workflow actions) and construct a log-driven simulated agent from a chronological 80/20 split of the same process. The main empirical finding is that a large workflow can appear well supported at the state level while retaining substantial blind mass over next-step decisions: refining the operational state to include case context, economic magnitude, and actor class expands the state space from 42 to 668 states and raises the state-action blind mass from 0.0165 at $\tau=50$ to 0.1253 at $\tau=1000$. On the held-out split, $m(s)=\max_a \hat{\pi}(a\mid s)$ tracks realized autonomous step accuracy within 3.4 percentage points on average. The same quantities that delimit statistically credible autonomy also determine expected oversight burden. The framework is demonstrated on a large-scale enterprise procurement workflow and is designed for direct application to engineering processes for which operational event logs are available.
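As a concrete illustration of how the two coverage quantities might be estimated from an event log, the sketch below computes empirical analogues of $B_n(\tau)$ and $B^{SA}_{\pi,n}(\tau)$ under one simple assumed reading: a state is the current activity label (the coarsest abstraction; the paper's refined abstractions also fold in case context, value bands, and actor class), and any state or state-action pair with fewer than $\tau$ observations is counted as blind. The function name `blind_mass` and the trace representation are illustrative, not taken from the paper.

```python
from collections import Counter

def blind_mass(traces, tau):
    """Empirical blind-spot masses from an event log (illustrative sketch).

    Assumed definitions:
      B_n(tau)       -- share of state visitation mass on states visited
                        fewer than tau times;
      B^SA_pi,n(tau) -- share of transition mass on (state, next-action)
                        pairs observed fewer than tau times.
    Each trace is a list of activity labels in case order.
    """
    state_counts = Counter()
    sa_counts = Counter()
    for trace in traces:
        for s, a in zip(trace, trace[1:]):
            state_counts[s] += 1
            sa_counts[(s, a)] += 1
    n_states = sum(state_counts.values())
    n_sa = sum(sa_counts.values())
    b_state = sum(c for c in state_counts.values() if c < tau) / n_states
    b_sa = sum(c for c in sa_counts.values() if c < tau) / n_sa
    return b_state, b_sa
```

Both quantities are non-decreasing in $\tau$, which is the pattern the abstract reports for the refined model (0.0165 at $\tau=50$ rising to 0.1253 at $\tau=1000$); the state-action mass can grow much faster than the state mass because rare branches hide inside well-visited states.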

Paper Structure

This paper contains 16 sections, 27 equations, 5 figures, 3 tables.

Figures (5)

  • Figure 1: Markov reliability model for scoped autonomy in enterprise workflows. A workflow state is mapped to an action through policy $\pi(a_t\mid s_t)$, the process evolves under transition kernel $P(s_{t+1}\mid s_t,a_t)$, and a human-in-the-loop (HITL) gate escalates states with inadequate support, high branching entropy, or elevated risk.
  • Figure 2: Dominant empirical transitions in the BPI 2019 purchase-to-pay process. Edge widths are proportional to empirical transition probabilities. The log contains recurrent loops and exception-handling branches; the mean case length is 6.34 events, the 99th percentile is 24 events, the maximum observed case length is 990, and the transition-level self-loop rate is 15.7%.
  • Figure 3: Coverage on the full BPI 2019 log. (A) State blind-spot mass $\hat{B}_n(\tau)$ remains modest even under refined state abstractions. (B) State-action blind mass $\hat{B}^{SA}_{\pi,n}(\tau)$ increases substantially faster, particularly once value and actor context are included. The dashed red curve reports the corresponding risk-weighted blind mass for the refined transition model.
  • Figure 4: Scoped autonomy on the full BPI 2019 log under the refined state abstraction with $N(s)\ge 50$ and $w(s)\le 0.6$. Event-level autonomy remains substantially higher than end-to-end case-level autonomy because local ambiguity compounds along the trajectory.
  • Figure 5: Theory-versus-agent comparison on the chronological held-out split of BPI 2019. (A) Theoretical autonomous-step surrogate $m(s)=\max_a \hat{\pi}(a\mid s)$ versus realized held-out step accuracy as the entropy threshold $h_0$ varies, with mean absolute gap 3.4 percentage points. (B) Reliability-cost frontier induced by the same gate: the x-axis is mean human touches per case, the y-axis is safe case completion, and the theoretical curve is conservative but monotone relative to the held-out agent.
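The HITL gate referenced in Figures 1 and 4 can be sketched as a per-state rule. The predicate below is an assumption for illustration (escalate on inadequate support $N(s)<\tau$ or branching entropy above $h_0$ nats), not the paper's exact specification, and the default thresholds are placeholders.

```python
import math

def hitl_gate(action_counts, tau=50, h0=1.0):
    """Entropy-based human-in-the-loop escalation gate (illustrative sketch).

    Assumed rule: escalate a state when its empirical support is inadequate
    (N(s) < tau) or its next-action branching entropy exceeds h0 nats;
    otherwise let the agent act autonomously. `action_counts` maps each
    observed next action at state s to its empirical count.
    """
    n = sum(action_counts.values())
    if n < tau:
        return "escalate"  # statistically unsupported state
    entropy = -sum((c / n) * math.log(c / n)
                   for c in action_counts.values() if c > 0)
    return "escalate" if entropy > h0 else "autonomous"
```

For example, a well-supported state with counts {A: 95, B: 5} has entropy of about 0.199 nats and stays autonomous, while four equally likely branches give ln 4 ≈ 1.386 nats and escalate; this is the local ambiguity that, per the Figure 4 caption, compounds along a trajectory into much lower end-to-end case-level autonomy.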