Optimal Sampling and Actuation Policies of a Markov Source over a Wireless Channel

Mehrdad Salimnejad, Anthony Ephremides, Marios Kountouris, Nikolaos Pappas

Abstract

This paper studies efficient data management and timely information dissemination for real-time monitoring of an $N$-state Markov process, enabling accurate state estimation and reliable actuation decisions. First, we analyze the Age of Incorrect Information (AoII) and derive closed-form expressions for its time average under several scheduling policies, including randomized stationary, change-aware randomized stationary, semantics-aware randomized stationary, and threshold-aware randomized stationary policies. We then formulate and solve constrained optimization problems to minimize the average AoII under a time-averaged sampling action constraint, and compare the resulting optimal sampling and transmission policies to identify the conditions under which each policy is most effective. We further show that directly using reconstructed states for actuation can degrade system performance, especially when the receiver is uncertain about the state estimate or when actuation is costly. To address this issue, we introduce a cost function, termed the Cost of Actions under Uncertainty (CoAU), which determines when the actuator should take correct actions and avoid incorrect ones when the receiver is uncertain about the reconstructed source state. We propose a randomized actuation policy and derive a closed-form expression for the probability of taking no incorrect action. Finally, we formulate an optimization problem to find the optimal randomized actuation policy that maximizes this probability. The results show that the resulting policy substantially reduces incorrect actuator actions.
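The AoII dynamics described in the abstract can be illustrated with a small Monte-Carlo sketch. Everything below is an illustrative assumption rather than the paper's exact model: a symmetric $N$-state DTMC that jumps with probability $q$, an erasure channel with a fixed success probability, and the randomized stationary (RS) sampler that transmits in each slot with a fixed probability, independently of the source state. The function name and parameters are hypothetical.

```python
import random

def simulate_aoii(N=4, q=0.2, p_sample=0.5, p_success=0.8, T=200_000, seed=0):
    """Monte-Carlo estimate of the time-averaged AoII (illustrative model only).

    Assumed dynamics: the source stays put w.p. 1-q and otherwise jumps
    uniformly to one of the other N-1 states; the RS sampler transmits each
    slot w.p. p_sample over a channel that delivers w.p. p_success. AoII
    grows by one in every slot in which the receiver's estimate differs
    from the source state, and resets to zero otherwise.
    """
    rng = random.Random(seed)
    source, estimate = 0, 0
    aoii, total = 0, 0
    for _ in range(T):
        # Source evolution step.
        if rng.random() < q:
            source = rng.choice([s for s in range(N) if s != source])
        # RS policy: sample and transmit independently of the source state.
        if rng.random() < p_sample and rng.random() < p_success:
            estimate = source
        # AoII update: penalty accrues while the estimate is stale.
        aoii = 0 if estimate == source else aoii + 1
        total += aoii
    return total / T

avg = simulate_aoii()
print(f"empirical average AoII ~ {avg:.3f}")
```

Sampling more aggressively (larger `p_sample`) lowers the average AoII in this toy model, which is exactly the tension the paper's constrained optimization captures: the sampling budget bounds how often the RS policy may transmit.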

Paper Structure

This paper contains 12 sections, 2 theorems, 35 equations, 6 figures.

Key Result

Lemma 1

For an $N$-state DTMC information source, Lemma 1 gives closed-form expressions for the time-averaged AoII under the threshold-aware randomized stationary (TARS) policy, stated in terms of auxiliary functions $F(p^{th}_{\alpha}, n)$ and $G(p^{th}_{\alpha}, n)$, and likewise under the randomized stationary (RS), semantics-aware randomized stationary (SARS), and change-aware randomized stationary (CARS) policies.

Figures (6)

  • Figure 1: Real-time monitoring of a Markovian source over a wireless channel.
  • Figure 2: Minimum average AoII as a function of $\eta$ for $q = 0.1$.
  • Figure 3: Minimum average AoII as a function of $\eta$ for $q = 0.8$.
  • Figure 4: Comparison between the optimal and non-optimal $P_{\Delta_{0}}$ when the source is sampled using the optimal RS policy as a function of $q$ for $\mu = 1$.
  • Figure 5: Comparison between the optimal and non-optimal $P_{\Delta_{0}}$ when the source is sampled using the optimal SARS policy as a function of $q$ for $\mu = 1$.
  • ...and 1 more figure

Theorems & Definitions (2)

  • Lemma 1
  • Lemma 2