Optimal Switching in Networked Control Systems: Finite Horizon

Abdullah Y. Etcibasi, C. Emre Koksal, Eylem Ekici

Abstract

In this work, we first prove that the separation principle holds for switched LQR problems under i.i.d. zero-mean disturbances with a symmetric distribution. We then solve the dynamic programming problem and show that the optimal switching policy is a symmetric threshold rule on the accumulated disturbance since the most recent update, while the optimal controller is a discounted linear feedback law independent of the switching policy.

Paper Structure

This paper contains 17 sections, 13 theorems, 180 equations, and 11 figures.

Key Result

Theorem 1

Consider the system eqn:system_Model–eqn:Switch_info and the optimization problem $P_{1}$. The optimal controller is given by a linear feedback law. We assume a sufficiently large horizon length so that the steady-state Riccati solution is applicable; the extension to the transient case is straightforward.
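The structure asserted by Theorem 1 and the abstract can be illustrated on a scalar system: a linear feedback controller whose gain comes from the steady-state Riccati equation, combined with a symmetric threshold rule on the disturbance accumulated since the most recent update. The sketch below is a minimal illustration under assumed parameters; all numerical values (`a`, `b`, `q`, `r`, `threshold`) are hypothetical and not taken from the paper, and the discounting in the paper's feedback law is omitted for simplicity.

```python
import numpy as np

# Hedged sketch: scalar switched-LQR simulation with a symmetric
# threshold switching rule. Parameters below are illustrative only.
rng = np.random.default_rng(0)

a, b = 1.0, 1.0          # open-loop and input gains (assumed)
q, r = 1.0, 0.1          # LQR state/input weights (assumed)
N = 200                  # horizon length

# Steady-state scalar Riccati iteration:
# p = q + a^2 p - (a b p)^2 / (r + b^2 p)
p = q
for _ in range(500):
    p = q + a**2 * p - (a * b * p) ** 2 / (r + b**2 * p)
L = a * b * p / (r + b**2 * p)   # feedback gain, u = -L * x_hat

threshold = 1.5          # symmetric threshold on accumulated disturbance
x, x_hat = 0.0, 0.0      # true state and controller-side estimate
e = 0.0                  # disturbance accumulated since last update
cost, switches = 0.0, 0

for k in range(N):
    u = -L * x_hat                  # controller uses latest available estimate
    w = rng.normal(0.0, 1.0)        # i.i.d. zero-mean symmetric disturbance
    x = a * x + b * u + w
    e = a * e + w                   # accumulated disturbance since last update
    if abs(e) > threshold:          # symmetric threshold rule: |e| > threshold
        x_hat = x                   # switch: transmit state, reset accumulator
        e = 0.0
        switches += 1
    else:
        x_hat = a * x_hat + b * u   # open-loop prediction between updates
    cost += q * x**2 + r * u**2

print(f"avg cost = {cost / N:.2f}, switching rate = {switches / N:.2f}")
```

Raising `threshold` lowers the switching rate at the price of a higher running LQR cost, which is the trade-off explored in Figures 3–5.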

Figures (11)

  • Figure 1: Block diagram of the closed-loop control system
  • Figure 2: Dynamic programming diagram illustrating the evolution of the value functions across time $k$ and budget levels $Q$ for horizon $N-1=7$, fixed delay $\tau=2$, and initial budget $Q_0=3$. At each stage, only two actions are admissible, with the corresponding costs shown on each branch. Infeasible transitions are indicated in red.
  • Figure 3: Running-average LQR cost versus time under Gaussian disturbances. All policies are tuned to satisfy the same target switching rate $r_s = 0.4$. The optimal symmetric threshold policy (OPT) achieves the lowest cost, followed closely by symmetric policies, while periodic and random strategies incur higher cost. Results are averaged over 100 Monte Carlo runs.
  • Figure 4: Steady-state LQR cost versus the open-loop gain $a$ under Gaussian disturbances. All policies satisfy $r_s = 0.25$. ZOH-based policies become unstable for $a > 0.9$, while the optimal policy consistently achieves the lowest cost. The symmetric impulsive policy closely tracks the optimal performance across all $a$.
  • Figure 5: Steady-state LQR cost versus the target switching rate $r_s$ for $a = 1$ under Gaussian disturbances. ZOH policies become unstable for $r_s \lesssim 0.4$, while the optimal policy achieves the lowest cost across all rates. As $r_s \to 1$, all policies converge to similar performance.
  • ...and 6 more figures

Theorems & Definitions (31)

  • Remark 1
  • Theorem 1
  • Proof
  • Theorem 2
  • Proof
  • Lemma 1
  • Proof
  • Definition 1
  • Definition 2
  • Proposition 1
  • ...and 21 more