
Predictor-Based Output-Feedback Control of Linear Systems with Time-Varying Input and Measurement Delays via Neural-Approximated Prediction Horizons

Luke Bhan, Miroslav Krstic, Yuanyuan Shi

Abstract

Due to simplicity and strong stability guarantees, predictor feedback methods have stood as a popular approach for time delay systems since the 1950s. For time-varying delays, however, implementation requires computing a prediction horizon defined by the inverse of the delay function, which is rarely available in closed form and must be approximated. In this work, we formulate the inverse delay mapping as an operator learning problem and study predictor feedback under approximation of the prediction horizon. We propose two approaches: (i) a numerical method based on time integration of an equivalent ODE, and (ii) a data-driven method using neural operators to learn the inverse mapping. We show that both approaches achieve arbitrary approximation accuracy over compact sets, with complementary trade-offs in computational cost and scalability. Building on these approximations, we then develop an output-feedback predictor design for systems with delays in both the input and the measurement. We prove that the resulting closed-loop system is globally exponentially stable when the prediction horizon is approximated with sufficiently small error. Lastly, numerical experiments validate the proposed methods and illustrate their trade-offs between accuracy and computational efficiency.
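The abstract's first approach (time integration of an equivalent ODE, cf. Theorem 2's explicit Euler approximation of $\psi$) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the standard predictor-feedback setup in which $\phi(t) = t - D(t)$, the prediction horizon is $\psi = \phi^{-1}$, and differentiating $\psi(t) - D(\psi(t)) = t$ gives the ODE $\psi'(s) = 1/(1 - D'(\psi(s)))$, which is well posed when $D' < 1$ (an invertibility condition in the spirit of the paper's Assumption on invertibility). The function and variable names are hypothetical.

```python
import math

def approx_prediction_horizon(D, Dprime, t, t0=0.0, h=1e-3):
    """Approximate the prediction horizon psi(t) = phi^{-1}(t),
    where phi(s) = s - D(s), by explicit Euler integration of
        psi'(s) = 1 / (1 - D'(psi(s))),   psi(phi(t0)) = t0.
    Assumes D is continuously differentiable with D' < 1, so that
    phi is strictly increasing and invertible.
    """
    s = t0 - D(t0)      # starting point phi(t0)
    psi = t0            # psi(phi(t0)) = t0 by definition of the inverse
    while s < t:
        step = min(h, t - s)            # land exactly on t
        psi += step / (1.0 - Dprime(psi))
        s += step
    return psi

# Sanity check: for a constant delay D = 0.5, psi(t) = t + 0.5 exactly.
psi_const = approx_prediction_horizon(lambda s: 0.5, lambda s: 0.0, 1.0)

# Time-varying delay: verify the defining relation psi - D(psi) = t
# holds up to the O(h) Euler discretization error.
D  = lambda s: 0.5 + 0.2 * math.sin(s)
Dp = lambda s: 0.2 * math.cos(s)
psi_var = approx_prediction_horizon(D, Dp, 1.0)
```

As the paper notes, this route trades computational cost for accuracy: each evaluation of $\psi(t)$ requires integrating from $\phi(t_0)$ to $t$, which motivates the learned (neural operator) alternative for repeated online evaluation.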

Paper Structure

This paper contains 9 sections, 6 theorems, 79 equations, 2 figures.

Key Result

Theorem 1

Consider the plant (eq:dynamics-1)–(eq:dynamics-2) such that Assumptions (assumption:delay-time-positive-and-uniformly-bounded) and (assumption:invertibility) are satisfied. Then, there exist constants $M, C > 0$ such that all solutions of the plant with the exact feedback control law eq:control-law-1-eq:co [statement truncated in source]

Figures (2)

  • Figure 1: Three-stage observer–predictor feedback law and corresponding horizons.
  • Figure 2: Example of feedback control for the problem in \ref{eq:numerical-problem-formulation} with the FNO approximation of $\hat{\psi}$. The parameters for the delay function $D_1$ were $(a_1, b_1, \alpha_1, \omega_1, \varphi_1) = (0.4, 0.31, -0.10, 4.95, 0.95)$ and for $D_2$ were $(a_2, b_2, \alpha_2, \omega_2, \varphi_2) = (0.28, 0.15, -0.06, 1.28, 0.82)$. The initial condition of the plant was $Z(0) = [-1, 1]$ and the observer $\xi(0) = [5, -5]$. The input history was fixed to ensure $0$ input for the initial condition, and the plant history was set to $Z(0)$ for all times $t \leq 0$.

Theorems & Definitions (11)

  • Theorem 1
  • Definition 1
  • Definition 2
  • Lemma 1: Lipschitz dependence of $\Psi$ on $D$
  • Theorem 2: Explicit Euler approximation of $\psi$ [atkinson2009numerical]
  • Definition 3: Neural Operators
  • Theorem 3
  • Theorem 4
  • Corollary 1
  • Proof
  • ...and 1 more