
From Gaussian Fading to Gilbert-Elliott: Bridging Physical and Link-Layer Channel Models in Closed Form

Bhaskar Krishnamachari, Victor Gutierrez

Abstract

Dynamic fading channels are modeled at two fundamentally different levels of abstraction. At the physical layer, the standard representation is a correlated Gaussian process, such as the dB-domain signal power in log-normal shadow fading. At the link layer, the dominant abstraction is the Gilbert-Elliott (GE) two-state Markov chain, which compresses the channel into a binary "decodable or not" sequence with temporal memory. Both models are ubiquitous, yet practitioners who need GE parameters from an underlying Gaussian fading model must typically simulate the mapping or invoke continuous-time level-crossing approximations that do not yield discrete-slot transition probabilities in closed form. This paper provides an exact, closed-form bridge. By thresholding the Gaussian process at discrete slot boundaries, we derive the GE transition probabilities via Owen's $T$-function for any threshold, reducing to an elementary arcsine identity when the threshold equals the mean. The formulas depend on the covariance kernel only through the one-step correlation coefficient $\rho = K(D)/K(0)$, making them applicable to any stationary Gaussian fading model. The bridge reveals how kernel smoothness governs the resulting link-layer dynamics: the GE persistence time grows linearly in the correlation length $T_c$ for a smooth (squared-exponential) kernel but only as $\sqrt{T_c}$ for a rough (exponential/Ornstein-Uhlenbeck) kernel. We further quantify when the first-order GE chain is a faithful approximation of the full binary process and when it is not, reconciling two diagnostics, the one-step Markov gap and the run-length total-variation distance, that can trend in opposite directions. Monte Carlo simulations validate all theoretical predictions.
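The threshold-to-GE mapping described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's code: it evaluates Owen's $T$-function by direct quadrature of its integral definition and uses the standard bivariate-normal identity $\Phi_2(h,h;\rho) = \Phi(h) - 2\,T\!\bigl(h, \sqrt{(1-\rho)/(1+\rho)}\bigr)$ to obtain discrete-slot GE transition probabilities for an arbitrary normalized threshold $h = S/\sigma$; the function names are illustrative.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def owens_t(h, a, n=2000):
    """Owen's T(h, a) = (1/2pi) * int_0^a exp(-h^2(1+x^2)/2)/(1+x^2) dx,
    evaluated with composite Simpson's rule over n (even) subintervals."""
    if a == 0.0:
        return 0.0
    xs = [a * i / n for i in range(n + 1)]
    f = [math.exp(-0.5 * h * h * (1 + x * x)) / (1 + x * x) for x in xs]
    s = f[0] + f[-1] + 4 * sum(f[1:-1:2]) + 2 * sum(f[2:-1:2])
    return (a / n / 3.0) * s / (2.0 * math.pi)

def ge_transition_probs(rho, h):
    """GE transition probabilities from one-step correlation rho and
    normalized threshold h = S/sigma ('bad' = below threshold).
    Uses Phi2(h, h; rho) = Phi(h) - 2*T(h, sqrt((1-rho)/(1+rho)))."""
    a = math.sqrt((1.0 - rho) / (1.0 + rho))
    joint_bad = phi(h) - 2.0 * owens_t(h, a)     # P(X <= h, Y <= h)
    p_bad = phi(h)                               # stationary P(bad)
    p_bg = 1.0 - joint_bad / p_bad               # bad -> good
    p_gb = (p_bad - joint_bad) / (1.0 - p_bad)   # good -> bad
    return p_bg, p_gb
```

At $h = 0$ this reduces to the arcsine identity $p_{01} = p_{10} = \arccos(\rho)/\pi$, which makes a convenient sanity check, and at $\rho = 0$ the slots decouple so the good-to-bad probability collapses to $\Phi(h)$.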


Paper Structure

This paper contains 41 sections, 3 theorems, 54 equations, 7 figures, 1 table.

Key Result

Proposition 1

Let $\varepsilon = 1 - \rho$. Then, at the matched threshold $S/\sigma = 0$, $p_{01} = \tfrac{1}{\pi}\arccos(1-\varepsilon) = \tfrac{\sqrt{2\varepsilon}}{\pi}\bigl(1 + O(\varepsilon)\bigr)$ as $\varepsilon \to 0$, and consequently the expected persistence time satisfies $\mathbb{E}[T_{\mathrm{GE}}] = D/p_{01} \sim \pi D/\sqrt{2\varepsilon}$.
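The asymptotic rate in Proposition 1 can be checked against the exact zero-threshold formula $p_{01} = \arccos(\rho)/\pi$; this is a quick illustrative check, not part of the paper.

```python
import math

def p01_exact(eps):
    """Exact zero-threshold crossing probability with rho = 1 - eps."""
    return math.acos(1.0 - eps) / math.pi

def p01_asymptotic(eps):
    """Leading-order asymptote sqrt(2*eps)/pi from Proposition 1."""
    return math.sqrt(2.0 * eps) / math.pi

# Relative error of the asymptote shrinks roughly like eps/12.
for eps in (1e-2, 1e-4, 1e-6):
    rel_err = abs(p01_exact(eps) - p01_asymptotic(eps)) / p01_exact(eps)
    print(f"eps={eps:g}  exact={p01_exact(eps):.3e}  rel_err={rel_err:.1e}")
```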

Figures (7)

  • Figure 1: Two diagnostics for the matched first-order GE chain ($S/\sigma=0$, both kernels). (a) Maximum Markov gap: a local, one-step measure that decreases for the squared-exponential kernel but increases for the exponential kernel. (b) Run-length TV distance: a global, path-shape measure that increases with $T_c/D$ for both kernels. The opposing trends in the squared-exponential case reflect the distinction between local conditional accuracy and global run-length fidelity.
  • Figure 2: Sample paths of the stationary Gaussian fading process for three values of $T_c$. Left: squared-exponential kernel \eqref{eq:kernel} (smooth paths). Right: exponential kernel \eqref{eq:kernel-exp} (rougher paths due to the non-differentiable covariance at the origin). Both panels use the same $T_c$ values and random seed to highlight the smoothness difference.
  • Figure 3: Matched GE transition probability $p_{01}=p_{10}$ versus $T_c$ for $S/\sigma=0$, both kernels. Solid and dashed curves are the arcsine formula \eqref{eq:arcsin-p} with $\rho$ from \eqref{eq:rho-both}; markers are Monte Carlo estimates with 95% CIs. Note that the two kernels map the same $T_c$ to different values of $\rho$, so this figure compares GE parameters at matched $T_c$, not matched $\rho$.
  • Figure 4: Expected persistence time $\mathbb{E}[T_{\mathrm{GE}}]$ versus $T_c$ for $S/\sigma=0$. Left: squared-exponential kernel with exact theory \eqref{eq:exact-final} (solid), linear asymptote \eqref{eq:linear-approx} (dashed), and Monte Carlo (circles). Right: exponential kernel with exact theory (solid), $\sqrt{T_c}$ asymptote \eqref{eq:sqrt-approx} (dashed), and Monte Carlo (circles). The contrasting growth rates (linear vs. $\sqrt{T_c}$) are clearly visible.
  • Figure 5: Effect of the threshold level on $\mathbb{E}[T_{\mathrm{GE}}]$, both kernels. Left: squared-exponential. Right: exponential. Solid lines are the closed-form theory \eqref{eq:exact-final}; markers show Monte Carlo means with 95% CIs. Only nonnegative thresholds are shown (the mapping is symmetric under $S/\sigma \mapsto -S/\sigma$). The square-root scaling of the exponential kernel is evident in the sublinear curvature of the right panel.
  • ...and 2 more figures
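The contrasting persistence-time scalings shown in Figures 3-5 follow directly from $\mathbb{E}[T_{\mathrm{GE}}] = D/p_{01}$ with $p_{01} = \arccos(\rho)/\pi$. The sketch below assumes the two standard kernel parameterizations $\rho_{\mathrm{SE}} = \exp(-D^2/(2T_c^2))$ and $\rho_{\mathrm{exp}} = \exp(-D/T_c)$ (the paper's exact form in eq:rho-both may differ) and exhibits the linear vs. $\sqrt{T_c}$ growth numerically.

```python
import math

def persistence(rho, D=1.0):
    """Expected GE persistence time E[T] = D / p01, p01 = arccos(rho)/pi."""
    return D * math.pi / math.acos(rho)

def rho_se(Tc, D=1.0):
    """Assumed squared-exponential kernel: rho = exp(-D^2 / (2 Tc^2))."""
    return math.exp(-D * D / (2.0 * Tc * Tc))

def rho_exp(Tc, D=1.0):
    """Assumed exponential (Ornstein-Uhlenbeck) kernel: rho = exp(-D / Tc)."""
    return math.exp(-D / Tc)

for Tc in (100.0, 400.0):
    print(f"Tc={Tc:5.0f}  SE: E[T]={persistence(rho_se(Tc)):9.1f}"
          f"  exp: E[T]={persistence(rho_exp(Tc)):7.1f}")
```

Quadrupling $T_c$ roughly quadruples the squared-exponential persistence time (linear scaling) but only doubles the exponential-kernel persistence time ($\sqrt{T_c}$ scaling).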

Theorems & Definitions (3)

  • Proposition 1: Crossing probability asymptotics
  • Corollary 1: Squared-exponential kernel: linear scaling
  • Corollary 2: Exponential kernel: square-root scaling