
A Hybrid NUTS-Gibbs Sampler with State Space Marginalization for Estimation of Dynamic Structural Equation Models with Binomial Outcomes

Øystein Sørensen, Ethan M. McCormick

Abstract

Dynamic structural equation modeling (DSEM) is widely used for analyzing intensive longitudinal data (ILD). Although many ILD have categorical (Bernoulli or binomially distributed) responses, currently available Metropolis-within-Gibbs samplers for estimating DSEMs are limited to using the probit link and the Bernoulli distribution. These samplers scale poorly with increasing model complexity and/or data size. Here, we present a hybrid sampler -- alternating between one step of the No-U-Turn Sampler (NUTS) and one Gibbs step -- which solves both of these problems: the Gibbs step naturally handles Pólya-Gamma distributed latent variables arising from binomially distributed responses with a logit link, and the NUTS step utilizes a Kalman filter to exactly marginalize over latent states, alleviating the need to sample these variables. We demonstrate in simulation experiments that the proposed sampler is more efficient than alternative algorithms, and that it makes DSEM estimation with binomial data feasible for larger data and models than what has previously been possible. We also illustrate its use in an example application of predicting panic attacks.

Paper Structure

This paper contains 22 sections, 1 theorem, 46 equations, 10 figures, 19 tables.

Key Result

Theorem 1

For maximum lag $L \ge 1$, the within-level model linking the latent states to the linear predictor is exactly equivalent to the state space transition model where the augmented state vector tracks the latent states and continuous linear predictors, $\tilde{\bm{\eta}}_{1,it} = [\bm{\eta}_{1,it}^{T}, \dots, \bm{\eta}_{1,i,t-L+1}^{T}, (\bm{y}_{1,it}^{*})^{T}, \dots, (\bm{y}_{1,i,t-L+1}^{*})^{T}]^{T}$.
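The stacking of $L$ lags into a first-order system is the classical companion-form construction. The following sketch builds that block matrix for an AR($L$) process with NumPy; it is illustrative only, since the theorem's augmented vector additionally tracks the continuous linear predictors $\bm{y}_{1,it}^{*}$.

```python
import numpy as np

def companion_matrix(Phi_list):
    """Stack AR(L) coefficient matrices Phi_1, ..., Phi_L (each d x d)
    into the first-order companion form, so that
        eta_t = Phi_1 eta_{t-1} + ... + Phi_L eta_{t-L} + w_t
    becomes a VAR(1) on the stacked vector
        [eta_t, eta_{t-1}, ..., eta_{t-L+1}].
    """
    L = len(Phi_list)
    d = Phi_list[0].shape[0]
    top = np.hstack(Phi_list)                   # first block row: [Phi_1 ... Phi_L]
    identity_band = np.eye(d * (L - 1), d * L)  # shifts each lagged block down one slot
    return np.vstack([top, identity_band])
```

For example, with $d = 1$ and $L = 2$, `companion_matrix([[[0.5]], [[0.2]]])` yields the $2 \times 2$ matrix with first row $(0.5, 0.2)$ and second row $(1, 0)$, whose repeated application reproduces the AR(2) recursion.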

Figures (10)

  • Figure 1: Bulk efficiency for the five-indicator AR(1) model with participant-invariant dynamics. Means across five runs are shown, together with results from each individual run as semi-transparent points
  • Figure 2: Bulk efficiency for the five-indicator AR(1) model with participant-varying dynamics
  • Figure 3: Missingness pattern in the EMA data. The presence of panic attacks is indicated in red
  • Figure S1: Bulk efficiency plot for the five-indicator AR(1) model with participant-invariant dynamics with logit link.
  • Figure S2: Tail efficiency plot for the five-indicator AR(1) model with participant-invariant dynamics with probit link.
  • ...and 5 more figures

Theorems & Definitions (6)

  • Definition 1: Lag Operator
  • Definition 2: Polynomial Matrices
  • Definition 3: Strictly Lagged Polynomial Matrices
  • Definition 4: Coefficient Extraction
  • Theorem 1
  • Proof of Theorem 1