Likelihood-Free Inference via Structured Score Matching

Haoyu Jiang, Yuexi Wang, Yun Yang

Abstract

In many statistical problems, the data distribution is specified through a generative process whose likelihood function is analytically intractable, yet inference on the associated model parameters remains of primary interest. We develop a likelihood-free inference framework that combines score matching with gradient-based optimization and bootstrap procedures to support both parameter estimation and uncertainty quantification. The proposed methodology introduces tailored score-matching estimators for approximating likelihood score functions and incorporates an architectural regularization scheme that embeds the statistical structure of log-likelihood scores to improve both accuracy and scalability. We provide theoretical guarantees and demonstrate the practical utility of the method through numerical experiments, where it performs favorably compared to existing approaches.
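The pipeline the abstract describes reduces to three steps: approximate the likelihood score, solve the resulting score equation for a point estimate, and bootstrap for uncertainty quantification. Below is a minimal sketch of that loop, not the paper's method. The score-matching step is stubbed out: a Gaussian location model, whose likelihood score is available in closed form, stands in for a fitted score-matching surrogate, and all names (`s_hat`, `fit_theta`, `n_boot`) are illustrative assumptions.

```python
# Minimal sketch: score-equation estimation plus a nonparametric bootstrap.
# The score-matching step is stubbed out; s_hat is exact for N(theta, 1).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def s_hat(theta, x):
    """Stand-in for a score-matching estimate of the likelihood score
    d/dtheta log p(x | theta); here the closed form for N(theta, 1)."""
    return np.sum(x - theta)

def fit_theta(x, lo=-10.0, hi=10.0):
    """Estimate theta as the root of the (approximate) likelihood score."""
    return brentq(lambda t: s_hat(t, x), lo, hi)

# Observed data and point estimate.
x_obs = rng.normal(loc=1.5, scale=1.0, size=200)
theta_hat = fit_theta(x_obs)

# Bootstrap for uncertainty quantification: resample the observed data,
# re-solve the score equation, and read off empirical quantiles.
n_boot = 500
boot = np.array([
    fit_theta(rng.choice(x_obs, size=x_obs.size, replace=True))
    for _ in range(n_boot)
])
ci = np.percentile(boot, [2.5, 97.5])
print(f"theta_hat = {theta_hat:.3f}, 95% bootstrap CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

In the paper's setting, the closed-form score above would be replaced by the structured score-matching estimator fitted on simulated data; everything downstream of `s_hat` would be unchanged.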

Paper Structure

This paper contains 47 sections, 7 theorems, 75 equations, 3 figures, 13 tables, and 1 algorithm.

Key Result

Theorem 1

Under Assumption ass:uniform_sm_err_single and other mild regularity assumptions in the Appendix, it holds with probability converging to $1$ as $n\to\infty$ that $\widehat{s}(\cdot, {\mathbf X}_n^\ast)$ has a unique root $\widehat{\theta}_n$ within $\mathcal{B}(\theta^\ast; r_0)$, and $\widehat{\theta}_n$ is a consistent estimator of $\theta^\ast$.
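Theorem 1 characterizes $\widehat{\theta}_n$ as the unique root of the estimated score $\widehat{s}(\cdot, {\mathbf X}_n^\ast)$ near $\theta^\ast$, which is the equation the paper's Quasi-Newton algorithm (compared against vanilla gradient descent in Figure 1) solves. The one-dimensional secant iteration below is a hedged stand-in for such a scheme, not the paper's Algorithm 1; `s_hat` and the starting points are assumptions for illustration.

```python
# Sketch of a 1-D quasi-Newton (secant) iteration for solving s_hat(theta) = 0,
# the root characterization of theta_hat_n in Theorem 1.
import numpy as np

def quasi_newton_root(s_hat, theta0, theta1, tol=1e-10, max_iter=50):
    """Secant iteration: approximate the score's derivative from successive
    iterates instead of computing it exactly."""
    s0, s1 = s_hat(theta0), s_hat(theta1)
    for _ in range(max_iter):
        if abs(s1 - s0) < 1e-15:  # degenerate slope estimate; stop early
            break
        theta2 = theta1 - s1 * (theta1 - theta0) / (s1 - s0)
        if abs(theta2 - theta1) < tol:
            return theta2
        theta0, s0 = theta1, s1
        theta1, s1 = theta2, s_hat(theta2)
    return theta1

# Toy check against a monotone score with known root at theta = 2.
root = quasi_newton_root(lambda t: np.tanh(t - 2.0), 0.0, 1.0)
print(f"estimated root: {root:.6f}")
```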

Figures (3)

  • Figure 1: Comparison of the Quasi-Newton algorithm and vanilla gradient descent
  • Figure 2: Q–Q plot comparing the fitted g-and-k/normal distribution with the observed data
  • Figure 3: Histogram of the log-return data

Theorems & Definitions (12)

  • Example 1: A toy example
  • Remark 1: A two-round procedure
  • Remark 2: Choice of score matching scheme
  • Remark 3: Comparison of three options
  • Theorem 1: Existence, Uniqueness and Consistency
  • Theorem 2: $\widehat{\theta}_n$ is close to $\widehat{\theta}^{\mathrm{MLE}}_n$
  • Theorem 3: Asymptotic normality of $\widehat{\theta}_n$
  • Theorem 4: Bootstrap consistency
  • Theorem 5: Algorithmic convergence
  • Lemma 1: Lemma 8 of jiang2025simulation
  • ...and 2 more