Regularized Regression by Composition: Identifiability, Structured Penalization, and Statistical Guarantees for Multi-Flow Distributional Models

Safaa K. Kadhem

Abstract

Regression by composition provides a flexible framework for constructing conditional distributions through sequential group actions. However, when multiple flows act on the same distribution, the model becomes non-identifiable, leading to flat likelihood regions and unstable estimates. We introduce a structured regularization framework that resolves this issue by assigning flow-specific penalties. The resulting estimator is defined as a penalized maximum likelihood problem with heterogeneous regularization across flows. We establish theoretical properties, including identifiability under penalization, uniqueness of the minimizer via strict convexification, and asymptotic consistency. For the adaptive Lasso, we further prove the oracle property. An efficient proximal gradient algorithm handles non-smooth penalties. Extensive simulation studies evaluate performance under varying sample sizes, correlation structures, and signal-to-noise ratios, demonstrating that regularized methods (Lasso and Elastic Net) successfully break non-identifiability and achieve low estimation error with controlled false positive rates. An application to NHANES data on asthma and lead exposure illustrates the practical utility: the unregularized estimator yields implausible coefficients, whereas regularized estimators produce stable and interpretable models and automatically select the relevant risk transformation. The L'Abbé plots derived from regularized estimators indicate a protective effect of reducing lead exposure. The proposed framework bridges identifiability theory with penalized estimation and opens the door to high-dimensional and longitudinal extensions.
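The abstract mentions a proximal gradient algorithm for the non-smooth, flow-specific penalties. As a minimal sketch (not the paper's implementation; all names and the least-squares loss are illustrative assumptions), the core update alternates a gradient step on the smooth loss with soft-thresholding, where each coordinate may carry its own penalty weight — the heterogeneous, per-flow regularization described above:

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * |.|_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad_lasso(X, y, lam, step=None, n_iter=500):
    """Proximal gradient for 0.5*||y - X beta||^2 + sum_j lam[j]*|beta[j]|.

    lam is an array of per-coordinate penalty weights, mimicking the
    flow-specific lambdas of the structured penalty (illustrative only).
    """
    n, p = X.shape
    if step is None:
        # 1 / Lipschitz constant of the gradient of the smooth part,
        # i.e. 1 / ||X||_2^2 (squared spectral norm).
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)          # gradient of the smooth loss
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```

With orthogonal design (`X = I`) the solution reduces to coordinate-wise soft-thresholding of `y`, which makes the role of the per-coordinate weights easy to check by hand.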


Paper Structure

This paper contains 56 sections, 13 theorems, 50 equations, 22 figures, 8 tables.

Key Result

Proposition 3.1

If flows $\mathbb{V}_1$ and $\mathbb{V}_2$ commute weakly (i.e., $p \cdot v_1 \cdot v_2 = p \cdot v_2 \cdot v_1$ for all $p$), then the model is non-identifiable without additional constraints.
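As a minimal illustration (not taken from the paper) of why weak commutativity destroys identifiability, consider two location flows with parameters $a_1$ and $a_2$ acting on a base density $p$ by translation, $(p \cdot v_i)(y) = p(y - a_i)$:

```latex
\[
  (p \cdot v_1 \cdot v_2)(y) \;=\; p\bigl(y - a_1 - a_2\bigr)
  \;=\; (p \cdot v_2 \cdot v_1)(y),
\]
```

so the composed model depends on $(a_1, a_2)$ only through the sum $a_1 + a_2$: for any $c$, the pair $(a_1 + c,\, a_2 - c)$ induces the same distribution, giving exactly the flat likelihood direction that a penalty on the individual flow parameters must break.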

Figures (22)

  • Figure 1: Boxplots of estimation error, true positive rate (TPR) and false positive rate (FPR) for the reference scenario ($n=100, p=10, \rho=0.5, \text{SNR}=1$).
  • Figure 2: Effect of the number of covariates $p$ on the estimation error for the reference setting ($n=100, \rho=0.5, \text{SNR}=1$).
  • Figure 3: Effect of the covariate correlation $\rho$ on the estimation error for $n=100, p=10, \text{SNR}=1$.
  • Figure 4: Effect of the signal‑to‑noise ratio (SNR) on the estimation error for $n=100, p=10, \rho=0.5$.
  • Figure 5: Cross‑validation curves for the selection of the tuning parameter $\lambda$ in the reference scenario. The optimal values are indicated by vertical dashed lines.
  • ...and 17 more figures

Theorems & Definitions (23)

  • Definition 3.1: Identifiability
  • Proposition 3.1
  • Proposition 3.2: Non‑identifiability of the multi‑flow model
  • Proof
  • Proposition 3.3: Identifiability under regularization
  • Proof
  • Theorem 3.1: Consistency
  • Theorem 3.2: Oracle property for adaptive Lasso
  • Theorem 4.1: Global convergence of proximal gradient
  • Proof
  • ...and 13 more