
Two Sample Test for Eigendecompositions of Functional Data

Angel Garcia de la Garza, Britton Sauerbrei, Jeff Goldsmith

Abstract

Neuron-level firing data is believed to be governed by latent activation patterns during task completion. Analysing repeated trials of a task allows us to study these patterns, typically by averaging in-vivo neural spikes across trials. However, estimates of underlying latent activation patterns show trial-to-trial variability. Our aim is to determine whether this variation arises from observed data differences or changes in the latent activation patterns themselves. The latter would imply that current approaches overlook meaningful activation changes, necessitating adjustments in dimension reduction and downstream analysis. We propose a test that compares the eigendecompositions of two samples of functional data based on the covariance matrix of scores derived from a functional principal component analysis of the pooled data. Initially developed for independent samples, we later extend the test to paired samples, as necessary for our data. Simulation studies demonstrate its superior power compared to leading methods across various scenarios. In an experiment with 157 trials, we analyse all pairwise comparisons using a permutation approach to test the null hypothesis of shared latent activation patterns across trials. Our findings reveal trial-to-trial variation in latent activation patterns that cannot be attributed to sampling noise.
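The testing procedure described above (scores from an FPCA of the pooled data, comparison of the two groups' score covariance matrices, and a permutation null) can be sketched in a few lines. This is a minimal illustration, not the paper's exact statistic: for densely observed curves on a common grid, FPCA is approximated here by an SVD of the centered pooled data matrix, the test statistic is taken to be the Frobenius distance between the two score covariance matrices, and all function names (`pooled_fpca_scores`, `cov_test_stat`, `permutation_pvalue`) and simulation parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_fpca_scores(X, n_pc):
    """Scores from a PCA of pooled, centered curves (grid-based stand-in for FPCA)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_pc].T

def cov_test_stat(X1, X2, n_pc=3):
    """Frobenius distance between the groups' score covariance matrices
    (an assumed, illustrative choice of statistic)."""
    n1 = X1.shape[0]
    scores = pooled_fpca_scores(np.vstack([X1, X2]), n_pc)
    C1 = np.cov(scores[:n1].T)
    C2 = np.cov(scores[n1:].T)
    return np.linalg.norm(C1 - C2, "fro")

def permutation_pvalue(X1, X2, n_perm=500, n_pc=3):
    """Permutation p-value for the null of a shared score covariance."""
    obs = cov_test_stat(X1, X2, n_pc)
    pooled = np.vstack([X1, X2])
    n1 = X1.shape[0]
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])  # shuffle group labels
        if cov_test_stat(pooled[idx[:n1]], pooled[idx[n1:]], n_pc) >= obs:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Toy data: two samples sharing basis functions but differing in the
# variance of the third component's scores.
t = np.linspace(0, 1, 50)
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), np.sin(4 * np.pi * t)])

def simulate(n, lam):
    xi = rng.normal(size=(n, 3)) * np.sqrt(lam)          # FPC scores
    return xi @ basis + rng.normal(scale=0.1, size=(n, len(t)))

X1 = simulate(60, [2.0, 1.0, 0.5])
X2 = simulate(60, [2.0, 1.0, 2.5])
p = permutation_pvalue(X1, X2)
```

A small permutation p-value here would indicate that the two groups' score covariances differ beyond sampling noise; the paired extension discussed in the paper would instead permute within pairs, which this independent-samples sketch does not implement.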

Paper Structure

This paper contains 14 sections, 11 equations, 10 figures.

Figures (10)

  • Figure 1: Panel A displays a lasagna plot of the activation of six example neurons across 174 timepoints and 157 trials. Light blue indicates that the neuron is active at that specific instant.
  • Figure 2: Panel A1-4 displays scenarios in which the FPCs across groups are orthogonal. Panels B1-4 show data simulations in which the FPCs across groups are not orthogonal. Panels A1 and B1 depict the true data-generating FPCs used in the simulations. Panel A2 and B2 display the true score covariance matrix used to generate the data. Panels A3 and B3 show the reconstructed FPCs from a pooled FPCA. Panels A4 and B4 demonstrate the score covariance matrix obtained from the pooled FPCA decomposition.
  • Figure 3: Empirical rejection rates for independent datasets across simulation settings. We run 1000 simulations for each simulation scenario and reject the null hypothesis at $\alpha = 0.05$. Our proposed test is in dark blue. Leading competing methods include the tests given in panaretos_second-order_2010 (in orange) and pomann_two-sample_2016 (in yellow). Each column displays a different effect size, and the rows display the baseline variance shared by both groups, where $\lambda_3^{(1)} = \gamma$ and $\lambda_3^{(2)} = \gamma + \delta$.
  • Figure 4: Empirical rejection rates for paired datasets across simulation settings. We run 1000 simulations for each simulation scenario and reject the null hypothesis at $\alpha = 0.05$. Our proposed paired test is in dark blue. Competing methods include our proposed independent test (in light blue) and the tests given in panaretos_second-order_2010 (in orange) and pomann_two-sample_2016 (in yellow). Each column displays a different effect size, and the rows display the correlation between any two pairs of simulated functions. Across all simulations, $\text{var}(\xi_{i3}^{(1)}) = 0.5$ and $\text{var}(\xi_{i3}^{(2)}) = 0.5 + \delta$.
  • Figure 5: Spaghetti plots of FPCA decompositions of trial-level data. Each curve represents an estimate for a trial. The panels show the first three FPCs in descending order of variance explained. On average, these five FPCs explain 96.2% of the total variability within each trial. The red line is the LOESS average across all trials.
  • ...and 5 more figures