Identifying Connectivity Distributions from Neural Dynamics Using Flows

Timothy Doyeon Kim, Ulises Pereira-Obilinovic, Yiliu Wang, Eric Shea-Brown, Uygar Sümbül

Abstract

Connectivity structure shapes neural computation, but inferring this structure from population recordings is degenerate: multiple connectivity structures can generate identical dynamics. Recent work uses low-rank recurrent neural networks (lrRNNs) to infer low-dimensional latent dynamics and connectivity structure from observed activity, enabling a mechanistic interpretation of the dynamics. However, standard approaches for training lrRNNs can recover spurious structures irrelevant to the underlying dynamics. We first characterize the identifiability of connectivity structures in lrRNNs and determine conditions under which a unique solution exists. Then, to find such solutions, we develop an inference framework based on maximum entropy and continuous normalizing flows (CNFs), trained via flow matching. Instead of estimating a single connectivity matrix, our method learns the maximally unbiased distribution over connection weights consistent with observed dynamics. This approach captures complex yet necessary distributions such as heavy-tailed connectivity found in empirical data. We validate our method on synthetic datasets with connectivity structures that generate multistable attractors, limit cycles, and ring attractors, and demonstrate its applicability in recordings from rat frontal cortex during decision-making. Our framework shifts circuit inference from recovering connectivity to identifying which connectivity structures are computationally required, and which are artifacts of underconstrained inference.
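The abstract states that the connectivity distribution is learned with continuous normalizing flows trained via flow matching. As a rough, self-contained illustration of the flow-matching objective only (not the paper's architecture), the sketch below regresses a toy linear-basis velocity model onto the conditional target $x_1 - x_0$ along linear interpolation paths, using a heavy-tailed Laplace distribution as a stand-in for connection weights; the basis, hyperparameters, and variable names are our assumptions, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_target(n):
    # Toy heavy-tailed 1-D stand-in for a connectivity-weight distribution.
    return rng.laplace(0.0, 1.0, size=n)

def features(x, t):
    # Hand-chosen linear basis for the velocity model v(x, t) (illustrative only).
    return np.stack([np.ones_like(x), x, t, x * t], axis=1)

def fm_loss(w, x0, x1, t):
    # Conditional flow-matching objective: regress v(x_t, t) onto x1 - x0
    # along the linear path x_t = (1 - t) x0 + t x1.
    xt = (1 - t) * x0 + t * x1
    v_pred = features(xt, t) @ w
    return np.mean((v_pred - (x1 - x0)) ** 2)

# Held-out batch for monitoring the objective.
x0_val = rng.standard_normal(1024)
x1_val = sample_target(1024)
t_val = rng.uniform(size=1024)

w = np.zeros(4)
loss_init = fm_loss(w, x0_val, x1_val, t_val)
lr = 0.05
for _ in range(2000):
    x1 = sample_target(256)            # data samples
    x0 = rng.standard_normal(256)      # base (Gaussian) samples
    t = rng.uniform(size=256)
    xt = (1 - t) * x0 + t * x1
    phi = features(xt, t)
    grad = phi.T @ (phi @ w - (x1 - x0)) / 256   # MSE gradient
    w -= lr * grad
loss_final = fm_loss(w, x0_val, x1_val, t_val)
```

In the paper the velocity field would be a neural network over the high-dimensional loading vectors; the linear basis here only makes the objective's mechanics visible.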

Paper Structure

This paper contains 42 sections, 85 equations, 11 figures, 1 algorithm.

Figures (11)

  • Figure 1: Connector is a framework that infers a minimally structured distribution over the connection weights of an lrRNN trained on neural data. Based on maximum entropy and continuous normalizing flows (CNFs), it constructs a generative model of the connectivity distribution and can sample new lrRNNs whose mean-field dynamics match the latent dynamics learned by the original lrRNN.
  • Figure 2: Identifiability of connectivity distributions. (A) Flow field of quadstable attractor dynamics generated from a generalized Hopfield network. Purple lines are example latent trajectories, with circles indicating initial conditions. (B) The connectivity distribution of the generalized Hopfield network is a mixture of four Gaussians. The $x$-axis shows components of the samples $\boldsymbol{m}$ and the $y$-axis components of the samples $\boldsymbol{n}$ drawn from $p(\boldsymbol{m}, \boldsymbol{n})$. Colors indicate which of the four Gaussians each neuron is drawn from (the neuron's "cell type"); the same colors are used across B--D based on the ground-truth cell type. (C) Ground-truth $\boldsymbol{m}_i$'s plotted against $\mathbb{E}[\boldsymbol{n}|\boldsymbol{m}_i]$'s. (D) Connectivity generated from an arbitrary $p(\boldsymbol{n}|\boldsymbol{m})$ with the same $\mathbb{E}[\boldsymbol{n}|\boldsymbol{m}]$ as the ground-truth connectivity. The connectivity distributions in B--D are all admissible and generate dynamics nearly identical to the quadstable attractor dynamics in A. (E) Connectivity inferred by LINT; colors from $4$-means clustering. (F) Connectivity inferred by Connector (our approach); colors from $4$-means clustering. (G) The ground-truth and inferred $p(\boldsymbol{n})$ in B, E, and F. (H) Neurons clustered by the learned connectivity using a GMM; the 5-fold cross-validated log-likelihood (mean $\pm$ std) was computed to identify the "elbow".
  • Figure 3: Comparison of dynamics under perturbation. We clustered the neurons into four cell types for each model, then silenced the neurons belonging to one of the four cell types in the ground truth (A), LINT (B), and Connector (C).
  • Figure 4: Connectivity dissimilarity $D(\textrm{True},\textrm{Inferred})$ computed using the measure in valente22 and our measure in Equation (\ref{eq:our_distance_measure}). QA: Quadstable Attractors (Figure 2), BA: Bistable Attractors (Supplementary Figure 6A--F), LC: Limit Cycles (Supplementary Figure 6G--J), RA: Ring Attractors (Supplementary Figure 6K--N).
  • Figure 5: Connector-inferred connectivity distribution reveals computational cell types and their roles in neural dynamics. (A) Dataset from LuoKim2025. The rat listened to a stream of clicks from the left and right speakers and oriented to the side with more clicks. Neurons from frontal cortex were recorded during this task ($K_{obs}=240$). Figure adapted from kim2025findr. (B) Connectivity distribution inferred with Connector; each dot denotes a neuron sampled from the distribution ($K = 5{,}000$). (C) Neurons clustered by the connectivity inferred in B using a GMM; the 5-fold cross-validated log-likelihood (mean $\pm$ std) was computed to identify the "elbow". The Connector-inferred connectivity suggests at least 2 clusters, plotted in pink and green in B. (D) Flow field (quiver plot, shown only inside the dotted line---the region traversed by the single-trial latent trajectories---for consistency with LuoKim2025), with the normalized difference showing the relative contributions of cell types A and B to the dynamics. Colored trajectories are trial-averaged latent trajectories grouped by click count (darker red: more right clicks; darker blue: more left clicks).
  • ...and 6 more figures
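Figures 2H and 5C select the number of computational cell types via the "elbow" of a 5-fold cross-validated GMM log-likelihood. Below is a minimal 1-D numpy sketch of that model-selection loop under our own assumptions (the paper presumably fits multivariate GMMs over the $(\boldsymbol{m}, \boldsymbol{n})$ loadings; function names, the quantile-based initialization, and hyperparameters are ours).

```python
import numpy as np

def gmm_em_1d(x, k, iters=200):
    """Fit a 1-D Gaussian mixture with EM; returns weights, means, variances."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities via a numerically stable log-sum-exp.
        log_p = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
                 - 0.5 * (x[:, None] - mu) ** 2 / var)
        m = log_p.max(axis=1, keepdims=True)
        log_norm = m + np.log(np.exp(log_p - m).sum(axis=1, keepdims=True))
        r = np.exp(log_p - log_norm)
        # M-step: update weights, means, variances.
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

def avg_loglik(x, pi, mu, var):
    """Average log-likelihood of held-out points under the fitted mixture."""
    log_p = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
             - 0.5 * (x[:, None] - mu) ** 2 / var)
    m = log_p.max(axis=1)
    return np.mean(m + np.log(np.exp(log_p - m[:, None]).sum(axis=1)))

def cv_loglik(x, k, folds=5, seed=0):
    """5-fold cross-validated held-out log-likelihood for a k-component GMM."""
    idx = np.random.default_rng(seed).permutation(len(x))
    scores = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        pi, mu, var = gmm_em_1d(x[train], k)
        scores.append(avg_loglik(x[test], pi, mu, var))
    return float(np.mean(scores))
```

Sweeping `k` and plotting `cv_loglik` (mean $\pm$ std over folds) against `k` reproduces the elbow-style curve shown in the figures: for clearly bimodal data, the held-out log-likelihood at `k = 2` exceeds that at `k = 1` and then flattens.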