Pure and applied mathematics across all major areas.
2601.00766 The study of extremal problems for set mappings has a long history. It was introduced in 1958 by Erdős and Hajnal, who considered the case of cliques in graphs and hypergraphs. Recently, Caro, Patkós, Tuza and Vizer revisited this subject and initiated the systematic study of set mapping problems for general graphs. In this paper, we prove the following result, which answers one of their questions. Let $G$ be a graph with $m$ edges and no isolated vertices, and let $f : E(K_N) \rightarrow E(K_N)$ be such that $f(e)$ is disjoint from $e$ for all $e \in E(K_N)$. Then for some absolute constant $C$, as long as $N \geq C m$, there is a copy $G^*$ of $G$ in $K_N$ such that $f(e)$ is disjoint from $V(G^*)$ for all $e \in E(G^*)$. The bound $N = O(m)$ is tight for cliques and is tight up to a logarithmic factor for all $G$.
2601.00683 We give a presentation of the $\mathrm{GL}_n(\C)$-equivariant cohomology ring with $\Z$-coefficients of the variety $\Hom(\Z^2,\mathrm{GL}_n(\C))\subseteq \mathrm{GL}_n(\C)^2$ for any $n$. It is torsion-free and minimally generated as an $H^\ast B\mathrm{GL}_n(\C)$-algebra by $3n$ elements. The ideal of relations is the saturation of an $n$-generator ideal by even powers of the Vandermonde polynomial. For coefficients in a field whose characteristic does not divide $n!$, we also give a presentation of the non-equivariant cohomology ring of $\Hom(\Z^2,\mathrm{GL}_n(\C))$.
We mathematically axiomatise the stochastics of counterfactuals by introducing two related frameworks, called counterfactual probability spaces and counterfactual causal spaces, which we collectively term counterfactual spaces. They are, respectively, probability and causal spaces whose underlying measurable spaces are products of world-specific measurable spaces. In contrast to more familiar accounts of counterfactuals founded on causal models, we do not view interventions as a necessary component of a theory of counterfactuals. As an alternative to Pearl's celebrated ladder of causation, we view counterfactuals and interventions as orthogonal concepts, respectively mathematised in counterfactual probability spaces and causal spaces. The two concepts are then combined to form counterfactual causal spaces. At the heart of our theory is the notion of shared information between the worlds, encoded completely within the probability measure and causal kernels, and whose extremes are characterised by independence and synchronisation of worlds. Compared to existing frameworks, counterfactual spaces enable the mathematical treatment of a strictly broader spectrum of counterfactuals.
We study the topological stability of Voronoi percolation in higher dimensions. We show that slightly increasing $p$ allows a discretization that preserves increasing topological properties with high probability. This strengthens a theorem of Bollobás and Riordan and generalizes it to higher dimensions. As a consequence, we prove a sharp phase transition for the emergence of $i$-dimensional giant cycles in Voronoi percolation on the $2i$-dimensional torus.
2512.24920 We study the Chern-Weil theory for the primitive cohomology of a symplectic manifold. First, given a symplectic manifold, we review the superbundle-valued forms on this manifold and prove a primitive version of the Bianchi identity. Second, as the main result, we prove a transgression formula associated with the boundary map of the primitive cohomology. Third, as an application of the main result, we introduce the concept of primitive characteristic classes and point out a further direction.
2512.24152 Sampling based on score diffusions has led to striking empirical results and has attracted considerable attention from various research communities. It depends on the availability of (approximate) Stein score functions at various levels of additive noise. We describe and analyze a modular scheme that reduces score-based sampling to solving a short sequence of ``nice'' sampling problems, for which high-accuracy samplers are known. We show how to design forward trajectories such that both (a) the terminal distribution and (b) each of the backward conditional distributions is defined by a strongly log-concave (SLC) distribution. This modular reduction allows us to exploit \emph{any} SLC sampling algorithm in order to traverse the backwards path, and we establish novel guarantees with short proofs for both uni-modal and multi-modal densities. The use of high-accuracy routines yields $\varepsilon$-accurate answers, in either KL or Wasserstein distances, with polynomial dependence on $\log(1/\varepsilon)$ and $\sqrt{d}$ dependence on the dimension.
2512.24919 We use a recent result of Bader and Sauer on coboundary expansion to prove that residually finite three-dimensional Poincaré duality groups never have property (T). This implies that such groups are never Kähler. The argument applies to fundamental groups of (possibly non-aspherical) compact 3-manifolds, giving a new proof of a theorem of Fujiwara stating that if the fundamental group of a compact 3-manifold has property (T), then that group is finite. The only consequence of geometrization needed in the proof is that 3-manifold groups are residually finite.
2512.23425 This paper develops a general approach to deep learning in a setting that includes nonparametric regression and classification. We build a framework for data satisfying a generalized Bernstein-type inequality, covering independent, $φ$-mixing, strongly mixing and $\mathcal{C}$-mixing observations. Two estimators are proposed: a non-penalized deep neural network estimator (NPDNN) and a sparse-penalized deep neural network estimator (SPDNN). For each of these estimators, bounds on the expected excess risk over the class of Hölder smooth functions and composition Hölder functions are established. Applications to independent data, as well as to $φ$-mixing, strongly mixing and $\mathcal{C}$-mixing processes, are considered. For each of these examples, upper bounds on the expected excess risk of the proposed NPDNN and SPDNN predictors are derived. It is shown that both the NPDNN and SPDNN estimators are minimax optimal (up to a logarithmic factor) in many classical settings.
We introduce \textit{basic inequalities} for first-order iterative optimization algorithms, forming a simple and versatile framework that connects implicit and explicit regularization. While related inequalities appear in the literature, we isolate and highlight a specific form and develop it as a well-rounded tool for statistical analysis. Let $f$ denote the objective function to be optimized. Given a first-order iterative algorithm initialized at $θ_0$ with current iterate $θ_T$, the basic inequality upper bounds $f(θ_T)-f(z)$ for any reference point $z$ in terms of the accumulated step sizes and the distances between $θ_0$, $θ_T$, and $z$. The bound translates the number of iterations into an effective regularization coefficient in the loss function. We demonstrate this framework through analyses of training dynamics and prediction risk bounds. In addition to revisiting and refining known results on gradient descent, we provide new results for mirror descent with Bregman divergence projection, for generalized linear models trained by gradient descent and exponentiated gradient descent, and for randomized predictors. We illustrate and supplement these theoretical findings with experiments on generalized linear models.
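As a concrete instance of the flavor of bound described above, the textbook basic inequality for gradient descent on a smooth convex objective, $f(\theta_T) - f(z) \le \|\theta_0 - z\|^2 / (2\eta T)$ for step size $\eta \le 1/L$, can be checked numerically. The sketch below uses made-up least-squares data purely for illustration; it is a standard special case, not the paper's general framework.

```python
import numpy as np

# Convex quadratic objective f(theta) = 0.5 * ||X @ theta - y||^2 on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = rng.normal(size=50)
f = lambda t: 0.5 * np.sum((X @ t - y) ** 2)
grad = lambda t: X.T @ (X @ t - y)

eta, T = 1e-3, 500                 # step size (below 1/L here) and iteration count
theta0 = np.zeros(5)
theta = theta0.copy()
for _ in range(T):
    theta -= eta * grad(theta)     # plain gradient descent

# Basic inequality with reference point z = a minimizer of f:
# f(theta_T) - f(z) <= ||theta0 - z||^2 / (2 * eta * T).
z = np.linalg.lstsq(X, y, rcond=None)[0]
gap = f(theta) - f(z)
bound = np.linalg.norm(theta0 - z) ** 2 / (2 * eta * T)
print(gap, bound)                  # the gap sits below the bound
```

Note how $1/(2\eta T)$ plays the role of an effective regularization coefficient: more iterations or larger steps loosen the pull toward the initialization $\theta_0$.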
We show that two-dimensional billiard systems are Turing complete by encoding their dynamics within the framework of Topological Kleene Field Theory. Billiards serve as idealized models of particle motion with elastic reflections and arise naturally as limits of smooth Hamiltonian systems under steep confining potentials. Our results establish the existence of undecidable trajectories in physically natural billiard-type models, including billiard-type models arising in hard-sphere gases and in collision-chain limits of celestial mechanics.
2512.11692 Bourke and Garner described how to cofibrantly generate algebraic weak factorisation systems by a small double category of morphisms. However, they did not give an explicit construction of the resulting factorisations as in the classical small object argument. In this paper we give such an explicit construction, as the colimit of a chain, which makes the result applicable in constructive settings; in particular, our methods provide a constructive proof that the effective Kan fibrations introduced by Van den Berg and Faber appear as the right class of an algebraic weak factorisation system.
We consider the distribution of the top eigenvector $\widehat{v}$ of a spiked matrix model of the form $H = θvv^* + W$, in the supercritical regime where $H$ has an outlier eigenvalue of comparable magnitude to $\|W\|$. We show that, if $v$ is sufficiently delocalized, then the distribution of the individual entries of $\widehat{v}$ (not, we emphasize, merely the inner product $\langle \widehat{v}, v\rangle$) is universal over a large class of generalized Wigner matrices $W$ having independent entries, depending only on the first two moments of the distributions of the entries of $W$. This complements the observation of Capitaine and Donati-Martin (2018) that these distributions are not universal when $v$ is instead sufficiently localized. Further, for $W$ having entrywise variances close to constant and thus resembling a Wigner matrix, we show by comparing to the case of $W$ drawn from the Gaussian orthogonal or unitary ensembles that averages of entrywise functions of $\widehat{v}$ behave as they would if $\widehat{v}$ had Gaussian fluctuations around a suitable multiple of $v$. We apply these results to study spectral algorithms followed by rounding procedures in dense stochastic block models and synchronization problems over the cyclic and circle groups, obtaining the first precise asymptotic characterizations of the error rates of such algorithms.
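The supercritical regime described above can be simulated directly. The sketch below is illustrative only (a delocalized spike $v = \mathbf{1}/\sqrt{n}$ and GOE-like noise are assumed for concreteness); it recovers the classical BBP-type prediction $\langle \widehat{v}, v\rangle^2 \approx 1 - 1/θ^2$ for $θ > 1$, not the paper's entrywise universality results.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 1000, 3.0

# Delocalized unit spike direction and GOE-like noise with operator norm ~ 2.
v = np.ones(n) / np.sqrt(n)
G = rng.normal(size=(n, n))
W = (G + G.T) / np.sqrt(2 * n)

H = theta * np.outer(v, v) + W
vals, vecs = np.linalg.eigh(H)     # ascending eigenvalues
top = vecs[:, -1]                  # eigenvector of the outlier eigenvalue

overlap = abs(top @ v)
# BBP prediction: outlier eigenvalue ~ theta + 1/theta (~3.33 here),
# and overlap^2 ~ 1 - 1/theta^2 (~0.889 here).
print(vals[-1], overlap)
```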
2512.11737 We analyze two fully time-discrete numerical schemes for the incompressible Navier-Stokes equations posed on evolving surfaces in $\mathbb{R}^3$ with prescribed normal velocity using the evolving surface finite element method (ESFEM). We employ generalized Taylor-Hood finite elements $\mathrm{\mathbf{P}}_{k_u}$-- $\mathrm{P}_{k_{pr}}$-- $\mathrm{P}_{k_λ}$, $k_u=k_{pr}+1 \geq 2$, $k_λ\geq 1$, for the spatial discretization, where the normal velocity constraint is enforced weakly via a Lagrange multiplier $λ$, and a backward Euler discretization for the time-stepping procedure. Depending on the approximation order of $λ$ and the weak formulation of the Navier-Stokes equations, we present a stability and error analysis for two different discrete schemes, whose difference lies in the geometric information needed. We establish optimal velocity $L^{2}_{a_h}$-norm error bounds ($a_h$ an energy norm) for both schemes when $k_λ=k_u$, but only for the more information-intensive one when $k_λ=k_u-1$, using iso-parametric and super-parametric discretizations, respectively, with the help of a newly derived surface Ritz-Stokes projection. Similarly, stability and optimal convergence for the pressure are established in an $L^2_{L^2}\times L^2_{H_h^{-1}}$-norm ($H_h^{-1}$ a discrete dual space) when $k_λ=k_u$, using a novel Leray time-projection to ensure weak divergence conformity for our discrete velocity solution at two different time-steps (surfaces). Assuming further regularity conditions for the more information-intensive scheme, along with an almost weak divergence conformity result at two different time-steps, we establish optimal $L^2_{L^2}\times L^2_{L^2}$-norm pressure error bounds when $k_λ=k_u-1$, using super-parametric approximation. Simulations verifying our results are provided, along with a comparison test against a penalty approach.
If most of the pixels in an $n \times m$ digital image are the same color, must the image contain a large connected component? How densely can a given set of connected components pack in $\mathbb{Z}^2$ without touching? We answer these two closely related questions for both 4-connected and 8-connected components. In particular, we use structural arguments to upper bound the "white" pixel density of infinite images whose white (4- or 8-)connected components have size at most $k$. Explicit tilings show that these bounds are tight for at least half of all natural numbers $k$ in the 4-connected case, and for all $k$ in the 8-connected case. We also extend these results to finite images. To obtain the upper bounds, we define the exterior site perimeter of a connected component and then leverage geometric and topological properties of this set to partition images into nontrivial regions called polygonal tiles. Each polygonal tile contains a single white connected component and satisfies a certain maximality property. We then use isoperimetric inequalities to precisely bound the area of these tiles. The solutions to these problems represent new statistics on the connected component distribution of digital images.
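The 4- versus 8-connectivity distinction above can be made concrete with a tiny example, using `scipy.ndimage.label` (an illustration of the definitions only, not of the paper's constructions): diagonal white pixels form three 4-connected components but a single 8-connected one.

```python
import numpy as np
from scipy import ndimage

# Diagonal white pixels: disconnected under 4-connectivity,
# one component under 8-connectivity.
img = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
])

# The default structuring element is the 4-neighbourhood (von Neumann).
_, n4 = ndimage.label(img)
# A full 3x3 structuring element gives the 8-neighbourhood (Moore).
_, n8 = ndimage.label(img, structure=np.ones((3, 3)))

print(n4, n8)  # 3 1
```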
Networks are often modeled using graphs, and within this setting we introduce the notion of $k$-fault-tolerant mutual visibility. Informally, a set of vertices $X \subseteq V(G)$ in a graph $G$ is a $k$-fault-tolerant mutual-visibility set ($k$-ftmv set) if any two vertices in $X$ are connected by a bundle of $k+1$ shortest paths such that: ($i$) each shortest path contains no other vertex of $X$, and ($ii$) these paths are internally disjoint. The cardinality of a largest $k$-ftmv set is denoted by $\mathrm{f}μ^{k}(G)$. The classical notion of mutual visibility corresponds to the case $k = 0$. This generalized concept is motivated by applications in communication networks, where agents located at vertices must communicate both efficiently (i.e., via shortest paths) and confidentially (i.e., without messages passing through the location of any other agent). The original notion of mutual visibility may fail in unreliable networks, where vertices or links can become unavailable. Several properties of $k$-ftmv sets are established, including a natural relationship between $\mathrm{f}μ^{k}(G)$ and $ω(G)$, as well as a characterization of graphs for which $\mathrm{f}μ^{k}(G)$ is large. It is shown that computing $\mathrm{f}μ^{k}(G)$ is NP-hard for any positive integer $k$, whether $k$ is fixed or not. Exact formulae for $\mathrm{f}μ^{k}(G)$ are derived for several specific graph topologies, including grid-like networks such as cylinders and tori, and for diameter-two networks defined by Hamming graphs and by the direct product of complete graphs.
It is well known that any higher order Markov chain can be associated with a first order Markov chain. In this primarily expository article, we present the first fairly comprehensive analysis of the relationship between higher order and first order Markov chains, together with illustrative examples. Our main objective is to address the central question as posed in the title.
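The classical association mentioned above lifts a chain of order $r$ to a first-order chain on $r$-tuples of consecutive states. A minimal sketch for a second-order chain on two states follows; the transition tensor `P` is made-up illustrative data, not drawn from the article.

```python
import itertools
import numpy as np

# Second-order chain on {0, 1}: P[a, b, c] is the probability of moving
# to state c given that the last two states were (a, b).
P = np.array([
    [[0.9, 0.1], [0.3, 0.7]],
    [[0.5, 0.5], [0.2, 0.8]],
])

# Lift to a first-order chain on the 4 pair-states (a, b).
pairs = list(itertools.product([0, 1], repeat=2))
Q = np.zeros((4, 4))
for i, (a, b) in enumerate(pairs):
    for j, (b2, c) in enumerate(pairs):
        if b2 == b:                 # consecutive pairs must overlap: (a,b) -> (b,c)
            Q[i, j] = P[a, b, c]

# Each row of the lifted transition matrix is a probability distribution.
print(Q)
```

Sample paths of the lifted chain project, by reading off the second coordinate, to sample paths of the original second-order chain.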
2512.01968 We prove that if two very general cubic fourfolds are L-equivalent then they are isomorphic, and we observe that there exist special cubic fourfolds which are L-equivalent but not isomorphic. As a consequence, we provide examples of hyper-Kähler manifolds which are L-equivalent but not birational. We also provide further examples in support of the fact that L-equivalent hyper-Kähler manifolds should be D-equivalent, as conjectured by Meinsma.
2512.01966 We review the theory of one-sided coupled operator matrices with a focus on evolution equations with inhomogeneous boundary conditions. (The original article had no abstract.)
We investigate the impact of dissipative dynamic boundary conditions applied at one end of a beam, analyzing their influence on model stability within the Euler-Bernoulli framework. Our primary finding is that hybrid dissipation does not alter the decay characteristics of the original model. We examine two scenarios: first, when hybrid dissipation is the sole dissipative mechanism, and second, when it complements other dissipative mechanisms. In the first case, we demonstrate that hybrid dissipation fails to induce exponential decay, instead producing a slow decay rate of $t^{-1/2}$ for large $t$. In the second case, when acting as a complementary mechanism, hybrid dissipation neither enhances nor diminishes the decay behavior of the original model.
In this paper, we explore how different selections of basis functions affect the efficacy of frequency-domain techniques in statistical independence tests, and we study different algorithms for extracting low-dimensional algebraic relations from dependent data. We examine a range of complete orthonormal bases, including the Legendre polynomials, Fourier series, Walsh functions, and standard and nonstandard Haar wavelet bases. We utilize fast transformation algorithms to efficiently transform physical-domain data to frequency-domain coefficients. The main focuses of this paper are the effectiveness of different basis selections in detecting data dependency using frequency-domain data, e.g., whether varying basis choices significantly influence statistical power loss for small data with large noise, and the stability of different optimization formulations for finding proper algebraic relations when data are dependent. We present numerical results to demonstrate the effectiveness of frequency-domain-based statistical analysis methods and provide guidance for selecting the proper basis and algorithm to detect a particular type of relation.
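A minimal illustration of why basis choice matters (hypothetical data; NumPy's Legendre utilities stand in for the fast transforms discussed above): for $Y = X^2 + \text{noise}$ the raw linear correlation is near zero, while the coefficient on a degree-2 Legendre feature exposes the dependence.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=2000)
y = x**2 + 0.05 * rng.normal(size=2000)   # dependent, but linearly uncorrelated

# Raw Pearson correlation misses the dependence...
raw = np.corrcoef(x, y)[0, 1]

# ...but correlating y with low-degree Legendre features of x reveals it:
# legvander gives columns P0(x), ..., P3(x); drop the constant column.
feats = legendre.legvander(x, 3)[:, 1:]
corrs = [abs(np.corrcoef(f, y)[0, 1]) for f in feats.T]

print(raw, max(corrs))
```

Here the $P_2$ feature, $\tfrac{1}{2}(3x^2 - 1)$, is nearly collinear with $x^2$, so its correlation with $y$ is close to one even though the linear statistic is uninformative.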