Covers models of computation, complexity classes, structural complexity, complexity tradeoffs, upper and lower bounds.
2604.04830
A propositional proof system $P$ has the strong feasible disjunction property iff there is a constant $c \geq 1$ such that whenever $P$ admits a size-$s$ proof of $\bigvee_i α_i$ with no two $α_i$ sharing an atom, then one of the $α_i$ has a $P$-proof of size $\le s^c$. It was proved by K. (2025) that, assuming a computational complexity conjecture and a conjecture about proof complexity generators, no sufficiently strong proof system admits this property. Here we build on Ilango (2025) and Ren et al. (2025) and prove the same result under two purely computational complexity hypotheses:
- there exists a language in the class E that requires exponential-size circuits even when the circuits are allowed to query an NP oracle,
- there exists a P/poly demi-bit in the sense of Rudich (1997).
A notorious open question in circuit complexity is whether Boolean operations of arbitrary arity can be expressed efficiently using modular counting gates only. Håstad's celebrated switching lemma yields exponential lower bounds for the dual problem, realising modular arithmetic with Boolean gates, but a similar lower bound for modular circuits computing the Boolean AND function has remained elusive for almost 30 years. We solve this problem for the restricted model of symmetric circuits: we consider MOD$_m$-circuits of arbitrary depth, for an arbitrary modulus $m \in \mathbb{N}$, and obtain subexponential lower bounds for computing the $n$-ary Boolean AND function, under the assumption that the circuits are syntactically symmetric under all permutations of their $n$ input gates. This lower bound is matched precisely by a construction due to Idziak, Kawałek, and Krzaczkowski (LICS'22), leading to the surprising conclusion that the optimal symmetric circuit size is already achieved at depth $2$. Motivated by another construction from the same work, which achieves smaller size at the cost of greater depth, we also prove tight size lower bounds for circuits with a more liberal notion of symmetry, characterised by a nested block structure on the input variables.
2604.04188
In the noisy $k$-XOR problem, one is given $y \in \mathbb{F}_2^M$ and must distinguish between $y$ uniform and $y = Ax + e$, where $A$ is the adjacency matrix of a $k$-left-regular bipartite graph with $N$ variables and $M$ constraints, $x \in \mathbb{F}_2^N$ is random, and $e$ is noise with rate $η$. Lower bounds in restricted computational models such as Sum-of-Squares and low-degree polynomials are closely tied to the expansion of $A$, leading to conjectures that expansion implies hardness. We show that such conjectures are false by constructing an explicit family of graphs with near-optimal expansion for which noisy $k$-XOR is solvable in polynomial time. Our construction combines two powerful directions of work in pseudorandomness and coding theory that have not previously been put together. Specifically, our graphs are based on the lossless expanders of Guruswami, Umans and Vadhan (JACM 2009). Our key insight is that, under an appropriate interpretation of the vertices of their graphs, the noisy XOR problem turns into the problem of decoding Reed-Muller codes from random errors. We then build on a powerful body of work from the 2010s on correcting Reed-Muller codes from large amounts of random errors. Putting these together yields our construction. Concretely, we obtain explicit families for which noisy $k$-XOR is polynomial-time solvable at constant noise rate $η = 1/3$ for graphs with $M = 2^{O(\log^2 N)}$, $k = (\log N)^{O(1)}$, and $(N^{1-α}, 1-o(1))$-expansion. Under standard conjectures on Reed-Muller codes over the binary erasure channel, this extends to families with $M = N^{O(1)}$, $k = (\log N)^{O(1)}$, expansion $(N^{1-α}, 1-o(1))$, and polynomial-time algorithms at noise rate $η = N^{-c}$.
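To fix notation, here is a minimal sketch of the distinguishing problem exactly as defined above. The constraint graph below is sampled uniformly at random for simplicity, whereas the point of the paper is to use explicit lossless-expander families; names and seeds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_kxor_instance(N, M, k, eta, planted=True):
    """Sample (A, y): A is the M x N incidence matrix of a k-left-regular
    constraint graph; y is either uniform or A x + e over F_2, with a hidden
    assignment x and noise e of rate eta."""
    A = np.zeros((M, N), dtype=int)
    for i in range(M):
        A[i, rng.choice(N, size=k, replace=False)] = 1   # k variables per constraint
    if not planted:
        return A, rng.integers(0, 2, size=M)             # the "uniform y" case
    x = rng.integers(0, 2, size=N)                       # hidden assignment
    e = (rng.random(M) < eta).astype(int)                # noise at rate eta
    return A, (A @ x + e) % 2                            # y = Ax + e over F_2
```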
2604.03805
Alice and Bob are given $n$-bit integer pairs $(x,y)$ and $(a,b)$, respectively, and they must decide if $y=ax+b$. We prove that the randomised communication complexity of this Point--Line Incidence problem is $Θ(\log n)$. This confirms a conjecture of Cheung, Hatami, Hosseini, and Shirley (CCC 2023) that the complexity is super-constant, and gives the first example of a communication problem with constant support-rank but super-constant randomised complexity.
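For context on the upper-bound side, $O(\log n)$-bit protocols for arithmetic predicates of this shape are typically fingerprinting protocols. The sketch below illustrates that standard technique and is not claimed to be the paper's protocol: Alice sends a random small prime together with her inputs reduced modulo it, and Bob checks the congruence.

```python
import random

def is_prime(p):
    # trial division; fine for the O(log n)-bit primes used here
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def incidence_protocol(x, y, a, b, n):
    """One-sided-error test of y == a*x + b for n-bit integers.

    Alice holds (x, y), Bob holds (a, b). Alice sends (p, x mod p, y mod p),
    which is O(log n) bits; Bob accepts iff y = a*x + b (mod p). If
    y != a*x + b, the nonzero integer y - a*x - b has magnitude < 2^(2n+1),
    hence at most ~2n prime divisors, so a uniformly random prime below n^3
    exposes the difference with probability 1 - o(1)."""
    while True:
        p = random.randrange(2, max(16, n ** 3))
        if is_prime(p):
            break
    return (y - (a * x + b)) % p == 0
```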
The Tree Evaluation Problem ($\mathsf{TreeEval}$) is a computational problem originally proposed as a candidate to prove a separation between complexity classes $\mathsf{P}$ and $\mathsf{L}$. Recently, this problem has gained significant attention after Cook and Mertz (STOC 2024) showed that $\mathsf{TreeEval}$ can be solved using $O(\log n\log\log n)$ bits of space. Their algorithm, despite getting very close to showing $\mathsf{TreeEval} \in \mathsf{L}$, falls short, and in particular, it does not run in polynomial time. In this work, we present the first polynomial-time, almost logarithmic-space algorithm for $\mathsf{TreeEval}$. For any $\varepsilon>0$, our algorithm solves $\mathsf{TreeEval}$ in time $\mathrm{poly}(n)$ while using $O(\log^{1 +\varepsilon}n)$ space. Furthermore, our algorithm has the additional property that it requires only $O(\log n)$ bits of free space, and the rest can be catalytic space. Our approach is to trade off some (catalytic) space usage for a reduction in time complexity.
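For readers new to the problem, here is a minimal sketch of the instance format together with the textbook recursion that the space bounds are measured against; this is the naive algorithm, not the Cook-Mertz technique or the new one. An instance is a height-$h$ complete binary tree whose leaves carry values in $[k]$ and whose internal nodes carry functions $[k] \times [k] \to [k]$; the recursion stores one value per level, i.e. $O(h \log k)$ bits.

```python
def tree_eval(node, leaf_value, node_fn, height):
    """Naive recursive evaluation of a TreeEval instance.

    node       -- index of the current node (root = 1, children 2v and 2v+1)
    leaf_value -- maps a leaf index to its value in [k]
    node_fn    -- maps an internal node index to a function [k] x [k] -> [k]
    """
    if height == 0:
        return leaf_value(node)
    left = tree_eval(2 * node, leaf_value, node_fn, height - 1)
    right = tree_eval(2 * node + 1, leaf_value, node_fn, height - 1)
    return node_fn(node)(left, right)
```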
While first-order stationary points (FOSPs) are the traditional targets of non-convex optimization, they often correspond to undesirable strict saddle points. To circumvent this, attention has shifted towards second-order stationary points (SOSPs). In unconstrained settings, finding approximate SOSPs is PLS-complete (Kontogiannis et al.), matching the complexity of finding unconstrained FOSPs (Hollender and Zampetakis). However, the complexity of finding SOSPs in constrained settings remained notoriously unclear and was highlighted as an important open question by both aforementioned works. Under one strict definition, even verifying whether a point is an approximate SOSP is NP-hard (Murty and Kabadi). Under another widely adopted, relaxed definition, where non-negative curvature is required only along the null space of the active constraints, the problem lies in TFNP, and algorithms with $O(\mathrm{poly}(1/\varepsilon))$ running times have been proposed (Lu et al.). In this work, we settle the complexity of constrained SOSPs by proving that computing an $\varepsilon$-approximate SOSP under the tractable definition is PLS-complete. We demonstrate that our result holds even in the 2D unit square $[0,1]^2$, and remarkably, even when stationary points are isolated at a distance of $Ω(1)$ from the domain's boundary. Our result establishes a fundamental barrier: unless PLS $\subseteq$ PPAD (implying PLS = CLS), no deterministic, iterative algorithm with an efficient, continuous update rule can exist for finding approximate SOSPs. This contrasts with the constrained first-order counterpart, for which Fearnley et al. showed that finding an approximate KKT point is CLS-complete. Finally, our result yields the first problem defined on a compact domain shown to be PLS-complete beyond the canonical Real-LocalOpt (Daskalakis and Papadimitriou).
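To make the relaxed notion concrete, here is one common formalization of the kind alluded to above; conventions differ across papers, and this is a hedged reconstruction rather than necessarily the exact definition used here. For $\min f(x)$ subject to $g_i(x) \le 0$, write $A(x) = \{i : g_i(x) = 0\}$ for the active set; then $x$ is an $\varepsilon$-approximate SOSP if, for some multipliers $λ_i \ge 0$ supported on $A(x)$,
$$\Big\| \nabla f(x) + \sum_i λ_i \nabla g_i(x) \Big\| \le \varepsilon \quad\text{and}\quad d^{\top} \nabla^2_{xx} L(x, λ)\, d \ge -\sqrt{\varepsilon}\, \|d\|^2 \ \ \text{for all } d \text{ with } \nabla g_i(x)^{\top} d = 0,\ i \in A(x),$$
i.e. approximate first-order stationarity plus nearly non-negative curvature of the Lagrangian $L(x, λ) = f(x) + \sum_i λ_i g_i(x)$ along the null space of the active constraint gradients.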
We show that, assuming NP $\not\subseteq \bigcap_{δ>0} \mathrm{DTIME}\left(\exp(n^δ)\right)$, the shortest vector problem for lattices of rank $n$ in any finite $\ell_p$ norm is hard to approximate within a factor of $2^{(\log n)^{1-o(1)}}$, via a deterministic reduction. Previously, for the Euclidean case $p=2$, even hardness of the exact shortest vector problem was not known under a deterministic reduction.
2604.01400
In a streaming constraint satisfaction problem (streaming CSP), a $p$-pass algorithm receives the constraints of an instance sequentially, making $p$ passes over the input in a fixed order, with the goal of approximating the maximum fraction of satisfiable constraints. We show near-optimal space lower bounds for streaming CSPs, improving upon prior work. (1) Fei, Minzer and Wang (STOC 2026) showed that for any CSP, the basic linear program defines a threshold $α_{\mathrm{LP}} \in [0,1]$ such that, for any $\varepsilon > 0$, an $(α_{\mathrm{LP}} - \varepsilon)$-approximation can be achieved using constant passes and polylogarithmic space, whereas achieving an $(α_{\mathrm{LP}} + \varepsilon)$-approximation requires $Ω(n^{1/3}/p)$ space. We improve this lower bound to $Ω(\sqrt{n}/p)$, which is nearly tight for a gap version of the problem. (2) For $p = o(\log n)$, we further strengthen the lower bound to $Ω(n \cdot 2^{-O_{\varepsilon}(p)})$. Combined with existing algorithmic results, this shows that $α_{\mathrm{LP}}$ is not only the limit of multi-pass polylogarithmic-space algorithms, but also the limit of single-pass sublinear-space algorithms on bounded-degree instances. (3) For certain CSPs, we show that there exists $α < 1$ such that achieving an $α$-approximation requires $Ω(n/p)$ space. Our proofs are Fourier analytic, building on the techniques of Fei, Minzer and Wang (STOC 2026) and the Fourier-$\ell_1$-based lower bound method of Kapralov and Krachun (STOC 2019).
2604.01386
Strassen founded the theory of the asymptotic spectrum of tensors to study the complexity of matrix multiplication. A central challenge in this theory is to explicitly construct new spectral points. In Crelle 1991, Strassen proposed the upper support functionals $ζ^θ$ as candidate spectral points, where $θ$ ranges over a triangle $Θ$. Recent progress, involving tools and ideas from quantum information theory (Christandl-Vrana-Zuiddam, STOC 2018, JAMS 2021) and convex optimization (Hirai, 2025), culminated in the proof that the upper support functionals are indeed spectral points over the complex numbers (Sakabe-Doğan-Walter, 2026). In this paper, we give an even clearer picture of the situation for support functionals when $θ$ lies along the edges of the triangle. We show that not only are these functionals spectral points, but that they are uniquely determined as spectral points by their behavior on matrix multiplication tensors. Because our methods are algebraic, we obtain as a corollary the first proof of the existence of nontrivial spectral points over arbitrary fields. As part of our argument, we show a close connection between the edge support functionals and Harder-Narasimhan filtrations from quiver representation theory. We thus show, using recent work in algorithmic invariant theory, that these support functionals can be computed in deterministic polynomial time. Other ingredients of our proof include a new criterion for abstractly characterizing asymptotic tensor ranks by spectral points, and a characterization of the edge support functionals in terms of matrix multiplication capacity. As another application of these tools, we prove the existence of spectral points for higher-mode tensors beyond those currently known.
Since the breakthrough superpolynomial multilinear formula lower bounds of Raz (Theory of Computing 2006), proving such lower bounds against multilinear algebraic branching programs (mABPs) has been a longstanding open problem in algebraic complexity theory. All known multilinear lower bounds rely on the min-partition rank method, and the best bounds against mABPs have remained quadratic (Alon, Kumar, and Volk, Combinatorica 2020). We show that the min-partition rank method cannot prove superpolynomial mABP lower bounds: there exists a full-rank multilinear polynomial computable by a polynomial-size mABP. This is an unconditional barrier: new techniques are needed to separate $\mathsf{mVBP}$ from higher classes in the multilinear hierarchy. Our proof resolves an open problem of Fabris, Limaye, Srinivasan, and Yehudayoff (ECCC 2026), who showed that the power of this method is governed by the minimum size $N(n)$ of a combinatorial object called a $1$-balanced-chain set system, and proved $N(n) \le n^{O(\log n/\log\log n)}$. We prove $N(n) = n^{O(1)}$ by giving the chain-builder a binary choice at each step, biasing what was a symmetric random walk into one where the imbalance increases with probability at most $1/4$; a supermartingale argument combined with a multi-scale recursion yields the polynomial bound.
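The walk-biasing step in the last sentence can be illustrated with a toy Monte Carlo experiment. This is only an illustration of the "binary choice" idea, not the paper's chain-builder: at each step two independent $\pm 1$ moves are offered and we keep the one that leaves the imbalance smaller, so the imbalance can grow only when both offered moves point away from $0$, an event of probability $1/4$.

```python
import random

def biased_walk_max_imbalance(steps, seed=0):
    """Max |position| of a walk that may pick the better of two +/-1 moves."""
    rng = random.Random(seed)
    s, worst = 0, 0
    for _ in range(steps):
        a, b = rng.choice((-1, 1)), rng.choice((-1, 1))
        s += min((a, b), key=lambda d: abs(s + d))  # greedy binary choice
        worst = max(worst, abs(s))
    return worst
```

Once $|s| > 0$ this chain decreases $|s|$ with probability $3/4$, so the maximum imbalance over $T$ steps is typically $O(\log T)$, in contrast with the $Θ(\sqrt{T})$ fluctuations of the unbiased walk.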
2604.00591
In Grochow and Qiao (SIAM J. Comput., 2021), the complexity class Tensor Isomorphism (TI) was introduced and isomorphism problems for groups, algebras, and polynomials were shown to be TI-complete. In this paper, we study average-case algorithms for several TI-complete problems over finite fields, including algebra isomorphism, matrix code conjugacy, and $4$-tensor isomorphism. Our main results are as follows. Over the finite field of order $q$, we devise (1) average-case polynomial-time algorithms for algebra isomorphism and matrix code conjugacy that succeed on a $1/Θ(q)$ fraction of inputs and (2) an average-case polynomial-time algorithm for $4$-tensor isomorphism that succeeds on a $1/q^{Θ(1)}$ fraction of inputs. Prior to our work, algorithms for algebra isomorphism with rigorous average-case analyses ran in exponential time, albeit succeeding on a larger fraction of inputs (Li-Qiao, FOCS'17; Brooksbank-Li-Qiao-Wilson, ESA'20; Grochow-Qiao-Tang, STACS'21). These results reveal a finer landscape of the average-case complexities of TI-complete problems, providing guidance for cryptographic systems based on isomorphism problems. Our main technical contribution is to introduce the spectral properties of random matrices into algorithms for TI-complete problems. This leads not only to new algorithms but also to new questions in random matrix theory over finite fields. To settle these questions, we need to extend both the generating function approach as in Neumann and Praeger (J. London Math. Soc., 1998) and the characteristic sum method of Gorodetsky and Rodgers (Trans. Amer. Math. Soc., 2021).
We study the binary perceptron, a random constraint satisfaction problem that asks to find a Boolean vector in the intersection of independently chosen random halfspaces. A striking feature of this model is that at every positive constraint density, it is expected that a $1-o_N(1)$ fraction of solutions are \emph{strongly isolated}, i.e. separated from all others by Hamming distance $Ω(N)$. At the same time, efficient algorithms are known to find solutions at certain positive constraint densities. This raises a natural question: can any isolated solution be algorithmically visible? We answer this in the negative: no algorithm whose output is stable under a tiny Gaussian resampling of the disorder can \emph{reliably} locate isolated solutions. We show that any stable algorithm has success probability at most $\frac{3\sqrt{17}-9}{4}+o_N(1)\leq 0.84233$. Furthermore, every stable algorithm that finds a solution with probability $1-o_N(1)$ finds an isolated solution with probability $o_N(1)$. The class of stable algorithms we consider includes degree-$D$ polynomials up to $D\leq o(N/\log N)$; under the low-degree heuristic (Hopkins, 2018), this suggests that locating strongly isolated solutions requires running time $\exp(\widetilde{Θ}(N))$. Our proof does not use the overlap gap property. Instead, we show via Pitt's correlation inequality that after a random perturbation of the disorder, the number of solutions located close to a pre-existing isolated solution cannot concentrate at $1$.
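For concreteness, here is one standard way to instantiate the model; this is a hedged sketch of the asymmetric variant with margin $κ$ (the abstract's "intersection of random halfspaces" corresponds to $κ = 0$, and conventions vary across the literature), with a brute-force enumerator meant only for inspecting the solution geometry at tiny $N$.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def perceptron_instance(N, alpha):
    """M = round(alpha * N) i.i.d. Gaussian halfspace normals, one per row."""
    return rng.standard_normal((round(alpha * N), N))

def is_solution(G, x, kappa=0.0):
    # x in {-1,+1}^N must satisfy <g_m, x> >= kappa * sqrt(N) for every row g_m
    return bool(np.all(G @ x >= kappa * np.sqrt(G.shape[1])))

def all_solutions(G, kappa=0.0):
    """Exhaustive search over {-1,+1}^N; exponential, for tiny N only
    (e.g. to look at pairwise Hamming distances and isolation)."""
    N = G.shape[1]
    return [x for x in product((-1, 1), repeat=N)
            if is_solution(G, np.array(x), kappa)]
```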
We give an $O(\log^2 n)$-query algorithm for finding a Tarski fixed point over the $4$-dimensional lattice $[n]^4$, matching the $Ω(\log^2 n)$ lower bound of [EPRY20]. Additionally, our algorithm yields an ${O(\log^{\lceil (k-1)/3\rceil+1} n)}$-query algorithm for any constant $k$, improving the previous best upper bound ${O(\log^{\lceil (k-1)/2\rceil+1} n)}$ of [CL22]. Our algorithm uses a new framework based on \emph{safe partial-information} functions. The latter were introduced in [CLY23] to give a reduction from the Tarski problem to its promised version with a unique fixed point. This is the first time they are directly used to design new algorithms for Tarski fixed points.
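As background on the query model, the sketch below is the $k=1$ base case, classical binary search over $[n]$, and not the new safe partial-information framework. The reasoning: if $f$ is monotone and $f(m) > m$, then $f(m) \ge m+1$, so $f$ maps $[m+1, n]$ into itself and Knaster-Tarski guarantees a fixed point there; symmetrically for $f(m) < m$.

```python
def tarski_1d(f, n):
    """Find x in {1,...,n} with f(x) == x for a monotone f: [n] -> [n],
    using O(log n) queries.  Invariant: [lo, hi] contains a fixed point."""
    lo, hi = 1, n
    while True:
        mid = (lo + hi) // 2
        v = f(mid)
        if v == mid:
            return mid
        elif v > mid:
            lo = mid + 1   # f maps [mid+1, n] into itself
        else:
            hi = mid - 1   # f maps [1, mid-1] into itself
```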
2603.29427
We introduce a lightweight and accessible approach to computation over the real numbers, with the aim of clarifying both the underlying concepts and their relevance in modern research. The material is intended for a broad audience, including instructors who wish to incorporate real computation into algorithms courses, their students, and PhD students encountering the subject for the first time. Rather than striving for completeness, we focus on a carefully selected set of results that can be presented and proved in a classroom setting. This allows us to highlight core techniques and recurring ideas while maintaining an approachable exposition. In some places, the presentation is intentionally informal, prioritizing intuition and practical understanding over full technical precision. We position our exposition relative to existing literature, including Matoušek's lecture notes on ER-completeness and the recent compendium of ER-complete problems by Schaefer, Cardinal, and Miltzow. While these works provide deep and comprehensive perspectives, our goal is to offer an accessible entry point with proofs and examples suitable for teaching. Our approach follows modern formulations of real computation that emphasize binary input, real-valued witnesses, and restricted use of constants, aligning more closely with contemporary complexity theory, while acknowledging the foundational contributions of the Blum--Shub--Smale model.
2603.28954
We present several novel encodings for cardinality constraints, which use fewer clauses than previous encodings and, more importantly, introduce new generally applicable techniques for constructing compact encodings. First, we present a CNF encoding for the $\text{AtMostOne}(x_1,\dots,x_n)$ constraint using $2n + 2\sqrt{2n} + O(\sqrt[3]{n})$ clauses, thus refuting the conjectured optimality of Chen's product encoding. Our construction also yields a smaller monotone circuit for the threshold-2 function, improving on a 50-year-old construction of Adleman and incidentally solving a long-standing open problem in circuit complexity. On the other hand, we show that any encoding for this constraint requires at least $2n + \sqrt{n+1} - 2$ clauses, which is the first nontrivial unconditional lower bound for this constraint and answers a question of Kučera, Savický, and Vorel. We then turn our attention to encodings of $\text{AtMost}_k(x_1,\dots,x_n)$, where we introduce "grid compression", a technique inspired by hash tables, to give encodings using $2n + o(n)$ clauses as long as $k = o(\sqrt[3]{n})$ and $4n + o(n)$ clauses as long as $k = o(n)$. Previously, the smallest known encodings were of size $(k+1)n + o(n)$ for $k \le 5$ and $7n - o(n)$ for $k \ge 6$.
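As background, the following sketches the flavor of the product encoding whose conjectured optimality the first result refutes. This is our rendering of a Chen-style recursive product encoding under DIMACS-style integer literals (assumed conventions; Chen's actual construction differs in details): lay the $n$ variables out on a roughly $\sqrt{n} \times \sqrt{n}$ grid, let each variable imply its row and column commander, and recurse on the commanders, for roughly $2n + O(\sqrt{n})$ clauses.

```python
import math

def amo_pairwise(xs, clauses):
    """Naive AtMostOne: one binary clause (-x_i v -x_j) per pair."""
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            clauses.append([-xs[i], -xs[j]])

def amo_product(xs, fresh, clauses):
    """Recursive product encoding sketch: at most one x_i can be true,
    since two true variables would force two true row (or column)
    commanders, violating the recursive AtMostOne on the commanders."""
    n = len(xs)
    if n <= 3:
        amo_pairwise(xs, clauses)
        return
    p = math.isqrt(n - 1) + 1                 # ceil(sqrt(n)) columns
    rows = [fresh() for _ in range(-(-n // p))]
    cols = [fresh() for _ in range(p)]
    for idx, x in enumerate(xs):
        r, c = divmod(idx, p)
        clauses.append([-x, rows[r]])         # x -> its row commander
        clauses.append([-x, cols[c]])         # x -> its column commander
    amo_product(rows, fresh, clauses)
    amo_product(cols, fresh, clauses)
```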
2603.28031
Classical complexity theory measures the cost of computing a function, but many computational tasks require committing to one valid output among several. We introduce determination depth -- the minimum number of sequential layers of irrevocable commitments needed to select a single valid output -- and show that no amount of computation can eliminate this cost. We exhibit relational tasks whose commitments are constant-time table lookups yet require exponential parallel width to compensate for any reduction in depth. A conservation law shows that enriching commitments merely relabels determination layers as circuit depth, preserving the total sequential cost. For circuit-encoded specifications, the resulting depth hierarchy captures the polynomial hierarchy ($Σ_{2k}^P$-complete for each fixed $k$, PSPACE-complete for unbounded $k$). In the online setting, determination depth is fully irreducible: unlimited computation between commitment layers cannot reduce their number.
Bounded Variable Addition (BVA) is a central preprocessing method in modern state-of-the-art SAT solvers. We provide a graph-theoretic characterization of which 2-CNF encodings can be constructed by an idealized BVA algorithm. Based on this insight, we prove new results about the behavior and limitations of BVA and its interaction with other preprocessing techniques. We show that idealized BVA, plus some minor additional preprocessing (e.g., equivalent literal substitution), can reencode any 2-CNF formula with $n$ variables into an equivalent 2-CNF formula with $(\tfrac{\lg(3)}{4}+o(1))\,\tfrac{n^2}{\lg n}$ clauses. Furthermore, we show that without the additional preprocessing the constant factor worsens from $\tfrac{\lg(3)}{4} \approx 0.396$ to $1$, and that no reencoding method can achieve a constant below $0.25$. On the other hand, for the at-most-one constraint on $n$ variables, we prove that idealized BVA cannot reencode this constraint using fewer than $3n-6$ clauses, a bound that actual implementations achieve. In particular, this shows that the product encoding for at-most-one, which uses $2n+o(n)$ clauses, cannot be constructed by BVA regardless of the heuristics used. Finally, our graph-theoretic characterization of BVA allows us to leverage recent work in algorithmic graph theory to develop a drastically more efficient implementation of BVA that achieves a comparable clause reduction on random monotone 2-CNF formulas.
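To fix intuition for what BVA does, here is a hedged sketch of the core rewrite step as we understand the Manthey-Heule-Biere formulation, with DIMACS-style integer literals as an assumed convention: a product-structured set of $p \cdot q$ clauses is factored through one fresh variable into $p + q$ clauses.

```python
def bva_rewrite(lits, tails, x):
    """Replace the len(lits) * len(tails) clauses {(l v D)} by the
    len(lits) + len(tails) clauses {(-x v l)} and {(x v D)} for a fresh
    variable x.  Resolving the two groups on x recovers every original
    clause, and the rewrite preserves satisfiability: if some l is false,
    every D must already be true, so x can be set false."""
    return [(-x, l) for l in lits] + [(x,) + tuple(d) for d in tails]

# Example: the six clauses (a v c), (a v d), (a v e), (b v c), (b v d), (b v e)
# become five: (-x v a), (-x v b), (x v c), (x v d), (x v e).
a, b, c, d, e, x = 1, 2, 3, 4, 5, 6
print(bva_rewrite([a, b], [(c,), (d,), (e,)], x))
```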
2603.27128
We study the problem of testing whether two tensors in $\mathbb{R}^\ell\otimes \mathbb{R}^m\otimes \mathbb{R}^n$ are isomorphic under the natural action of orthogonal groups $\textbf{O}(\ell, \mathbb{R})\times\textbf{O}(m, \mathbb{R})\times\textbf{O}(n, \mathbb{R})$, as well as the corresponding question over $\mathbb{C}$ and unitary groups. These problems naturally arise in several areas, including graph and tensor isomorphism (Grochow--Qiao, SIAM J. Comp. '21), scaling algorithms for orbit closure intersections (Allen-Zhu--Garg--Li--Oliveira--Wigderson, STOC '18), and quantum information (Liu--Li--Li--Qiao, Phys. Rev. Lett. '12). We study average-case algorithms for orthogonal and unitary tensor isomorphism, with one random tensor whose entries are sampled independently from a sub-Gaussian distribution, and the other arbitrary. For the algorithm design, we develop algorithmic ideas from the higher-order singular value approach into polynomial-time exact (algebraic) and approximate (numerical) algorithms with rigorous average-case analyses. Following (Allen-Zhu--Garg--Li--Oliveira--Wigderson, STOC '18), we present an algorithm for a gapped version of the orbit distance approximation problem. For the average-case analysis, we build on recent progress in random matrix theory on eigenvalue repulsion of sub-Gaussian Wishart matrices (Christoffersen--Luh--O'Rourke--Shearer and Han, arXiv '25), extending their results from Wishart matrices whose side lengths are linearly related to those whose side lengths are polynomially related.
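The starting point of the higher-order singular value approach can be sketched with the following linear algebra; this is a hedged illustration, not the paper's full algorithm. If $T' = (U, V, W) \cdot T$ with orthogonal factors, the mode-$1$ unfoldings satisfy $T'_{(1)} = U\, T_{(1)}\, (V \otimes W)^{\top}$, so the left singular vectors of matching unfoldings differ exactly by $U$ when the singular values are distinct; resolving signs, degeneracies, and the numerical and average-case issues is where the actual work lies.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization: rows indexed by mode m, columns by the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_factors(T):
    """Left singular vectors of each unfolding (the HOSVD factors).
    If T2 = (U, V, W) . T1, then hosvd_factors(T2)[0] agrees with
    U @ hosvd_factors(T1)[0] up to column signs (for distinct singular
    values), giving a candidate for recovering U."""
    return [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
            for m in range(T.ndim)]
```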
ICESEE (ICE Sheet statE and parameter Estimator) is a Python-based, open-source data assimilation framework designed for seamless integration with ice sheet and Earth system models. It implements a parallel Ensemble Kalman Filter (EnKF) with full MPI support for scalable assimilation in state and parameter spaces. ICESEE uses a matrix-free update scheme from Evensen (2003), which avoids explicit forecast error covariance construction and eliminates the need for localization in high-dimensional, nonlinear systems. ICESEE also supports four EnKF variants, including a localized version for methodological testing. It enables indirect inference of unobserved model parameters through a hybrid assimilation-inversion strategy. The framework features modular coupling interfaces, adaptive state indexing, and efficient parallel I/O, making it extensible to a variety of modeling environments. ICESEE has been successfully coupled with ISSM, Icepack, and other models. In this study, we focus on applications with ISSM and Icepack, demonstrating ICESEE's interoperability, performance, scalability, and ability to improve state estimates and infer uncertain parameters. Performance benchmarks show strong and weak scaling, highlighting ICESEE's potential for large-scale, observation-constrained ice sheet reanalyses.
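For orientation, here is a minimal sketch of a stochastic (perturbed-observation) EnKF analysis step in the covariance-free spirit described above; the function name and array shapes are illustrative assumptions and not ICESEE's actual API. All linear algebra happens with ensemble anomalies, and the full $n_{\mathrm{state}} \times n_{\mathrm{state}}$ forecast error covariance is never assembled.

```python
import numpy as np

def enkf_analysis(X, HX, y, obs_std, rng):
    """One stochastic EnKF analysis step (perturbed observations).

    X       : (n_state, n_ens) forecast ensemble (states and/or parameters)
    HX      : (n_obs, n_ens) ensemble mapped through the observation operator
    y       : (n_obs,) observations, i.i.d. noise of standard deviation obs_std
    """
    n_ens = X.shape[1]
    Ax = X - X.mean(axis=1, keepdims=True)      # state anomalies
    Ay = HX - HX.mean(axis=1, keepdims=True)    # observation-space anomalies
    D = y[:, None] + obs_std * rng.standard_normal(HX.shape)   # perturbed obs
    S = Ay @ Ay.T / (n_ens - 1) + obs_std**2 * np.eye(len(y))  # innovation cov
    K = Ax @ Ay.T / (n_ens - 1) @ np.linalg.inv(S)             # Kalman gain
    return X + K @ (D - HX)                                    # analysis ensemble
```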
We propose a new parameter called proofdoor in an attempt to explain the efficiency of CDCL SAT solvers on formulas derived from circuit (especially arithmetic) verification applications. Informally, given an unsatisfiable CNF formula F over n variables, a proofdoor decomposition consists of a chunking of the clauses into A_1, ..., A_k together with a sequence of interpolants connecting these chunks. Intuitively, a proofdoor captures the idea that an unsatisfiable formula can be refuted by reasoning chunk by chunk, while maintaining only a summary of the information (i.e., interpolants) gained so far for subsequent reasoning steps. We prove several theorems in support of the proposition that proofdoors can explain the efficiency of CDCL solvers for some classes of circuit verification problems. First, we show that formulas with small proofdoors (i.e., where each interpolant is O(n)-sized, each chunk A_i has small pathwidth, and each interpolant clause has at most O(log(n)) backward dependency on the previous interpolant) have short resolution (Res) proofs. Second, we show that certain configurations of CDCL solvers can compute such proofs in time polynomial in n. Third, we show that commutativity (miter) formulas over floating-point addition have small proofdoors and hence short Res proofs, even though they have large pathwidth. Fourth, we characterize the limits of the proofdoor framework by connecting proofdoors to the partially ordered resolution proof system: we show that a poor decomposition of arithmetic miter instances can force exponentially large partially ordered resolution proofs, even when a different decomposition (i.e., small proofdoors) permits short proofs.