AI for Science: March 2026 Week 10 (Mar 2–8)

Mar 2 – Mar 8, 2026 · 122 papers analyzed · 3 breakthroughs

Summary

3 breakthroughs across AI4Math and AI4Physics from 122 papers analyzed (Mar 2–8, 2026): (1) 2603.03511 introduces Orbital Transformers, which directly learn wavefunction evolution in real-time TDDFT, the first ML surrogate to replace one of quantum chemistry's most expensive computations; (2) 2603.01762 proposes DGNet, a Green's function-grounded discrete network for data-efficient spatiotemporal PDE solving with as few as 6–10 training trajectories; (3) 2603.02889 presents a physics-informed neural framework for inferring solid-state Hamiltonians directly from experimental data, bypassing training on simulated data. Notable: SO(3)-equivariant GNN quantization (2603.05343), SorryDB Lean benchmark (2603.02668), DynFormer for PDEs (2603.03112), bidirectional curriculum for math LLMs (2603.05120), MatRIS foundation MLIP (2603.02002). Key trend: physics-grounded inductive biases (Green's functions, symmetry equivariance, Hamiltonian structure) are becoming the differentiating factor over generic neural architectures in scientific ML.

Key Takeaway

Week 10 of 2026 is defined by physics-principled ML: Green's functions for PDE solvers, orbital transformers for quantum chemistry, and experimental Hamiltonian learning signal a maturation of the field beyond benchmark-chasing toward scientifically grounded methods.

Breakthroughs (3)

1. Orbital Transformers for Predicting Wavefunctions in Time-Dependent Density Functional Theory

Why Novel: Real-time TDDFT is among the most computationally demanding quantum chemistry methods; prior ML surrogates targeted ground-state properties. This is the first model that directly learns wavefunction temporal evolution in an atomic-orbital basis, enabling downstream prediction of dipole moments and absorption spectra from a single forward pass.

Impact: Opens the door to ML-accelerated excited-state simulations for photochemistry and materials design, domains where TDDFT is the standard but computational cost prevents large-scale screening.
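
To make the input/output contract concrete, here is a minimal sketch of what an autoregressive orbital surrogate of this kind could look like. The class name `OrbitalSurrogate`, the window-based encoder, and all shapes are our own illustrative assumptions, not the architecture of 2603.03511.

```python
# A minimal sketch, assuming a window-to-next-step surrogate; names, shapes,
# and the (real, imag) channel split are illustrative, not the paper's design.
import torch
import torch.nn as nn

class OrbitalSurrogate(nn.Module):
    """Predicts next-step MO coefficients from a window of past steps."""
    def __init__(self, n_basis: int, n_orb: int, d_model: int = 256):
        super().__init__()
        # Complex coefficients enter as separate (real, imag) channels.
        self.embed = nn.Linear(2 * n_basis * n_orb, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, 2 * n_basis * n_orb)

    def forward(self, c_hist: torch.Tensor) -> torch.Tensor:
        # c_hist: (batch, window, 2, n_basis, n_orb), flattened per step
        b, w = c_hist.shape[:2]
        h = self.encoder(self.embed(c_hist.reshape(b, w, -1)))
        # Read the next-step coefficients off the final window position.
        return self.head(h[:, -1])

def dipole_moment(c: torch.Tensor, dip_ints: torch.Tensor) -> torch.Tensor:
    """mu_x = 2 Re Tr(P D_x), P = C C^H (closed shell); dip_ints: (3, nb, nb)."""
    c_cplx = torch.complex(c[..., 0, :, :], c[..., 1, :, :])
    p = c_cplx @ c_cplx.conj().transpose(-1, -2)
    return 2.0 * torch.einsum('...ij,xji->...x', p, dip_ints.to(p.dtype)).real
```

The point of the sketch is the pipeline the paper's claim implies: the model consumes a short history of complex MO coefficients and emits the next step, from which observables such as the dipole follow by a trace against precomputed integrals rather than a new TDDFT propagation.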

2. DGNet: Discrete Green Networks for Data-Efficient Learning of Spatiotemporal PDEs

Why Novel: Existing neural PDE solvers (PINNs, neural operators) either require many samples or bake in specific PDE structure. DGNet's inductive bias comes from the mathematical foundation of Green's functions, making it architecture-agnostic yet data-efficient. The discrete Green formulation directly prescribes the update rule rather than learning it from scratch.

Impact: Provides a principled recipe for data-efficient neural PDE solvers grounded in mathematical theory, directly addressing the sample complexity bottleneck that limits neural operators in scientific simulation settings.
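
As a rough illustration of the Green's-function inductive bias (not DGNet's actual parameterization): for a translation-invariant linear PDE, the exact one-step update is a convolution with the discrete Green's kernel, so constraining the learned update to that form shrinks the hypothesis class and buys data efficiency. `DiscreteGreenStep` and `rollout_loss` below are hypothetical names.

```python
# Minimal sketch of a discrete Green's-function update for a 1D linear PDE;
# the convolutional kernel parameterization is an assumption, not DGNet's.
import torch
import torch.nn as nn

class DiscreteGreenStep(nn.Module):
    """One explicit time step u_{t+1}[i] = sum_j G[i - j] u_t[j].

    For translation-invariant problems the Green's matrix reduces to a
    convolution with a learned kernel, which is what we parameterize here.
    """
    def __init__(self, kernel_size: int = 21):
        super().__init__()
        self.green = nn.Conv1d(1, 1, kernel_size,
                               padding=kernel_size // 2, bias=False)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, n_grid); add a channel dim for conv, then drop it
        return self.green(u.unsqueeze(1)).squeeze(1)

def rollout_loss(step: DiscreteGreenStep, traj: torch.Tensor) -> torch.Tensor:
    """Fit a handful of trajectories by rolling the step forward in time."""
    # traj: (batch, n_steps, n_grid)
    u, loss = traj[:, 0], 0.0
    for t in range(1, traj.shape[1]):
        u = step(u)
        loss = loss + torch.mean((u - traj[:, t]) ** 2)
    return loss / (traj.shape[1] - 1)
```

The data efficiency in this toy version comes entirely from the restriction that the update *is* a Green's-function convolution, rather than an arbitrary learned map.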

3. Learning Hamiltonians for solid-state quantum simulators

Why Novel: Hamiltonian identification in real quantum hardware has relied on tomography or Bayesian inference with assumed model forms. This framework is generalizable and data-driven — it infers the full Hamiltonian structure from experimental measurements, making it applicable to systems where the microscopic model is unknown or inaccessible.

Impact: Enables characterization of quantum simulator hardware without simulation-based supervision — a key capability for validating and calibrating near-term quantum devices.
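
A toy version of the general recipe (fit a parameterized Hamiltonian to measured dynamics) can be written in a few lines. The two-site spin model, the observable, and the optimizer below are our illustrative choices, not the paper's neural framework.

```python
# Illustrative sketch: infer couplings (J, h) of a two-site spin Hamiltonian
# from time-resolved <Z1>(t) measurements. Mirrors the general idea of fitting
# H(theta) to experimental observables, not the specific method of 2603.02889.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Pauli matrices and two-site operators
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
XX = np.kron(sx, sx)
Z1, Z2 = np.kron(sz, I2), np.kron(I2, sz)

def hamiltonian(theta):
    J, h = theta
    return J * XX + h * (Z1 + Z2)

def predicted_obs(theta, psi0, times):
    """<Z1>(t) under H(theta), starting from state psi0."""
    H = hamiltonian(theta)
    out = []
    for t in times:
        psi = expm(-1j * H * t) @ psi0
        out.append(np.real(psi.conj() @ Z1 @ psi))
    return np.array(out)

# 'data' would be experimentally measured <Z1>(t); here it is synthetic.
times = np.linspace(0, 5, 40)
psi0 = np.array([1, 0, 0, 0], dtype=complex)
data = predicted_obs([1.0, 0.3], psi0, times)  # hidden ground truth

loss = lambda th: np.sum((predicted_obs(th, psi0, times) - data) ** 2)
fit = minimize(loss, x0=[0.5, 0.5], method="Nelder-Mead")
print(fit.x)  # recovers ~[1.0, 0.3], up to sign symmetries of this model
```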

Trends

  • Physics-grounded inductive biases (Green's functions, Hamiltonian structure, SO(3) equivariance) are becoming the differentiating factor over generic neural architectures in scientific ML — the week's top papers all derive their novelty from principled physics embedding rather than scale.

  • Real-time TDDFT and excited-state quantum chemistry are emerging as the next frontier for ML surrogate models, following the maturation of ground-state DFT and force field surrogates.

  • Lean formalization is crossing into real-world theorem proving: SorryDB marks a shift from competition benchmarks to production-grade open-source math repositories as evaluation targets for AI provers.

  • Data efficiency remains a dominant theme: DGNet (6–10 trajectories), bidirectional curriculum, and multi-fidelity MLIPs all address the same bottleneck of learning well from limited scientific data.

Notable Papers (5)

1. Preserving Continuous Symmetry in Discrete Spaces: Geometric-Aware Quantization for SO(3)-Equivariant GNNs

First quantization method for SO(3)-equivariant GNNs that preserves rotational equivariance via a Geometric Straight-Through Estimator (Geometric STE), with a proof that gradient updates remain orthogonal to feature vectors and therefore keep them on the $S^2$ manifold.
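
A hedged sketch of how such a geometric straight-through estimator could work: snap a unit feature to its nearest codeword on the sphere in the forward pass, and in the backward pass let the gradient through projected onto the tangent plane at the feature, so the update is orthogonal to it by construction. The class and codebook handling are our assumptions, not the paper's implementation.

```python
# Sketch of a geometric straight-through estimator on S^2; placeholder
# codebook handling, not the Geometric STE of 2603.05343.
import torch

class GeometricSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
        # v: (..., 3) unit vectors; codebook: (K, 3) unit codewords.
        idx = torch.argmax(v @ codebook.T, dim=-1)  # nearest by cosine
        ctx.save_for_backward(v)
        return codebook[idx]

    @staticmethod
    def backward(ctx, grad_out: torch.Tensor):
        (v,) = ctx.saved_tensors
        # Project the incoming gradient onto the tangent plane at v:
        # g_tan = g - (g . v) v, orthogonal to v by construction.
        radial = (grad_out * v).sum(dim=-1, keepdim=True) * v
        return grad_out - radial, None

# Usage: q = GeometricSTE.apply(v, codebook); renormalizing v to unit norm
# after each optimizer step keeps the feature on the sphere.
```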

2. SorryDB: Can AI Provers Complete Real-World Lean Theorems?

Introduces a dynamically updating benchmark of 1000+ real-world Lean proof gaps ('sorries') drawn from 78 active GitHub formalization projects, revealing that current provers solve at most ~30% of them at pass@32 and are highly complementary, advancing evaluation of AI theorem provers on authentic mathematical tasks.
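
For readers unfamiliar with the metric, pass@k is conventionally estimated with the unbiased formula of Chen et al. (2021): with n sampled proofs per sorry of which c succeed, pass@k = 1 - C(n-c, k)/C(n, k), averaged over sorries. Whether SorryDB computes it exactly this way is our assumption.

```python
# Standard unbiased pass@k estimator (Chen et al., 2021); whether SorryDB
# uses exactly this estimator is an assumption on our part.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """P(at least one of k samples passes), given c of n samples passed."""
    if n - c < k:
        return 1.0  # fewer failures than draws: a success is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=64, c=10, k=32))  # ~0.9996: 10/64 successes almost surely hit within 32 tries
```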

3. From Complex Dynamics to DynFormer: Rethinking Transformers for PDEs

DynFormer reduces PDE transformer complexity from $\mathcal{O}(N^4)$ to $\mathcal{O}(M^2 \log M)$ via spectral embedding and achieves state-of-the-art results on multiple PDE benchmarks across Tiny/Medium/Large model sizes.
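
The general trick behind this kind of complexity reduction is to mix tokens in frequency space, where an FFT costs $\mathcal{O}(M \log M)$, instead of dense all-pairs attention. The FNO-style layer below illustrates that idea under our own assumptions; it is not DynFormer's actual module.

```python
# Illustration of spectral token mixing: an FFT-based layer in place of dense
# attention. FNO-style for intuition, not DynFormer's architecture.
import torch
import torch.nn as nn

class SpectralMixer1d(nn.Module):
    def __init__(self, channels: int, n_modes: int):
        super().__init__()
        # Learned complex multipliers for the lowest n_modes frequencies.
        self.weight = nn.Parameter(
            torch.randn(channels, n_modes, dtype=torch.cfloat) * 0.02)
        self.n_modes = n_modes

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, channels, n_grid)
        u_hat = torch.fft.rfft(u, dim=-1)  # O(M log M) transform
        out_hat = torch.zeros_like(u_hat)  # high modes truncated to zero
        out_hat[..., :self.n_modes] = u_hat[..., :self.n_modes] * self.weight
        return torch.fft.irfft(out_hat, n=u.shape[-1], dim=-1)
```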

4. Bidirectional Curriculum Generation: A Multi-Agent Framework for Data-Efficient Mathematical Reasoning

A multi-agent curriculum that generates problems bidirectionally (forward from answers, backward from questions) improves the data efficiency of LLM mathematical reasoning, showing a favorable scaling law versus baselines across six math benchmarks.
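
A hypothetical sketch of the control flow such a bidirectional loop might have; the `generate` callable stands in for LLM agent calls, and none of these prompts or names come from 2603.05120.

```python
# Hypothetical control-flow sketch of bidirectional curriculum generation.
from typing import Callable, List, Tuple

def bidirectional_curriculum(
    seeds: List[Tuple[str, str]],          # (question, answer) seed pairs
    generate: Callable[[str, str], str],   # generate(instruction, context)
    rounds: int = 3,
) -> List[Tuple[str, str]]:
    pool = list(seeds)
    for _ in range(rounds):
        new = []
        for q, a in pool:
            # Forward agent: construct a fresh problem that has answer `a`.
            new_q = generate("pose a new problem whose answer is:", a)
            # Backward agent: derive a harder variant of question `q`,
            # then have a solver agent produce its answer.
            hard_q = generate("write a harder variant of:", q)
            new.append((new_q, a))
            new.append((hard_q, generate("solve:", hard_q)))
        pool += new
    return pool
```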

5. MatRIS: Toward Reliable and Efficient Pretrained Machine Learning Interaction Potentials

A foundation MLIP that achieves top-tier F1 and DAF on Matbench-Discovery with improved reliability — outperforming CHGNet and MACE-MP-0 while maintaining competitive efficiency across diverse material systems.
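
For context, Matbench-Discovery's discovery acceleration factor (DAF) measures how much the model's hit rate for truly stable materials exceeds random selection from the test set; to the best of our understanding of the benchmark's definition,

$\mathrm{DAF} = \dfrac{\text{precision}}{\text{prevalence}} = \dfrac{TP/(TP+FP)}{(TP+FN)/N}$,

where $N$ is the test-set size and $\mathrm{DAF} = 1$ corresponds to random selection.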

Honorable Mentions

  • On Multi-Step Theorem Prediction via Non-Parametric Structural Priors
  • Reasoning Core: A Scalable Procedural Data Generation Suite for Symbolic Pre-training and Post-Training
  • Machine Learning the Strong Disorder Renormalization Group Method for Disordered Quantum Spin Chains
  • Mathematicians in the age of AI