Covers neural networks, connectionism, genetic algorithms, artificial life, adaptive behavior.
Balanced spiking networks can transition between silent (SIL), asynchronous-irregular (AI), and oscillatory (OSC) states depending on interacting synaptic, conduction, and plasticity time scales, yet the joint parameter structure of these regimes remains incompletely characterized. In this work, we systematically map how postsynaptic decay (τs), conduction delay (d), and plasticity rate (λp) jointly shape oscillatory regimes in recurrent leaky integrate-and-fire networks. By combining Brian2 simulations across the (τs, d, λp) space with a coarse Hopf-reference boundary, we construct regime maps that directly visualize SIL-AI-OSC transitions and the corresponding spectral prominence landscapes. The maps show that increasing λp expands oscillatory regions toward shorter τs and moderate-to-long delays, while the prominence maps identify parameter regions with the strongest rhythmic coherence. Representative control experiments further connect this global landscape to local rhythm-forming mechanisms, showing that freezing STDP weakens rhythmic coherence whereas delay jitter enhances it, with minimal change in mean firing rate. These findings provide a useful reference for operating-point selection, synchrony-modulation studies, and future biologically grounded spiking-network modeling in similar balanced-network settings.
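To make the simulation setup concrete, below is a minimal Brian2 sketch of a recurrent LIF network with an exponentially decaying synaptic current (tau_s), a fixed conduction delay (d), and Poisson drive. All parameter names and values are illustrative stand-ins, not the authors' configuration; plasticity (λp) is omitted.

```python
# Minimal sketch of a balanced-network-style LIF simulation in Brian2.
# Parameters are illustrative, not the paper's; STDP (lambda_p) is omitted.
from brian2 import (NeuronGroup, Synapses, PoissonInput, SpikeMonitor,
                    run, ms, mV, Hz)

tau_m, tau_s, d = 20*ms, 5*ms, 1.5*ms   # membrane / synaptic decay / delay

eqs = '''
dv/dt = (-v + I_syn) / tau_m : volt (unless refractory)
dI_syn/dt = -I_syn / tau_s   : volt
'''
G = NeuronGroup(1000, eqs, threshold='v > 20*mV', reset='v = 0*mV',
                refractory=2*ms, method='euler')
S = Synapses(G, G, on_pre='I_syn += 0.5*mV', delay=d)   # recurrent coupling
S.connect(p=0.1)
ext = PoissonInput(G, 'I_syn', N=100, rate=10*Hz, weight=0.3*mV)  # drive

spikes = SpikeMonitor(G)
run(500*ms)
print(f'mean rate: {spikes.num_spikes / (1000 * 0.5):.1f} Hz')
```

Sweeping tau_s and d over a grid and classifying the resulting spike trains would reproduce the shape, though not the content, of the regime maps described.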
Existing evolutionary algorithms for Constrained Multi-objective Optimization Problems (CMOPs) typically treat all constraints uniformly, overlooking their distinct geometric relationships with the true Constrained Pareto Front (CPF). In reality, constraints play different roles: some directly shape the final CPF, some create infeasible obstacles, while others are irrelevant. To exploit this insight, we propose a novel algorithm named RCCMO, which sequentially performs unconstrained exploration, single-constraint exploitation, and full-constraint refinement. The core innovation of RCCMO lies in a constraint prioritization method derived from these geometric insights, seamlessly coupled with a unique dual-directional search mechanism. Specifically, RCCMO first prioritizes constraints that constitute the final CPF, approaching them from the evolutionary direction (optimizing objectives) to locate the CPF directly shaped by single-constraint boundaries. Subsequently, for constraints that merely hinder the population's progress, RCCMO searches from the anti-evolutionary direction (targeting the infeasible boundaries where hindering constraints intersect with the CPF) to effectively discover how these constraints obstruct and form the final CPF. Meanwhile, irrelevant constraints are intentionally bypassed. Furthermore, a series of specialized mechanisms are proposed to accelerate the algorithm's execution, reduce heuristic misjudgments, and dynamically adjust search directions in real time. Extensive experiments on 5 benchmark test suites and 29 real-world CMOPs demonstrate that RCCMO significantly outperforms seven state-of-the-art algorithms.
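The constraint-role idea can be illustrated with a rough triage sketch. This is not RCCMO's prioritization method; it is a hypothetical probe that checks each constraint against samples near the unconstrained Pareto front, and the role labels are my loose reading of the abstract's categories.

```python
# Rough, illustrative constraint triage (not RCCMO's actual mechanism):
# probe each constraint g_i(x) <= 0 on samples near the unconstrained front.
import numpy as np

def triage_constraints(front_X, g_list, tol=1e-6):
    roles = []
    for g in g_list:
        v = np.array([g(x) for x in front_X])  # violations along the front
        if np.all(v <= tol):
            roles.append('irrelevant')   # never violated near the front
        elif np.any(v <= tol):
            roles.append('shaping')      # boundary cuts through the front
        else:
            roles.append('hindering')    # the whole front region is infeasible
    return roles

X = [np.array([t, 1 - t]) for t in np.linspace(0, 1, 11)]  # toy front x0+x1=1
g1 = lambda x: x[0] - 2.0            # never active        -> irrelevant
g2 = lambda x: x[0] - 0.5            # slices the front    -> shaping
g3 = lambda x: 1.5 - x[0] - x[1]     # excludes the front  -> hindering
print(triage_constraints(X, [g1, g2, g3]))
```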
The rapid growth of nature-inspired metaheuristics has exposed a persistent gap between metaphorical novelty and genuine algorithmic advancement. Motivated by the biophysics of chromatin loop extrusion -- a well-characterized genome-folding process driven by SMC motor complexes and conditional barriers -- we introduce the Loop-Extrusion Linkage (LEL) operator, a structure-learning wrapper that combines online variable-interaction estimation, spectral seriation via the Fiedler vector, and adaptive interval-based subspace search. LEL constructs a sparse interaction graph from successful optimization steps, derives a heuristic one-dimensional variable ordering, and generates overlapping evaluation subsets through stochastic interval growth modulated by learned boundary-crossing probabilities. We evaluate LEL on six synthetic diagnostic functions at dimension $d=96$ designed to probe specific structural hypotheses -- contiguous blocks, permuted blocks, overlapping windows, banded chains, separable controls, and dense rotated couplings -- across $10^4$ and $5\times10^4$ evaluation budgets with 15 independent seeds. Results are assessed via the Wilcoxon signed-rank test with Holm-Bonferroni correction and Vargha-Delaney A12 effect sizes. At $10^4$ evaluations, Full LEL achieves the best median log-gap on 3 of 6 functions, significantly outperforming all ablations and jSO on the structured tasks. At $5\times10^4$ evaluations, simpler ablations and baselines often surpass the full method, indicating that the adaptive barrier mechanism may over-constrain late-stage search on uniformly partitioned landscapes. The strongest supported finding is that learned spectral ordering consistently improves over graph-only grouping and random variable ordering, suggesting that interaction-graph seriation is the most valuable component of the proposed framework.
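The spectral-seriation step can be sketched independently of the rest of the operator. Assuming a symmetric interaction matrix learned online, the ordering is the argsort of the Fiedler vector of the graph Laplacian; the example below is a reconstruction of that standard technique, not the authors' code.

```python
# Illustrative spectral seriation (not the authors' implementation): order
# variables by the Fiedler vector of the interaction graph's Laplacian.
import numpy as np

def fiedler_ordering(W: np.ndarray) -> np.ndarray:
    """W: symmetric nonnegative interaction matrix (d x d)."""
    L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]               # eigenvector of 2nd-smallest eigenvalue
    return np.argsort(fiedler)            # 1-D ordering of the variables

rng = np.random.default_rng(0)
W = np.zeros((6, 6))
W[:3, :3] = W[3:, 3:] = 1.0               # two dense 3-variable blocks
W[2, 3] = W[3, 2] = 0.1                   # weak link so the graph is connected
perm = rng.permutation(6)                 # scramble the variable labels
order = fiedler_ordering(W[np.ix_(perm, perm)])
print(perm[order])                        # block members end up adjacent again
```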
Parent selection methods are widely used in evolutionary computation to accelerate the optimization process, yet their theoretical benefits are still poorly understood. In this paper, we address this gap by incorporating different parent selection strategies into the $(μ+1)$ genetic algorithm (GA). We show that, with an appropriately chosen population size and a parent selection strategy that selects a pair of maximally distant parents with probability $Ω(1)$ for crossover, the resulting algorithm solves the Jump$_k$ problem in $O(k 4^k n \log n)$ expected time. This bound is significantly smaller than the best known bound of $O(nμ\log μ + n\log n + n^{k-1})$ for any $(μ+1)$ GA using no explicit diversity-preserving mechanism and a constant crossover probability. To establish this result, we introduce a novel diversity metric that captures both the maximum distance between pairs of individuals in the population and the number of pairs achieving this distance. The crucial point of our analysis is that it relies on crossover as a mechanism for creating and maintaining diversity throughout the run, rather than using crossover only in the final step to combine already diversified individuals, as has been done in many previous works. The insights provided by our analysis contribute to a deeper theoretical understanding of the role of crossover in the population dynamics of genetic algorithms.
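The selection mechanism itself is simple to state; here is a brute-force sketch of picking a maximally distant parent pair, uniformly among all pairs achieving the maximum (illustrative only; the paper's $(μ+1)$ GA and its analysis involve much more machinery).

```python
# Sketch of selecting a maximally distant parent pair for crossover.
import itertools
import random

def max_distance_pair(population: list[str]) -> tuple[str, str]:
    """Return a pair of bitstrings at maximum Hamming distance,
    chosen uniformly among all pairs achieving that distance."""
    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))
    pairs = list(itertools.combinations(population, 2))
    dmax = max(hamming(x, y) for x, y in pairs)
    return random.choice([(x, y) for x, y in pairs if hamming(x, y) == dmax])

pop = ["11100", "11010", "00111"]
print(max_distance_pair(pop))
```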
Constrained multiobjective optimisation requires fast feasibility attainment together with stable convergence and diversity preservation under strict evaluation budgets. This report documents RDEx-CMOP, the differential evolution variant used in the IEEE CEC 2025 numerical optimisation competition (C06 special session) constrained multiobjective track. RDEx-CMOP integrates an ε-level feasibility schedule, a SPEA2-style indicator-driven fitness assignment, and a fitness-oriented current-to-pbest/1 mutation operator. We evaluate RDEx-CMOP on the official CEC 2025 CMOP benchmark using the median-target U-score framework and the released trace data. Experimental results show that RDEx-CMOP achieves the highest total score and the best overall average rank among all released comparison algorithms, with strong target-attainment behaviour and near-zero final violation on most problems.
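The current-to-pbest/1 operator named above follows the standard JADE/SHADE form; a sketch is below, with the caveat that RDEx-CMOP's fitness-oriented variant may choose pbest differently and F, p are illustrative.

```python
# Standard current-to-pbest/1 mutation (as in JADE/SHADE); RDEx-CMOP's
# fitness-oriented variant may differ in how the pbest pool is formed.
import numpy as np

def current_to_pbest_1(pop, fitness, i, F=0.5, p=0.1,
                       rng=np.random.default_rng()):
    n = len(pop)
    pbest_pool = np.argsort(fitness)[:max(1, int(p * n))]  # top-p% by fitness
    pbest = pop[rng.choice(pbest_pool)]
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - pop[r2])

pop = np.random.default_rng(1).standard_normal((20, 5))
fit = (pop**2).sum(axis=1)          # minimize the sphere, for illustration
print(current_to_pbest_1(pop, fit, i=0))
```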
Hyper-heuristics have become a popular approach for solving dynamic flexible job shop scheduling (DFJSS) problems. They use gradient-free optimization techniques like Genetic Programming (GP) to evolve non-differentiable heuristics. However, conventional GP methods tend to converge slowly because they rely solely on evolutionary search to find good heuristics. Existing multitask GP methods can solve multiple tasks simultaneously and speed up the search by transferring knowledge across similar tasks. Yet they mostly exchange heuristic building blocks without truly generating heuristics conditioned on task information. In this paper, we aim to accelerate convergence and enable task-specific heuristic generation by incorporating a task-conditioned Transformer model. The Transformer works in two ways. First, it learns the distribution of elite heuristics, biasing the search toward promising regions of the heuristic space. Second, through conditional generation, it produces heuristics tailored to specific tasks, allowing the model to handle multiple scheduling tasks at once and improving overall optimization efficiency. Based on these ideas, we propose TransGP, a Task-Conditioned Transformer-Guided GP framework. This evolutionary paradigm integrates generative modeling with GP, enabling efficient multitask heuristic learning and knowledge transfer. We evaluate TransGP on a range of DFJSS scenarios. Experimental results show that TransGP consistently outperforms multitask GP baselines, widely used handcrafted heuristics, and the pure Transformer model, achieving faster convergence, superior solution quality, and enhanced robustness.
Recently, evolutionary multitasking has been employed to generate a ``set of Pareto sets'' (SOS) for machine learning models, addressing diverse task settings across heterogeneous environments. This involves creating a repository of compact, specialized solution models that are collectively tailored to each specific task setting and environment, enabling users to select the most suitable model based on particular specifications and preferences. In this paper, we further demonstrate the versatility and applicability of the SOS concept across diverse domains, focusing on three real-world problems: engineering design problems, inventory management problems, and hyperparameter optimization problems. Additionally, as evolutionary multitasking has proven effective in generating the SOS, we investigate the performance of current evolutionary multitasking methods on these real-world problems. Subsequently, we present visualizations of the generated SOS in both decision and objective spaces, complemented by a measure developed to gauge the similarity between different Pareto sets corresponding to diverse tasks. Finally, we show that by systematically examining the shifts in Pareto optimal designs across different task settings through the SOS solutions, users can gain a deeper understanding of the dynamic interplay between design solutions and their performance in different settings or contexts.
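One simple way to gauge similarity between two Pareto sets — not necessarily the measure developed in the paper — is an averaged Hausdorff-style distance between the point clouds, sketched below.

```python
# Illustrative similarity between two Pareto fronts via an averaged Hausdorff
# distance (the paper's own measure may be defined differently).
import numpy as np

def avg_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """A, B: (n_points, n_objectives) arrays of nondominated points."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise dists
    gd = D.min(axis=1).mean()    # how close A sits to B
    igd = D.min(axis=0).mean()   # how close B sits to A
    return max(gd, igd)

front_a = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
front_b = np.array([[0.1, 0.9], [0.6, 0.45], [0.95, 0.05]])
print(avg_hausdorff(front_a, front_b))
```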
Spiking Neural Networks (SNNs) promise significant advantages over conventional Artificial Neural Networks (ANNs) for applications requiring real-time processing of temporally sparse data streams under strict power constraints -- a concept known as the Neuromorphic Advantage. However, the limited availability of neuromorphic hardware creates a substantial simulation-to-hardware gap that impedes algorithmic innovation, hardware-software co-design, and the development of mature open-source ecosystems. To address this challenge, we introduce Yet Another Neuromorphic Accelerator (YANA), an FPGA-based digital SNN accelerator designed to bridge this gap by providing an accessible hardware and software framework for neuromorphic computing. YANA implements a five-stage, event-driven processing pipeline that fully exploits temporal and spatial sparsity while supporting arbitrary SNN topologies through point-to-point neuron connections. The architecture features an input preprocessing scheme that maintains steady event processing at one event per cycle without buffer overflow risks, and implements hardware-efficient event-driven neuron updates using lookup tables for leak calculations. We demonstrate YANA's sparsity exploitation capabilities through experiments on the Spiking Heidelberg Digits dataset, showing near-linear scaling of inference time with both spatial and temporal sparsity levels. Deployed on the accessible AMD Kria KR260 platform, a single YANA core utilizes 740 LUTs, 918 registers, 7 BRAMs and 24 URAMs, supporting up to $2^{17}$ synapses and $2^{10}$ neurons. We release the YANA framework as an open-source project, providing an end-to-end solution for training, optimizing, and deploying SNNs that integrates with existing neuromorphic computing tools through the Neuromorphic Intermediate Representation (NIR).
Spiking neural networks encode information in spike timing and offer a pathway toward energy efficient artificial intelligence. However, a key challenge in spiking neural networks is realizing nonlinear and expressive computation in compact, energy-efficient hardware without relying on additional circuit complexity. In this work, we examine nonlinear computation in a CMOS+X spiking neuron implemented with a magnetic tunnel junction connected in series with an NMOS transistor. Circuit simulations of a multilayer network solving the XOR classification problem show that three intrinsic neuronal properties enable nonlinear behavior: threshold activation, response latency, and absolute refraction. Threshold activation determines which neurons participate in computation, response latency shifts spike timing, and absolute refraction suppresses subsequent spikes. These results show that magnetization dynamics of MTJ devices can support nonlinear computation in compact neuromorphic hardware.
Bilevel optimization is a field of significant theoretical and practical interest, yet solving such optimization problems remains challenging. Evolutionary methods have been employed to address these problems in the black-box setting; however, they incur high computational cost due to the nested nature of bilevel optimization. Although previous methods have attempted to reduce this cost through various heuristic techniques, such approaches limit versatility on challenging optimization landscapes, such as those with multimodality and significant interaction between upper- and lower-level decision variables. In this study, we propose an efficient framework that exploits the invariance of rank-based evolutionary algorithms to monotonic transformations, thereby reducing the computational burden of the lower-level optimization loop. Specifically, our method directly approximates the rankings of the upper-level value function, bypassing the need to run the lower-level optimizer until convergence for each upper-level iteration. We apply this framework to the setting where both levels are continuous, adopting CMA-ES as the optimizer. We demonstrate that our method achieves competitive performance on standard bilevel optimization benchmarks and can solve problems that are intractable with previously proposed methods, particularly those with multimodality and strong inter-variable interactions.
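The invariance argument can be made concrete: a rank-based optimizer such as CMA-ES consumes only the ordering of candidate values, so the lower-level solve can be truncated as long as the ranking survives the approximation error. The sketch below illustrates that idea on a toy bilevel problem of my own; it is not the proposed framework.

```python
# Illustration of the ranking idea (not the authors' framework): a rank-based
# upper-level optimizer needs only the ORDER of candidate values, so the
# lower-level solve can be truncated as long as the ranking is preserved.
import numpy as np

def lower_level_approx(x, steps=20, lr=0.2):
    """Coarse lower-level solve of min_y ||y - x**2||^2 by gradient descent."""
    y = np.zeros_like(x)
    for _ in range(steps):
        y -= lr * 2 * (y - x**2)
    return y

def upper_value(x):
    """Approximate F(x) = f_upper(x, y*(x)) using the truncated inner solve."""
    y = lower_level_approx(x)
    return np.sum(x**2) + np.sum(y**2)

cands = [np.array([0.1, 0.2]), np.array([1.0, -1.0]), np.array([0.3, 0.0])]
ranks = np.argsort([upper_value(x) for x in cands])
print(ranks)   # this ordering is all a rank-based method like CMA-ES consumes
```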
We show that the error-gated Hebbian rule for PCA (EGHR-PCA), a three-factor learning rule equivalent to Oja's subspace rule under Gaussian inputs, can be systematically derived from Oja's subspace rule using frame theory. The global third factor in EGHR-PCA arises exactly as a frame coefficient when the learning rule is expanded with respect to a natural frame on the space of symmetric matrices. This provides a principled, non-heuristic derivation of a biologically plausible learning rule from its mathematically canonical counterpart.
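For reference, Oja's subspace rule — the canonical counterpart mentioned — is conventionally written as below. The notation is mine, and the frame-theoretic expansion that yields the third factor is not reproduced here.

```latex
% Oja's subspace rule in its conventional form (reference notation only;
% the paper's frame-coefficient expansion is not reproduced here).
\[
  \Delta W \;=\; \eta \,(\mathbf{x} - W\mathbf{y})\,\mathbf{y}^{\top},
  \qquad \mathbf{y} = W^{\top}\mathbf{x}.
\]
% Schematically, a three-factor rule multiplies a local Hebbian term by a
% global modulatory factor g:
\[
  \Delta W \;\propto\; g \cdot \mathbf{x}\,\mathbf{y}^{\top}.
\]
```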
We introduce chaos-controlled Reservoir Computing (cc-RC) for living neural cultures: dynamically rich substrates of unique potential for adaptive computation. To account for intrinsic biological variability, cc-RC combines: (i) pre-training identification of each culture's dynamical signature and phase-portrait attractor; (ii) low-power optical chaos control to stabilize spontaneous and stimulus-evoked activity; (iii) readout training within this controlled regime. Across hundreds of neural samples, cc-RC enables robust learning and pattern classification, improving both accuracy and model longevity by approximately 300% over standard RC. We further propose Knowledge Transplant (KT), in which the reservoir map learned by an expert culture is transplanted to an attractor-equivalent student culture, reducing training time to minutes while improving performance. By enabling cross-substrate, reusable learned models, KT paves the way for knowledge accumulation and sharing across neural populations, transcending biological lifespan limits.
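Readout training in reservoir computing is, in standard practice, a ridge regression from recorded reservoir states to targets. The sketch below shows that generic step only; the random matrix stands in for recorded culture activity, and nothing about the biological substrate or chaos control is modeled.

```python
# Generic reservoir-computing readout: ridge regression from recorded states
# to targets (standard RC practice; the living-culture specifics are not modeled).
import numpy as np

def train_readout(states: np.ndarray, targets: np.ndarray, ridge=1e-4):
    """states: (T, n_units) recorded activity; targets: (T, n_out)."""
    X = np.hstack([states, np.ones((len(states), 1))])  # add a bias column
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)
    return W

def readout(states, W):
    return np.hstack([states, np.ones((len(states), 1))]) @ W

rng = np.random.default_rng(0)
S = rng.standard_normal((200, 50))      # stand-in for recorded culture states
y = (S[:, :3].sum(axis=1) > 0).astype(float)[:, None]
W = train_readout(S, y)
print(((readout(S, W) > 0.5) == y).mean())   # training accuracy
```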
Associative memory systems enable content-addressable storage and retrieval of patterns, a capability central to biological neural computation and artificial intelligence. Classical implementations such as Hopfield networks face fundamental limitations in memory capacity, scaling at most linearly with network size. We present an associative memory architecture based on Kuramoto oscillator networks with honeycomb topology in which memories are encoded as stable phase-locked configurations. The honeycomb network consists of multiple cycles that share nodes in a chain-like arrangement, creating a one-dimensional lattice of chained loops. We prove that this architecture achieves exponential memory capacity: a network of $N$ oscillators can store $(2\lceil n_c/4 \rceil - 1)^m$ distinct patterns, where $m$ honeycomb cycles each contain $n_c$ oscillators. Moreover, we fully characterize all stable configurations and prove that each memory's basin of attraction maintains a guaranteed minimum size independent of network scale. Simulations using charge-density-wave (CDW) oscillators validate predicted phase-locking behavior, demonstrating practical realizability in neuromorphic hardware.
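Phase-locked states on a coupling graph can be probed with a plain Kuramoto integration. The toy below uses a single ring cycle rather than the paper's honeycomb construction, and is only meant to show what "settling into a phase-locked configuration" looks like numerically.

```python
# Plain Kuramoto dynamics on a coupling graph (illustrative; the honeycomb
# topology and the capacity proofs above are not reproduced here).
import numpy as np

def kuramoto_settle(theta, A, K=1.0, dt=0.01, steps=5000):
    """Integrate dtheta_i/dt = K * sum_j A_ij * sin(theta_j - theta_i)."""
    for _ in range(steps):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * K * coupling
    return np.mod(theta, 2 * np.pi)

n = 6                                   # a single 6-cycle as a toy memory unit
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)  # ring
rng = np.random.default_rng(2)
theta = kuramoto_settle(rng.uniform(0, 2 * np.pi, n), A)
print(np.round(np.diff(theta), 2))      # settles into a phase-locked pattern
```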
The optimization of over-parameterized deep neural networks represents a large-scale, high-dimensional, and strongly non-convex decision problem that challenges existing optimization frameworks. Current evolutionary and gradient-based pruning methods often struggle to scale to such dimensionalities, as they rely on flat search spaces, scalarized objectives, or repeated retraining, leading to premature convergence and prohibitive computational cost. This paper introduces a hierarchical importance-guided evolutionary framework that reformulates convolutional network pruning as a tractable large-scale multi-objective optimization problem. In the first phase, a continuous evolutionary search performs coarse exploration of weight-wise pruning thresholds to shrink the search space and identify promising regions of the Pareto set. The second phase applies a fine-grained binary evolutionary optimization constrained to the surviving weights, where importance-aware sampling and adaptive variation operators refine local search in the sparse region of the Pareto set. This hierarchical design combines global exploration and localized exploitation to achieve a well-distributed Pareto set of networks balancing compactness and accuracy. Empirical results on CIFAR-10 and CIFAR-100 using ResNet-56 and ResNet-110 confirm the method's effectiveness: it achieves parameter reductions of up to 51.9\% and 38.9\% with almost no accuracy loss, outperforming state-of-the-art evolutionary DNN pruning methods. The proposed method contributes a scalable evolutionary approach for solving very-large-scale multi-objective optimization problems, offering a general paradigm extendable to other domains where the decision space is exponentially large, objective functions are conflicting, and efficient trade-off discovery is essential.
Spiking neural networks (SNNs) support energy-efficient machine intelligence because event-driven computation and sparse activity map naturally to low-power digital hardware. In practical implementations, however, membrane states, synaptic weights, and thresholds are represented with finite-precision integer arithmetic. Quantization, clipping, and overflow can therefore alter network dynamics, not just approximate a higher-precision model. This paper adopts an integer-state dynamical perspective, modeling a hardware-oriented SNN as a deterministic map on a bounded integer lattice. Under this view, recurrence, periodic orbits, and regime changes become intrinsic properties of the system. We introduce a lightweight update rule with integer-valued states and shift-based leakage, and demonstrate the approach through exploratory simulations with network sizes N = 30-130, connection densities 0.1-0.9, and bit widths 4/8/16 over T = 1000 steps. The results show bounded and recurrent temporal structure with strong quantization sensitivity. The observed regimes depend heavily on representation semantics and scaling choices. These findings suggest that numerical precision acts as a dynamical design variable and highlight integer-state analysis as a useful framework for hardware-aware SNN co-design, motivating future work on attractor analysis, precision-aware training, and FPGA/ASIC validation.
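The shift-based leak described above admits a very compact integer update. Below is one plausible form; the exact overflow, saturation, and reset semantics are design choices (as the abstract emphasizes), and the values here are illustrative.

```python
# Integer-state LIF step with shift-based leakage (one plausible form; the
# paper's exact overflow/reset semantics are design choices, not fixed here).
def lif_step_int(v: int, in_current: int, bits: int = 8,
                 shift: int = 2, theta: int = 50) -> tuple[int, bool]:
    v = v - (v >> shift) + in_current      # leak v by ~v/2**shift, integrate input
    v_max = (1 << (bits - 1)) - 1          # saturate instead of wrapping around
    v = max(min(v, v_max), -v_max - 1)
    if v >= theta:                         # threshold crossing emits a spike
        return 0, True                     # hard reset to 0
    return v, False

v, spikes = 0, []
for t in range(20):
    v, s = lif_step_int(v, in_current=15)
    spikes.append(int(s))
print(spikes)   # bounded, periodic integer dynamics under constant drive
```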
The $L_\infty$ star discrepancy is a measure of how uniformly a point set is distributed in a given space. Point sets of low star discrepancy are used in the design of experiments, as initial designs for Bayesian optimization algorithms, for quasi-Monte Carlo integration methods, and in many other applications. Recent work has shown that classical constructions such as Sobol', Halton, or Hammersley sequences can be outperformed by large margins when considering point sets of fixed sizes rather than their convergence behavior. These results, highly relevant to the aforementioned applications, raise the question of how much existing constructions can be improved through size-specific optimization. In this work, we study this question for the so-called Kronecker construction. Focusing on the 3-dimensional setting, we show that optimizing the two configurable parameters of its construction yields point sets outperforming the state-of-the-art value for sets of at least 500 points. Using the algorithm configuration technique irace, we then derive parameters that yield new state-of-the-art discrepancy values for whole ranges of set sizes.
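The Kronecker construction itself is a one-liner: points $x_i = (\{i\alpha_1\}, \dots, \{i\alpha_d\})$ for an irrational vector $\alpha$, where $\{\cdot\}$ denotes the fractional part. The sketch below uses the classical square-root-of-primes choice of $\alpha$, not the size-optimized parameters derived in the paper.

```python
# Kronecker point set x_i = frac(i * alpha); the classical irrational alphas
# are shown here, not the paper's size-optimized parameters.
import numpy as np

def kronecker_points(n: int, alpha: np.ndarray) -> np.ndarray:
    i = np.arange(1, n + 1)[:, None]
    return np.mod(i * alpha[None, :], 1.0)

alpha = np.sqrt(np.array([2.0, 3.0, 5.0]))   # classic sqrt-of-primes choice
P = kronecker_points(500, alpha)
print(P.shape, P.min(), P.max())             # 500 points in [0, 1)^3
```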
Neural Architecture Search (NAS) has become a pivotal technique in automated machine learning. Evolutionary Algorithm (EA)-based methods demonstrate superior search quality but suffer from prohibitive computational costs, while gradient-based approaches like DARTS offer high efficiency but are prone to premature convergence and performance collapse. To bridge this gap, we propose G-ICSO-NAS, a hybrid framework implementing a three-stage optimization strategy. The Warm-up Phase pre-trains supernet weights ($w$) via differentiable methods while architecture parameters ($α$) remain frozen. The Exploration Phase adopts a hybrid co-optimization mechanism: an Improved Competitive Swarm Optimizer (ICSO) with diversity-aware fitness navigates the architecture space to update $α$, while gradient descent concurrently updates $w$. The Stability Phase employs fine-grained gradient-based search with early stopping to converge to the optimal architecture. By synergizing ICSO's global navigation capability with differentiable methods' efficiency, G-ICSO-NAS achieves remarkable performance with minimal cost. In the context of the DARTS search space, an accuracy of 97.46\% is achieved on CIFAR-10 with a computational budget of just 0.15 GPU-Days. The method also exhibits strong transfer potential, recording accuracies of 83.1\% (CIFAR-100) and 75.02\% (ImageNet). Furthermore, regarding the NAS-Bench-201 benchmark, G-ICSO-NAS is shown to deliver state-of-the-art results across all evaluated datasets.
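The competitive swarm optimizer at the heart of the Exploration Phase runs pairwise competitions in which the loser learns from the winner. Below is the standard CSO loser update (after Cheng and Jin); the diversity-aware improvements of ICSO are not reproduced, and phi is illustrative.

```python
# Standard competitive swarm optimizer update for one competition (Cheng & Jin
# style); the paper's diversity-aware ICSO variant adds further machinery.
import numpy as np

def cso_loser_update(x_l, v_l, x_w, x_mean, phi=0.1,
                     rng=np.random.default_rng()):
    r1, r2, r3 = rng.random(3)
    v_new = r1 * v_l + r2 * (x_w - x_l) + phi * r3 * (x_mean - x_l)
    return x_l + v_new, v_new   # loser moves toward winner and swarm mean

x_l, v_l = np.array([0.5, -0.2]), np.zeros(2)
x_w, x_mean = np.array([0.1, 0.0]), np.array([0.2, 0.1])
print(cso_loser_update(x_l, v_l, x_w, x_mean))
```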
The heavy-tailed mutation operator, proposed by Doerr, Le, Makhmara, and Nguyen (2017) for evolutionary algorithms, is based on the power-law assumption of the mutation rate distribution. Here we generalize the power-law assumption using a regularly varying constraint on the distribution function of the mutation rate. In this setting, we generalize the upper bounds on the expected optimization time of the $(1+(λ,λ))$ genetic algorithm obtained by Antipov, Buzdalov and Doerr (2022) for the OneMax function class parametrized by the problem dimension $n$. In particular, it is shown that, on this function class, the sufficient conditions of Antipov, Buzdalov and Doerr (2022) on the heavy-tailed mutation, ensuring the $O(n)$ optimization time in expectation, may be generalized as well. This optimization time is known to be asymptotically smaller than what can be achieved by the $(1+(λ,λ))$ genetic algorithm with any static mutation rate. A new version of the heavy-tailed mutation operator is proposed, satisfying the generalized conditions, and promising results of computational experiments are presented.
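The original heavy-tailed ("fast") operator samples a mutation strength from a power law and then applies standard bit mutation at that rate. A sketch follows; beta and the support are illustrative, and the regularly varying generalization proposed in the paper replaces the power-law distribution, which is not shown.

```python
# Heavy-tailed ("fast") mutation: draw a strength alpha from a power law on
# {1, ..., n/2}, then flip each bit with probability alpha/n. The paper's
# regularly-varying generalization replaces this power law (not shown).
import numpy as np

def heavy_tailed_mutation(x: np.ndarray, beta=1.5,
                          rng=np.random.default_rng()):
    n = len(x)
    ks = np.arange(1, n // 2 + 1)
    p = ks**(-beta) / (ks**(-beta)).sum()  # power-law distribution of strengths
    alpha = rng.choice(ks, p=p)
    flips = rng.random(n) < alpha / n      # standard bit mutation at rate alpha/n
    return np.where(flips, 1 - x, x)

x = np.zeros(20, dtype=int)
print(heavy_tailed_mutation(x))
```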
Large Language Models (LLMs) have demonstrated impressive capabilities in code generation. While an interactive feedback loop can improve performance, writing effective tests is a non-trivial task. Early multi-agent frameworks, such as AgentCoder, automated this process but relied on generated tests as absolute ground truth. This approach is fragile: incorrect code frequently passes faulty or trivial tests, while valid solutions are often degraded to satisfy incorrect assertions. Addressing this limitation, newer methods have largely abandoned test generation in favor of planning and reasoning based on examples. We argue, however, that generated tests remain a valuable signal if we model them as noisy sensors guided by Bayesian updates. To this end, we introduce BACE (Bayesian Anchored Co-Evolution), a framework that reformulates synthesis as a Bayesian co-evolutionary process where code and test populations are evolved, guided by belief distributions that are reciprocally updated based on noisy interaction evidence. By anchoring this search on minimal public examples, BACE prevents the co-evolutionary drift typical of self-validating loops. Extensive evaluations on LiveCodeBench v6 (post-March 2025) reveal that BACE achieves superior performance across both proprietary models and open-weight small language models.
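Treating a generated test as a noisy sensor admits a simple conjugate update. The toy below tracks a Beta-Bernoulli belief over one test's reliability, using agreement with an anchoring public example as evidence; the names and the anchoring signal are hypothetical simplifications, and the paper's reciprocal code/test belief model is richer.

```python
# Toy Beta-Bernoulli belief over a test's reliability, updated from interaction
# evidence (a simplification; BACE's reciprocal belief model is richer).
def update_test_belief(a: float, b: float, agreed_with_anchor: bool):
    """Beta(a, b) prior on 'this test judges correctly'; evidence comes from
    agreement with an anchoring public example."""
    return (a + 1, b) if agreed_with_anchor else (a, b + 1)

a, b = 1.0, 1.0                      # uniform prior
for outcome in [True, True, False, True]:
    a, b = update_test_belief(a, b, outcome)
print(f"reliability estimate: {a / (a + b):.2f}")   # posterior mean
```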
The domain of metaheuristic optimization has become crowded with a flood of new algorithms that adopt novel nature-inspired metaphors but lack clear methodological novelty. Criticism of these algorithms has reached the point where critics assert that all new algorithms are merely copies of existing ones. In this study, we aim to show that the situation is not so black and white. To this end, we define a strong equivalence theorem for estimating the similarity between two nature-inspired metaheuristics, according to which two algorithms are equivalent if, and only if, the cosine similarity of their phenotypic and genotypic feature vectors, which characterize their behavior while searching for optimal solutions, is above some threshold. On the basis of this theorem, a framework is developed for identifying equivalence between nature-inspired metaheuristics. Extensive experimental work using the framework has shown that finding conditions under which well-known nature-inspired metaheuristics exhibit high similarity is hard, or even impossible, in limited computational environments.
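Computationally, the equivalence criterion reduces to thresholded cosine similarities over behavioral feature vectors. A minimal sketch follows; requiring both similarities to clear the threshold is one reading of the criterion, and the feature extraction itself — the hard part — is assumed to have been done elsewhere.

```python
# Thresholded cosine similarity between two algorithms' behavioral feature
# vectors (feature extraction, the hard part, is assumed done elsewhere).
import numpy as np

def equivalent(pheno_a, geno_a, pheno_b, geno_b, threshold=0.95) -> bool:
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # One reading of the criterion: both similarities must clear the threshold.
    return (cos(pheno_a, pheno_b) >= threshold
            and cos(geno_a, geno_b) >= threshold)

pheno_a, geno_a = np.array([0.8, 0.1, 0.5]), np.array([1.0, 0.0, 1.0])
pheno_b, geno_b = np.array([0.7, 0.2, 0.6]), np.array([1.0, 0.0, 1.0])
print(equivalent(pheno_a, geno_a, pheno_b, geno_b))
```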