OTCE: Hybrid SSM and Attention with Cross Domain Mixture of Experts to construct Observer-Thinker-Conceiver-Expresser
Jingze Shi, Ting Xie, Bingheng Wu, Chunjun Zheng, Kai Wang
TL;DR
OTCE addresses the challenge of modeling long-context language with efficient computation by blending a selective state space model (SSM) with self-attention, bridged by a Rotary Position Embedding (RoPE) scheme. The Observer-Thinker-Conceiver-Expresser architecture, coupled with cohesive and expansive cross-domain mixtures of experts, enables efficient state aggregation, global dependency capture, and cross-domain knowledge transfer. Empirical results show OTCE to be competitive with medium-scale open-source models and superior on long-context and associative recall tasks, with notable gains from Expresser reweighting and joint RoPE usage. The approach offers a scalable framework for long-context language modeling with improved data efficiency and reduced routing bias, with potential impact on practical NLP systems that require long-context reasoning and cross-domain knowledge integration.
Abstract
Recent research has shown that combining Mamba, with its selective state space, and the Transformer architecture, with its quadratic self-attention mechanism, outperforms using either architecture alone in language modeling tasks. The quadratic self-attention mechanism effectively alleviates the shortcomings of the selective state space in handling long-term dependencies between arbitrary elements of the sequence. We propose a position-information injection method that connects the selective state space model with quadratic self-attention, and we integrate these two architectures with a hybrid mixture of experts with cross-shared domains, so that we can enjoy the advantages of both. We design a new architecture with a more biomimetic idea: Observer-Thinker-Conceiver-Expresser (OTCE), which at a small scale can compete with well-known medium-scale open-source language models on language modeling tasks.
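To make the combination described above concrete, the following is a minimal sketch, not the authors' implementation. The module names (SimpleSelectiveSSM, RoPEAttention, CrossDomainMoE, HybridBlock), the toy gated recurrence standing in for the selective scan, the top-1 routing, and all sizes are illustrative assumptions; it only shows the general shape of the idea: an SSM layer and a causal attention layer that share a rotary positional basis, followed by a mixture-of-experts MLP with an always-active shared expert. PyTorch 2.x is assumed for scaled_dot_product_attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_rope(x):
    """Rotate channel pairs by position-dependent angles (rotary position embedding)."""
    b, h, t, d = x.shape
    half = d // 2
    freqs = 1.0 / (10000 ** (torch.arange(half, device=x.device) / half))
    angles = torch.arange(t, device=x.device)[:, None] * freqs[None, :]   # (t, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class SimpleSelectiveSSM(nn.Module):
    """Input-dependent gated recurrence h_t = a_t*h_{t-1} + b_t*x_t (a toy stand-in for Mamba)."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 2 * dim)

    def forward(self, x):                                   # x: (batch, seq, dim)
        a, b = torch.sigmoid(self.gate(x)).chunk(2, dim=-1)
        h, outs = torch.zeros_like(x[:, 0]), []
        for t in range(x.size(1)):                          # sequential scan for clarity
            h = a[:, t] * h + b[:, t] * x[:, t]
            outs.append(h)
        return torch.stack(outs, dim=1)


class RoPEAttention(nn.Module):
    """Causal multi-head attention whose queries/keys use the same rotary positional basis."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (z.reshape(b, t, self.heads, d // self.heads).transpose(1, 2) for z in (q, k, v))
        q, k = apply_rope(q), apply_rope(k)
        o = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(o.transpose(1, 2).reshape(b, t, d))


class CrossDomainMoE(nn.Module):
    """One always-active shared expert plus top-1 routed experts."""

    def __init__(self, dim, n_experts=4):
        super().__init__()
        make = lambda: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.shared = make()
        self.experts = nn.ModuleList(make() for _ in range(n_experts))
        self.router = nn.Linear(dim, n_experts)

    def forward(self, x):
        weights = self.router(x).softmax(dim=-1)            # (batch, seq, n_experts)
        top_w, top_i = weights.max(dim=-1)                  # top-1 routing
        routed = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_i == i
            if mask.any():
                routed[mask] = expert(x[mask])
        return self.shared(x) + top_w.unsqueeze(-1) * routed


class HybridBlock(nn.Module):
    """SSM -> RoPE attention -> cross-domain MoE, each with a pre-norm residual."""

    def __init__(self, dim):
        super().__init__()
        self.ssm, self.attn, self.moe = SimpleSelectiveSSM(dim), RoPEAttention(dim), CrossDomainMoE(dim)
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        x = x + self.ssm(self.n1(x))
        x = x + self.attn(self.n2(x))
        return x + self.moe(self.n3(x))


print(HybridBlock(64)(torch.randn(2, 16, 64)).shape)       # torch.Size([2, 16, 64])
```

In the full OTCE design the selective-state-space, attention, and expert layers are organized into the Observer, Thinker, Conceiver, and Expresser stages; stacking them inside a single block here is a simplification for brevity.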
