JoyAI-LLM Flash: Advancing Mid-Scale LLMs with Token Efficiency

Aichen Cai, Anmeng Zhang, Anyu Li, Bo Zhang, Bohua Cai, Chang Li, Changjian Jiang, Changkai Lu, Chao Xue, Chaocai Liang, Cheng Zhang, Dongkai Liu, Fei Wang, Guoqiang Huang, Haijian Ke, Han Lin, Hao Wang, Ji Miao, Jiacheng Zhang, Jialong Shi, Jifeng Zhu, Jingjing Qian, Junhui Luo, Junwu Xiong, Lam So, Liang Huang, Ming Ke, Mingyang Li, Panfeng Shi, Peng Hao, Qi Wang, Qian Lai, Qiaoqiao Yuan, Qingyu Yin, Qiong Cao, Qixiang Wang, Rongcheng Bian, Rongduo Han, Shaoqiang Zheng, Shi Hu, Shi Suo, Shijie Ren, Shijin Zhang, Shiying Fan, Shuai Xie, Tianyi Zhang, Wei Liu, Wentao Tan, Xianghan Meng, Xiaodong He, Xing Pan, Xiran Wang, Xuyang Peng, Ya Zhang, Yang Liu, Yangyang Duan, Yanxu Chen, Yicheng Gong, Yidan Huang, Yifei Liu, Yinhao Bai, Yongqiang Liu, Yuesong Zhang, Yuqi Zhang, Zerui Xie, Zhenfang Wang, Zhennan Shen, Zheyuan Liu, Zhuwei Zeng

Abstract

We introduce JoyAI-LLM Flash, an efficient Mixture-of-Experts (MoE) language model designed to redefine the trade-off between performance and token efficiency in the sub-50B parameter regime. JoyAI-LLM Flash is pretrained on a corpus of 20 trillion tokens and further optimized through a rigorous post-training pipeline comprising supervised fine-tuning (SFT), Direct Preference Optimization (DPO), and large-scale reinforcement learning (RL) across diverse environments. To improve token efficiency, JoyAI-LLM Flash strategically balances \emph{thinking} and \emph{non-thinking} cognitive modes and introduces FiberPO, a novel RL algorithm inspired by fibration theory that decomposes trust-region maintenance into global and local components, providing unified multi-scale stability control for LLM policy optimization. To enhance architectural sparsity, the model comprises 48B total parameters while activating only 2.7B per forward pass, a substantially higher sparsity ratio than contemporary industry-leading models of comparable scale. To further improve inference throughput, we adopt a joint training-inference co-design that incorporates dense Multi-Token Prediction (MTP) and Quantization-Aware Training (QAT). We release checkpoints for both JoyAI-LLM-48B-A3B Base and its post-trained variants on Hugging Face to support the open-source community.
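
The headline sparsity figure follows directly from the stated parameter counts: roughly 2.7/48 ≈ 5.6% of weights participate in each forward pass. The snippet below is a minimal, generic sketch of top-k MoE routing that produces this kind of sparsity; the expert count, hidden width, and k are illustrative placeholders, not JoyAI-LLM Flash's actual configuration, which this section does not specify.

```python
import torch
import torch.nn.functional as F

def moe_forward(x, experts, router, k=2):
    """Sparse MoE layer: each token runs only its top-k experts."""
    logits = router(x)                          # (n_tokens, n_experts)
    gate_vals, idx = logits.topk(k, dim=-1)     # select top-k experts per token
    gates = F.softmax(gate_vals, dim=-1)        # renormalize the selected gates
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        for slot in range(k):
            mask = idx[:, slot] == e            # tokens whose slot routes to e
            if mask.any():
                out[mask] += gates[mask, slot].unsqueeze(-1) * expert(x[mask])
    return out

# Illustrative scale only: 8 experts, top-2 routing, 16-dim tokens.
experts = torch.nn.ModuleList(torch.nn.Linear(16, 16) for _ in range(8))
router = torch.nn.Linear(16, 8)
y = moe_forward(torch.randn(4, 16), experts, router)

# The abstract's sparsity claim: 2.7B active out of 48B total parameters.
print(f"activation ratio = {2.7 / 48:.1%}")     # -> 5.6%
```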

Paper Structure

This paper contains 38 sections, 7 equations, 10 figures, and 4 tables.

Figures (10)

  • Figure 1: Model performance vs. token consumption across mid-scale LLMs. Accuracy and token consumption, averaged across the eighteen benchmarks used in post-training evaluation (Table \ref{tab:instruct_eval}), are shown; the upper-right region indicates more token-efficient models. Bubble size represents model parameter count.
  • Figure 2: Agentic trajectory synthesis pipeline
  • Figure 3: Scaling laws for JoyAI-LLM Flash. The plot illustrates the relationship between training compute and model performance. Data points represent empirical observations, while solid lines indicate the power-law fits.
  • Figure 4: Verifiable Environment Pipeline
  • Figure 5: (a) Aggregate gate $g^{\rm agg}$ (Eq. \ref{eq:gagg}) with three regimes: pass-through ($|x| \leq C$, slope $1$), rollback ($C < |x| < C^* := (1+T_\tau^{-1})C$, slope $-T_\tau$), and zeroed ($|x| \geq C^*$, output $0$). As $T_\tau$ increases, the rollback zone narrows (width $C/T_\tau$) and $g^{\rm agg}$ approaches a hard clip at $\pm C$. (b) Base weight $\log w_\tau^{\rm base}$ (Eq. \ref{eq:fiberpo_decomp}) in $(\log s^+, \log s^-)$-space with asymmetric thresholds. Dashed lines mark the budget boundaries $C^\pm$ (onset of rollback), and dotted lines mark the full-gating thresholds $C^{*\pm}$ (onset of zeroing). The five global regimes follow a non-monotonic pattern: $|\log w|$ rises through the rollback onset (G-II,r), peaks when one channel is fully gated (G-II), declines under mutual rollback (G-III,r), and collapses to zero when both channels are fully gated (G-III, $w^{\rm base}_\tau = 1$). A minimal code sketch of the panel-(a) gate follows the figure list.
  • ...and 5 more figures
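
The aggregate gate in Figure 5(a) is fully specified by its caption, so it can be reconstructed directly. Below is a minimal sketch assuming only the piecewise form stated there; `C` and `T` (the caption's $C$ and $T_\tau$) are placeholder values, not the paper's settings.

```python
import numpy as np

def g_agg(x, C=1.0, T=2.0):
    """Aggregate gate g^agg as described in the Figure 5(a) caption.

    Three regimes: pass-through (|x| <= C, slope 1), rollback
    (C < |x| < C* := (1 + 1/T) * C, slope -T back toward 0), and
    zeroed (|x| >= C*, output 0). C and T are illustrative values.
    """
    x = np.asarray(x, dtype=float)
    C_star = (1.0 + 1.0 / T) * C                    # onset of zeroing
    a = np.abs(x)
    rollback = np.sign(x) * (C - T * (a - C))       # decays linearly to 0
    return np.where(a <= C, x, np.where(a < C_star, rollback, 0.0))
```

The branch boundaries are continuous, matching the caption: the rollback branch equals $\pm C$ at $|x| = C$ and reaches $0$ exactly at $|x| = C^*$, and as $T_\tau$ grows the rollback zone (width $C/T_\tau$) shrinks, so the gate approaches a hard clip at $\pm C$.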

Theorems & Definitions (1)

  • Definition 3.1: FiberPO