Captioning Daily Activity Images in Early Childhood Education: Benchmark and Algorithm

Sixing Li, Zhibin Gu, Ziqi Zhang, Weiguo Pan, Bing Li, Ying Wang, Hongzhe Liu

Abstract

Image captioning for Early Childhood Education (ECE) is essential for automated activity understanding and educational assessment. However, existing methods face two key challenges. First, the lack of large-scale, domain-specific datasets limits the model's ability to capture fine-grained semantic concepts unique to ECE scenarios, resulting in generic and imprecise descriptions. Second, conventional training paradigms exhibit limitations in enhancing professional object description capability, as supervised learning tends to favor high-frequency expressions, while reinforcement learning may suffer from unstable optimization on difficult samples. To address these limitations, we introduce ECAC, a large-scale benchmark for ECE daily activity image captioning, comprising 256,121 real-world images annotated with expert-level captions and fine-grained labels. ECAC is further equipped with a domain-oriented evaluation protocol, the Teaching Toy Recognition Score (TTS), to explicitly measure professional object naming accuracy. Furthermore, we propose RSRS (Reward-Conditional Switch of Reinforcement Learning and Supervised Fine-Tuning), a hybrid training framework that dynamically alternates between RL and supervised optimization. By rerouting hard samples with zero rewards to supervised fine-tuning, RSRS effectively mitigates advantage collapse and enables stable optimization for fine-grained recognition. Leveraging ECAC and RSRS, we develop KinderMM-Cap-3B, a domain-adapted multimodal large language model. Extensive experiments demonstrate that our model achieves a TTS of 51.06, substantially outperforming state-of-the-art baselines while maintaining superior caption quality, highlighting its potential for specialized educational applications.

Paper Structure

This paper contains 27 sections, 7 equations, 5 figures, 6 tables, and 1 algorithm.

Figures (5)

  • Figure 1: Daily Activity Images Across Three Scenario Categories
  • Figure 2: Distribution of images across different regions in the dataset.
  • Figure 3: Overview of the revised KinderMM-Cap framework. The training pipeline comprises three stages: (1) Stage 1 (Base Model), where a general vision–language model generates inaccurate descriptions of Early Childhood Education (ECE) activities; (2) Stage 2 (SFT Warm-up), where supervised fine-tuning improves caption quality but still yields semantic deviations; and (3) Stage 3 (RSRS Core Training), where samples are adaptively split by intra-group rewards—hard samples (zero reward) are optimized via the SFT branch, while others are refined through the GRPO branch to encourage exploration. The final model produces accurate and semantically grounded ECE captions. In the figure, red-highlighted text denotes incorrect descriptions, purple indicates imprecise descriptions, and green represents accurate descriptions.
  • Figure 4: Diagram of the RSRS Framework. In the diagram, O denotes the candidate captions generated by the model given the input image and prompt; R represents the reward computed from the consistency between the generated captions and annotated teaching toys; A refers to the normalized within-group advantage, which measures the relative quality of each candidate caption compared to the group average. In addition, the buffer stores samples whose entire group receives zero reward, and once it reaches the batch size, the SFT branch is activated to prevent advantage collapse and enhance the model’s learning ability on difficult samples.
  • Figure 5: Ablation Study on Training Strategies for ECE Daily Activity Captioning.
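The reward-conditional routing described in the Figure 4 caption can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `group_advantages` computes the normalized within-group advantage A = (r − mean) / std used by the GRPO branch, and `RSRSRouter` (a hypothetical helper name) buffers samples whose entire candidate group receives zero reward, flushing them to the SFT branch once the buffer reaches the batch size, exactly to avoid the advantage collapsing to all zeros on hard samples.

```python
from statistics import mean, pstdev


def group_advantages(rewards):
    """Normalized within-group advantage: (r - mean) / std for each candidate.

    If all rewards are identical, the std is zero and every advantage is 0.0,
    which is the "advantage collapse" case RSRS reroutes around.
    """
    mu, sigma = mean(rewards), pstdev(rewards)
    if sigma == 0:
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]


class RSRSRouter:
    """Sketch of the reward-conditional switch (hypothetical class, for
    illustration only): route a sample group to GRPO when rewards vary,
    or buffer it for the SFT branch when every candidate scores zero."""

    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.buffer = []  # hard samples awaiting supervised fine-tuning

    def route(self, sample, rewards):
        if all(r == 0 for r in rewards):
            # Zero reward across the whole group: GRPO gets no signal,
            # so defer this sample to the SFT branch instead.
            self.buffer.append(sample)
            if len(self.buffer) >= self.batch_size:
                batch, self.buffer = self.buffer, []
                return ("sft", batch)
            return ("buffered", None)
        # Mixed rewards: train via GRPO using within-group advantages.
        return ("grpo", group_advantages(rewards))
```

For example, a group with rewards `[1, 0, 1, 0]` yields advantages `[1, -1, 1, -1]` and goes to the GRPO branch, while two consecutive all-zero groups (with `batch_size=2`) trigger one SFT batch.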