A Deep Reinforcement Learning Framework for Closed-loop Guidance of Fish Schools via Virtual Agents

Takato Shibayama, Hiroaki Kawashima

Abstract

Guiding collective motion in biological groups is a fundamental challenge, both for understanding social interaction rules and for developing automated systems for animal management. In this study, we propose a deep reinforcement learning (RL) framework for the closed-loop guidance of fish schools using virtual agents. These agents are controlled by policies trained via Proximal Policy Optimization (PPO) in simulation and deployed in physical experiments with rummy-nose tetras (Petitella bleheri), enabling real-time interaction between artificial agents and live individuals. To cope with the stochastic behavior of live individuals, we design a composite reward function that balances directional guidance with social cohesion. Our systematic evaluation of visual parameters shows that a white background and larger stimulus sizes maximize guidance efficacy in physical trials. Furthermore, evaluation across group sizes reveals that while the system guides groups of five individuals effectively, this capability degrades markedly as group size increases to eight. This study highlights the potential of deep RL for automated guidance of biological collectives and identifies challenges in maintaining artificial influence in larger groups.
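
As a minimal illustration of the closed loop described above (camera, tracking, policy, display), the Python sketch below shows one control iteration. The tracker output shape, the `RandomPolicy` stand-in, and the displacement-style action space are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class RandomPolicy:
    """Stand-in for the PPO-trained policy (hypothetical interface)."""
    def act(self, obs: np.ndarray) -> np.ndarray:
        # Replace with a forward pass through the trained network.
        return np.random.uniform(-0.05, 0.05, size=2)

def control_step(fish_xy: np.ndarray, agent_xy: np.ndarray, policy) -> np.ndarray:
    """One closed-loop iteration: tracked fish positions in, new agent position out."""
    obs = np.concatenate([fish_xy.ravel(), agent_xy.ravel()])
    return agent_xy + policy.act(obs)  # move the virtual agent by the policy's displacement

# Toy usage: five tracked fish and one virtual agent in a unit tank.
fish = np.random.rand(5, 2)
agent = np.array([0.5, 0.5])
agent = control_step(fish, agent, RandomPolicy())
```

In the physical system, this loop would run at the camera's frame rate, with the updated agent position rendered on the display facing the tank.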

Paper Structure

This paper contains 29 sections, 10 equations, 8 figures, and 2 tables.

Figures (8)

  • Figure 1: Schematic diagram of the closed-loop system architecture. A front-facing camera monitors the positions of the live fish; a PC processes these positions and applies the learned policies, whose outputs are rendered as virtual agents on the display.
  • Figure 2: Top-view schematic of the experimental setup. The hatched area indicates the 40-deep (front-to-back distance) section to which the live fish were confined. The partition enforces two-dimensional movement and keeps the displayed virtual agents visible by avoiding reflections at the tank-water interface from the fish's perspective.
  • Figure 3: Conceptual diagram of the multi-objective reward design. The cohesion term $r_{\mathrm{school}}$ encourages the virtual agents to maintain proximity to the real fish, while $r_{\mathrm{direction}}$ rewards the virtual agents' progress toward the target area (one possible form of this weighting is sketched after this list).
  • Figure 4: Definition of evaluation areas for guidance tasks. The target area is defined as the 30% region from the target end, while the opposite area is the 30% region from the opposite end.
  • Figure 5: Learning curves showing the evolution of the mean evaluation value $\bar{R}$ over training steps $T$ for different reward weights $\beta$. Each data point averages 10 independent training trials for the corresponding parameter combination. The baseline is the policy trained using only the horizontal coordinate of the school's centroid ($r_{\mathrm{base}}$). Each plot compares different ignoring probabilities $p$ for the simulated fish.
  • ...and 3 more figures
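
The captions above name the reward terms but not their functional forms. The sketch below gives one plausible reading of the composite reward in Figures 3 and 5, assuming a distance-based cohesion term, progress measured along the tank's horizontal axis, and a linear mixing weight $\beta$; all three assumptions are ours, not the paper's.

```python
import numpy as np

def composite_reward(agent_xy: np.ndarray,
                     centroid_xy: np.ndarray,
                     target_x: float,
                     beta: float) -> float:
    """Hypothetical composite reward: r = beta * r_direction + (1 - beta) * r_school."""
    # Cohesion: higher when the virtual agent stays near the school's centroid (assumed form).
    r_school = -float(np.linalg.norm(agent_xy - centroid_xy))
    # Direction: higher as the agent nears the target end along the horizontal axis (assumed form).
    r_direction = -abs(target_x - float(agent_xy[0]))
    return beta * r_direction + (1.0 - beta) * r_school

# Toy usage: agent slightly left of the centroid, target at the right end (x = 1).
r = composite_reward(np.array([0.4, 0.5]), np.array([0.5, 0.5]), 1.0, beta=0.7)
```

Under this reading, setting $\beta$ close to 1 prioritizes progress toward the target, while $\beta$ close to 0 keeps the agent embedded in the school; the paper's $r_{\mathrm{base}}$ baseline instead scores only the horizontal coordinate of the school's centroid.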