Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control
Zifan Liu, Xinran Li, Shibo Chen, Gen Li, Jiashuo Jiang, Jun Zhang
TL;DR
This work tackles the heavy sample requirements and censored-demand challenges of applying reinforcement learning to lost-sales inventory control (IC) by combining reinforcement learning with feedback graphs (RLFG) and intrinsically motivated exploration (IME). The method tailors a feedback graph (FG) to lost-sales dynamics, analyzes how the FG reduces the sample complexity of Q-learning, and augments learning with an intrinsic reward that directs exploration toward regions of the state-action space rich in side experiences. Theoretical guarantees on update probabilities are complemented by extensive experiments across single-item, multi-item, and multi-echelon IC settings, where Rainbow-FG and TD3-FG achieve substantial gains in sample efficiency and performance over strong baselines. The results demonstrate the practical potential of FG-enabled RL with intrinsic exploration for improving data efficiency and robustness in complex inventory systems.
Abstract
Reinforcement learning (RL) has proven to perform well and to be general-purpose in inventory control (IC). However, further improvement of RL algorithms in the IC domain is impeded by two limitations of online experience. First, online experience is expensive to acquire in real-world applications; given the low sample efficiency of RL algorithms, training an RL policy to convergence takes extensive time. Second, online experience may not reflect the true demand because of the lost-sales phenomenon typical in IC, which makes the learning process more challenging. To address these challenges, we propose a decision framework that combines reinforcement learning with feedback graphs (RLFG) and intrinsically motivated exploration (IME) to boost sample efficiency. In particular, we first exploit the inherent properties of lost-sales IC problems and design a feedback graph (FG) tailored to them, which generates abundant side experiences to aid RL updates. We then conduct a rigorous theoretical analysis of how the designed FG reduces the sample complexity of RL methods. Guided by these theoretical insights, we design an intrinsic reward that directs the RL agent to explore regions of the state-action space with more side experiences, further exploiting the FG's power. Experimental results demonstrate that our method greatly improves the sample efficiency of applying RL in IC. Our code is available at https://anonymous.4open.science/r/RLIMFG4IC-811D/
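To make the feedback-graph idea concrete, below is a minimal Python sketch (our illustration, not the paper's released code) of how side experiences might be generated in a simplified single-item, zero-lead-time lost-sales setting: whenever observed sales fall strictly below the available stock, demand is uncensored, so the transition and reward for every alternative order quantity can be computed exactly. The state representation, cost parameters, and function names here are illustrative assumptions.

```python
def side_experiences(inv, played_action, sales, action_space,
                     price=2.0, holding_cost=0.5):
    """Hypothetical side-experience generator for a feedback graph.

    `inv` is the on-hand inventory before ordering, `played_action` the
    order quantity actually placed (zero lead time assumed), and `sales`
    the observed, possibly censored, sales this period.
    """
    stock_played = inv + played_action
    if sales >= stock_played:
        # Demand is censored (a lost sale may have occurred), so no
        # alternative-action outcome can be inferred this period.
        return []
    demand = sales  # sales < stock implies demand was fully observed
    experiences = []
    for a in action_space:
        if a == played_action:
            continue  # the real transition is already in the buffer
        stock = inv + a
        sold = min(demand, stock)
        next_inv = stock - sold
        # Illustrative reward: revenue minus end-of-period holding cost.
        reward = price * sold - holding_cost * next_inv
        experiences.append((inv, a, reward, next_inv))
    return experiences


def intrinsic_bonus(n_side_experiences, beta=0.1):
    # Hypothetical shaping term: transitions that yield more side
    # experiences earn a larger bonus, steering exploration toward
    # uncensored, side-experience-rich regions. The paper's actual
    # intrinsic reward may be designed differently.
    return beta * n_side_experiences


# Example: inventory 5, ordered 3, observed sales 6 (< 8, so uncensored).
exps = side_experiences(inv=5, played_action=3, sales=6,
                        action_space=range(5))
print(exps, intrinsic_bonus(len(exps)))
```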
