Weak Reward Model Transforms Generative Models into Robust Causal Event Extraction Systems
Italo Luis da Silva, Hanqi Yan, Lin Gui, Yulan He
TL;DR
This work tackles the evaluation bottleneck in causal event extraction by training evaluators that approximate human judgments and using them as reward signals to fine-tune generative extractors via PPO. It demonstrates that a DeBERTa-based evaluator correlates closely with human judgments and can transfer across datasets, enabling cross-domain reward modeling and RL fine-tuning without excessive labeling. The authors further introduce a weak-to-strong supervision strategy that achieves comparable RL performance with only a subset of the labeled data, improving data efficiency. Across three datasets, the reinforcement learning approach yields consistent improvements over strong baselines, highlighting the practical impact of aligning generative models with human semantic preferences in causal reasoning tasks.
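The core loop described here is a learned evaluator scoring each generated extraction and returning a scalar reward for PPO. Below is a minimal sketch of such a reward function, assuming a DeBERTa sequence classifier fine-tuned to judge whether a (cause, effect) pair matches the source passage; the checkpoint path, input formatting, and label index are illustrative assumptions, not the authors' released setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint of a DeBERTa evaluator fine-tuned to judge whether an
# extracted (cause, effect) pair is supported by the passage (not an official release).
EVALUATOR_CKPT = "path/to/deberta-causal-evaluator"

tokenizer = AutoTokenizer.from_pretrained(EVALUATOR_CKPT)
evaluator = AutoModelForSequenceClassification.from_pretrained(EVALUATOR_CKPT)
evaluator.eval()


@torch.no_grad()
def extraction_reward(passage: str, cause: str, effect: str) -> float:
    """Score an extraction with the evaluator; the probability of the
    'valid extraction' class is used as the scalar reward for PPO."""
    candidate = f"Cause: {cause} Effect: {effect}"
    inputs = tokenizer(passage, candidate, return_tensors="pt", truncation=True)
    logits = evaluator(**inputs).logits
    # Assumes label index 1 corresponds to "valid"/"correct".
    return torch.softmax(logits, dim=-1)[0, 1].item()
```

In a PPO fine-tuning loop, this function would be called on each sampled extraction from the generative model, and the returned scores passed to the policy-gradient update in place of a hand-crafted metric.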
Abstract
The inherent ambiguity of cause and effect boundaries poses a challenge in evaluating causal event extraction tasks. Traditional metrics like Exact Match and BERTScore poorly reflect model performance, so we trained evaluation models to approximate human evaluation, achieving high agreement with human judgments. We then used these evaluators as reward signals in Reinforcement Learning to align extraction models with human preferences, prioritising semantic understanding. We validated our approach on multiple datasets, including by transferring an evaluator trained on one dataset to another to reduce reliance on human-annotated data. In the same vein, we also propose a weak-to-strong supervision method that trains an evaluation model on only a fraction of the annotated data while still supporting strong RL training performance. Our code is available at https://github.com/oyarsa/event_extraction/tree/causal-event-extraction.
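As a rough picture of the weak-to-strong setup, one can fine-tune the evaluator on only a fraction of the human-annotated judgments and then use the resulting weak reward model for the RL stage. The sketch below is a hedged illustration: the JSONL file name, the passage/extraction/label field names, the microsoft/deberta-v3-base backbone, and the 20% subsample are all assumptions for the example, not specifics taken from the paper.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

FRACTION = 0.2  # illustrative fraction of the human-annotated judgments

# Assumed format: one JSON object per line with "passage", "extraction", "label".
raw = load_dataset("json", data_files="evaluator_annotations.jsonl")["train"]
subset = raw.shuffle(seed=0).select(range(int(FRACTION * len(raw))))

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")

def encode(batch):
    # Pair the passage with the candidate extraction, as in the reward sketch above.
    return tokenizer(batch["passage"], batch["extraction"], truncation=True)

subset = subset.map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="weak-evaluator",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=subset,
    tokenizer=tokenizer,
)
trainer.train()
```

The trained weak evaluator can then replace the fully supervised one in the reward function above, trading a small drop in evaluator accuracy for a large reduction in annotation cost.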
