
Active Inference with People: a general approach to real-time adaptive experiments

Lucas Gautheron, Nori Jacoby, Peter Harrison

Abstract

Adaptive experiments automatically optimize their design throughout the data collection process, which can bring substantial benefits compared to conventional experimental settings. Potential applications include, among others: computerized adaptive testing (for selecting informative tasks in ability measurements), adaptive treatment assignment (when searching for experimental conditions that maximize certain outcomes), and active learning (for choosing optimal training data for machine learning algorithms). However, implementing these techniques in real time poses substantial computational and technical challenges. Additionally, despite their conceptual similarity, the above scenarios are often treated as separate problems with distinct solutions. In this paper, we introduce a practical and unified approach to real-time adaptive experiments that can encompass all of the above scenarios, regardless of the modality of the task (including textual, visual, and audio inputs). Our strategy combines active inference, a Bayesian framework inspired by cognitive neuroscience, with PsyNet, a platform for large-scale online behavioral experiments. While active inference provides a compact, flexible, and principled mathematical framework for adaptive experiments generally, PsyNet is a highly modular Python package that supports social and behavioral experiments with stimuli and responses in arbitrary domains. We illustrate this approach through two concrete examples: (1) an adaptive testing experiment estimating participants' ability by selecting optimal challenges, reducing the number of trials required by 30--40%; and (2) an adaptive treatment assignment strategy that identifies the optimal treatment up to three times as accurately as a fixed design in our example. We provide detailed instructions to facilitate the adoption of these techniques.



Figures (10)

  • Figure 1: The infrastructure of PsyNet experiments. a) Software. PsyNet experiments are implemented as modular, object-oriented Python code. Lower-level functions are delegated to Dallinger. b) Deployment. Experiments are deployed on remote servers (self-hosted or cloud-based) and are accessible to participants from their web browser. c) Recruiting services. Multiple participant-recruitment platforms are supported, including Prolific and MTurk.
  • Figure 2: Components of a Bayesian adaptive experiment in PsyNet. The timeline (§\ref{paragraph:timeline}) organizes the structure of the experiment. The trial maker (§\ref{paragraph:trial_makers}) delivers trials (i.e. individual tasks; §\ref{paragraph:trials}) to participants. Trials are based on stimuli organized in networks of nodes, each node being a trivia question in our example (§\ref{paragraph:network}). Each trial is associated with a specific participant and answer. The user interface (§\ref{paragraph:interface}) displays the stimulus and collects the answer. Finally, an optimization module, called by the trial maker, performs the computational work of the Bayesian optimization procedure for selecting an optimal node at every trial. The code for each module can be accessed via the corresponding icon.
  • Figure 3: Comparison of the adaptive and static designs on experimental data. $(\ast)$: indicates results derived from a counterfactual simulated on human data (the oracle).
  • Figure 4: Simulation of the active inference approach.
  • Figure 5: Performance of active inference and traditional adaptive treatment assignment policies. Performance is evaluated as the probability that the setup agrees with the oracle about which treatment is optimal (higher is better). The static design is included for reference. Each design is simulated 300 times on the oracle data.
  • ...and 5 more figures