
Free Information Disrupts Even Bayesian Crowds

Jonas Stein, Shannon Cruz, Davide Grossi, Martina Testori

Abstract

A core tenet underpinning the conception of contemporary information networks, such as social media platforms, is that users should not be constrained in the amount of information they can freely and willingly exchange with one another about a given topic. By means of a computational agent-based model, we show how even in groups of truth-seeking and cooperative agents with perfect information-processing abilities, unconstrained information exchange may lead to detrimental effects on the correctness of the group's beliefs. If unconstrained information exchange can be detrimental even among such idealized agents, it is prudent to assume it can also be so in practice. We therefore argue that constraints on information flow should be carefully considered in the design of communication networks with substantial societal impact, such as social media platforms.

Figures (3)

  • Figure 1: Each agent is represented by a colored figure, with the color indicating their belief: green-leaning agents are more confident that the true state is A, while red-leaning agents are more confident in the false state B. Step 1: Initial state (t = 0). Each agent is endowed with a private piece of evidence about the state of the world. Each piece of evidence (green lightbulb for A, red lightbulb for B) comes with a known quality — the probability that the signal is accurate (full/empty lightbulb for high/low quality). Agents use Bayes’ rule to form an initial belief based on this single observation. Some agents start with a correct belief (favoring A), others with an incorrect one (favoring B), due to randomness in their initial evidence. Step 2: Partner selection. At each time step, agents are stochastically matched with another agent to exchange information. The likelihood of being paired increases with belief similarity, a dynamic controlled by homophily: when homophily is high, agents are more likely to interact with others who hold beliefs similar to their own. Step 3: Belief exchange. Agents exchange up to k pieces of evidence, determined by their communication capacity. Each agent selects their best available evidence — i.e., the most accurate signals — and shares them in proportion to their current belief. For example, an agent strongly favoring A will mostly share high-quality signals supporting A, but may also share some evidence supporting B. Note: exchanges occur in one direction, so an agent may receive information from a partner without sharing any information back. Step 4: Belief update. After the exchange, each agent updates their belief by applying Bayes’ rule to the newly received evidence. Over time, these updates can lead agents to converge toward the true state A, though this depends on both who they interact with (influenced by homophily) and how much information they can exchange (communication capacity).
  • Figure 2: Belief evolution over interactions by homophily $h$ and communication capacity $k$. Panels i & iii: Low communication capacity results in moderate belief diversity and epistemic performance, regardless of the level of homophily. Panel ii: High communication capacity and high homophily result in belief polarization and low epistemic performance. Panel iv: Epistemic performance is maximal at high capacity and no homophily. Green (red) lines represent agents with an initial belief in favor of $A$ ($B$).
  • Figure 3: Overview of a) agents' A and B epistemic gain ($\rho_i$; first column); b) groups' A and B epistemic gain ($W_A, W_B$; middle column); c) population epistemic gain ($W$; top panel, last column); d) inequality across the population in the epistemic gain ($G$; bottom panel, last column), as a function of homophily $h$ and communication capacity $k$.
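The four steps described in the Figure 1 caption can be sketched as a minimal simulation. This is not the authors' implementation: the signal-quality range, population size, number of rounds, the specific homophily weighting, and the belief-proportional sharing rule are all illustrative assumptions; only the overall structure (private Bayesian signals, homophilous pairing, exchange of up to $k$ best signals, Bayesian update) follows the caption.

```python
import math
import random

random.seed(1)

TRUE_STATE = "A"    # ground truth, as in Figure 1
N, STEPS = 20, 30   # population size and interaction rounds (illustrative)
H, K = 0.5, 3       # homophily h in [0, 1] and communication capacity k

def draw_signal():
    """Private evidence: a state label plus a known quality q = P(label correct).
    The quality range [0.55, 0.95] is an assumption, not from the paper."""
    q = random.uniform(0.55, 0.95)
    label = TRUE_STATE if random.random() < q else "B"
    return (label, q)

def log_likelihood_ratio(signal):
    """One signal's contribution to the log-odds of A vs. B under Bayes' rule."""
    label, q = signal
    llr = math.log(q / (1 - q))
    return llr if label == "A" else -llr

class Agent:
    def __init__(self):
        self.evidence = [draw_signal()]             # Step 1: one private signal
        self.log_odds = log_likelihood_ratio(self.evidence[0])

    def belief_in_A(self):
        return 1 / (1 + math.exp(-self.log_odds))   # P(A | evidence seen so far)

    def share(self, k):
        """Step 3: offer up to k highest-quality signals, each retained in
        proportion to the current belief (an A-leaning agent mostly shares
        A-signals but may pass along some B-signals)."""
        p = self.belief_in_A()
        chosen = []
        for s in sorted(self.evidence, key=lambda s: -s[1]):
            if len(chosen) == k:
                break
            if random.random() < (p if s[0] == "A" else 1 - p):
                chosen.append(s)
        return chosen

    def receive(self, signals):
        """Step 4: Bayesian update on each newly received signal."""
        for s in signals:
            self.evidence.append(s)
            self.log_odds += log_likelihood_ratio(s)

agents = [Agent() for _ in range(N)]
for _ in range(STEPS):
    for i, a in enumerate(agents):
        # Step 2: pairing weight grows with belief similarity; H interpolates
        # between uniform matching (H = 0) and purely homophilous (H = 1).
        weights = [0.0 if j == i else
                   (1 - H) + H * (1 - abs(a.belief_in_A() - b.belief_in_A()))
                   for j, b in enumerate(agents)]
        partner = random.choices(agents, weights=weights)[0]
        a.receive(partner.share(K))     # one-directional exchange

correct = sum(ag.belief_in_A() > 0.5 for ag in agents)
print(f"{correct}/{N} agents favour the true state A")
```

Note that received signals are appended to an agent's evidence pool and may later be re-shared, so the same piece of evidence can be counted by several agents; whether and how such redundancy is handled is a modeling choice the sketch leaves deliberately naive.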