A Dynamic Model of Social Network Formation
Brian Skyrms, Robin Pemantle
TL;DR
The paper tackles how social networks form and evolve when agents play repeated games, with pairings determined by a stochastically evolving network. It develops several reinforcement-based dynamics, from uniform baseline reinforcement to asymmetric and symmetric weight updates, to study how network structure co-evolves with play. The results show that even simple baseline reinforcement generates emergent structure, such as sparse pairings and symmetric weight patterns, while richer dynamics (discounting, noise, and nontrivial strategies) produce behaviors including pairing, star formation, and coordination outcomes in games such as Rousseau's Stag Hunt. The findings underscore that incorporating structural dynamics into game-theoretic models yields more realistic predictions and a broader range of equilibria, with implications for understanding cooperation and coordination in real networks.
Abstract
We consider a dynamic social network model in which agents play repeated games in pairings determined by a stochastically evolving social network. Individual agents begin to interact at random, with the interactions modeled as games. The game payoffs determine which interactions are reinforced, and the network structure emerges as a consequence of the dynamics of the agents' learning behavior. We study this in a variety of game-theoretic conditions and show that the behavior is complex and sometimes dissimilar to behavior in the absence of structural dynamics. We argue that modeling network structure as dynamic increases realism without rendering the problem of analysis intractable.
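The kind of dynamic described above, where payoffs reinforce the interaction weights that determine future pairings, can be sketched in a few lines. This is an illustrative toy, not the authors' exact formulation: the payoff values, the fixed strategies, and the per-round discounting step are assumptions made for the sketch.

```python
import random

STAG, HARE = 0, 1
# A Stag Hunt payoff table (assumed values): stag hunting pays only if
# both partners cooperate; hare hunting is safe but less rewarding.
PAYOFF = {(STAG, STAG): (3, 3), (STAG, HARE): (0, 2),
          (HARE, STAG): (2, 0), (HARE, HARE): (2, 2)}

def choose_partner(weights, me, rng):
    """Pick a partner for agent `me` with probability proportional to weight."""
    others = [j for j in range(len(weights)) if j != me]
    return rng.choices(others, weights=[weights[me][j] for j in others])[0]

def simulate(n_agents=6, rounds=2000, discount=1.0, seed=0):
    rng = random.Random(seed)
    # Uniform initial weights: interaction begins at random.
    weights = [[0.0 if i == j else 1.0 for j in range(n_agents)]
               for i in range(n_agents)]
    # Fixed strategies here; the paper also considers evolving strategies.
    strategy = [rng.choice([STAG, HARE]) for _ in range(n_agents)]
    for _ in range(rounds):
        for i in range(n_agents):
            j = choose_partner(weights, i, rng)
            pi, pj = PAYOFF[(strategy[i], strategy[j])]
            # Optional discounting of past reinforcement (discount < 1).
            for k in range(n_agents):
                weights[i][k] *= discount
                weights[j][k] *= discount
            # Payoff reinforcement: the realized interaction is strengthened.
            weights[i][j] += pi
            weights[j][i] += pj
    return weights, strategy
```

Running the sketch and inspecting the final weight matrix shows the qualitative effect the paper describes: each agent's weight mass concentrates on a few partners, so sparse pairing structure emerges from an initially uniform network.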
