Microeconomic theory, game theory, and mechanism design
Artificial intelligence (AI) changes social learning when aggregated outputs become training data for future predictions. To study this, we extend the DeGroot model by introducing an AI aggregator that trains on population beliefs and feeds synthesized signals back to agents. We define the learning gap as the deviation of long-run beliefs from the efficient benchmark, allowing us to capture how AI aggregation affects learning. Our main result identifies a threshold in the speed of updating: when the aggregator updates too quickly, there is no positive-measure set of training weights that robustly improves learning across a broad class of environments, whereas such weights exist when updating is sufficiently slow. We then compare global and local architectures. Local aggregators trained on proximate or topic-specific data robustly improve learning in all environments. Consequently, replacing specialized local aggregators with a single global aggregator worsens learning in at least one dimension of the state.
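To make the feedback loop concrete, here is a minimal simulation sketch, not the paper's model: it assumes a scalar state, a linear aggregator that retrains by exponentially smoothing the current average belief at speed gamma, and agents who mix an ordinary DeGroot step with the aggregator's signal at weight lam. All names and functional forms are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): n agents, true state theta,
# a row-stochastic trust matrix W, aggregator update speed gamma, and the
# weight lam that agents place on the AI's synthesized signal.
n, theta, T = 50, 1.0, 500
x = theta + rng.normal(0.0, 1.0, n)          # initial noisy beliefs
efficient = x.mean()                          # efficient benchmark: pooled signal
W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)
gamma, lam = 0.8, 0.3                         # fast aggregator, moderate reliance
a = x.mean()                                  # aggregator trained on current beliefs

for _ in range(T):
    a = (1 - gamma) * a + gamma * x.mean()    # aggregator retrains on beliefs
    x = (1 - lam) * (W @ x) + lam * a         # DeGroot step plus AI feedback

print("learning gap:", abs(x.mean() - efficient))
```

Sweeping gamma and lam then traces how the long-run consensus, and hence the learning gap, moves with the aggregator's update speed.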
We establish a variant of Monge--Kantorovich duality for a constrained optimal transport problem with a continuum of agents, a finite set of alternatives, and general linear constraints. As an application, we revisit the large-market model of indivisible goods in Azevedo et al. (2013), identify a flaw in the original equilibrium-existence proof stemming from an incorrect compactness claim, and recover equilibrium existence via our duality approach. We also characterize equilibrium prices as minimizers of a potential function, which yields a method for computing equilibrium prices.
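For orientation, the classical unconstrained Monge--Kantorovich duality that the paper's variant generalizes reads, for marginals $\mu,\nu$ and cost $c$,

$$
\inf_{\pi \in \Pi(\mu,\nu)} \int c \, d\pi
\;=\;
\sup_{\varphi(x) + \psi(y) \,\le\, c(x,y)} \left( \int \varphi \, d\mu + \int \psi \, d\nu \right),
$$

where $\Pi(\mu,\nu)$ is the set of couplings of $\mu$ and $\nu$; the paper's version replaces the marginal constraints with a continuum of agents, finitely many alternatives, and general linear constraints.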
We study how artificial intelligence (AI) affects firms' incentives to pursue incremental versus radical knowledge recombinations. We develop a model of recombinant innovation embedded in a Schumpeterian quality-ladder framework, in which innovation arises from recombining ideas across varying distances in a knowledge space. R&D consists of multiple tasks, a fraction of which can be performed by AI. AI facilitates access to distant knowledge domains, but it also raises the aggregate rate of creative destruction, shortening the monopoly duration that rewards radical innovations. Moreover, excessive reliance on AI may reduce the originality of research and lead to duplication of research effort. We obtain three main results. First, higher AI productivity encourages more distant recombinations if the direct facilitation effect outweighs the indirect effect of intensified competition from rivals. Second, the effect of increasing the share of AI-automated R&D tasks is non-monotonic: firms initially target more radical innovations, but beyond a threshold of human-AI complementarity they shift toward incremental innovations. Third, in the limiting case of full automation, the model predicts that the optimal recombination distance collapses to zero, suggesting that fully AI-driven research would undermine the very knowledge creation it seeks to accelerate.
When does consulting one information source raise the value of another, and when does it diminish it? We study this question for Bayesian decision-makers choosing among finitely many actions. The interaction decomposes into two opposing forces: a complement force, measuring how one source moves beliefs to where the other becomes more useful, and a substitute force, measuring how much the current decision is resolved. Their balance obeys a localization principle: substitution requires an observation to cross a decision boundary, though crossing alone does not guarantee it. Whenever posteriors remain inside the current decision region, the substitute force vanishes and sources are guaranteed to complement each other, even when one source cannot, on its own, change the decision. The results hold for arbitrarily correlated sources and are formalized in Lean 4. Substitution is confined to the thin boundaries where decisions change. Everywhere else, information cooperates. Code and proofs: https://github.com/nidhishs/all-substitution-is-local.
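A minimal two-state illustration of the two forces (the setup is assumed, not the paper's general model): theta in {0,1}, payoff 1 iff the action matches the state, and conditionally independent binary signals with accuracies qA and qB.

```python
from itertools import product

# Assumed toy setup: two states, two actions, conditionally independent
# binary signals A and B with accuracies qA and qB.
PRIOR = 0.75          # P(theta = 1); try 0.5 to see substitution instead
qA, qB = 0.65, 0.8

def lik(sig, th, q):  # P(signal = sig | theta = th)
    return q if sig == th else 1 - q

def expected_payoff(accuracies):
    """Expected payoff from acting optimally on the listed signals."""
    total = 0.0
    for realization in product((0, 1), repeat=len(accuracies)):
        w = [1 - PRIOR, PRIOR]                      # joint P(theta, signals)
        for sig, q in zip(realization, accuracies):
            w = [w[th] * lik(sig, th, q) for th in (0, 1)]
        total += max(w)                             # act on the likelier state
    return total

v0, vA, vB, vAB = (expected_payoff(s) for s in ([], [qA], [qB], [qA, qB]))
print("value of B alone:  ", vB - v0)
print("value of B after A:", vAB - vA)
```

With PRIOR = 0.75 neither realization of A crosses the decision boundary, and B's marginal value rises from 0.05 to 0.0775 (complements, even though A alone never changes the decision); with PRIOR = 0.5, A's signal flips the decision and B's marginal value falls from 0.30 to 0.15 (substitutes), in line with the localization principle.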
In this paper, we study aggregation rules with nontrivial symmetric classes of invariant sets (restricted domains), assuming that these domains, unlike others, have a logical nature. In the simplest case, we provide a complete classification of such rules. Our primary tools are methods of universal algebra and the theory of closed classes of discrete functions.
This paper studies the economic role of persistent dispersion in allocations across agents. We develop a tractable model in which firms allocate resources under imperfect information and behavioral updating, generating sustained heterogeneity in beliefs and actions. While dispersion induces static misallocation, it also fosters decentralized experimentation, allowing the economy to explore a broader set of productive opportunities. We show that the economy converges to a stationary equilibrium with strictly positive dispersion and that, under plausible conditions, such disequilibrium can dominate the perfectly coordinated benchmark. The model provides a novel interpretation of observed dispersion in productivity and returns as reflecting both inefficiency and productive exploration. It also yields testable predictions linking dispersion to growth and innovation dynamics.
This paper examines optimal contracts in a two-dimensional screening model in which one dimension (group identity) is verifiable by agents but not falsifiable. A principal offers contracts to agents who differ in cost type and group membership. Motivated by the Work Opportunity Tax Credit, a United States federal policy, the principal receives tax benefits for hiring agents from protected groups. Under the assumption that protected agents tend to have higher cost types, the optimal contract induces full separation across both dimensions: agents reveal both their cost type and their group identity through contract choice. Furthermore, the principal is willing to hire protected agents up to a higher cost threshold than non-protected agents, and this threshold increases with the tax credit. Conversely, when protected agents tend to have lower cost types, the optimal design without tax credits pools groups while separating by cost type. These results demonstrate that both affirmative action and non-discrimination can be optimal, depending on the ordering of cost distributions across groups.
This paper studies how uncertainty about problem difficulty shapes problem-solving strategies. I develop a dynamic model in which an agent solves a problem by brainstorming approaches of unknown quality and allocating a fixed effort budget among them. Success arrives from spending effort on good approaches, at a rate determined by the unknown problem difficulty. The agent balances costly exploration (expanding the set of approaches) against exploitation (pursuing existing approaches). Failures could signal either a bad idea or a hard problem, and this uncertainty generates novel dynamics: optimal search alternates between trying new approaches and revisiting previously abandoned ones. I then examine a principal-agent environment in which moral hazard arises on the intensive margin: how the agent explores. Under dynamic commitment, contracts frontload incentives, a force that learning can counteract. The framework captures scientific discovery, product development, and other creative work, providing insights into innovation and organizational design.
Industrial policy has returned to the centre of economic governance, particularly in high-tech sectors where positive network externalities in demand make market dominance self-reinforcing. This paper studies the welfare effects of an industrial policy targeting a sector with network externalities in a two-country model with strategic trade and R&D investment. We show how the welfare consequences of this policy are determined by the interaction between the strength of the externality, the type of R&D, and the degree of product differentiation between the home and the imported goods. When externalities are weak or the goods are close substitutes, the business-stealing effect produces a race to the bottom that dissipates more surplus than it creates. Under sufficiently strong externalities and weak substitutability or complementarity of the goods, industrial policy competition can make both countries simultaneously better off relative to the laissez-faire outcome because of the mutual business-enhancement effect. The case is stronger for product innovation than for process innovation, as the former directly affects demand and triggers stronger network effects than the latter, which operates indirectly through supply. Thus, network externalities create an opportunity for win-win industrial policies, but their realisation depends on the market structure and the nature of innovation.
Human decision makers increasingly delegate choices to AI agents, raising a natural question: does the AI implement the human principal's preferences or pursue its own? To study this question using revealed preference techniques, I introduce the Luce Alignment Model, where the AI's choices are a mixture of two Luce rules, one reflecting the human's preferences and the other the AI's. I show that the AI's alignment (similarity of human and AI preferences) can be generically identified in two settings: the laboratory setting, where both human and AI choices are observed, and the field setting, where only AI choices are observed.
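In a mixture of two Luce rules, choice probabilities take the following form (notation ours, for illustration only):

$$
P(a \mid A) \;=\; \lambda\, \frac{u(a)}{\sum_{b \in A} u(b)} \;+\; (1-\lambda)\, \frac{v(a)}{\sum_{b \in A} v(b)},
$$

where $u$ and $v$ are Luce weights representing the human's and the AI's preferences and $\lambda \in [0,1]$ is the mixing weight; alignment then concerns how similar $u$ and $v$ are, which is what the identification results recover from observed choice frequencies.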
In many applications of cooperative game theory -- from corporate governance and cartel formation to parliamentary voting -- not all winning coalitions are feasible. Ideological distances, institutional constraints, or pre-electoral agreements may render certain coalitions implausible. Classical power indices ignore this and weight all winning coalitions equally. We introduce cohesion structures to quantify coalition feasibility and axiomatically characterize two families of cohesion-sensitive power indices, represented as expected marginal contributions under Luce-type distributions. In the Banzhaf branch, coalition weights are a power transformation of cohesion; in the Shapley branch, additional axioms separate size from cohesion, recovering the classical size weights with cohesion acting within each size class. All results have been mechanically verified in Lean 4 with Mathlib. We illustrate the framework on the German Bundestag and the French Assemblée Nationale, where cordon sanitaire and double cordon scenarios produce sharp, interpretable power shifts.
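A minimal sketch of the Banzhaf branch (the cohesion function, the exponent, and the toy game below are illustrative assumptions, not the paper's axiomatized objects): coalitions are drawn with probability proportional to cohesion(S)**alpha, and a player's power is her expected marginal contribution.

```python
from itertools import combinations

# Toy weighted voting game and a distance-based cohesion function.
weights, quota = [4, 3, 2, 1], 6
positions = [0.1, 0.4, 0.6, 0.9]           # 1-D ideology, drives cohesion
alpha = 2.0                                 # power transformation of cohesion
n = len(weights)

def wins(S):
    return sum(weights[i] for i in S) >= quota

def cohesion(S):
    if len(S) < 2:
        return 1.0
    spread = max(positions[i] for i in S) - min(positions[i] for i in S)
    return 1.0 - spread                     # closer members = more cohesive

def power(i):
    """Expected marginal contribution of i under Luce weights cohesion**alpha."""
    others = [j for j in range(n) if j != i]
    num = den = 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            p = cohesion(set(S) | {i}) ** alpha
            num += p * (wins(set(S) | {i}) - wins(set(S)))
            den += p
    return num / den

print([round(power(i), 3) for i in range(n)])
```

Setting cohesion identically to 1 collapses the Luce distribution to uniform and recovers the classical Banzhaf index.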
We develop an axiomatic framework to evaluate income distributions from the perspective of an opportunity-egalitarian social planner. Building on a formal link with the literature on decision theory under ambiguity, we characterize a class of opportunity-sensitive social welfare functions based on a two-stage evaluation: the planner first computes the expected utility of income within each social type, where types consist of individuals sharing the same circumstances beyond their control, and then aggregates these type-specific welfare levels through a transformation reflecting aversion to inequality of opportunity. The evaluation is governed by a single parameter. We provide equivalent representations of the social welfare function, including a mean-divergence form that separates an efficiency term from an inequality term, and we establish an opportunity stochastic dominance criterion. Finally, we derive inequality measures that decompose overall inequality into within-group risk and between-group inequality of opportunity, providing a tractable basis for normative welfare analysis.
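Schematically, the two-stage evaluation can be written as (notation assumed here; the paper's exact parameterization may differ):

$$
W \;=\; \phi^{-1}\!\left( \sum_{t} q_t\, \phi\!\left( \mathbb{E}\left[ u(y) \mid t \right] \right) \right),
\qquad \phi(x) = \frac{x^{1-\theta}}{1-\theta},
$$

where $t$ ranges over social types with population shares $q_t$, the inner expectation is the expected utility of income within a type, and the single parameter $\theta$ indexes aversion to inequality of opportunity, with $\theta = 0$ reducing the criterion to average expected utility.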
We develop a Euclidean path-integral control approach to characterize optimal firm behavior in an economy governed by Walrasian equilibrium, Pareto efficiency, and non-cooperative Markovian feedback Nash equilibrium. The approach recasts the problem as a Lagrangian stochastic control system with forward-looking dynamics, thereby avoiding the explicit construction of a value function. Instead, optimal policies are obtained from a continuously differentiable Ito process generated through integrating factors, which yields a tractable alternative to conventional solution methods for complex market environments. This construction is useful in settings with nonlinear stochastic differential equations where standard Hamilton-Jacobi-Bellman (HJB) formulations are difficult to implement. Consistent with Feynman-Kac-type representations, the resulting solutions need not be unique. In economies with a large number of firms, the analysis admits a natural comparison with mean-field game formulations. Our main contribution is to derive a noncooperative feedback Nash equilibrium within this path-integral setting and to contrast it with outcomes implied by mean-field interactions. Several examples illustrate the method's applicability and highlight differences relative to solutions based on the Pontryagin maximum principle and on HJB formulations.
In many institutional settings, $k$ items are selected with the goal of representing the underlying distribution of claims, opinions, or characteristics in a large population. We study environments with two adversarial parties whose preferences over the selected items are commonly known and opposed. We propose the Quantile Mechanism: one party partitions the population into $k$ disjoint subsets, and the other selects one item from each subset. We show that this procedure is optimally representative among all feasible mechanisms, and illustrate its use in jury selection, multi-district litigation, and committee formation.
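A one-dimensional toy run shows why equal-mass (quantile) blocks are the natural partition (the claim distribution, the parties' preferences, and the tie-breaking are illustrative assumptions):

```python
import numpy as np

# Assumed setup: claims live on [0, 1]; the divider prefers low values,
# the adversarial chooser prefers high values.
rng = np.random.default_rng(1)
population = np.sort(rng.beta(2.0, 5.0, 10_000))   # skewed claim distribution
k = 5

# Divider splits the population into k equal-mass (quantile) blocks.
blocks = np.array_split(population, k)

# The chooser takes her favorite (largest) item from each block.
selected = [block.max() for block in blocks]
print("selected items: ", np.round(selected, 3))
print("k-quantile grid:", np.round(np.quantile(population, (np.arange(k) + 1) / k), 3))
```

Because each block carries equal mass, the chooser's favorite item in each block sits near a distinct quantile, so the selected set tracks the population distribution however adversarially the chooser behaves.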
In the Gale-Shapley model of two-sided matching, it is well known that for generic preferences, the outcomes for each side can vary dramatically in the male-optimal vs. female-optimal stable matchings. In this paper, we show that under a widely used characterization of similarity in rankings, even a weak correlation in preferences guarantees assortative matching with high probability as the market size tends to infinity. It follows that the men's average ranking of women and the women's average ranking of men are asymptotically equivalent in all stable matchings with high probability, as long as the market imbalance is not too extreme.
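A simulation sketch under one standard way of inducing correlated rankings (assumed here; the paper's similarity characterization may differ): each agent's utility is beta times a common quality plus (1 - beta) times an idiosyncratic taste.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 200, 0.3                             # market size, preference correlation
u_men = beta * rng.random(n)[None, :] + (1 - beta) * rng.random((n, n))
u_wom = beta * rng.random(n)[None, :] + (1 - beta) * rng.random((n, n))

def deferred_acceptance(u_prop, u_recv):
    """Proposer-optimal DA; returns an array mapping proposer -> receiver."""
    prefs = np.argsort(-u_prop, axis=1)        # each proposer's ranking
    nxt = np.zeros(n, dtype=int)               # next receiver to propose to
    held = -np.ones(n, dtype=int)              # receiver -> proposer currently held
    free = list(range(n))
    while free:
        p = free.pop()
        r = prefs[p, nxt[p]]; nxt[p] += 1
        if held[r] < 0:
            held[r] = p
        elif u_recv[r, p] > u_recv[r, held[r]]:
            free.append(held[r]); held[r] = p
        else:
            free.append(p)
    match = np.empty(n, dtype=int)
    match[held] = np.arange(n)                 # invert receiver -> proposer
    return match

men_opt = deferred_acceptance(u_men, u_wom)    # man -> woman
wom_opt = deferred_acceptance(u_wom, u_men)    # woman -> man
wopt_partner = np.empty(n, dtype=int)
wopt_partner[wom_opt] = np.arange(n)           # man -> woman, women-proposing

rank = np.argsort(np.argsort(-u_men, axis=1), axis=1) + 1   # 1 = favorite
print("men's avg partner rank, men-optimal:  ", rank[np.arange(n), men_opt].mean())
print("men's avg partner rank, women-optimal:", rank[np.arange(n), wopt_partner].mean())
```

Raising beta aligns rankings further; the result above says that even a modest beta makes the two averages converge as n grows.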
This paper analyzes the macroeconomic consequences of military spending and militarization within a dynamic growth framework. Building on a Keynesian goods-market model, we examine how the allocation of government expenditure between civilian and military sectors affects capital accumulation and technological progress. Military spending generates opposing effects: it stimulates aggregate demand and may support innovation through defense-related research, but it also crowds out civilian investment and creates structural rigidities. We formalize these mechanisms in a stylized endogenous-growth model in which productivity depends on the degree of militarization, producing a non-linear relationship between the military burden and long-run growth. Calibrated simulations show that moderate levels of military spending can temporarily support growth, whereas excessive militarization reduces long-run development. We further illustrate the asymmetric growth costs of conflict using a simple two-country war simulation between an advanced economy and a sanctioned middle-income economy.
Recent advances in generative AI systems have dramatically reduced the cost of digital production, fueling narratives that widespread participation in software creation will yield a proliferation of viable companies. This paper challenges that assumption. We introduce the Builder Saturation Effect, formalizing a model in which production scales elastically but human attention remains finite. In markets with near-zero marginal costs and free entry, increases in the number of producers dilute average attention and returns per producer, even as total output expands. Extending the framework to incorporate quality heterogeneity and reinforcement dynamics, we show that equilibrium outcomes exhibit declining average payoffs and increasing concentration, consistent with power-law-like distributions. These results suggest that AI-enabled, democratised production is more likely to intensify competition and produce winner-take-most outcomes than to generate broadly distributed entrepreneurial success. Contribution type: This paper is primarily a work of synthesis and applied formalisation. The individual theoretical ingredients - attention scarcity, free-entry dilution, superstar effects, preferential attachment - are well established in their respective literatures. The contribution is to combine them into a unified framework and direct the resulting predictions at a specific contemporary claim about AI-enabled entrepreneurship.
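A bare-bones free-entry version of the dilution logic (our notation, purely illustrative): with a fixed stock of attention $A$ spread over $N$ symmetric producers, monetized at rate $m$ against an entry cost $c$, per-producer profit is

$$
\pi(N) \;=\; m\,\frac{A}{N} - c, \qquad \pi(N^*) = 0 \;\Longrightarrow\; N^* = \frac{mA}{c},
$$

so cheaper production (lower $c$) raises the equilibrium number of builders while attention per builder, $A/N^* = c/m$, falls in step: total output expands, average returns do not.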
We study the distribution of envy in random matching markets under the Deferred Acceptance (DA) algorithm. Using tools from applied probability, we compute the expected number of proposing agents whom nobody envies and those who envy nobody. We obtain an exact finite-market expression for the former, based on a connection with the coupon collector problem, and asymptotic bounds for the latter. To put these quantities into perspective, we compare them to their counterparts under Random Serial Dictatorship (RSD): while RSD assigns a constant fraction of agents to their top choice, both DA and RSD leave exactly $H_n$ proposing agents unenvied in expectation. Our results show that these clearly unimprovable proposing agents constitute a vanishing fraction of the market.
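Here $H_n$ denotes the $n$-th harmonic number, so the count of such agents grows only logarithmically:

$$
H_n \;=\; \sum_{k=1}^{n} \frac{1}{k} \;=\; \ln n + \gamma + o(1), \qquad \frac{H_n}{n} \to 0,
$$

which is the precise sense in which they form a vanishing fraction of the market.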
We study the existence of stable matchings when agents have choice correspondences instead of preference relations. We extend the framework of Chambers and Yenmez (2017) by weakening the Path Independence assumption. For many-to-many markets, we show that stable matchings exist when choice correspondences satisfy Substitutability and a new General Acyclicity condition. We provide a constructive proof using a Grow or Discard Algorithm that iteratively expands or eliminates contracts until a strongly maximal Individually Rational set is reached. We provide an algorithm to obtain stable matchings in which rejected contracts are not permanently discarded, distinguishing our approach significantly from standard DAA-type algorithms. For one-to-one markets, we introduce a replacement-based notion of stability and provide an algorithm that constructs stable matchings when choice correspondences satisfy Binary Acyclicity, a property weaker than Path Independence.
JEL classification: C62, C78, D01, D47.
Keywords: choice correspondences, substitutability, general acyclicity, many-to-many matching, matching with contracts, Grow or Discard algorithm, replacement stability, binary acyclicity.
This paper models firm-to-firm trade in a production network as a set of double auctions. Firms have multilateral market power: they can affect prices in both input and output markets. The size and division of surplus are endogenous and depend only on technology, network position, and consumer preferences. The standard simplifying assumption of price-taking on input markets (unilateral market power) has systematic effects: it underestimates the final price and overestimates the surplus going upstream. These biases carry over to the model's predictions for the welfare impact of mergers.