When AI Improves Answers but Slows Knowledge Creation: Matching and Dynamic Knowledge Creation in Digital Public Goods

Keh-Kuan Sun

Abstract

Generative AI helps users solve problems more efficiently, but without leaving a public trace. Fewer discussions and solutions reach public platforms, and the archives that future problem-solvers depend on can shrink. We build a dynamic model of public good provision where agents contribute by solving problems that other agents posted on a public platform, and the accumulated solutions form a depreciating public archive. AI reduces archive creation through two margins that require different instruments. The flow margin: the posted volume of knowledge-enhancing queries declines as AI resolves more problems privately before they reach the platform. The resolution margin: the probability that posted queries are resolved declines as AI raises contributors' outside options, thinning the contributor pool and creating congestion on the platform. The two margins interact through a self-undermining feedback that can generate low-archive traps. The decomposition yields a diagnostic prediction: in the congested regime, a joint decline in posted volume and conditional resolution requires that supply-side pool thinning is quantitatively present, whereas volume decline with stable or rising resolution indicates that private diversion alone is the dominant force. Encouraging public sharing of AI-assisted solutions offsets the decline associated with private diversion but cannot repair participation-driven deterioration in conditional resolution, which requires maintaining contributor engagement directly.

Paper Structure

This paper contains 39 sections, 7 theorems, 50 equations, 2 figures, 2 tables.

Key Result

Lemma 1

Fix $K$. The expected knowledge increment is increasing in the ratio $q_H^{e}/q_L^{e}$. Therefore $c^{*,\mathrm{AI}}(K)\ge c^{*,\mathrm{HO}}(K)$ if and only if the posted pool is weakly more $H$-rich under AI. A sufficient condition, holding escalation probabilities fixed ($m_\theta^{\mathrm{AI}}=m_\theta^{\mathrm{HO}}$ for both $\theta$), is that AI disproportionately resolves routine queries.

Figures (2)

  • Figure 1: Average knowledge creation rate under the human-only economy ($\phi^{\mathrm{HO}}$, solid) and the AI economy ($\phi^{\mathrm{AI}}$, dashed) for the parametric example in Appendix \ref{app:parametric}, with depreciation rate $\lambda=0.15$ (dotted). AI compresses the stable steady state inward ($K_H^{\mathrm{AI}}\approx 1.55$ vs. $K^{\mathrm{HO}}\approx 2.64$) while the structural minimum viable archive is approximately environment-independent ($K_U^{\mathrm{AI}}\approx 0.15\approx K_U^{\mathrm{HO}}$).
  • Figure 2: Effect of conversion rate $\eta$ on the average knowledge creation rate $\phi^{\mathrm{AI}}(K;\eta)$ for the parametric example in Appendix \ref{app:parametric}. Solid: no conversion ($\eta=0$). Dashed: moderate conversion ($\eta'=0.25<\bar{\eta}$). Dash-dotted: high conversion ($\eta''=0.77>\bar{\eta}\approx 0.51$), which eliminates the low-archive basin.
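The steady-state structure in the figures (a stable high-archive state, an unstable minimum viable archive, and a low-archive basin) can be illustrated numerically. The sketch below is a minimal stand-in: it assumes a stylized S-shaped creation rate $\phi(K)=A K^2/(1+K^2)$, which is NOT the paper's parametric example from Appendix \ref{app:parametric}, and locates the interior steady states of $\dot K = \phi(K) - \lambda K$ with $\lambda=0.15$ as in Figure 1. The resulting steady-state values are artifacts of this hypothetical $\phi$, not the paper's calibration.

```python
import numpy as np

def phi(K, A=0.5):
    # Stylized S-shaped knowledge-creation rate (illustrative only;
    # a hypothetical stand-in for the paper's parametric example).
    return A * K**2 / (1.0 + K**2)

def steady_states(lam=0.15, grid=np.linspace(1e-6, 10.0, 10_000)):
    """Interior steady states of dK/dt = phi(K) - lam*K.

    Scans a grid for sign changes of the net creation rate, then
    refines each bracketed root by bisection.
    """
    f = phi(grid) - lam * grid
    roots = []
    for i in np.flatnonzero(np.sign(f[:-1]) != np.sign(f[1:])):
        lo, hi = grid[i], grid[i + 1]
        for _ in range(60):  # bisection: halve the bracket each step
            mid = 0.5 * (lo + hi)
            if (phi(lo) - lam * lo) * (phi(mid) - lam * mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

# With A = 0.5 and lam = 0.15 the interior steady states solve
# lam*K**2 - A*K + lam = 0, giving K = 1/3 (unstable, the minimum
# viable archive) and K = 3 (stable). Everything below 1/3 decays
# toward the empty archive: the low-archive trap.
```

Raising the creation rate (e.g., via the conversion parameter $\eta$ in Figure 2) lifts $\phi$ until the lower crossing disappears, which is the sense in which high conversion eliminates the low-archive basin.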

Theorems & Definitions (9)

  • Lemma 1: Composition of the posted pool and the answering cutoff
  • Proposition 1: Resolution: composition vs. congestion
  • Remark 1: Discriminating prediction: supply vs. demand shocks
  • Proposition 2: Dynamic crowd-out and lower archive steady states
  • Corollary 1: Two-margin decomposition of crowd-out
  • Proposition 3: Self-undermining feedback and multiple steady states
  • Proposition 4: Conversion and expansion of the viable region
  • Lemma 2: Sufficient condition for inner-loop uniqueness
  • Proof: Proof sketch