
Using Games to Learn How Large Language Models Work

Allison Chen, Isabella Pu

Abstract

While artificial intelligence (AI) technology is becoming increasingly popular, its underlying mechanisms tend to remain opaque to most people. To address this gap, the field of AI literacy aims to develop various resources to teach people how AI systems function. Here we contribute to this line of work by proposing two games that demonstrate principles behind how large language models (LLMs) work and use data. The first game, Learn Like an LLM, aims to convey that LLMs are trained to predict sequences of text based on a particular dataset. The second game, Tag-Team Text Generation, focuses on teaching that LLMs generate text one word at a time, using both predicted probabilities of the data and randomness. While the games proposed are still in early stages and would benefit greatly from further discussion, we hope they can contribute to using game-based learning to teach about complex AI systems like LLMs.
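The abstract's core mechanism, that an LLM generates text one word at a time by combining predicted probabilities with randomness, can be sketched in a few lines. This is a minimal illustration, not the games' implementation; the toy distribution for the words following "I see a" is invented for the example.

```python
import random

# Hypothetical predicted probabilities for the word following "I see a".
next_word_probs = {"ball": 0.5, "dog": 0.3, "tree": 0.15, "and": 0.05}

def sample_next_word(probs, rng=random):
    """Pick one word at random, weighted by its predicted probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Sampling repeatedly shows that high-probability words appear more often,
# but low-probability words remain possible -- the role of randomness.
samples = [sample_next_word(next_word_probs) for _ in range(1000)]
```

Because each call draws from the distribution rather than always taking the most probable word, repeated generations from the same prompt can differ, which is the behavior both games aim to make tangible.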

Paper Structure

This paper contains 7 sections and 3 figures.

Figures (3)

  • Figure 1: Two example interfaces of Game 1 (Learn Like an LLM). Top: This is the interface when the player starts. The set of possible shapes is always at the top of the screen. Once the player selects at least 4 shapes, the Submit button becomes clickable. Bottom: A possible interface after the player submits one sequence and is constructing the second. The submitted sequence is on the left side of the screen. The points earned from the first sequence are to the left of the sequence, the color of the points denotes whether the shape was in the hidden set, and the validity of each shape is denoted by the checkmarks and X's under each shape.
  • Figure 2: Possible mapping of shapes to words for Learn Like an LLM. Based on this mapping, the previously submitted sequence in Figure 1 (bottom) maps to "I see a and" and the sequence in progress maps to "I see a ball".
  • Figure 3: Interface for Tag-Team Text Generation. Top: Interface when it is the player's turn to make the final word selection from a set of 5 words with their estimated probabilities. Here they select one. Bottom: Interface when it is the player's turn to generate a set of probable words. In this example, the player opted to select three from a pool of 10, rather than submitting their own. The computer will select one of the three selected words to add to the response.
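The turn structure in Figure 3 can be sketched as a two-step process: one side narrows the candidates to a small probable pool, and the other side makes the final pick. The sketch below is a loose analogue of that flow, not the game's actual code; the word probabilities and the pool size are invented for illustration.

```python
import random

def top_candidates(probs, k=3):
    """Narrow to the k most probable words (the 'select a set' turn)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def final_pick(candidates, rng=random):
    """The other side picks one word from the pool, weighted by probability."""
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical estimated probabilities for the next word.
probs = {"ball": 0.4, "dog": 0.25, "tree": 0.15, "car": 0.1, "and": 0.1}
pool = top_candidates(probs, k=3)  # e.g. the player keeps 3 of the candidates
word = final_pick(pool)            # e.g. the computer adds one to the response
```

Alternating which side narrows and which side picks, as the game does, lets players experience both halves of probabilistic word-by-word generation.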