Sonnet or Not, Bot? Poetry Evaluation for Large Models and Datasets

Melanie Walsh, Anna Preus, Maria Antoniak

TL;DR

We develop a task to evaluate how well LLMs recognize one aspect of English-language poetry--poetic form--which captures many different poetic features, including rhyme scheme, meter, and word or line repetition.

Abstract

Large language models (LLMs) can now generate and recognize poetry. But what do LLMs really know about poetry? We develop a task to evaluate how well LLMs recognize one aspect of English-language poetry--poetic form--which captures many different poetic features, including rhyme scheme, meter, and word or line repetition. By using a benchmark dataset of over 4.1k human expert-annotated poems, we show that state-of-the-art LLMs can successfully identify both common and uncommon fixed poetic forms--such as sonnets, sestinas, and pantoums--with surprisingly high accuracy. However, performance varies significantly by poetic form; the models struggle to identify unfixed poetic forms, especially those based on topic or visual features. We additionally measure how many poems from our benchmark dataset are present in popular pretraining datasets or memorized by GPT-4, finding that pretraining presence and memorization may improve performance on this task, but results are inconclusive. We release a benchmark evaluation dataset with 1.4k public domain poems and form annotations, results of memorization experiments and data audits, and code.
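The core task can be pictured as a simple prompt-and-score loop: ask a model to name a poem's form and compare its answer against the expert annotation. The sketch below is an illustrative assumption, not the authors' actual harness; the form list, prompt wording, and function names (`build_prompt`, `accuracy`) are hypothetical.

```python
# Hypothetical sketch of the poetic-form identification task.
# The prompt wording and form list are illustrative assumptions.

FORMS = ["sonnet", "sestina", "pantoum", "villanelle", "haiku", "ballad"]

def build_prompt(poem_text: str) -> str:
    """Zero-shot prompt asking an LLM to name the poem's fixed form."""
    return (
        "What poetic form is the following poem? "
        f"Answer with one of: {', '.join(FORMS)}.\n\n{poem_text}"
    )

def accuracy(predictions, gold_labels):
    """Fraction of poems whose predicted form matches the expert annotation."""
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)
```

The same scoring function works whether the prompt contains the full poem text or only metadata (author/title, first line, last line), which is how the paper probes what the model is actually relying on.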

Paper Structure

This paper contains 71 sections, 11 figures, and 12 tables.

Figures (11)

  • Figure 1: We develop a task to evaluate how well LLMs can identify poetic form for more than 20 poetic forms and formal elements in the English language. This is a challenging task because poetic form is determined by a combination of factors: rhyme scheme, meter, repetition, number of lines, and/or subject matter.
  • Figure 2: The proportion of all poems for a given form that were detected (at least one line) in the source data for Dolma. We include only the most frequent forms. Poems can appear in multiple sources and belong to multiple forms. The Common Crawl dataset dominates, and some sources, like Project Gutenberg, contain significant percentages of only certain forms, like ballads and couplets.
  • Figure 3: The proportions of lines detected in Dolma per poem (only those with at least one line detected). If at least one line from a poem is detected, it is likely that all the lines will be detected somewhere in Dolma.
  • Figure 4: Poetic form classification results (F1 scores) for fixed forms when prompted with only the text of the poem; only the author and title; only the first line; only the last line. Error bars indicate standard deviation across 20 bootstrapped samples of poems. See the appendix figures for unfixed forms and formal elements results.
  • Figure 5: Unfixed Forms — Poetry Foundation and Academy of American Poets. These figures show LLM performance (F1 scores) on detecting poetic form (as annotated by the institution each poem was collected from) by prompt type: with only the text of the poem; only the author and title; only the first line; only the last line. Error bars indicate standard deviation across 20 bootstrapped samples of poems.
  • ...and 6 more figures
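Several figures report standard deviation across 20 bootstrapped samples of poems. A minimal sketch of that procedure, assuming a per-form one-vs-rest F1 and resampling with replacement (the helper names `f1` and `bootstrap_f1` are hypothetical, not from the paper's released code):

```python
# Illustrative sketch: per-form F1 with bootstrapped error bars.
# Resample the evaluation set with replacement and report the standard
# deviation of F1 across resamples, as the figure captions describe.
import random

def f1(preds, golds, form):
    """One-vs-rest F1 for a single poetic form."""
    tp = sum(p == form and g == form for p, g in zip(preds, golds))
    fp = sum(p == form and g != form for p, g in zip(preds, golds))
    fn = sum(p != form and g == form for p, g in zip(preds, golds))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def bootstrap_f1(preds, golds, form, n_samples=20, seed=0):
    """Mean and standard deviation of F1 over bootstrap resamples."""
    rng = random.Random(seed)
    n = len(golds)
    scores = []
    for _ in range(n_samples):
        sample = [rng.randrange(n) for _ in range(n)]
        scores.append(f1([preds[i] for i in sample],
                         [golds[i] for i in sample], form))
    mean = sum(scores) / n_samples
    std = (sum((s - mean) ** 2 for s in scores) / n_samples) ** 0.5
    return mean, std
```

Reporting the spread across resamples, rather than a single point estimate, matters here because some forms (e.g. pantoums) have few examples, so their F1 scores are noisy.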