
Learning to Correct for QA Reasoning with Black-box LLMs

Jaehyung Kim, Dongyoung Kim, Yiming Yang

TL;DR

CoBB trains an adaptation model to perform a seq2seq mapping from the often-imperfect reasonings of a black-box LLM to correct or improved reasonings, and significantly improves reasoning accuracy across various QA benchmarks compared to the best-performing adaptation baselines.

Abstract

An open challenge in machine learning is how to improve the reasoning capability of large language models (LLMs) in a black-box setting, i.e., without access to detailed information such as output token probabilities. Existing approaches either rely on such access (which is often unrealistic) or incur significantly increased train- and inference-time costs. This paper addresses these limitations by proposing a novel approach, CoBB (Correct for improving QA reasoning of Black-Box LLMs). It uses a trained adaptation model to perform a seq2seq mapping from the often-imperfect reasonings of the original black-box LLM to correct or improved reasonings. Specifically, the adaptation model is initialized with a relatively small open-source LLM and adapted over a collection of sub-sampled training pairs. To select representative pairs of correct and incorrect reasonings, we formulate the dataset construction as an optimization problem that minimizes the statistical divergence between the sampled subset and the entire collection, and solve it via a genetic algorithm. We then train the adaptation model over the sampled pairs by contrasting the likelihoods of correct and incorrect reasonings. Our experimental results demonstrate that CoBB significantly improves reasoning accuracy across various QA benchmarks, compared to the best-performing adaptation baselines.
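The subset-selection step described in the abstract can be sketched as a small genetic algorithm. The sketch below is illustrative, not the paper's implementation: it assumes each (correct, incorrect) reasoning pair is represented by a feature vector, approximates the statistical divergence by the distance between mean feature vectors (an assumed proxy), and evolves candidate subsets through selection, crossover, and mutation.

```python
import numpy as np

def divergence(subset_feats, full_feats):
    # Proxy for the divergence between the sampled subset and the full
    # collection: distance between mean feature vectors (an assumption;
    # the paper's exact divergence measure may differ).
    return float(np.linalg.norm(subset_feats.mean(axis=0) - full_feats.mean(axis=0)))

def genetic_subset_select(feats, k, pop_size=30, generations=60,
                          mut_rate=0.1, seed=0):
    """Pick k pair indices whose feature distribution stays close to the
    whole collection, via a simple genetic algorithm."""
    rng = np.random.default_rng(seed)
    n = len(feats)
    # Each individual is a candidate subset: k distinct indices.
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [-divergence(feats[ind], feats) for ind in pop]
        order = np.argsort(fitness)[::-1]                      # best first
        survivors = [pop[i] for i in order[: pop_size // 2]]   # selection
        children = list(survivors)                             # elitism
        while len(children) < pop_size:
            a, b = rng.choice(len(survivors), size=2, replace=False)
            pool = np.union1d(survivors[a], survivors[b])      # crossover
            child = rng.choice(pool, size=k, replace=False)
            for i in range(k):                                 # mutation
                if rng.random() < mut_rate:
                    outside = np.setdiff1d(np.arange(n), child)
                    child[i] = rng.choice(outside)
            children.append(child)
        pop = children
    return min(pop, key=lambda ind: divergence(feats[ind], feats))
```

Any set-valued fitness function can be dropped into `divergence`; the genetic search only needs a scalar score per candidate subset.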

Paper Structure

This paper contains 24 sections, 8 equations, 9 figures, 19 tables, and 2 algorithms.

Figures (9)

  • Figure 1: Different black-box LLM adaptation methods. (a) a model relying on the availability of output token probabilities; (b) a model with increased train- and inference-time costs; (c) CoBB (proposed), which requires no output probabilities and is cost-efficient.
  • Figure 2: An overview of CoBB. CoBB first collects multiple reasonings from the black-box LLM and labels them by correctness. Among all possible pairs of correct (positive) and incorrect (negative) reasonings, CoBB subsamples a few pairs that preserve the characteristics of the entire set. Then the adaptation model, initialized with an open-source LLM, is trained to increase/decrease the likelihood of positive/negative reasonings.
  • Figure 3: Effect of contrasting likelihoods. Change in the likelihood under $\pi_{\theta}$ for positive and negative reasonings in the training dataset (a) without / (b) with the contrastive training objective (Eq. \ref{eq:orpo}) on ScienceQA. (c) Test accuracy of the adapted reasonings of gpt-3.5-turbo on ScienceQA with varying coefficient $\lambda$.
  • Figure 4: Qualitative example on ScienceQA. Example of the question, original reasoning from the black-box LLM (gpt-3.5-turbo), and the adapted reasoning by CoBB. More examples are presented in Appendix \ref{app:more_qualitative}.
  • Figure 5: Examples of datasets. Examples from four QA datasets used in experiments.
  • ...and 4 more figures
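The objective that "contrasts the likelihoods of correct and incorrect reasonings" (the contrastive term in Figure 3, weighted by the coefficient $\lambda$) can be sketched as an ORPO-style loss on one reasoning pair. This is a hedged reconstruction: the odds-ratio form and the function names below are assumptions, not the paper's exact equation.

```python
import numpy as np

def log_sigmoid(x):
    # log(sigmoid(x)); numerically fine for moderate inputs.
    return -np.log1p(np.exp(-x))

def contrastive_loss(logp_pos, logp_neg, lam=1.0):
    # logp_pos / logp_neg: mean per-token log-likelihood of the correct /
    # incorrect reasoning under the adaptation model (both negative).
    # The odds-ratio contrast is an assumed ORPO-style form; the paper's
    # actual objective may differ in detail.
    nll = -logp_pos  # standard fine-tuning term on the correct reasoning
    log_odds_pos = logp_pos - np.log1p(-np.exp(logp_pos))
    log_odds_neg = logp_neg - np.log1p(-np.exp(logp_neg))
    # Penalize a small likelihood margin between positive and negative.
    contrast = -log_sigmoid(log_odds_pos - log_odds_neg)
    return nll + lam * contrast
```

The `lam` coefficient plays the role of $\lambda$ in Figure 3(c): larger values push the model harder to separate correct from incorrect reasonings, at the cost of the plain likelihood term.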