LIBOPT - An environment for testing solvers on heterogeneous collections of problems - Version 1.0

Jean Charles Gilbert, Xavier Jonsson

TL;DR

Libopt provides a unified framework to benchmark solvers across heterogeneous problem collections by standardizing execution, result capture, and comparison. It combines three core workflows—runopt for execution, addopt for result consolidation, and perfopt for performance comparison via Dolan–Moré profiles, using Libopt lines such as $libopt%solv%coll%prob%token=number$ to encode results. The approach supports diverse problem encodings and languages, and includes detailed mechanisms for extending collections (e.g., CUTEr, Modulopt) and adding solvers, with a structure built around collections, solvers, and a supporting database. The paper also outlines installation practices, startup configuration, and depth-structured components (startup file, lists, solv_coll scripts, and main programs), and discusses future directions such as platform diversification and broader Web interfaces.
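The Libopt result lines mentioned above can be illustrated with a small parser. This is a minimal sketch, assuming a line of the exact shape quoted in the summary, `libopt%solv%coll%prob%token=number`, i.e. `%`-separated fields ending in a `token=value` pair; the function name and the field handling are illustrative, not taken from the Libopt distribution.

```python
def parse_libopt_line(line):
    """Split an assumed Libopt result line of the form
    libopt%solver%collection%problem%token=number
    into its components.  Raises ValueError on malformed input."""
    fields = line.strip().split("%")
    if len(fields) != 5 or fields[0] != "libopt":
        raise ValueError(f"not a Libopt line: {line!r}")
    _, solver, collection, problem, last = fields
    token, _, value = last.partition("=")
    if not token or not value:
        raise ValueError(f"missing token=value in: {line!r}")
    return {"solver": solver, "collection": collection,
            "problem": problem, "token": token, "value": float(value)}
```

With such a parser, the addopt-style consolidation step reduces to collecting one dictionary per line and grouping them by solver and problem.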

Abstract

The Libopt environment is both a methodology and a set of tools that can be used for testing, comparing, and profiling solvers on problems belonging to various collections. These collections can be heterogeneous in the sense that their problems can have common features that differ from one collection to the other. Libopt brings a unified view on this composite world by offering, for example, the possibility to run any solver on any problem compatible with it, using the same Unix/Linux command. The environment also provides tools for comparing the results obtained by solvers on a specified set of problems. Most of the scripts going with the Libopt environment have been written in Perl.

Paper Structure

This paper contains 33 sections, 4 equations, and 4 figures.

Figures (4)

  • Figure 1: Typical performance profiles for three solvers
  • Figure 2: Part of the Libopt hierarchy in the standard distribution
  • Figure 3: A single cell of the {solvers} × {collections} Cartesian product
  • Figure 4: Performance profiles of the diagonal (diag, red solid curve) and scalar (scal, blue dashed curve) running modes of m1qn3 on 143 unconstrained problems of the CUTEr collection, comparing the number of function evaluations
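Figures 1 and 4 show Dolan–Moré performance profiles, which perfopt uses to compare solvers. A profile plots, for each solver s, the fraction ρ_s(τ) of problems on which that solver's cost (e.g. number of function evaluations) is within a factor τ of the best cost obtained by any solver on that problem. The following is a minimal sketch of that computation, not perfopt itself; the function name, data layout, and failure convention are assumptions.

```python
def performance_profile(costs, taus):
    """Dolan-More performance profile.
    costs: dict mapping solver name -> list of costs, one entry per
           problem (use float('inf') when the solver fails on it).
    taus:  increasing list of ratio thresholds tau >= 1.
    Returns a dict mapping each solver s to [rho_s(tau) for tau in taus],
    where rho_s(tau) is the fraction of problems solved by s within a
    factor tau of the best solver on that problem."""
    solvers = list(costs)
    n_prob = len(next(iter(costs.values())))
    # Best cost per problem over all solvers.
    best = [min(costs[s][p] for s in solvers) for p in range(n_prob)]
    profile = {}
    for s in solvers:
        ratios = [costs[s][p] / best[p] for p in range(n_prob)]
        profile[s] = [sum(r <= tau for r in ratios) / n_prob
                      for tau in taus]
    return profile
```

For example, with two solvers that each win on one of two problems by a factor of two, both profiles start at 0.5 for τ = 1 and reach 1.0 at τ = 2; curves that climb toward 1 faster indicate more robust and efficient solvers, which is exactly how Figure 4 compares the diag and scal modes of m1qn3.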