LIBOPT - An environment for testing solvers on heterogeneous collections of problems - Version 1.0
Jean Charles Gilbert, Xavier Jonsson
Abstract
The Libopt environment is both a methodology and a set of tools for testing, comparing, and profiling solvers on problems belonging to various collections. These collections can be heterogeneous in the sense that the common features shared by the problems of one collection can differ from those of another. Libopt brings a unified view of this composite world by offering, for example, the possibility to run any solver on any problem compatible with it, using the same Unix/Linux command. The environment also provides tools for comparing the results obtained by solvers on a specified set of problems. Most of the scripts that come with the Libopt environment are written in Perl.
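For illustration, the unified command mentioned above is the runopt script; a typical invocation, sketched here with hypothetical solver, collection, and problem names (the exact syntax and options are specified later in the document), could look like the following at a Unix/Linux shell prompt:

    % runopt mysolver mycoll myprob

where mysolver is the name of a solver known to Libopt, mycoll is a problem collection (such as CUTEr or Modulopt), and myprob is a problem of that collection that is compatible with the solver.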
