Benchopt
Benchopt is a benchmarking suite for optimization algorithms, built for simplicity, transparency, and reproducibility.
Benchopt is a collaborative framework for creating reproducible benchmarks of optimization algorithms in machine learning. It automates the tedious work of implementing, running, and comparing different solvers across programming languages and hardware, ensuring that benchmarks are transparent and can be easily extended by the community.
The framework handles the complexity of fair comparison—managing dependencies, standardizing interfaces, and collecting performance metrics—so researchers can focus on understanding which methods work best for their problems. Benchmarks cover standard ML tasks like logistic regression, LASSO, and neural network training, providing practical insights beyond theoretical comparisons.
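For context, adding a method to an existing benchmark typically amounts to writing a small solver class. The sketch below shows what such a class might look like, assuming benchopt's documented `BaseSolver` interface; the objective (ridge-regularized least squares), the data names (`X`, `y`, `lmbd`), and the gradient-descent solver itself are illustrative, not taken from this document.

```python
# Minimal, illustrative solver sketch assuming benchopt's BaseSolver
# interface; the objective and all parameter names are hypothetical.
import numpy as np
from benchopt import BaseSolver


class Solver(BaseSolver):
    name = "Python-GD"  # label shown in the benchmark's result plots

    def set_objective(self, X, y, lmbd):
        # Called once by benchopt with the data prepared by the
        # benchmark's Dataset and Objective classes.
        self.X, self.y, self.lmbd = X, y, lmbd

    def run(self, n_iter):
        # benchopt calls run() with an increasing budget and records
        # the elapsed time and objective value after each call.
        n_features = self.X.shape[1]
        beta = np.zeros(n_features)
        step = 1.0 / (np.linalg.norm(self.X, ord=2) ** 2 + self.lmbd)
        for _ in range(n_iter):
            grad = self.X.T @ (self.X @ beta - self.y) + self.lmbd * beta
            beta -= step * grad
        self.beta = beta

    def get_result(self):
        # The returned values are passed back to the Objective,
        # which computes the metrics reported in the benchmark.
        return dict(beta=self.beta)
```

Once such a file is added to a benchmark's `solvers/` folder, the whole comparison can be run from the command line (for example with `benchopt run` on the benchmark directory) and the results collected into comparison plots.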
Citation
@inproceedings{moreau2022,
author = {Moreau, Thomas and Massias, Mathurin and Gramfort, Alexandre
and Ablin, Pierre and Bannier, Pierre-Antoine and Charlier, Benjamin
and Dagréou, Mathieu and Dupré la Tour, Tom and Durif, Ghislain and
F. Dantas, Cassio and Klopfenstein, Quentin and Larsson, Johan and
Lai, En and Lefort, Tanguy and Malézieux, Benoit and Moufad, Badr
and T. Nguyen, Binh and Rakotomamonjy, Alain and Ramzi, Zaccharie
and Salmon, Joseph and Vaiter, Samuel},
editor = {Koyejo, S. and Mohamed, S. and Agarwal, A. and Belgrave, D.
and Cho, K. and Oh, A.},
title = {Benchopt: Reproducible, Efficient and Collaborative
Optimization Benchmarks},
booktitle = {Advances in Neural Information Processing Systems},
volume = {35},
pages = {25404--25421},
year = {2022},
url = {https://papers.nips.cc/paper_files/paper/2022/hash/a30769d9b62c9b94b72e21e0ca73f338-Abstract-Conference.html},
langid = {en},
abstract = {Numerical validation is at the core of machine learning
research as it allows to assess the actual impact of new methods,
and to confirm the agreement between theory and practice. Yet, the
rapid development of the field poses several challenges: researchers
are confronted with a profusion of methods to compare, limited
transparency and consensus on best practices, as well as tedious
re-implementation work. As a result, validation is often very
partial, which can lead to wrong conclusions that slow down the
progress of research. We propose Benchopt, a collaborative framework
to automate, reproduce and publish optimization benchmarks in
machine learning across programming languages and hardware
architectures. Benchopt simplifies benchmarking for the community by
providing an off-the-shelf tool for running, sharing and extending
experiments. To demonstrate its broad usability, we showcase
benchmarks on three standard learning tasks: $\ell_2$-regularized
logistic regression, Lasso, and ResNet18
training for image classification. These benchmarks highlight key
practical findings that give a more nuanced view of the
state-of-the-art for these problems, showing that for practical
evaluation, the devil is in the details. We hope that Benchopt will
foster collaborative work in the community hence improving the
reproducibility of research findings.}
}