Concept
Omnibenchmark is a platform for open, continuous, community-driven benchmarking. Anyone can start a new benchmark or contribute to an existing one. A contribution can be a relevant ground-truth dataset, a missing method, an update to an existing method, or a new metric. Each contribution is a single, independent module. Modules are standalone benchmark components in the form of GitLab projects, each containing:

- input and output files bundled into datasets
- containerized environments
- a workflow that describes how the output files were generated and how they can be regenerated from new input files
- a pipeline recipe for continuous integration

Modules are “connected” by sharing datasets, e.g. using an output dataset from an upstream module as input. Dataset files are stored in Git LFS and can be accessed from different GitLab projects/modules without being stored multiple times. Finally, results can be explored interactively using a Shiny dashboard, and all datasets (inputs, outputs, intermediates) can be downloaded and explored locally.
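As an illustration of how modules connect by sharing datasets, here is a minimal sketch in plain Python. All names (`Dataset`, `Module`, `iris-data`, etc.) are hypothetical and chosen for this example; this is not the omnibenchmark API.

```python
from dataclasses import dataclass, field


@dataclass
class Dataset:
    """A named bundle of files, stored once (in Git LFS) and shared across modules."""
    name: str
    files: list[str] = field(default_factory=list)


@dataclass
class Module:
    """A standalone benchmark component (one GitLab project)."""
    name: str
    inputs: list[Dataset] = field(default_factory=list)
    outputs: list[Dataset] = field(default_factory=list)


# An upstream data module publishes a ground-truth dataset ...
data_module = Module(
    name="iris-data",
    outputs=[Dataset("iris_ground_truth", ["data/iris.csv"])],
)

# ... and a downstream method module reuses that dataset as its input,
# which is what "connects" the two modules.
method_module = Module(
    name="iris-clustering-kmeans",
    inputs=data_module.outputs,
    outputs=[Dataset("kmeans_labels", ["results/labels.csv"])],
)

print(f"{method_module.name} consumes {method_module.inputs[0].name}")
```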

Site
This site gives you an overview of:
Getting started
All important resources for getting started with a first module can be found here. Start with the quick-start tutorials and check the example iris benchmark to get an overview of how the system works. If you are looking for more details on how workflow generation, input file detection, or input-output mapping works, check the omnibenchmark Python module. For details on the project setup and underlying services, see the Renku documentation.
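The omnibenchmark Python module documentation describes the exact mechanics; as a rough intuition only, input file detection and input-output mapping can be pictured as in the sketch below. The glob pattern, output template, and function names are hypothetical, not the real API.

```python
import re
from pathlib import Path

# Hypothetical conventions: which files count as inputs, and how each
# detected input maps to an output file name.
INPUT_PATTERN = "data/*_counts.csv"               # glob used to detect input files
OUTPUT_TEMPLATE = "results/{dataset}_labels.csv"  # one output per detected input


def detect_inputs(root: Path) -> list[Path]:
    """Find all files matching the input pattern below the project root."""
    return sorted(root.glob(INPUT_PATTERN))


def map_outputs(inputs: list[Path]) -> dict[Path, Path]:
    """Derive one output path per input by reusing the dataset name."""
    mapping = {}
    for infile in inputs:
        dataset = re.sub(r"_counts$", "", infile.stem)
        mapping[infile] = Path(OUTPUT_TEMPLATE.format(dataset=dataset))
    return mapping


if __name__ == "__main__":
    for infile, outfile in map_outputs(detect_inputs(Path("."))).items():
        print(f"{infile} -> {outfile}")
```

A workflow generated from such a mapping can then be re-run automatically whenever new input files appear.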
Explore results
Results of benchmark studies depend strongly on the flavour and weighting of the metrics used, as well as on their thresholds. Usually these choices are made by the “benchmarker”, although the right choice depends on the intended use. To let users decide which metrics and thresholds matter for their specific use case, we provide a bettr app for each benchmark to interactively explore all results.
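To make the effect of weighting concrete, the toy computation below (hypothetical metric names and scores, not real benchmark results) shows how user-chosen weights can flip a ranking; this is the kind of choice a bettr app exposes interactively.

```python
# Per-method metric scores (hypothetical values, higher is better).
scores = {
    "method_A": {"accuracy": 0.90, "runtime": 0.40},
    "method_B": {"accuracy": 0.80, "runtime": 0.95},
}


def aggregate(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of metric scores; the weights encode what the user values."""
    total = sum(weights.values())
    return sum(metrics[m] * w for m, w in weights.items()) / total


# Weighting accuracy heavily favours method_A ...
print(aggregate(scores["method_A"], {"accuracy": 0.9, "runtime": 0.1}))  # 0.85
print(aggregate(scores["method_B"], {"accuracy": 0.9, "runtime": 0.1}))  # 0.815

# ... while weighting runtime heavily flips the ranking to method_B.
print(aggregate(scores["method_A"], {"accuracy": 0.1, "runtime": 0.9}))  # 0.45
print(aggregate(scores["method_B"], {"accuracy": 0.1, "runtime": 0.9}))  # 0.935
```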