
Administering omnibenchmark


Omnibenchmark is designed as a SaaS. Its components are modular, and some can be deployed on your own for extra control. The default deployment already provides all these components, so this guide only matters if you'd like to replace some of them.

  • renkulab and gitlab deployment
  • gitlab runners
    • default: provided by
    • self: registered runners in any architecture (laptops, HPC, GPU-powered machines etc).
  • omnibenchmark triplestore
    • default: Apache Jena from robinsonlab (ask us for details).
    • self: deploy a Jena/Fuseki instance to have full control over your triples.
  • centralized benchmark listing/json.
    • default: robinsonlab (ask us for details, queried by omb-py).
  • bettr deployment
    • default: shiny-server by robinsonlab (ask us for details).
    • self: set up a shiny-server (perhaps using singularity).
  • Web dashboard

In addition, omnibenchmark relies on a set of Python modules and R packages, which need to be cross-compatible and compatible with your renkulab deployment.

Configuration guides

Mind that some of our docs are still missing; drafts are marked ✓, ready-to-use docs ✅.

  1. Services overview
  2. Start a new benchmark
  3. Set up a triplestore
  4. Register a runner
  5. Serve bettr
  6. Serve a dashboard
  7. Deploy renkulab


To get tokens/authentication details, ask the omnibenchmark team.

Services dependencies

Several components, mainly omnibenchmark python, query other components (APIs, the triplestore) at runtime.

  • Renku API
    • omnibenchmark python
  • Gitlab API
    • omnibenchmark python
    • gitlab runners
  • Triplestore
    • omnibenchmark python (query)
    • omnibenchmark python (population)
    • anywhere (population, from within the CI/CD job; update token needed)
    • (not yet implemented) git/renku hooks
  • Metric results
    • a cron job from the machine serving bettr deployments
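At runtime, a query against a Fuseki-backed triplestore is just an HTTP request. A minimal sketch, assuming a local Fuseki serving a dataset named ds on port 3030 (adjust both to your deployment):

```shell
# Send a minimal SPARQL SELECT to a (hypothetical) local Fuseki endpoint,
# asking for JSON-formatted results. Endpoint and dataset name are placeholders.
curl -s -X POST "http://localhost:3030/ds/sparql" \
     -H "Accept: application/sparql-results+json" \
     --data-urlencode "query=SELECT * WHERE { ?s ?p ?o } LIMIT 5"
```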


You can run renku projects in 'renku stealth mode' by disabling the hook within your gitlab project: browse Settings -> Webhooks.

Start a new benchmark

This guide relies on the default omnibenchmark components (gitlab runners, triplestore etc).

GUI click/fill instructions have been tested on GitLab Community Edition 14.10.5 and might change in future releases.


The starting point for non-automated benchmark creation is

Interestingly, the renkulab deployment's gitlab is available at its own URL. If you log in to renkulab, you'll be logged in to the gitlab too. To switch easily from renku's GUI to gitlab's GUI, please notice the projects or gitlab components of these URLs:


Both refer to the same repository.

Gitlab group (and subgroups)

Repository groups can be created by pressing new group. Repository subgroups can be created by browsing a group and pressing new subgroup. If you are interested in creating a subgroup below 'known' omnibenchmark groups, your user needs to be granted rights; contact the omnibenchmark team if so.

Tip: you can add other people to your benchmark group/subgroup by pressing (left panel) (sub)group information -> Members.

Tip: it's advisable to register a dedicated gitlab runner when generating a group, and use it as a group runner for CI/CD. For that, check our runner's docs.

Benchmark masked variables/tokens

Once you've created the group or subgroup where your benchmark will live (or at least where your orchestrator will), you'll need to create a token with api_read scope. For that, visit your group's Settings -> CI/CD -> Variables and create a nonprotected, masked variable, for instance `OMB_ACCESS_TOKEN`. Keep its content stored somewhere for future usage.
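The same masked variable can also be created through gitlab's group-variables API. A sketch, where GITLAB_URL, GROUP_ID, PRIVATE_TOKEN and the variable's value are placeholders for your setup:

```shell
# Create a nonprotected, masked group-level variable via the gitlab API.
# PRIVATE_TOKEN needs api scope; GROUP_ID is your (sub)group's numeric id.
curl --request POST \
     --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" \
     "https://GITLAB_URL/api/v4/groups/$GROUP_ID/variables" \
     --form "key=OMB_ACCESS_TOKEN" \
     --form "value=thisisnotarealtoken" \
     --form "masked=true" \
     --form "protected=false"
```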

User tokens

A personal gitlab token allows you to automate actions. To generate one: log in to the gitlab instance, visit your user's access tokens settings, and create one with the adequate scope (i.e. read API).

This token won't be used by omnibenchmark, but can be handy to have.


The core component of omnibenchmark is an orchestrator, which stitches together datasets, methods and metrics. Without an orchestrator there is no benchmark. Still, you can set up the orchestrator last.

Orchestrators are unusual omnibenchmark components. They're mainly a gitlab CI/CD yaml. An example orchestrator looks like this:

variables:
  OMNIBENCHMARK_NAME: iris_example

stages:
  - build
  - data_run
  - process_run
  - parameter_run
  - method_run
  - metric_run
  - summary_metric_run

image_build:
  stage: build
  image: docker:stable
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline"'
      when: never
  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN http://$CI_REGISTRY
  script: |
    CI_COMMIT_SHA_7=$(echo $CI_COMMIT_SHA | cut -c1-7)
    docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA_7 .

trigger_iris_dataset:
  stage: data_run
  only:
    - schedules
  trigger:
    project: omnibenchmark/iris_example/iris-dataset
    strategy: depend

The first stanza (variables) defines variables needed for overall git behaviour, as well as helping with authentication.


The second stanza (image_build) generates a renku-powered docker image (i.e. for interactive sessions).
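The seven-character tag computed in the build script can be reproduced in isolation (the SHA below is just an example; gitlab also exposes a similar built-in, CI_COMMIT_SHORT_SHA):

```shell
# Shorten a commit SHA to seven characters, as the build stage does.
CI_COMMIT_SHA="e3b0c44298fc1c149afbf4c8996fb92427ae41e4"
CI_COMMIT_SHA_7=$(echo $CI_COMMIT_SHA | cut -c1-7)
echo $CI_COMMIT_SHA_7   # e3b0c44
```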

The third stanza (trigger_iris_dataset) is how most orchestrator CI/CD tasks look: they trigger downstream projects' CI/CDs, that is, their gitlab-ci.yaml. In the example above, it triggers the iris dataset's CI/CD.
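Under the hood, such a trigger amounts to a call to gitlab's pipeline-trigger API. A sketch using the job token, where DOWNSTREAM_ID is a placeholder for the downstream project's numeric id:

```shell
# Trigger a downstream project's pipeline from within a CI/CD job,
# using the predefined CI_JOB_TOKEN and CI_API_V4_URL variables.
# Adjust ref to the downstream project's default branch.
curl --request POST \
     --form "token=$CI_JOB_TOKEN" \
     --form "ref=master" \
     "$CI_API_V4_URL/projects/$DOWNSTREAM_ID/trigger/pipeline"
```

Note the raw API call does not reproduce strategy: depend, which additionally makes the upstream job wait for (and mirror the status of) the downstream pipeline.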

Terraforming via templates

Omnibenchmark's renku templates help to create new benchmark components.

Terraforming via gitlab API

omnibus helps automate benchmark creation.



Infrastructure-wise, some manual actions need to be done to start a new benchmark.

  • register runner(s)
  • set up bettr endpoint
  • add a triplestore dataset
  • add a triplestore apache reverse proxy


It's advisable to register a dedicated gitlab runner when generating a benchmark, and use it as a group runner for CI/CD. For that, check our runner's docs.

Setting up jena/fuseki

Lorem ipsum

Install gitlab-runner in your (linux) machine

For CPU (not GPU) computing, machines meant to run group gitlab-runners with docker executors need docker and the gitlab-runner software installed. The machine (server, laptop) running the runners does not need a public IP.


Assuming your system is apt-based (debian, ubuntu):

sudo apt-get update

sudo apt install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common

curl -fsSL | sudo apt-key add -

sudo add-apt-repository \
   "deb [arch=amd64] \
   $(lsb_release -cs) \

sudo apt update

sudo apt-get install -y docker-ce docker-ce-cli

systemctl status docker #; start if needed

sudo groupadd docker
sudo usermod -aG docker YOURUSER


Mind the architecture of your machine; amd64 is assumed here (Apple M1s: arm64).

mkdir -p ~/soft/gitlab-runner; cd $_

curl -LJO "${arch}.deb"
sudo dpkg -i gitlab-runner_amd64.deb

sudo gitlab-runner start

# create a user too
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash

Register a gitlab runner in your (linux) machine

On GitLab Community Edition 14.10.5 (this might vary slightly between versions), visit your repository or group of repositories, go to CI/CD -> Runners (caution: not Settings -> CI/CD), click Register a group runner, and copy the registration token to the clipboard.

Let's assume the token is 94daGGiXCgwthisisnotarealtoken.

In your gitlab-runner machine, run:



export REGISTRATION_TOKEN="94daGGiXCgwthisisnotarealtoken" # the token copied above
export RUNNER_NAME="my-omnibenchmark-runner"               # pick a descriptive name
export EXECUTOR="docker"

sudo gitlab-runner register \
  --non-interactive \
  --url "" \
  --registration-token "$REGISTRATION_TOKEN" \
  --description "$RUNNER_NAME" \
  --locked="false" \
  --run-untagged="true" \
  --executor "$EXECUTOR" \
  --docker-image docker:stable \
  --docker-network-mode "host" \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock

To check whether the gitlab runner is running:

sudo gitlab-runner list
sudo journalctl -u gitlab-runner

Tips and troubleshooting

The gitlab-runner major.minor version should stay aligned with the gitlab server's major and minor version. To check the gitlab runner's version:

gitlab-runner --version

and to check gitlab's, visit its help page (e.g. https://gitlab_base_url/help).
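To compare the two programmatically, the major.minor pair can be extracted from the version output; the version line below is a made-up sample:

```shell
# Extract major.minor from a gitlab-runner version line so the pair can be
# compared with the server's. The sample line is illustrative only.
version_line="Version:      14.10.1"
major_minor=$(echo "$version_line" | grep -oE '[0-9]+\.[0-9]+' | head -n1)
echo "$major_minor"   # 14.10
```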

Concurrency, timeout behaviour, and each runner's details can be inspected and fine-tuned at /etc/gitlab-runner/config.toml. An example config file is:

concurrent = 10
check_interval = 0

[session_server]
  session_timeout = 36000

[[runners]]
  name = "tesuto-robinsonlab-gitlab-docker"
  url = ""
  token = "YfztssssssssssssssN"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    network_mode = "host"
    shm_size = 0

[[runners]]
  name = "iris-tesuto-robinsonlab-gitlab-docker"
  url = ""
  token = "xbxxxxxxxxxxxxxKi"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    network_mode = "host"
    shm_size = 0

Double-check privileged = false; it needs to be false. Re: the OOM killer, TLS verify etc.: up to you.

concurrent limits how many jobs can run concurrently across all registered runners; beware that gitlab-runner does not know how many cores each job uses. For mostly single-core jobs, setting concurrent to your machine's nproc - 2 should work.
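As a sketch of that rule of thumb (floored at 1 for small machines):

```shell
# Compute a conservative `concurrent` value: nproc minus 2, floored at 1.
CONCURRENT=$(( $(nproc) - 2 ))
if [ "$CONCURRENT" -lt 1 ]; then CONCURRENT=1; fi
echo "concurrent = $CONCURRENT"

# On the runner host you would then patch it in place (as root), e.g.:
#   sed -i "s/^concurrent = .*/concurrent = $CONCURRENT/" /etc/gitlab-runner/config.toml
```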

Very useful advanced config.toml docs.

Mind that your gitlab repositories list their registered runners at Settings -> CI/CD -> Runners. Beware of the shared runners, group runners (the ones we're configuring here) and other available runners; disable the ones you don't want to use.

Shell executors are also easy to configure. Docker-in-docker does not work well due to subpar caching.

Remember to remove old images and vacuum/clean your machine (with a cron job). A recipe to prune old images, containers, networks and volumes (to be cron-ed at midnight):

## Prunes images, containers, networks, volumes (ideally daily, at midnight)
## Does some convoluted checks to keep the most up-to-date image
## largely untested
## 18 Aug 2022
## Izaskun Mallona
## GPLv3

# prune images older than 24h
## the -a flag removes dangling images, and also those not used by existing containers
## the until=24h filter removes only those older than 24h
docker image prune -a --filter "until=24h" -f

docker network prune --filter "until=200h" -f

docker volume prune --filter "until=200h" -f

for diru in $(docker images -a --format "{{.Repository}}" | sort | uniq); do
    ## sort by timestamp and list all images except the first/most recent
    for old_thing in $(docker images -a --format \
        "{{.ID}}\t{{.Size}}\t{{.Repository}}\t{{.CreatedAt}}" | \
        grep $diru | \
        sort -k4r | \
        tail -n+2 | \
        cut -f1); do
        echo Removing $old_thing from $diru
        docker rmi $old_thing
    done
done

#remove dangling
docker image prune -f
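A matching crontab entry could look like the following; the script path and log file are placeholders for wherever you saved the recipe:

```shell
# Append a nightly (midnight) prune job to root's crontab.
( sudo crontab -l 2>/dev/null; \
  echo '0 0 * * * /usr/local/bin/docker_prune.sh >> /var/log/docker_prune.log 2>&1' ) \
  | sudo crontab -
```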

Serve bettr

We are currently serving bettr via shiny-server within singularity. So this recipe installs singularity on a linux machine, gets the bettr image, and sets up a cron job to retrieve metrics.

The host needs to face the internet, and it needs a firewall.

We are aware rstudioconnect could simplify this, as could just running bettr locally.



We assume an apt-based distribution, i.e. debian or ubuntu.

sudo apt-get update && sudo apt-get install -y \
    build-essential \
    uuid-dev \
    libgpgme-dev \
    squashfs-tools \
    libseccomp-dev \
    wget \
    pkg-config \
    git \
    cryptsetup-bin

mkdir -p ~/soft/go
cd $_

export VERSION=1.13.5 OS=linux ARCH=amd64

sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz

echo 'export GOPATH=${HOME}/go' >> ~/.bashrc && \
    echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' >> ~/.bashrc && \
    source ~/.bashrc

mkdir -p ~/soft/singularity
cd $_

export VERSION=3.8.3 # adjust this as necessary

tar -xzf singularity-*${VERSION}.tar.gz

cd sing*

./mconfig && \
    make -C ./builddir

sudo make -C ./builddir install

# test

singularity exec library://alpine cat /etc/alpine-release


sudo apt install apache2

Also, configure iptables or ufw: open ports 80/443 and 22, plus whatever port the bettr image is going to use (see below).
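With ufw, a minimal sketch covering ssh, http/https and the shiny port mentioned further below:

```shell
# Allow ssh, http/https and the bettr/shiny-server port, then enable ufw.
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 3840/tcp   # the port the bettr image's shiny-server listens on
sudo ufw enable
```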

bettr image

The bettr image is generated by .

serving the bettr app

To read the bettr image from the registry, a user token with read_registry power is needed; below encoded as SINGULARITY_DOCKER_PASSWORD.

mkdir -p ~/bettr_deployer/apps ~/bettr_deployer/logs ~/bettr_deployer/lib ~/bettr_deployer/tmp

cd ~/bettr_deployer
export SINGULARITY_DOCKER_PASSWORD='KxasgasgasgxK' # read_registry granted
export NAMESPACE="omnibenchmark"
export ID="obm_bettr"
export VERSION="1d0b31b"

singularity instance start --env SHINY_LOG_STDERR=1 \
            --bind ./apps:/srv/shiny-server/bettr \
            --bind ./logs:/var/log/shiny-server_bettr \
            --bind ./lib:/var/lib/shiny-server \
            --bind ./tmp:/tmp \
            bettr-deployer_"$ID"-"$VERSION".sif "$ID"_"$VERSION"

singularity exec instance://"$ID"_"$VERSION" "shiny-server" &

So apps will be served if placed at ~/bettr_deployer/apps, i.e.:

total 12
-rw-rw-r-- 1 shiny shiny 1614 Jul 27 13:28 app.R
drwxrwxr-x 2 shiny shiny 4096 Sep  1  2022 data
-rw-rw-r-- 1 shiny shiny    0 Sep  1  2022 restart.txt

total 340
-rw-rw-r-- 1 shiny shiny 347951 Jan 19  2023 summary.json

where app.R contains

#!/usr/bin/env R

## Retrieve data

## the data/ directory holds the metrics file that has just been downloaded, for FAIRness

## defaults, used when the summary.json omits a field
resDir   <- 'data'
bstheme  <- 'darkly'
appTitle <- 'bettr'
metrics  <- 'all'

res_files <- list.files(path = resDir, full.names = TRUE)

## Read result files
out <- jsonlite::read_json(res_files, simplifyVector = TRUE)

## reconverting "TRUE"/"FALSE" strings to logical
colnames(out$metricInfo) <- c("Metric", "Group")
out$initialTransforms <- lapply(out$initialTransforms, function(x) {
  lapply(x, function(y) {
    if (identical(y, "TRUE")) TRUE
    else if (identical(y, "FALSE")) FALSE
    else y
  })
})

## replacing NA values
out$idInfo[$idInfo)] <- "NaN"

## call bettr; fallbacks mirror the defaults above (idCol and
## weightResolution fallbacks are assumed package defaults)
bettr::bettr(
  df = out$df,
  idCol = if (is.null(out$idCol)) {
    "Method"
  } else {
    out$idCol
  },
  metrics = if (is.null(out$metrics)) {
    setdiff(colnames(out$df), out$idCol)
  } else {
    out$metrics
  },
  initialTransforms = if (is.null(out$initialTransforms)) {
    list()
  } else {
    out$initialTransforms
  },
  metricInfo = out$metricInfo,
  metricColors = out$metricColors,
  idInfo = out$idInfo,
  idColors = out$idColors,
  weightResolution = if (is.null(out$weightResolution)) {
    0.05
  } else {
    out$weightResolution
  },
  bstheme = if (is.null(out$bstheme)) {
    bstheme
  } else {
    out$bstheme
  },
  appTitle = if (is.null(out$appTitle)) {
    appTitle
  } else {
    out$appTitle
  }
)
And the data is either a fixed snapshot, or retrieved periodically via cron, of a suitably formatted metrics file, like this one.

firewall / shiny ports

Remember to open port 3840 to match the docker container's shiny-server port, or update the Dockerfile or the singularity port mapping as needed.

cron to retrieve results

Using the clustering benchmark as an example:

cd /home/shiny/bettr_deployer/apps/omni_clustering/data

wget -q -O summary.json

## to make sure the shiny app reloads
touch ../app.R
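Those commands can be wrapped in a script and cron-ed; the script path, schedule and METRICS_URL below are placeholders for your setup:

```shell
## e.g. saved as /home/shiny/update_bettr_data.sh and run hourly via:
##   0 * * * * /home/shiny/update_bettr_data.sh
cd /home/shiny/bettr_deployer/apps/omni_clustering/data || exit 1
wget -q -O summary.json "$METRICS_URL"   # METRICS_URL: your benchmark's summary.json URL
touch ../app.R                           # make sure the shiny app reloads
```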

Tips and troubleshooting

Serve a dashboard

Lorem ipsum.

Deploy renkulab

Lorem ipsum.

Last update: December 11, 2023