Containers

Experimental

Container support is experimental. MPI is not supported in containers. Single-node CPU and NVIDIA GPU execution have been tested.

Pre-built Apptainer/Singularity containers (~750 MB) let you run THOR without compiling from source.

Quick Start

# Download container
apptainer pull oras://ghcr.io/cbyrohl/thor-omp:dev

# Download example config
wget https://gist.githubusercontent.com/cbyrohl/fe46ace693f398837550d489ba90087f/raw/config.yaml

# Run
apptainer run thor-omp_dev.sif config.yaml

Setting Thread Count

Control the number of OpenMP threads with OMP_NUM_THREADS:

# Run with 8 threads
apptainer run --env OMP_NUM_THREADS=8 thor-omp_dev.sif config.yaml
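For memory-bound workloads it is often best to match the thread count to the number of available cores rather than hard-coding it. A small sketch (the `nproc` call and the `NTHREADS` variable are illustrative conveniences, not part of THOR):

```shell
# Derive the thread count from the host's core count, then pass it
# to the container via --env. nproc reports the cores available to
# this process (it respects taskset/cgroup limits).
NTHREADS=$(nproc)
echo "Using ${NTHREADS} OpenMP threads"
# apptainer run --env OMP_NUM_THREADS=${NTHREADS} thor-omp_dev.sif config.yaml
```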

NVIDIA GPU

The thor-cuda image runs on any NVIDIA GPU supported by the host driver. It uses the AdaptiveCpp SSCP backend which JIT-compiles for the detected GPU at runtime.

# Download GPU container
apptainer pull oras://ghcr.io/cbyrohl/thor-cuda:dev

# Run with NVIDIA GPU access (--nv is required)
apptainer run --nv thor-cuda_dev.sif config.yaml

Your config.yaml must select the GPU device:

device: "gpu"

Requirements

  • NVIDIA driver installed on the host (CUDA toolkit is inside the container)
  • The --nv flag is required; it binds the host GPU drivers into the container
  • Set device: "gpu" in your config YAML (default selects CPU)

Available Images

Public Access

Only thor-omp is publicly available for testing. Other images require access to the private repository.

Image          Description                               Precision  Access
thor-omp       CPU build (AdaptiveCpp OMP backend)       FP64       Public
thor-cuda      NVIDIA GPU build (AdaptiveCpp SSCP/CUDA)  FP64       Private
thor-generic   Portable CPU build (generic SYCL)         FP64       Private
thor-env-only  Build environment only (no THOR binary)   n/a        Private

Building Custom Containers

For Development Only

This section is for developers who need to build custom containers. Most users should use the pre-built thor-omp image above.

pip install typer pyyaml jinja2
cd apptainer

# List configurations
./build.py list

# Build container
./build.py build omp

# Build from local source (requires initialized submodules)
./build.py build omp --local

Cluster Configurations

Configurations in apptainer/clusters/:

Configuration     Description
omp.yaml          OMP backend build, FP64 (x86-64-v3)
cuda.yaml         NVIDIA GPU build (SSCP/CUDA)
generic.yaml      Generic SYCL build, FP64 (x86-64-v3)
env-only.yaml     Environment only
example-hpc.yaml  Template for custom configs

Create a custom config:

cp apptainer/clusters/example-hpc.yaml apptainer/clusters/my-cluster.yaml
# Edit march for your CPU (e.g., znver3 for AMD EPYC Milan)
./build.py build my-cluster

Common march values: x86-64-v3 (generic), znver2/znver3/znver4 (AMD EPYC), skylake-avx512/icelake-server (Intel Xeon).
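If you are unsure which value matches your CPU, the compiler on the build host can resolve it for you. This is a convenience check, not part of build.py; the fallback path simply prints the raw CPU model string:

```shell
# Ask gcc what -march=native resolves to on this machine
# (prints a line such as "-march=  znver3").
if command -v gcc >/dev/null 2>&1; then
    gcc -march=native -Q --help=target | grep -m1 -- '-march='
else
    # Fallback: inspect the CPU model string directly.
    grep -m1 'model name' /proc/cpuinfo
fi
```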

Environment Container

For development without rebuilding containers:

./build.py build env-only

# Build and run THOR from source
apptainer exec build/thor-env-only.sif bash -c \
    "cd /path/to/thor && cmake -B build && cmake --build build -j\$(nproc)"
apptainer exec build/thor-env-only.sif /path/to/thor/build/src/thor config.yaml

Accessing Host Data

By default, Apptainer only mounts your home directory and a few system paths inside the container. If your simulation data or Cloudy tables live elsewhere (e.g. /virgotng/, /scratch/, /data/), you must bind-mount those paths:

# Bind a single path
apptainer run --bind /virgotng thor-omp_dev.sif config.yaml

# Bind multiple paths
apptainer run --bind /virgotng,/scratch thor-omp_dev.sif config.yaml

Alternatively, set the APPTAINER_BIND environment variable (useful in job scripts):

export APPTAINER_BIND="/virgotng,/scratch"
apptainer run thor-omp_dev.sif config.yaml

For full details, see the Apptainer Bind Paths documentation.
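In a batch job these pieces combine naturally. A minimal Slurm sketch, assuming the partition defaults of your cluster; the job name, bind paths, CPU count, and walltime below are placeholders, not THOR defaults:

```shell
#!/bin/bash
#SBATCH --job-name=thor
#SBATCH --cpus-per-task=16
#SBATCH --time=02:00:00

# Bind the data filesystems and match threads to the allocated CPUs.
export APPTAINER_BIND="/virgotng,/scratch"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

apptainer run thor-omp_dev.sif config.yaml
```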

Troubleshooting

fuse2fs not found / gocryptfs not found warnings: These informational messages from Apptainer can be safely ignored. THOR containers use Docker-based images and do not require EXT3 filesystem mounting or encrypted overlays.

Slow performance: Verify that the image's march setting matches your CPU architecture and that OMP_NUM_THREADS is set to the number of available cores.