AdaptiveCpp Installation

Docker Alternative

If you're familiar with Docker containers, you can use the official Thor SYCL containers (AdaptiveCpp and Intel) from github.com/thor-rt/syclcontainers instead of building from source. For production use on HPC systems, however, performance and portability considerations usually require an explicit installation.

You can follow the official instructions here, or use the guidance below.

Clang

AdaptiveCpp relies on Clang and requires a Clang build with certain features enabled; the Clang installation on your HPC system is most likely not compiled with them. In that case, you need to recompile Clang yourself: download the LLVM source code and build it with the options below.

First download the LLVM sources, e.g. Clang 18: https://github.com/llvm/llvm-project/releases/download/llvmorg-18.1.8/llvm-project-18.1.8.src.tar.xz

After unpacking, change into the source root and compile with the following options:

mkdir -p build
rm -rf build/*
pushd build
cmake -G "Unix Makefiles" ../llvm -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_BUILD_LLVM_DYLIB=ON \
    -DLLVM_ENABLE_PROJECTS="clang;openmp;lld" \
    -DLLVM_ENABLE_RUNTIMES="compiler-rt" \
    -DLLVM_TARGETS_TO_BUILD="X86" \
    -DLIBOMP_ENABLE_SHARED=ON \
    -DLLVM_INCLUDE_BENCHMARKS=0 \
    -DLLVM_INCLUDE_EXAMPLES=0 \
    -DLLVM_INCLUDE_TESTS=0 \
    -DLLVM_ENABLE_ASSERTIONS=OFF \
    -DLLVM_ENABLE_DUMP=OFF \
    -DLLVM_ENABLE_OCAMLDOC=OFF \
    -DLLVM_ENABLE_BINDINGS=OFF

make -j16
make DESTDIR=~/llvm install

Add "AMDGPU" to LLVM_TARGETS_TO_BUILD (-> "AMDGPU;X86") if AMD GPU support is needed. (Nvidia support is specified only in the next step.)
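Because the install step above uses DESTDIR=~/llvm with the default prefix /usr/local, the toolchain ends up under ~/llvm/usr/local (DESTDIR is prepended to the install prefix). A minimal sketch for making this build usable in your shell, assuming that DESTDIR:

```shell
# Assumes "make DESTDIR=~/llvm install" with the default
# CMAKE_INSTALL_PREFIX of /usr/local, giving ~/llvm/usr/local.
export LLVM_ROOT=$HOME/llvm/usr/local
export PATH=$LLVM_ROOT/bin:$PATH
export LD_LIBRARY_PATH=$LLVM_ROOT/lib:$LD_LIBRARY_PATH
# Sanity check (should report clang version 18.1.8):
#   clang --version
```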

AdaptiveCpp

After obtaining the AdaptiveCpp sources (e.g. from github.com/AdaptiveCpp/AdaptiveCpp), in the repository root:

mkdir -p build
rm -rf build/*
cd build

export INCL_CLANG=SPECIFY    # Clang's internal header directory
export EXEC_CLANG=SPECIFY    # path to the clang++ executable
export LLVM_DIR=SPECIFY      # directory containing LLVMConfig.cmake
export INSTALL_DIR=SPECIFY   # AdaptiveCpp installation prefix
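As an illustration, if Clang 18 was installed with DESTDIR=~/llvm as above (default prefix /usr/local), the placeholders might be filled in as follows; all paths here are assumptions and must be adapted to your system:

```shell
# Example values only -- adapt to your system.
# Assumes the LLVM build above was installed with DESTDIR=~/llvm,
# so the effective prefix is ~/llvm/usr/local.
export EXEC_CLANG=$HOME/llvm/usr/local/bin/clang++
export INCL_CLANG=$HOME/llvm/usr/local/lib/clang/18/include
export LLVM_DIR=$HOME/llvm/usr/local/lib/cmake/llvm
export INSTALL_DIR=$HOME/adaptivecpp
```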


cmake -DCMAKE_INSTALL_PREFIX=$INSTALL_DIR \
    -DLLVM_DIR=$LLVM_DIR \
    -DCLANG_EXECUTABLE_PATH=$EXEC_CLANG \
    -DCLANG_INCLUDE_PATH=$INCL_CLANG ..
make -j16
make install
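To verify the installation, you can compile a minimal SYCL program with AdaptiveCpp's acpp driver. The file name and program below are only an illustration:

```shell
# Write a minimal SYCL smoke test (illustrative file name).
cat > sycl_smoke.cpp <<'EOF'
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue q;  // picks a default device via the AdaptiveCpp runtime
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>()
            << std::endl;
}
EOF
# Compile and run with the freshly installed driver:
#   $INSTALL_DIR/bin/acpp -O2 sycl_smoke.cpp -o sycl_smoke && ./sycl_smoke
```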

For AMD GPUs, add -DWITH_ROCM_BACKEND=ON -DROCM_PATH=$ROCM_PATH with the corresponding path for ROCm.

For Nvidia GPUs, add -DWITH_CUDA_BACKEND=ON -DNVCXX_COMPILER=$EXEC_NVCXX -DCUDA_TOOLKIT_ROOT_DIR=$DIR_CUDA with the respective paths for CUDA and NVCXX.
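Putting the Nvidia case together, the configure step might look like the sketch below; DIR_CUDA and EXEC_NVCXX are placeholders that must match your local CUDA and Nvidia HPC SDK installations:

```shell
# Placeholder paths -- adapt to your system.
export DIR_CUDA=/usr/local/cuda   # common default CUDA location (assumption)
export EXEC_NVCXX=SPECIFY         # path to nvc++ from the Nvidia HPC SDK

cmake -DCMAKE_INSTALL_PREFIX=$INSTALL_DIR \
    -DLLVM_DIR=$LLVM_DIR \
    -DCLANG_EXECUTABLE_PATH=$EXEC_CLANG \
    -DCLANG_INCLUDE_PATH=$INCL_CLANG \
    -DWITH_CUDA_BACKEND=ON \
    -DNVCXX_COMPILER=$EXEC_NVCXX \
    -DCUDA_TOOLKIT_ROOT_DIR=$DIR_CUDA ..
```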

For more details and targets, see the official AdaptiveCpp docs.