All material (C) 2020-2021 by CSC – IT Center for Science Ltd. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, http://creativecommons.org/licenses/by-sa/4.0/
Tasks like pulling and building images, checking singularity run-help, etc. should be done on the login nodes or in an sinteractive session.
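For instance, one might start an interactive session and check a container's help text like this (the project name and image path are placeholders; check sinteractive --help for the exact options on your system):
sinteractive --account project_12345
singularity run-help image.sif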
Singularity stores cached images and temporary build files under ~/.singularity by default. The cache (.singularity/cache) can fill up fast if you pull/build containers. You can point the cache and temporary directories elsewhere with the $SINGULARITY_CACHEDIR and $SINGULARITY_TMPDIR environment variables.
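For example, both directories could be redirected to scratch before pulling an image (the project path is a placeholder):
export SINGULARITY_CACHEDIR=/scratch/project_12345/singularity_cache
export SINGULARITY_TMPDIR=/scratch/project_12345/singularity_tmp
mkdir -p $SINGULARITY_CACHEDIR $SINGULARITY_TMPDIR
singularity pull docker://ubuntu:22.04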
Running containers directly with singularity exec or singularity run can cause complications, e.g. environment variables not being inherited as expected. When using plain singularity commands, remember to bind all necessary directories yourself (see the sketch after the list below); singularity_wrapper takes care of the most common binds:
- $TMPDIR, if set and the directory exists
- $LOCAL_SCRATCH, if set
It also uses $SING_IMAGE as the path to the image file, if set.
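A sketch of the difference (directory, image, and program names are placeholders):
# plain singularity: bind the directories you need explicitly
singularity exec --bind /scratch/project_12345:/scratch/project_12345 image.sif myprog
# singularity_wrapper: common binds are added for you ($SING_IMAGE points to the image)
singularity_wrapper exec myprog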
Some modules set $SING_IMAGE. If you have problems, run:
module purge
or
unset SING_IMAGE
The $SING_FLAGS variable can be used to pass additional flags to singularity_wrapper, e.g. --bind statements, --nv to use the host CUDA driver, or --cleanenv if necessary.
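For example (the bind path is a placeholder):
export SING_FLAGS="--nv --bind /scratch/project_12345/data:/data $SING_FLAGS"
singularity_wrapper exec myprog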
To use a GPU, reserve it in the batch job script:
#SBATCH --gres=gpu:v100:<number_of_gpus_per_node>
For some containers you need to add --nv
Some containers include CUDA etc, and don’t need --nv
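A minimal sketch of running a GPU program once the reservation above is in place (image and script names are hypothetical):
singularity exec --nv image.sif python3 train.py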
To use fast local NVMe storage, reserve it in the batch job script:
#SBATCH --gres=nvme:<local_storage_space_per_node>
$LOCAL_SCRATCH is set by the system if NVMe resources are allocated for the job.
With singularity_wrapper, $LOCAL_SCRATCH is bound automatically. With plain singularity commands, add --bind $LOCAL_SCRATCH:$LOCAL_SCRATCH.
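A minimal sketch of using the local storage (the 100 is a hypothetical allocation in gigabytes per node; program and file names are placeholders):
#SBATCH --gres=nvme:100
cp input.tar $LOCAL_SCRATCH/
singularity exec --bind $LOCAL_SCRATCH:$LOCAL_SCRATCH image.sif myprog -i $LOCAL_SCRATCH/input.tar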
A dataset consisting of many files can also be packed into a single SquashFS image and mounted inside the container:
mksquashfs my_dataset my_dataset.sqfs
singularity exec --bind my_dataset.sqfs:/data:image-src=/ image.sif myprog
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --account=project_12345
#SBATCH --partition=small
#SBATCH --time=02:00:00
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=4000
export SING_IMAGE=/scratch/project_12345/image.sif
export SING_FLAGS="--bind /scratch/project_12345/my_reference:/reference $SING_FLAGS"
singularity_wrapper exec myprog -i input -o output
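Assuming the script above is saved as example.sh, it would then be submitted with:
sbatch example.sh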
Problems inside the container can be caused by environment variables set on the host, e.g. $PYTHONPATH or $PERL5LIB set in your .bashrc.
The --cleanenv flag prevents host environment variables from being inherited by the container.
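For example (image and command are placeholders):
singularity exec --cleanenv image.sif python3 myscript.py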
If you get locale-related errors (seems to happen mainly with Python), try:
export LC_ALL=C
If you get an X11-related error message, try unsetting DISPLAY:
unset DISPLAY