Attention: Confluence is not suitable for the storage of highly confidential data. Please ensure that any data classified as Highly Protected is stored using a more secure platform.
If you have any questions, please refer to the University's data classification guide or contact ict.askcyber@sydney.edu.au

Singularity Pipelines - Artemis

Running Singularity containers on Artemis is experimental and prone to fail for reasons we're still trying to understand. You should only consider using Singularity if you are confident installing your own software and can build and troubleshoot your own containers. ICT cannot help you build Singularity containers and we cannot provide support for anything running inside of a container. We will only grant access to Singularity if our HPC system administrators cannot install the software you require on Artemis on your behalf.

Artemis has a large library of pre-installed software. If you want a program installed on Artemis, submit a High Performance Computing request via the ICT Self-Service portal. After logging in, select ICT Services → Research → High Performance Computing request and complete the form.


This guide is for XNAT users who want to run Singularity. There are two approaches to running an XNAT pipeline (container) on Artemis. At the moment, only Approach 1 has been released; Approach 2 is still in development and will be released when available.

  • Approach 1: log on to Artemis, pull the data from XNAT, process it on Artemis using a Singularity image, then push the results back to the related XNAT project.
  • Approach 2: log on to XNAT and initiate the XNAT Container pipeline; the data is then processed on Artemis and copied back to XNAT.

Please confirm that you have an Artemis Account and an active RDMP project so you can submit jobs on Artemis. To get a new account on Artemis please follow the instructions in the Artemis User Guide.

1. Approach 1 Instructions

Step 1. Build an Artemis-compatible Singularity container.

Containers on Artemis need to have /project and /scratch directories, otherwise we cannot bind mount Artemis's /project and /scratch filesystems inside your container. If you are running GPU jobs, you should also make sure the file /usr/bin/nvidia-smi exists inside the container. It doesn't have to be the actual nvidia-smi binary; a blank file is sufficient.
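The requirements above can be sketched in a Singularity definition file. The example below is a minimal, hypothetical definition (dcm2niix.def); the base image and package list are assumptions, so adjust them for your own software:

```
Bootstrap: docker
From: ubuntu:20.04

%post
    # Install your software (dcm2niix is used as the example throughout this guide)
    apt-get update && apt-get install -y dcm2niix
    # Create mount points so Artemis can bind its /project and /scratch filesystems
    mkdir -p /project /scratch
    # Blank placeholder so GPU jobs can bind the host's nvidia-smi over it
    touch /usr/bin/nvidia-smi
```

You must build the image on a machine where you have root access (you cannot build on Artemis), for example: sudo singularity build dcm2niix.simg dcm2niix.def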

Step 2. Pull your data from XNAT to Artemis. For instructions, see: XNAT REST API Manuals, Artemis (HPC) and examples
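Pulling data is typically done with the XNAT REST API. As a hedged sketch (XNAT_HOST, USER, PROJECT, SUBJECT and SESSION are placeholders you must replace with your own values):

```shell
# Download all scan files for one imaging session as a single zip archive
curl -u USER -o session.zip \
  "https://XNAT_HOST/data/projects/PROJECT/subjects/SUBJECT/experiments/SESSION/scans/ALL/files?format=zip"

# Unpack into your project's scratch space, ready for processing
unzip session.zip -d /scratch/PROJECTNAME/DICOM
```

See the linked manual for the endpoints that apply to your project's data layout.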

Step 3. Edit your PBS script to run the Singularity image

Sample PBS Script

#!/bin/bash
#PBS -P PROJECTNAME
#PBS -l select=1:ncpus=1:mem=1gb
#PBS -l walltime=1:00:00
#PBS -M EmailAddress
#PBS -m abe
#PBS -j oe

module load singularity
# Run from the directory the job was submitted from (where dcm2niix.simg lives)
cd "$PBS_O_WORKDIR"
# Create the output directory, then run dcm2niix inside the container
mkdir -p /scratch/PROJECTNAME/NIFTI
singularity exec dcm2niix.simg dcm2niix -o /scratch/PROJECTNAME/NIFTI/ /scratch/PROJECTNAME/DICOM/files


Note: the singularity exec line above runs the dcm2niix command inside the container. For more information, please refer to the Singularity user guide:

https://singularity.lbl.gov/user-guide
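Once the script is saved (as run_dcm2niix.pbs, say; the filename here is just for illustration), submit it with the standard PBS commands:

```shell
# Submit the job; qsub prints the job ID on success
qsub run_dcm2niix.pbs

# Check the status of your queued and running jobs
qstat -u $USER
```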


Step 4. When the job has completed, refer to the document below to push the data back to XNAT from Artemis:

XNAT REST API Manuals, Artemis (HPC) and examples

After pushing data back to XNAT from Artemis, remove the data from Artemis.
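As a hedged sketch of this final step (again, XNAT_HOST, USER, PROJECT, SUBJECT, SESSION and the output filename are placeholders), uploading a single result file with the XNAT REST API and then cleaning up might look like:

```shell
# Upload one NIfTI file into a NIFTI resource on the session (PUT with inbody=true)
curl -u USER -X PUT \
  "https://XNAT_HOST/data/projects/PROJECT/subjects/SUBJECT/experiments/SESSION/resources/NIFTI/files/output.nii.gz?inbody=true" \
  --data-binary @/scratch/PROJECTNAME/NIFTI/output.nii.gz

# Once the upload has been verified in XNAT, remove the working data from Artemis
rm -rf /scratch/PROJECTNAME/NIFTI /scratch/PROJECTNAME/DICOM
```

Verify the files appear in the XNAT project before deleting anything from /scratch, as /scratch is not backed up.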

