Transitioning from Artemis to NCI Gadi



Comprehensive training guide

See the training guide with Sydney-specific information for using NCI systems and services:

NCI for USyd researchers – USyd NCI Gadi User Guide

Job Submission

NCI Gadi uses the same job scheduler as Artemis but a more modern version (PBSPro 2024.1.1 vs PBSPro_13.1.0). Configuration and user experience are broadly similar, with some slight differences.

The PBS directive -l storage is required on Gadi for a compute job to access any storage locations other than the project's scratch. In the example below, /scratch/PANDORA/ is available to the compute job by default, and access to /g/data/PANDORA/ and /scratch/MyProject2/ is requested through the directive.

Gadi

#!/bin/bash
#PBS -P PANDORA
#PBS -l ncpus=1
#PBS -l mem=4GB
#PBS -l walltime=10:00:00
#PBS -l storage=gdata/PANDORA+scratch/MyProject2

module load program

cd "$PBS_O_WORKDIR"  # alternatively, can use the PBS directive "-l wd"
my_program

Artemis

#!/bin/bash
#PBS -P PANDORA
#PBS -l select=1:ncpus=1:mem=4GB
#PBS -l walltime=10:00:00

module load program

cd "$PBS_O_WORKDIR"
my_program

 

Storage

Gadi's /scratch is essentially "unlimited", but has an aggressive deletion policy for unused data. See NCI's Gadi /scratch file management resource for more details. You can request a /scratch quota increase by contacting help@nci.org.au. For more persistent storage, use the /g/data/<project> directory; quota increases there are handled by the Scheme Manager. Contact nci-sydney.scheme@sydney.edu.au for requests.
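To check current usage against quotas from a Gadi login node, the NCI utilities below can be used (a quick sketch; output formats may change, so see NCI's documentation, and note that PANDORA is a placeholder project code):

lquota                  # storage usage and quotas for your projects
nci_account -P PANDORA  # compute allocation and usage for a project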

NCI Gadi

/scratch/<NCIproject>
/g/data/<NCIproject>

Artemis

/scratch/<RDSproject>
/project/<RDSproject>

Connect to Sydney Research Data Storage (RDS)

NCI Gadi

sftp <unikey>@research-data-ext.sydney.edu.au:/rds/PRJ-<project>

Artemis

/rds/PRJ-<project>
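For example, an interactive transfer session from a Gadi login node might look like the following (UniKey, project code, and file names are placeholders):

sftp <unikey>@research-data-ext.sydney.edu.au:/rds/PRJ-<project>
sftp> put results.tar.gz      # copy a file from Gadi to RDS
sftp> get inputs/data.csv     # copy a file from RDS to Gadi
sftp> exit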

Walltime

All queues on Gadi have at most a 48-hour walltime, in contrast to 21 days on Artemis. This is primarily for easier resource sharing, shorter queue times, and prevention of wasted compute time (for example, if a node fails or a job is not behaving as expected). Tips for running jobs in a short-walltime environment:

  • Enable checkpointing in your software.

  • Break long running jobs into shorter chunks of work.

  • Make use of dependent compute jobs (-W depend=afterok:jobid); see the sketch after this list.

  • Use Nirin Cloud for “unlimited” walltime.
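A minimal sketch of chaining two jobs with a dependency (the script names are illustrative; qsub prints the ID of the submitted job, which can be captured with command substitution):

JOBID=$(qsub chunk1.pbs)                    # submit the first chunk and capture its job ID
qsub -W depend=afterok:${JOBID} chunk2.pbs  # chunk2 starts only if chunk1 exits successfully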

 

Internet Access

Compute nodes on Gadi do not have access to the internet. If a job needs to download or upload data:

  • Use copyq, Gadi's data-transfer queue, which runs on nodes with external network access; see the sketch after this list.

  • Use ARE jobs.

  • Use Nirin Cloud.
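As an illustration, a download step can be run as a separate copyq job ahead of the main compute job (a sketch only; the project code and URL are placeholders, and copyq jobs are limited to a single CPU):

#!/bin/bash
#PBS -P PANDORA
#PBS -q copyq
#PBS -l ncpus=1
#PBS -l mem=4GB
#PBS -l walltime=01:00:00
#PBS -l storage=gdata/PANDORA
#PBS -l wd

# copyq nodes have external network access; fetch inputs to /g/data
wget -P /g/data/PANDORA/inputs https://example.org/dataset.tar.gz

The main compute job can then be made dependent on this download job using -W depend=afterok, as above.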

 

Job Arrays

PBS job arrays (#PBS -J 1-10) are not permitted on Gadi, so embarrassingly parallel workflows need another means of parallel task execution. An approach using OpenMPI and the custom utility 'nci-parallel' is demonstrated in this example parallel job; a short sketch follows.
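A minimal sketch of the pattern, assuming a plain-text file cmds.txt with one shell command per line; the module version and mpirun options shown here should be checked against NCI's nci-parallel documentation:

#!/bin/bash
#PBS -P PANDORA
#PBS -q normal
#PBS -l ncpus=48
#PBS -l mem=190GB
#PBS -l walltime=02:00:00
#PBS -l storage=gdata/PANDORA
#PBS -l wd

# Load OpenMPI and the nci-parallel utility (check 'module avail nci-parallel')
module load openmpi
module load nci-parallel/1.0.0a

# Launch one worker per allocated CPU; workers pull commands from cmds.txt
mpirun -np "${PBS_NCPUS}" nci-parallel --input-file cmds.txt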

 

Alternative options to NCI Gadi

The following options are recommended for researchers who are unable to transition from Artemis high performance computing (HPC) to NCI Gadi (particularly researchers working in bioinformatics and neuroimaging). Reasons may include individual needs, such as legacy workflows and code developed on Artemis, or Gadi's walltime limits. Further information on the transition from Artemis to NCI Gadi is available on the staff intranet.

Service name and sign-up instructions:

  1. NCI Nirin: Access to the Nirin Cloud - NCI Help - Opus - NCI Confluence

  2. Pawsey (Pawsey Supercomputing Research Centre): Application Portal - Pawsey Supercomputing Centre

Depending on needs, the following options are also available on a case-by-case basis.
