
Artemis Hardware

Artemis is a 7588-core InfiniBand-connected cluster made up of the node types listed below. Some of these nodes are reserved for research groups that won a "Grand Challenge" node allocation, or that purchased compute nodes for their research group or school and have them hosted in Artemis. The table below summarises the resources available in Artemis.


| Job type | Number of nodes | Available cores | Maximum requestable cores per node (or chunk) | RAM requestable per node (GB) |
| --- | --- | --- | --- | --- |
| Up to 12 hour jobs | 1 | 24 | 4 | 16 |
| Up to 24 hour jobs | 24 | 576 | 24 | 123 |
| Up to 1 week jobs | 41 | 1968 | 32 | 123 |
| Up to 3 week jobs | 49 | 1568 | 32 | 123 |
| GPU jobs | 7 | 252, plus 28 NVIDIA V100 GPUs (16 GB each) | 36 cores, 4 GPUs | 185 |
| High memory jobs | 3 | 192 | 64 | 6100 |
| Data transfer jobs¹ | 2 | 48 | 2 | 16 |
| Interactive jobs² | 3 (1 via PBS Pro, 2 via "NoMachine") | 24 | 4 | 16 |
| "Scavenger" jobs³ | 61 | 1800 | 24 | 123 |

¹ Data transfer jobs are for I/O workloads and data transfer only. Compute jobs submitted to these nodes will be terminated.

² Interactive jobs give you interactive access to a compute node. The PBS Pro node behaves exactly like a standard compute node, while the "NoMachine" nodes are for running programs with GUIs, with graphics processing done server-side.

3 "Scavenger" jobs are low-priority jobs that run using idle resources in Artemis Grand Challenge scheme winners allocations. All Artemis users are welcome to submit scavenger jobs, however they will be terminated before finishing if a Grand Challenge allocation member submits work to their allocation and requires resources being used by a scavenger job.

Core allocations

Some Artemis nodes are reserved for specific groups. These nodes were either granted to researchers who won dedicated access to compute nodes through the Artemis Grand Challenge scheme, or are owned by groups who have chosen to have their own compute nodes hosted in Artemis. As of March 2018, 1800 cores are allocated to Artemis Grand Challenge scheme winners, and 1136 cores and 80 V100 GPUs are owned by groups hosting compute resources in Artemis.
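To make the scavenger mechanism concrete: a scavenger submission (footnote ³) looks like an ordinary batch job that simply targets the scavenger queue, with the understanding that it may be killed whenever a Grand Challenge allocation owner reclaims the idle cores it is using. The queue name "scavenger", the project code and the program below are assumptions for illustration only; they are not stated on this page.

```
#!/bin/bash
# Scavenger job sketch. Assumptions: the queue name "scavenger", the project
# code "MyProject" and the program "./my_restartable_program" are placeholders.
# Because the job can be terminated at any time when a Grand Challenge
# allocation reclaims its idle cores, the workload should checkpoint and
# restart rather than assume it will run to completion.
#PBS -P MyProject
#PBS -q scavenger
#PBS -l select=1:ncpus=24:mem=123gb
#PBS -l walltime=12:00:00

cd "$PBS_O_WORKDIR"
./my_restartable_program
```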