Queue resource limits
Queue | Max Walltime | Max Cores per Job | Max Cores per User | Memory per Node (GB) | Cores per Chunk | Number of Nodes | Fair Share Weighting |
---|---|---|---|---|---|---|---|
small* | 1 day | 24 | 128 | < 123 | 24 | 59 | 10 |
normal* | 7 days | 96 | 96 | < 123 | 32 | 49 | 10 |
large* | 21 days | 288 | 288 | < 123 | 32 | 61 | 10 |
highmem* | 21 days | 192 | 192 | 123 to 6144 | 64 | 3 | 50 |
gpu* | 7 days | 252 | 252 | < 185 | 36 (4 GPUs) | 7 | 50 |
dtq | 10 days | 2 | 16 | < 16 | 24 | 2 | 0 |
interactive | 4 hours | 4 | 4 | < 123 | 4 | 1 | 100 |
- *The small, normal, large, high memory, and GPU queues are all accessed via defaultQ; you cannot request these queues directly (see the example job script after these notes).
- The interactive queue is requested with `qsub -I` on the command line (see the interactive example below). You cannot request interactive access with `#PBS -q interactive` in a job script.
- The maximum number of jobs a user can have queued in defaultQ is 200.
- The maximum number of cores one user can use simultaneously is 600.
- Array jobs are limited to 1000 elements.
- Jobs in the small, normal, and large queues may request at most 20 GB of memory per core.
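As a minimal sketch of how these limits translate into a submission, the batch script below requests resources that fit within the small queue's limits via defaultQ. It assumes a PBS Pro-style scheduler; the project code `MyProject` and the executable `my_program` are placeholders, and the exact directives accepted on your system may differ.

```bash
#!/bin/bash
# Placeholder project code; replace with your own allocation.
#PBS -P MyProject
# Submit to defaultQ; the scheduler routes the job to small/normal/large/highmem/gpu.
#PBS -q defaultQ
# One chunk of 4 cores and 16 GB: within the small queue's 24-core limit and the 20 GB-per-core cap.
#PBS -l select=1:ncpus=4:mem=16GB
# 12 hours: within the small queue's 1-day walltime limit.
#PBS -l walltime=12:00:00

cd "$PBS_O_WORKDIR"
./my_program   # placeholder executable
```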
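Similarly, as a sketch only (again assuming PBS Pro syntax and a placeholder project code), an interactive session that stays within the interactive queue's limits could be requested with:

```bash
# Request an interactive shell: 4 cores, 8 GB memory, 4 hours
# (the interactive queue allows at most 4 cores and 4 hours walltime).
qsub -I -P MyProject -l select=1:ncpus=4:mem=8GB -l walltime=4:00:00
```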