Attention: Confluence is not suitable for the storage of highly confidential data. Please ensure that any data classified as Highly Protected is stored using a more secure platform.
If you have any questions, please refer to the University's data classification guide or contact ict.askcyber@sydney.edu.au

Artemis Storage

Artemis uses a Lustre file system that is accessible from any compute node and provides excellent I/O performance (especially for large files). The Lustre file system contains the following storage spaces:

  • Home (/home/abcd1234, 10 GB per person)
  • Project (/project/PANDORA, 1 TB quota per project)
  • Scratch (/scratch/PANDORA, no quotas applied)

Remember to replace abcd1234 with your UniKey and PANDORA with your abbreviated project name.

Data on Artemis is not backed up (apart from the limited /home provision described below), so we cannot offer any ability to retrieve lost data. For long-term, backed-up storage, use the Research Data Store.

/home

Each researcher has their own home directory within Artemis. This can be used to store program code, batch scripts and other files. Please note that the home directory has limited space and is intended for storing code and configuration information. Data on /home is backed up in case of system failure (that is, if the Lustre filesystem fails), but files cannot be recovered if they are accidentally deleted. Important data should be stored in the Research Data Store.

To check your /home usage, type pquota.

/project

The default allocation is 1 TB of project storage, shared between all members of the project. Please note that no additional space will be provided. If you require more disk space for job output, use /scratch instead. Since /project (like /scratch and /home) is not backed up, please back up all important data to the Research Data Store regularly.

To check your /project usage, type pquota.

/scratch

/scratch is a 490 TB pool of shared storage for data that needs to be saved during a job, but can be deleted once the job completes. As this directory is shared by all users, please do not leave your data in this area for longer than absolutely necessary. Transfer important files to /project if you need them after your job finishes. /scratch is not backed up, so please back up important data to the Research Data Store.
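The workflow described above (write bulky output to /scratch during the job, keep only what you need in /project, then clean up) can be sketched as follows. The temporary directories are stand-ins, since the real Artemis paths such as /scratch/PANDORA only exist on the cluster:

```shell
# Hedged sketch of the /scratch -> /project workflow.
SCRATCH_DIR=$(mktemp -d)   # stands in for /scratch/PANDORA/myjob
PROJECT_DIR=$(mktemp -d)   # stands in for /project/PANDORA

# 1. The job writes large intermediate output to scratch space.
dd if=/dev/zero of="$SCRATCH_DIR/intermediate.dat" bs=1024 count=10 2>/dev/null
echo "final summary" > "$SCRATCH_DIR/summary.txt"

# 2. After the job, keep only the files you still need in project space.
cp "$SCRATCH_DIR/summary.txt" "$PROJECT_DIR/"

# 3. Remove the scratch data promptly, since the space is shared.
rm -rf "$SCRATCH_DIR"
```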

To see how much /scratch space is available, type:

df -h /scratch

To see how much /scratch space your project is using, type:

lfs quota -hg RDS-ICT-PANDORA-RW /scratch

Remember to replace RDS-ICT-PANDORA-RW with your project name, as described in the Projects and UniKeys section.

For details about disk usage by specific sub-directories, type:

du -sh /scratch/PANDORA/directory

where the last argument is the directory you want to know the size of.
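To survey several sub-directories at once, du can be combined with sort to rank them by size. The demo directory below is hypothetical; on Artemis you would point du at /scratch/PANDORA/* instead:

```shell
# Hedged sketch: rank sub-directories by disk usage, largest last.
TOP=$(mktemp -d)           # stands in for /scratch/PANDORA
mkdir -p "$TOP/big" "$TOP/small"
dd if=/dev/zero of="$TOP/big/data.bin" bs=1024 count=100 2>/dev/null
echo "note" > "$TOP/small/note.txt"

# -s summarises each directory, -h prints human-readable sizes,
# and sort -h orders those human-readable sizes numerically.
du -sh "$TOP"/* | sort -h
```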


Storage management procedure

We automatically remove data on Artemis belonging to “inactive” projects, where we define “inactive” using the following criteria:

  • If a project is inactive for three consecutive months, where “inactive” is defined as submitting less than 1 CPU hour of work per month, we will automatically remove all data in /scratch belonging to the inactive project.
  • If a project is inactive for six consecutive months, we will automatically remove all data in /project belonging to that project as well.
  • On 1 February 2018, we will begin measuring Artemis project inactivity. If your project has been inactive (see the descriptions in the above two points), you have until 30 April 2018 before your data will be automatically deleted from /scratch. Similarly, you will have until 31 July 2018 before your data will be automatically deleted from /project.

We also reserve the right to perform one-off cleanups of Artemis /scratch space if we detect large amounts of “old” data being stored there. We have performed such a cleanup once, on 15 December 2017, when we defined “old” data as any data dating from before 1 August 2017. We will provide a minimum of two weeks' notice via the Artemis Users mailing list before performing automated removal of old data.
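Before a cleanup deadline, you can preview which of your files would count as old using find with a modification-time filter. The 90-day threshold and the demo directory below are illustrative only, not the criteria this page defines:

```shell
# Hedged sketch: list files not modified in the last 90 days.
DIR=$(mktemp -d)           # stands in for /scratch/PANDORA
touch -d "120 days ago" "$DIR/old_result.dat"   # simulate an old file (GNU touch)
touch "$DIR/fresh_result.dat"                   # a recently modified file

# -mtime +90 matches files last modified more than 90 days ago.
find "$DIR" -type f -mtime +90
```

Only old_result.dat is listed; piping the same find command to rm (or using -delete) is the usual next step, but check the listing carefully first.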
