Bunya provides several storage spaces where users can keep their data and software.
The spaces below are individual spaces. This means that, by default, they are only accessible by the user to whom the space belongs. These spaces should NOT be shared with any other users. If a shared space is required, please see the section on shared spaces further below.
/home/username
Each user has a personal home directory at /home/username. This is typically where users keep their own software installations (for example software built with make and make install), etc.
/scratch/user/username
Each user also has a personal scratch directory at /scratch/user/username in /scratch/user.
Users can use the command rquota on Bunya to check their current quotas and usage. It provides quotas and usage for /home, /scratch/user and /scratch/project (more information below) they have access to.
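For example, on a Bunya login node:
# show current usage and quotas for /home, /scratch/user and any /scratch/project you have access to
rquota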
$TMPDIR
$TMPDIR is created automatically for each Slurm job and is automatically deleted once the Slurm job finishes. It is the ideal place for temporary files of jobs.
$TMPDIR provides up to 10TB of temporary space for each user during jobs.
$TMPDIR is pre-set. Do not create your own $TMPDIR or overwrite it with something else in your scripts.
$TMPDIR does not count towards user quotas in /home or /scratch/user or project quotas in /scratch/project.
$TMPDIR is not /scratch/user.
$TMPDIR is not /tmp.
$TMPDIR is recommended if calculations produce a very large amount of (often very small) files.
Use $TMPDIR and NOT /tmp.
To use $TMPDIR for software that does not allow you to set a temporary/scratch directory, change to $TMPDIR (cd $TMPDIR), then copy all required input files to $TMPDIR or use the full path to point to input files in /scratch, /home or /QRISdata (for /QRISdata restrictions apply, see below). After the calculation, copy all output needed to /scratch or /QRISdata (see below for restrictions on /QRISdata) and make sure to tar and/or zip output if required.
cd $TMPDIR
cp input-files .
srun ...
cp output-files /scratch/.../. (or /QRISdata/.../.)
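As a fuller illustration, a minimal Slurm batch script sketch following this pattern. The resource requests, program name and file paths are placeholders, not Bunya-specific requirements; adjust them to your own job:
#!/bin/bash
#SBATCH --job-name=tmpdir-example
#SBATCH --ntasks=1
#SBATCH --mem=8G
#SBATCH --time=01:00:00

# $TMPDIR is pre-set by Slurm for this job; do not overwrite it
cd $TMPDIR

# stage input from /scratch (placeholder path)
cp /scratch/user/username/myjob/input.dat .

# run the calculation so temporary files are written to $TMPDIR
srun /scratch/user/username/myjob/my_program input.dat > output.log

# copy results back before the job ends, as $TMPDIR is deleted when the job finishes
cp output.log /scratch/user/username/myjob/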
/scratch/project
Scratch projects are shared spaces in /scratch that provide more space than the individual spaces and are shared by the members of a group.
Scratch projects require an access group. This can be an RDM storage record access group (QNNNN) or a specific access group created by RCC for the scratch project. Users, not RCC, manage the access groups, either via the RDM portal for QNNNN groups or via the QRIScloud Portal.
For RDM storage record access groups (QNNNN), the RDM storage record owner adds users as collaborators to the RDM storage record via the RDM portal. For QRIScloud access groups, the group owner or administrator needs to go to the QRIScloud Portal and click on Account to log in. Then they need to go to Services Dashboard, look under Groups for the respective Scratch Project group and click on the link. This page outlines how to add and remove users from the access group for the project.
New users added either way will have to wait for this to take effect, as permissions set in the RDM portal need to be propagated to Bunya. In both cases, users should start a new login session on Bunya to have the new groups available in their environment. Users can check which groups they belong to by typing the command groups on Bunya.
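For example, after a fresh login:
# list the groups available in your current session; a QNNNN or scratch project access group
# only appears here once the membership change has propagated to Bunya
groups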
Users in the access group who also have access to Bunya will have access to the scratch project on Bunya. They have read-write access to the scratch project directory. However, directories and files created by other scratch project members are only readable (and executable) for them, not writable: files created by others cannot be deleted or changed, and directories created by others cannot be written to.
In the example below:
user-2 can change into directory-1, but they will not be able to write to directory-1 or delete files in directory-1
user-1 can change into directory-2, and can write to directory-2 and delete files in directory-2
user-1 can delete or change file-1
user-1 and user-2 can delete or change file-2
user-1 can delete or change executable-1
user-1 and user-2 can delete or change executable-2
To make directory-1 writable for users other than user-1, user-1 needs to run chmod -R g+w directory-1, where -R means that this is run recursively for all subdirectories and files, and g+w adds write permissions to the group.
drwxr-sr-x. 2 user-1 Project_Access_Group 4K Feb 1 19:08 directory-1
drwxrwsr-x. 2 user-2 Project_Access_Group 4K Oct 9 2023 directory-2
-rw-r--r--. 1 user-1 Project_Access_Group 6M Mar 25 13:54 file-1
-rw-rw-r--. 1 user-2 Project_Access_Group 6M Mar 25 13:54 file-2
-rwxr-xr-x. 1 user-1 Project_Access_Group 6M Mar 8 14:01 executable-1
-rwxrwxr-x. 1 user-2 Project_Access_Group 6M Mar 8 14:01 executable-2
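For example, to make directory-1 from the listing above writable for the group, user-1 could run the following (the resulting listing is a sketch using the same hypothetical names):
# recursively add group write permission
chmod -R g+w directory-1
# directory-1 would then show group write permission:
# drwxrwsr-x. 2 user-1 Project_Access_Group 4K Feb 1 19:08 directory-1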
/QRISdata
The RDM User Guides provide a lot of information about RDM research records and RDM storage records and how to use and administer these.
RDM storage records for which users selected, when applying for the storage record, that the data should be available on HPC (this cannot be changed afterwards) are automatically available on Bunya in /QRISdata/QNNNN, where QNNNN is the storage record number and can be found as part of the short identifier for the RDM storage record.
/QRISdata/QNNNN are shared spaces with default quotas of 1TB and 1 million files. This can be increased by applying for more storage via the RDM portal.
Use ls /QRISdata/QNNNN/ (the / at the end is important) or cd /QRISdata/QNNNN to see the RDM storage record. Due to the automount, an RDM storage record only becomes visible once it is accessed.
Do not submit jobs (sbatch or salloc) from a directory in /QRISdata.
Do not continuously read from or write to /QRISdata during a calculation, so standard output should not be written to /QRISdata. A once-off read of input at the start and a once-off write at the end is permitted. However, the general data workflow should use /scratch for input and output of calculations.
Do not install or run software in /QRISdata, as accessing software is also continuously accessing /QRISdata, which is not permitted.
Do not unpack archives in /QRISdata; unpack these into a directory in /scratch.
Before copying lots of (small) files to /QRISdata, tar or archive these first, as lots of (small) files can cause problems (not just for you but also for others).
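For example, a sketch of archiving many small output files in /scratch before copying a single archive to a storage record (the paths and names are placeholders):
# create one archive in /scratch instead of copying many small files
cd /scratch/user/username/myjob
tar -czf results.tar.gz results/

# copy the single archive to the RDM storage record
cp results.tar.gz /QRISdata/QNNNN/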
How /QRISdata works
The /QRISdata filesystem provides access from Bunya to UQ RDM collections (as well as a smaller number of collections that predate UQ RDM).
The storage technology behind /QRISdata
consists of multiple layers of storage, and software that manages the copies of your data within those multiple layers. There are also active links to other caches at St Lucia campus that allow you to drag and drop your file onto the St Lucia R:\ drive and have it appear automatically at the remote computer centre that houses Bunya and the RDM Q collections.
| Layer | Purpose | Response Time |
|---|---|---|
| GPFS Cache | Used for intersite transfers and is mounted onto Bunya HPC | Immediate once mounted onto Bunya |
| Zero Watt Storage (ZWS) | Disk drives that operate like tapes; only powered on when required | < 1 minute to activate a read from off |
| Robotic Tape Silo | Deep archive copies | Can take several minutes to commence reading |
The hierarchical storage management (HSM) software will move files downwards when they are not in active use in the top layer. If a file is required but is not in the top layer, it will be recalled from ZWS or tape and copied into place on the GPFS Cache layer.
To check where a file currently resides, on a compute node via an onBunya or interactive batch job:
Use ls -salh FILEPATH
The output contains the size occupied on disk in the first column and the actual size in column 6.
#This one is in the GPFS cache layer (size on disk matches actual size)
[uquser@bun104 Q0837]$ ls -salh Training.tar
367M -rw-r--r--+ 1 uquser Q0837RW 367M Oct 30 2023 Training.tar
#This one is in the GPFS cache layer too but the size on disk is actually bigger because files occupy at least one block (512)
[uquser@bun104 Q0837]$ ls -salh Readme.md
512 -rw-rw----+ 1 Q0837 Q0837RW 1 Sep 13 2023 Readme.md
#This one is not in GPFS cache (zero on disk but the actual filesize is 1.7MB)
[uquser@bun104 Q0837]$ ls -salh .LDAUtype1.tgz
0 -rw-rw----+ 1 Q0837 Q0837RW 1.7M Dec 16 2019 .LDAUtype1.tgz
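The same size-on-disk information can be used to flag files that are likely offline and will need a recall. A minimal sketch, assuming GNU stat and a placeholder QNNNN path:
# list files whose allocated size on disk is zero but whose actual size is not
for f in /QRISdata/QNNNN/*; do
    if [ -f "$f" ] && [ "$(stat -c %b "$f")" -eq 0 ] && [ "$(stat -c %s "$f")" -gt 0 ]; then
        echo "offline (needs recall): $f"
    fi
done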
To recall a file from ZWS or tape into the GPFS cache, on a compute node via an onBunya or interactive batch job:
Use /usr/local/bin/recall_medici FILEPATH
Wildcards are also supported.
The recall_medici
command is also available on data.qriscloud.org.au if you don’t have access to Bunya.
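For example, to recall all matching archives in a storage record directory (the path is a placeholder):
# recall the matching files from ZWS or tape into the GPFS cache layer
/usr/local/bin/recall_medici /QRISdata/QNNNN/results/*.tar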
Between 20 and 40 minutes past the hour, at 08:00, 12:00, 16:00 and 22:00, access to the RDM collections from Bunya can be delayed while the service access controls are being updated. Access can appear non-responsive for a few minutes; it is best to wait 5 minutes and try again.