We have a small HPC cluster for analysis that consists of approximately 30 compute nodes and includes:
- 1,400 cores
- 6,500 GB of RAM
- 70 TB of scratch working space
- 150 TB of working storage
- 1,500 TB of project storage
The cluster is designed for two types of jobs: batch and interactive.
- Batch jobs
- Submitted via a GUI, where the jobs are known in advance and the data resides in project storage. Data is migrated to the cluster, jobs run automatically, results are copied back to project storage, and the cluster is cleaned up automatically.
- Submitted via Slurm to run on a group of nodes. Data is copied to the cluster manually, jobs are run manually, and results are copied back to project storage manually, before manual cleanup.
- Interactive jobs
- A Slurm job that requests resources and is then assigned to run a Singularity container with a Jupyter notebook on a node.
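As a rough sketch of the manual Slurm batch workflow described above, the steps might look like the following. The partition, paths, resource sizes, and program names here are assumptions for illustration only; check with the help desk for the cluster's actual values.

```shell
#!/bin/bash
# Hypothetical Slurm batch script illustrating the manual workflow:
# copy data in, run the job, copy results back, clean up.
# All paths and resource requests below are examples, not cluster defaults.

#SBATCH --job-name=example-analysis
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --mem=64G
#SBATCH --time=04:00:00

# 1. Copy input data from project storage to scratch (hypothetical paths).
SCRATCH_DIR=/scratch/$USER/example-analysis
mkdir -p "$SCRATCH_DIR"
cp -r /project/mylab/input "$SCRATCH_DIR"/input

# 2. Run the analysis ("analyze" is a placeholder for your own program).
srun ./analyze "$SCRATCH_DIR"/input -o "$SCRATCH_DIR"/results

# 3. Copy results back to project storage, then clean up scratch manually.
cp -r "$SCRATCH_DIR"/results /project/mylab/results
rm -rf "$SCRATCH_DIR"
```

Such a script would be submitted with `sbatch script.sh` and monitored with `squeue -u $USER`.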
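For the interactive case, a session might be started along these lines. The container path, resource sizes, port, and hostnames are assumptions for illustration; the actual container images and connection procedure on this cluster may differ.

```shell
# Hypothetical sketch of an interactive job: request resources from Slurm,
# then launch a Jupyter notebook inside a Singularity container.

# 1. Request an interactive allocation (example: 1 node, 4 tasks, 16 GB, 2 h).
salloc --nodes=1 --ntasks=4 --mem=16G --time=02:00:00

# 2. On the allocated node, start Jupyter inside a container
#    ("/path/to/jupyter.sif" is a placeholder image path).
singularity exec /path/to/jupyter.sif \
    jupyter notebook --no-browser --ip=0.0.0.0 --port=8888

# 3. From your workstation, tunnel the notebook port through the login node,
#    e.g. (placeholder hostnames):
# ssh -L 8888:<compute-node>:8888 user@cluster-login
# then open http://localhost:8888 in your browser.
```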
To have your account enabled for cluster access, submit a Help Desk ticket here: https://helpdesk.hli.ubc.ca
We will be glad to help you with your projects and with running them on the cluster.
