Welcome to the QMUL HPC blog

Short queue

In addition to the primary queue, there is now a queue designed to minimise waiting times for short jobs and interactive sessions. This was added in response to users who asked to be able to obtain qlogin sessions quickly for tests and debugging. The short queue runs on a wider selection of nodes and is selected automatically when your runtime request is 1 hour or less.
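As a sketch, assuming the Grid Engine syntax used on Apocrita, requesting a runtime of 1 hour or less is what routes a job to the short queue (the script contents below are placeholders):

```
# Interactive session with a 1 hour runtime limit -- eligible for the
# short queue because h_rt is 1 hour or less.
qlogin -l h_rt=1:0:0

# Equivalent batch job script: the '#$' lines are scheduler resource
# requests, not executable shell.
#$ -cwd
#$ -l h_rt=1:0:0
./my_quick_test.sh   # placeholder for your own short job
```

Requests above 1 hour go to the primary queue as before; no queue needs to be named explicitly.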

Deprecated modules

We removed some problematic module files. Please check your job scripts for use of these modules:

  • Python: due to a number of issues with the Python module installs, versions older than 2.7.14 and 3.6.3 are being removed from Apocrita (python/2.7.13, python/2.7.13-1, python/2.7.13-3, python/3.6.1, python/3.6.2, python/3.6.2-2).
  • Java: java/1.8.0_121-oracle causes problems with mass thread spawning on the cluster and will be removed. java/1.8.0_152-oracle remains the default version loaded.
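A simple grep over your job scripts will flag any that still load one of the removed modules. A minimal sketch; the `demo_scripts` directory and its contents are stand-ins for your own scripts:

```shell
# Example job script to scan (stand-in for your real scripts).
mkdir -p demo_scripts
cat > demo_scripts/job.sh <<'EOF'
#!/bin/bash
module load python/2.7.13
python myscript.py
EOF

# Flag any script still loading one of the removed module versions.
grep -rnE 'module load (python/(2\.7\.13(-[13])?|3\.6\.[12](-2)?)|java/1\.8\.0_121-oracle)' demo_scripts
```

Any matching line should be updated to load a current module version instead.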

Home and Group Directories

During the summer, home directories were migrated to the new storage platform. As a result, quotas have grown slightly: the underlying block size has increased, so the same files now occupy a little more space.

The qmquota command shows how much space you are using; note that quotas are applied to both size and the number of files. Each research group gets 1TB of storage space on the cluster free of charge; if your group does not yet have an allocation, please contact us and we will organise one.
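Because quotas count both bytes and files, it is worth checking both numbers. qmquota reports the authoritative figures on the cluster; as a rough stand-in, standard tools give similar information. A sketch, using a small demo directory in place of your home directory:

```shell
# Create a small demo directory (stand-in for your home directory).
mkdir -p quota_demo
printf 'hello\n' > quota_demo/a.txt
printf 'world\n' > quota_demo/b.txt

# Total size on disk. Quotas count allocated blocks, which is why the
# larger block size on the new platform affects reported usage.
du -sh quota_demo

# Number of files -- quotas also limit the file count.
find quota_demo -type f | wc -l
```

Many small files can exhaust the file-count limit long before the size limit, so both figures matter.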

Tier 2 Announcement

QMUL has access to powerful Tier 2 (formerly known as Regional) HPC resources, predominantly for EPSRC researchers. If you run multi-node parallel code (using MPI, for example), you will likely benefit from using the Tier 2 clusters.

Deprecated openmpi 2.0.2-gcc module

We identified a problem with the openmpi/2.0.2-gcc module and have removed it: the correct network interface was not being used for MPI communication between nodes, resulting in potentially much slower communication and longer job runtimes.

Programs should be rebuilt against the other available openmpi modules, which correctly select the Infiniband interconnect as the default for communication. Recent users of this module have been contacted directly.
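A rebuild is typically a matter of swapping modules and recompiling. The sketch below assumes the usual environment-modules workflow; the module names shown are examples only, so run `module avail openmpi` on Apocrita to see the versions actually installed:

```
# Remove the broken module if it is still in your environment.
module unload openmpi/2.0.2-gcc

# Load a current openmpi module (example name -- check 'module avail').
module load openmpi

# Recompile against the newly loaded MPI libraries.
mpicc -o my_program my_program.c   # my_program.c is a placeholder
```

Remember to update any `module load` lines in your job scripts to match, otherwise jobs will fail to find the removed module at runtime.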

1PB added storage

We recently added an additional petabyte of storage. To benefit from the improved performance, all files need to be moved to the new storage.