In response to a coordinated security attack on HPC sites worldwide back in
2020, we needed to make some changes to enforce stronger authentication
security. In this article, we begin by providing some useful background on
key-based authentication, and then document the process for accessing the
cluster.
As a result of the large-scale shift to remote working due to the
COVID-19 pandemic, we have been asked various questions relating to
computational research, which we'll try to address below. We've seen an
increase in the number of new account requests for the HPC service, and we
realise there will be quite a few users wishing to run workloads on the cluster
for the first time. Fortunately, thanks to the design of the HPC cluster
service, for many of you the workflow will remain the same as when you were
based on campus.
A common first program to write in a new language is a "Hello world" example
where we print a simple line of output. In this tutorial we first look at
examples written in C, C++ and Fortran. To run the examples we'll learn
about interactive sessions on compute nodes, modules and compiling source
code. We'll also look at examples in MATLAB, Python and R. For these
we'll see how to use modules to select suitable interpreters.
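To give a flavour of the interpreted-language examples, here is a minimal
Python version of the program. The module name in the comment is an
assumption for illustration; the tutorial itself covers how to select a
suitable interpreter:

```python
# hello.py - a minimal "Hello world" in Python.
# On the cluster you would first select an interpreter via the module
# system, e.g. `module load python` (module name is an assumption),
# and then run:  python hello.py

def greeting():
    """Return the message to print, kept in a function for easy reuse."""
    return "Hello world"

if __name__ == "__main__":
    print(greeting())
```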
Many people rely on compilers, for languages such as C, C++ and Fortran,
to create executable programs from source code. Just like our source code,
compilers themselves may have bugs. In this post we look at common forms
of compiler bug, with examples, and discuss what we can do when our work is
affected by such an issue.
This article presents a selection of useful tips for running successful and
well-performing jobs on the QMUL Apocrita cluster.
In the ITS Research team, we spend quite a bit of time monitoring the Apocrita
cluster and checking that jobs are running correctly, to ensure that this
valuable resource is being used effectively. If we notice a problem with your
job, and
think we can help, we might send you an email with some recommendations on how
your job can run more effectively. If you receive such an email, please don't
be offended! We realise there are users with a range of experience, and the
purpose of this post is to point out some ways to ensure you get your results
as quickly and correctly as possible, and to ease the learning curve a little
bit.
Compression tools can significantly reduce the amount of disk space consumed by
your data. In this article, we will look at the effectiveness of some
compression tools on real-world data sets, make some recommendations, and
perhaps persuade you that compression is worth the effort.
We are simplifying the way that multi-node parallel jobs are run on the
cluster.
Currently, users wishing to run multi-node MPI jobs on the public queues
must choose in advance whether to run on the nxv parallel nodes or the
sdv parallel nodes, and configure the job accordingly for the number of
cores on that type of node.
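For context, the current approach can be sketched as a job script along the
following lines. This is an illustrative sketch only: the parallel
environment name, the resource selector, the module name and the core count
are assumptions rather than documented Apocrita syntax, and the core count
would need to match a multiple of the chosen node type's cores:

```
#!/bin/bash
#$ -cwd
#$ -pe parallel 96        # total cores requested; must be a multiple of the
                          # chosen node type's core count (value illustrative)
#$ -l infiniband=sdv      # pin the job to one node type (resource name assumed)

module load intelmpi      # module name illustrative
mpirun -np $NSLOTS ./my_mpi_program
```

Under the simplified scheme described above, this per-node-type configuration
step is what goes away.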