Using job statistics to increase job performance and reduce queueing time
You may wonder why some jobs start immediately while others wait in the queue for hours or days, even if the job is quite simple. If you notice your job has been queueing for a while, you may want to adjust the requested resources to reduce queueing time and avoid wasting resources while the job runs. Below, we outline two useful tools for checking the resource usage of previous jobs.
`jobstats` command line tool
One useful tool is the command line tool `jobstats`, which in addition to displaying a list of your last 25 completed jobs by default, will also display the job efficiency - this is the overall CPU usage displayed as a percentage. Also see this page on our docs site for more information.
Interpreting `jobstats` output
First, it's important to understand the columns of the output from `jobstats`:
- `JOB ID [TASK]` - The unique job ID. For array jobs, the array job task ID is also shown.
- `NAME` - The job's name (set with the `-N [NAME]` option) or the job script name if this option was not set.
- `SUBMITTED` - Date and time when the job was submitted.
- `STARTED` - Date and time when the job started.
- `ENDED` - Date and time when the job ended.
- `TIME REQ` - Job execution time limit requested.
- `DURATION` - Actual job runtime.
- `MEM R` - Amount of memory requested.
- `MEM U` - Amount of memory the job actually used during execution.
- `CORES` - Number of cores requested.
- `GPU` - Number of GPUs requested (GPU jobs only).
- `QUEUE` - Queue name.
- `HOST` - Execution host (node).
- `STATUS` - Job exit status.
- `EFF` - Job efficiency (CPU jobs only).
Now that we know what each column means, let's look at some jobs:
$ jobstats -n 5 -s -l
---------------------------------------------------------------------------------------------------------------------------------------------
LAST 5 JOBS FOR USER abc123
---------------------------------------------------------------------------------------------------------------------------------------------
| JOB ID [TASK] | NAME | ENDED | TIME REQ | DURATION | MEM R | MEM U | CORES | GPU | QUEUE | HOST | STATUS | EFF |
+---------------+------------+-------------------+-----------+-----------+-------+---------+-------+-----+---------+--------+--------+------+
| 2391231 | job_1 | 06/06/22 19:12:48 | 240:00:00 | 120:15:55 | 200G | 1.95G | 1 | - | all.q | ddy58 | 0 | 93% |
| 2391232 | job_2 | 07/06/22 10:32:23 | 01:00:00 | 0:20:01 | 10G | 8.19G | 10 | - | short.q | ddy119 | 0 | 9% |
| 2391233 | job_3 | 07/06/22 16:42:32 | 240:00:00 | 0:13:00 | 50G | 1.01G | 5 | - | all.q | ddy58 | 0 | 19% |
| 2391234 | job_4 | 07/06/22 17:12:02 | 01:00:00 | 0:40:04 | 48G | 42.07G | 48 | - | short.q | ddy116 | 0 | 92% |
| 2391235 | job_5 | 09/06/22 09:19:21 | 01:00:00 | 0:08:01 | 60G | 3.01G | 8 | 1 | short.q | sbg4 | 0 | ~5% |
---------------------------------------------------------------------------------------------------------------------------------------------
In this example, user abc123 has requested the job statistics for their latest 5 (`-n 5`) jobs which finished successfully (`-s`), displaying the long output (`-l`). Let's analyse the output job by job and see if we can improve anything.
Job #2391231
| JOB ID [TASK] | NAME | ENDED | TIME REQ | DURATION | MEM R | MEM U | CORES | GPU | QUEUE | HOST | STATUS | EFF |
+---------------+------------+-------------------+-----------+-----------+-------+---------+-------+-----+---------+--------+--------+------+
| 2391231 | job_1 | 06/06/22 19:12:48 | 240:00:00 | 120:15:55 | 200G | 1.95G | 1 | - | all.q | ddy58 | 0 | 93% |
As we can see from the output, this job ran for approximately 120 hours at 93% efficiency, which is a good result. However, when we look at the memory requested and memory used columns, the values differ substantially: the job requested 200G of memory but used only around 2G. Next time we run the same or a similar job, we can reduce the memory request to 3G, which includes a small amount of headroom (1G) and would be considered acceptable.
With the smaller request, the job will likely start more quickly because the scheduler only needs to find 3G of free memory rather than 200G, which may not be currently available. Most nodes on Apocrita have around 300G of memory, so a job requesting over half of the total memory available in a single node may queue for a while if the nodes are busy running other jobs.
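As a rough sketch of what the resubmission might look like (using the Grid Engine directives in common use on Apocrita; the command name below is hypothetical, and the exact memory request syntax should be checked against our docs):

```bash
#!/bin/bash
#$ -cwd              # run from the current working directory
#$ -l h_rt=240:0:0   # runtime request unchanged
#$ -l h_vmem=3G      # ~2G observed usage plus ~1G headroom, instead of 200G
                     # (no -pe request: a serial job gets a single core by default)

./run_job_1          # hypothetical command standing in for the real workload
```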
Job #2391232
| JOB ID [TASK] | NAME | ENDED | TIME REQ | DURATION | MEM R | MEM U | CORES | GPU | QUEUE | HOST | STATUS | EFF |
+---------------+------------+-------------------+-----------+-----------+-------+---------+-------+-----+---------+--------+--------+------+
| 2391232 | job_2 | 07/06/22 10:32:23 | 01:00:00 | 0:20:01 | 10G | 8.19G | 10 | - | short.q | ddy119 | 0 | 9% |
This job utilised memory well, as 8.19G of the requested 10G was used. However, the efficiency column shows a very low value. The most probable reason is that the job ran a single thread only, so 9 of the 10 requested cores were idle during execution. The solution here is to reduce the core request to 1 to prevent idle cores, which we consider wasteful; if your program does support multiple cores, ensure the appropriate threading settings have been configured. As with the previous job, requesting fewer cores could reduce queueing time, because the scheduler needs to reserve fewer cores before the job can be dispatched. When the cluster is busy running smaller jobs, larger jobs will likely queue for longer until resources become available.
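If the program genuinely supports multiple threads, a sketch like the following keeps the 10-core request but makes sure the thread count matches the allocated cores (this assumes an OpenMP-style program; other software will have its own thread-count option, and the command name is hypothetical). Otherwise, simply drop the parallel environment request and run on a single core.

```bash
#!/bin/bash
#$ -cwd
#$ -pe smp 10        # keep 10 cores only if the program can really use them
#$ -l h_rt=1:0:0
#$ -l h_vmem=1G      # assuming h_vmem is requested per core: 10 x 1G = 10G in total

# Grid Engine exports the number of allocated slots as $NSLOTS;
# tell the program to use exactly that many threads.
export OMP_NUM_THREADS=$NSLOTS
./run_job_2          # hypothetical multi-threaded command
```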
Job #2391233
| JOB ID [TASK] | NAME | ENDED | TIME REQ | DURATION | MEM R | MEM U | CORES | GPU | QUEUE | HOST | STATUS | EFF |
+---------------+------------+-------------------+-----------+-----------+-------+---------+-------+-----+---------+--------+--------+------+
| 2391233 | job_3 | 07/06/22 16:42:32 | 240:00:00 | 0:13:00 | 50G | 1.01G | 5 | - | all.q | ddy58 | 0 | 19% |
This job is a combination of the first and second cases: only a single core was used (out of the 5 requested), and only 1G of memory was used despite 50G being requested. To run this job (or a similar one) again, the memory and core requests should be reduced to 2G and 1 core respectively. One extra suggestion here is to look at the requested time: the job requested 10 days (240 hours) but completed in 13 minutes. Reducing the runtime request to 1 hour makes the job eligible for the high priority short queue; using this queue, your job will likely run shortly after submission, even when the cluster has a large number of jobs waiting.
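A resubmission sketch for this case (again with a hypothetical command name) trims all three requests at once, which also makes the job eligible for the short queue:

```bash
#!/bin/bash
#$ -cwd
#$ -l h_rt=1:0:0    # 1 hour instead of 240 hours: eligible for the short queue
#$ -l h_vmem=2G     # ~1G observed usage plus headroom, instead of 50G
                    # (no -pe request: a single core is the default)

./run_job_3         # hypothetical command standing in for the real workload
```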
Job #2391234
| JOB ID [TASK] | NAME | ENDED | TIME REQ | DURATION | MEM R | MEM U | CORES | GPU | QUEUE | HOST | STATUS | EFF |
+---------------+------------+-------------------+-----------+-----------+-------+---------+-------+-----+---------+--------+--------+------+
| 2391234 | job_4 | 07/06/22 17:12:02 | 01:00:00 | 0:40:04 | 48G | 42.07G | 48 | - | short.q | ddy116 | 0 | 92% |
This job is a great example of one that has run perfectly. You can see that almost all the memory requested was used and the efficiency is close to 100%, which means that all requested cores were used correctly.
Job #2391235
| JOB ID [TASK] | NAME | ENDED | TIME REQ | DURATION | MEM R | MEM U | CORES | GPU | QUEUE | HOST | STATUS | EFF |
+---------------+------------+-------------------+-----------+-----------+-------+---------+-------+-----+---------+--------+--------+------+
| 2391235 | job_5 | 09/06/22 09:19:21 | 01:00:00 | 0:08:01 | 60G | 3.01G | 8 | 1 | short.q | sbg4 | 0 | ~5% |
In this example job, you can see that only 3G of memory was used, although 60G was requested. Additionally, the efficiency is only 5%, which would normally indicate that resources were not used correctly; however, as this is a GPU job, we can safely ignore the CPU efficiency. The `jobstats` utility cannot currently calculate GPU efficiency, but this functionality may be added in the future. In the meantime, please use other utilities to calculate GPU efficiency. You can view historical GPU usage on our stats site, and you can see the current GPU activity on a node by running `gpu-usage` or `nvidia-smi`.
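For example, one way to watch GPU activity from the command line is to poll `nvidia-smi` while the job is running (the query below is standard `nvidia-smi` usage rather than anything specific to our tooling):

```bash
# Print GPU utilisation and memory usage every 5 seconds
nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used,memory.total \
           --format=csv -l 5
```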
Web statistics page
Another tool is the "View your recent jobs" page on our stats site, which displays the resources consumed by completed jobs. While many will find `jobstats` an ideal tool for their workflow, this page could be convenient if you prefer interacting with a web browser instead of the command line. For technical reasons, this page can only show a limited number of results, which may vary depending on how many jobs have run on the cluster in total, including jobs by other users.
Tuning your job
Now that we are familiar with common mistakes, we can tune our jobs to help reduce queueing times and achieve maximum performance. To ensure your jobs utilise resources correctly, we recommend the following:
- Reduce the memory request to just above the amount the job consistently consumes (a small overhead proportional to the job size is fine, e.g. 10-20% above the typical memory consumption).
- Request one core if your job does not support multi-threading.
- If your job runs for only a short time, reduce the job running time request to 1 hour to use the high priority short queue.
If you are not sure of the resources required for your job, we recommend running a series of smaller test jobs, and increasing the requested resources as appropriate, until you find the optimal settings.
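As an illustration only (the module and command names below are hypothetical), a small test job might start with modest requests and then be reviewed with `jobstats` once it completes:

```bash
#!/bin/bash
#$ -cwd              # run from the current working directory
#$ -j y              # merge stdout and stderr into one output file
#$ -l h_vmem=2G      # modest memory request to start with
#$ -l h_rt=1:0:0     # 1 hour keeps the job eligible for the short queue
                     # (single core by default; add a -pe request only if the
                     #  program is known to use multiple threads)

module load my_app   # hypothetical module name
./my_analysis in.dat # hypothetical command
```

After the job finishes, running `jobstats -n 1` shows the MEM U, DURATION and EFF columns for it, which tells you whether the next run needs more memory, more cores or more time.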
Title image: Isaac Smith on Unsplash