# Migrating to a new research storage system
As the current 2PB Research Data Storage system on Apocrita reaches end of life this summer, we have procured a new storage system providing 5PB of capacity. This means faster, bigger and cheaper storage for you, the researcher! Read on to discover the benefits... but first, an important notice.
**Full cluster shutdown**

We will need to drain all nodes and shut down the cluster to complete the migration. This is planned for 18th July 2022.
To bring the new system live, we must ensure that all of the data on the current system is copied to the new system. We have done much of the groundwork already, but completing the migration will require shutting down the whole cluster for a few days, so that the storage can be cleanly unmounted and data integrity checks performed before the new system, containing the migrated data, is mounted.
Note that from 8th July, any queued jobs requesting the full 10 day runtime will not be scheduled before the shutdown, so this is one of the few occasions where requesting a shorter maximum runtime than the usual ten days will be beneficial. Jobs that haven't had the chance to run by 18th July won't be lost: they will be held until the cluster restarts. We try our best to avoid any form of service disruption, and this is the first full cluster shutdown in over two years, but in this case we require a completely idle and unmounted filesystem in order to complete this change.
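As a sketch, a job submission script can request a shorter maximum runtime via the `h_rt` resource; the 48 hour value and the `run_analysis.sh` workload below are purely illustrative:

```shell
#!/bin/bash
#$ -cwd
#$ -pe smp 1
#$ -l h_vmem=1G
# Request a 48 hour maximum runtime (illustrative) rather than the full
# 10 days, so the scheduler can fit the job in before the shutdown
#$ -l h_rt=48:0:0

./run_analysis.sh   # hypothetical workload
```

A job requesting `h_rt=240:0:0` (ten days) submitted after 8th July could not finish before the shutdown, so it would be held until the cluster restarts.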
## Migration of data
Much of the 1.8PB of data, comprising over 400 million files, has already been "pre-fetched" onto the new storage system, to give us a head start. Once the cluster is shut down, we will perform a final sync of the data to ensure that the copy on the new system is the latest. The cluster will be idle, and logins disabled during the shutdown, so we know the data won't be changing under our feet. We will then mount the new storage, and it will be available under the /data mount point as usual.
## Benefits of the new storage system
Our current research data storage system is approaching capacity just as it reaches the end of its support lifecycle. Storage use is hard to predict, so we're pleased that we didn't run out of space before now. The new system brings benefits for everyone:
**Storage cost reduction**

We are pleased to announce that the price of storage is now £75/TB per year, half the cost of the current storage. This will apply to new storage requests and anniversaries from 1st August 2022.
**Home directory quotas increased to 100GB**

When the migration is completed over the summer, we will be able to increase home directory quotas from 60GB to 100GB. While home directories are not designed for long-term storage of your research data (your research lab should have a project folder for that), this will make life easier for those who regularly use conda, pip or R environments.
We have changed the block size on the new storage for performance reasons. Files on a file system are comprised of blocks of a defined size: a file of 1KB on a system with a 4KB block size will occupy one block, and actually consume 4KB on disk. On the new storage, files are rounded up to the next 16KB to fill a file fragment (or sub-block), versus 128KB on the current long-term storage. As a result, a typical set of research data may occupy 1-2% less space on the new storage system.
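The rounding described above can be sketched in a few lines of Python (sizes in bytes; the helper name is ours, not part of any filesystem API):

```python
def on_disk_size(file_size: int, sub_block: int) -> int:
    """Round a file's size up to the next sub-block boundary."""
    n_blocks = -(-file_size // sub_block)  # ceiling division
    return n_blocks * sub_block

KB = 1024
# A 1KB file: one 128KB sub-block on the old storage vs 16KB on the new
print(on_disk_size(1 * KB, 128 * KB) // KB)    # -> 128
print(on_disk_size(1 * KB, 16 * KB) // KB)     # -> 16
# A 100KB file still needs a whole 128KB sub-block on the old storage,
# but only seven 16KB sub-blocks (112KB) on the new one
print(on_disk_size(100 * KB, 128 * KB) // KB)  # -> 128
print(on_disk_size(100 * KB, 16 * KB) // KB)   # -> 112
```

The saving per file is at most one sub-block's worth of slack, which is why the overall reduction for a typical data set is modest (on the order of 1-2%) rather than dramatic.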
The new storage is significantly faster than the current system: we have been really impressed with the benchmarking results, using the IO500 suite. It performs particularly well for serial writes and larger files. It will complement the scratch storage system, which has excellent performance under high load, particularly for random I/O and smaller files, and remains the best place to read and write data for cluster jobs.
Title image: imgix on Unsplash