Lewis Storage Performance Update

Great news: we have ordered a new storage system, and parts are arriving as we speak! If all goes well, we will schedule an outage at the end of November to switch over to the new system. The move will be transparent: we will migrate everyone’s /home, /data, and /group files to the new system.

WARNING: /scratch will NOT be preserved. After the storage upgrade, all files in /scratch will be lost, so copy anything you need to keep into /home, /data, or /group before the outage.

As many of you have noticed, the current storage system is having trouble keeping up with all the new cores and demanding applications. The new storage system is a parallel filesystem (zLustre) built for HPC applications and will use the latest and fastest servers and networking (100Gbps). The new filesystem will provide 960TB of raw storage (about 600TB usable) and is very scalable. It has a theoretical maximum throughput of 40GB/s, compared to 5GB/s on the current system, and typical application throughput should rise from hundreds of MB/s to GB/s.

This is a reminder that all nodes have /local/scratch/$USER with a few hundred GB of storage (mostly SSD) for applications that only need storage on the node, not across the cluster. For many applications, this can mean a 10x speedup.
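The usual node-local scratch pattern is: stage input onto /local/scratch/$USER, compute there, then copy results back to shared storage. The sketch below uses temporary directories as stand-ins for /data and /local/scratch (the real paths, and the `tr` command as a placeholder workload, are assumptions):

```shell
DATA=$(mktemp -d)    # stands in for a directory under /data or /home
LOCAL=$(mktemp -d)   # stands in for /local/scratch/$USER
echo "input" > "$DATA/in.txt"
cp "$DATA/in.txt" "$LOCAL/"                        # 1. stage input onto node-local SSD
tr a-z A-Z < "$LOCAL/in.txt" > "$LOCAL/out.txt"    # 2. run the workload against local storage
cp "$LOCAL/out.txt" "$DATA/"                       # 3. copy results back to shared storage
rm -rf "$LOCAL"                                    # 4. clean up node-local space when done
cat "$DATA/out.txt"
```

The speedup comes from step 2: the heavy I/O hits the node's local SSD instead of the shared filesystem.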

We have also decommissioned the Bonus nodes (32 older nodes) ahead of schedule as there were some serious networking issues. This will also allow us to bring the new storage system online for testing sooner.

You can get the latest information by visiting our news website at https://doit.missouri.edu/research/research-computing-news/

Oct. 18, 2017