RCSS Spring Update 2017

Lewis Cluster

The Lewis upgrade is now complete. The new hardware was in place last fall, and by late January we had completed the configuration and software installs (including more than 80 new scientific packages). We are now focusing on users and how we can better serve their needs.

In the next week or two, Lewis will have a short outage so we can add five more investor nodes, patch the cluster, and make some optimizations to the SLURM resource scheduler. We will post an update to the Lewis announcement list when we have an exact date.
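For users exploring the new software, a first session might look like the sketch below. It assumes Lewis uses the standard environment-modules system for managing installed packages; the package name is only an illustration.

    # List the scientific software installed on the cluster
    module avail

    # Load a package into your shell environment (name is illustrative)
    module load samtools

    # Confirm which modules are currently loaded
    module list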

BioCompute Cluster

The BioCompute capacity of Lewis is now configured and online with a large number of bioinformatics software packages installed. Bi-weekly training sessions (in partnership with the IRCF) are now being held on Thursdays to help users get the most out of the cluster.
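As a sketch of what a BioCompute job submission could look like, the batch script below requests a few cores and loads one of the installed packages. The partition name, module name, and file paths are assumptions for illustration, not the cluster's actual configuration.

    #!/bin/bash
    #SBATCH --partition=BioCompute   # hypothetical partition name
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=02:00:00
    #SBATCH --job-name=align-sample

    # Load an installed bioinformatics package (name is illustrative)
    module load bwa

    # Align paired-end reads against an indexed reference;
    # all input and output paths here are placeholders
    bwa mem -t 4 ref.fa sample_R1.fastq sample_R2.fastq > sample.sam

A script like this would be submitted with sbatch and its progress checked with squeue -u $USER.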

Teaching/Learning Cluster

The teaching cluster (tc.rnet.missouri.edu) is now available to students on campus for learning about research computing. The cluster has 11 compute nodes with between 96GB and 190GB of RAM. Instructors and students are welcome to use it as part of a class, subject to a number of restrictions; please read the Teaching Cluster Storage policy.
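Getting started is a matter of logging in and running a first command. A minimal session, assuming standard SSH access with university credentials, might look like this:

    # Log in to the teaching cluster (replace 'username' with your own)
    ssh username@tc.rnet.missouri.edu

    # Show the partitions and nodes available to you
    sinfo

    # Run a trivial one-task job to confirm the scheduler is working
    srun --ntasks=1 hostname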

Secure4 Cluster

We have launched the Secure4 HPC environment. This cluster is for researchers who need to store and use Data Classification Level 3 and 4 data (including data covered by the Health Insurance Portability and Accountability Act, or HIPAA) in a High Performance Computing environment.

High Throughput Computing Storage

We have added more than 600TB of new storage to the cluster for High Throughput Computing. This new service was launched for researchers requiring long-term storage for processing large datasets where cost is a primary design criterion. The storage is only available inside the Lewis cluster or via rsync/ssh on the login nodes, and it is designed for large files/samples processed one at a time: for example, DNA sequences, instrument images, and video. Storage pricing for FY17 is $120/TB for 60 months, paid in advance in increments of 10TB; a 10TB allocation, for example, costs $1,200 up front, which works out to $2/TB/month. For more information, review the service description.
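To stage data onto this storage from an outside machine, a transfer through a login node might look like the command below. The hostname and paths are placeholders for illustration, not the actual login node or mount points.

    # Copy a directory of sequencing output to HTC storage via a login node
    # (the hostname and /htc/mylab path are illustrative, not real locations)
    rsync -av --progress ./run42_fastq/ username@lewis.rnet.missouri.edu:/htc/mylab/run42_fastq/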

NSF MRI Grant

The NSF MRI grant for HPC out of the College of Engineering is nearing completion. The stand-alone cluster it funded is being moved into production, and we have been adding those nodes to the Lewis cluster so they are accessible to all users via the General partition. This brings our total to more than 5,000 cores. We also migrated 10 GPU nodes (8 Nvidia K20m and 2 Nvidia K40 GPUs) into Lewis for general use.
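A job can request one of these GPUs through SLURM's generic-resource (gres) mechanism; a minimal sketch follows, in which the partition name is an assumption for illustration.

    #!/bin/bash
    #SBATCH --partition=Gpu    # hypothetical partition name
    #SBATCH --gres=gpu:1       # request one GPU on the assigned node
    #SBATCH --time=01:00:00

    # Print the GPU the scheduler assigned to this job
    nvidia-smi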

General Purpose Research Network (GPRN) /RNet 2.0

The Research Network upgrade (RNet 2.0), Phase I, is nearing implementation. This phase includes the creation of the GPRN zone for researchers who have laptops and printers that need access to the Research Network. All Research Network ports will be moved to campus networking ports (along with the associated port fee) over the next few months; the migration will be complete before FY18. Phase II will roll out over the summer and will bring enhanced performance and security as well as an update to our usage policies.

Internet2/Great Plains Network

The Research Network 100 Gbps Layer 2 connection has been moved to the GPN node in Kansas City for regional Layer 2 and Internet2 AL2S connectivity, leveraging our membership in the GPN to reduce operational costs. This connection will provide large dedicated bandwidth for connections to major data repositories such as iPlant/CyVerse. Learn more about the AL2S.

March 20, 2017