High-Performance Computing

Research Computing Support Services hosts a collection of system-wide services that provide the advanced computational, storage, and networking capabilities needed for innovative and collaborative research activities at the University of Missouri. This research computing environment includes a state-of-the-art shared-resource High-Performance Computing (HPC) cluster, a large-memory High-Throughput Computing (HTC) cluster, an experimental cluster, a teaching and learning cluster for students, and a number of grant-friendly investor services.

The Research Computing Support Services system is a heterogeneous computing environment hosting a number of general and specialized clusters. The environment consists of more than 5,200 cores and 52 TB of RAM, with access to 700 TB of dedicated research data storage (Isilon) and 1 PB of General Purpose Research Storage (Isilon), which is accessible on the cluster and across campus. The cluster is connected to the Internet2 AL2S SDN network at 100 Gigabits per second. See the units of measurement chart below.*

The system runs CentOS 7 and uses the Slurm job scheduler. The system supports more than 200 scientific software packages from a large number of disciplines. We will install most open-source software packages and libraries on an as-needed basis. We have licenses for SAS, MATLAB, Gaussian, and the Intel compilers and Parallel Studio, among others. Training sessions are provided weekly for new and experienced users.
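
Jobs are submitted through Slurm rather than run directly on the login nodes. As a minimal sketch, a batch script might look like the following; the resource values and the module name are illustrative assumptions, not Lewis-specific settings:

    #!/bin/bash
    #SBATCH --job-name=example      # name shown in the queue
    #SBATCH --ntasks=1              # one task (process)
    #SBATCH --cpus-per-task=4       # four CPU cores for that task
    #SBATCH --mem=8G                # memory for the job
    #SBATCH --time=01:00:00         # one-hour wall-clock limit

    module load matlab              # hypothetical module name; the site list may differ
    srun hostname                   # replace with your own program

The script is submitted with "sbatch job.sh" and can be monitored with "squeue -u $USER".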

A subset of the cluster capacity consists of HPC and BioCompute Investor machines. Investors purchase the node hardware; the space, power, and cooling, along with management of the hardware, operating system, security, and scientific applications, are provided by Research Computing Support Services at no cost for five years. Researchers get prioritized access to their capacity, and unused cycles are shared with the community.

HPC Cluster (Lewis):

The Lewis HPC is a shared cluster accessible to researchers and their collaborators at the University of Missouri and across the University of Missouri System. It currently consists of 55 Haswell nodes with 24 cores and 128 GB or 256 GB of RAM, 81 Broadwell nodes with 28 cores and 256 GB of RAM, and 37 Broadwell nodes with 512 GB of RAM, for a total of 4,942 cores. It is connected with 10-Gigabit Ethernet and 40-Gigabit QDR InfiniBand. There are currently 14 GPU nodes with 8 NVIDIA K20m GPUs, 2 NVIDIA K40 GPUs, 1 NVIDIA P100, and 4 GTX 1080 Ti GPUs. The Lewis cluster also has 500 older cores for interactive and long jobs.
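
GPU jobs are typically directed to these nodes through Slurm's generic resource (GRES) requests. A minimal sketch, assuming a GPU-enabled partition exists (the partition name here is hypothetical; the real partition and GPU type names are site-specific):

    #!/bin/bash
    #SBATCH --job-name=gpu-test
    #SBATCH --partition=Gpu         # hypothetical partition name; check the site docs
    #SBATCH --gres=gpu:1            # request one GPU of any available type
    #SBATCH --time=00:30:00

    nvidia-smi                      # list the GPU(s) allocated to this job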

Experimental Cluster:

The University of Missouri hosts an experimental cluster (NSF MRI Award #1429294) used to provide unique capabilities to researchers working on and with high-performance research computing. This project is a partnership between the Division of IT Research Computing Support Services, the faculty from the College of Engineering and the MU Informatics Institute who authored the grant, and the other research projects across campus that the grant supports.

This cluster is designed and configured to be highly flexible, with a governance model that allows it to be reconfigured into specialized and dedicated environments for the participating researchers. This lets researchers conduct research at scales not available in lab-scale systems, or in specialized configurations coordinated by an allocation board of faculty, staff, and students. Configurations can test scalability and deploy large specialized environments such as OpenStack and Hadoop alongside a traditional HPC configuration.

The MRI Research Cluster contains a total of 1,680 cores and 14 TB of RAM, consisting of 16 Haswell nodes with 24 cores and 128 GB or 256 GB of RAM; 26 Broadwell nodes with 28 cores, 256 GB of RAM, and 3 TB of local SSD storage; 14 accelerator nodes with 10 NVIDIA K20m/K40 GPUs and 16 Intel Phi accelerators; and a large-memory machine. The cluster supports multiple networking fabrics to facilitate experimentation (multiple 1-Gigabit management networks, 10-Gigabit Ethernet with SDN capability, and 40-Gigabit QDR InfiniBand fabrics) and is connected to the campus 100-Gigabit Internet2 AL2S connection.

Teaching and Learning Cluster:

The teaching cluster is meant as a resource for students and instructors for computational work. The teaching cluster is a full HPC cluster, and UM students with a university ID are allowed to run jobs on the head node. For detailed instructions and more information, please read the Teaching Cluster Policy: http://docs.rnet.missouri.edu/Policy/tc
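
As a sketch of typical Slurm usage (the exact resource limits, and whether work runs on the head node or compute nodes, are governed by the policy linked above), an interactive session can be requested like this; the values shown are arbitrary examples:

    # request an interactive shell: 1 core, 2 GB of RAM, 1 hour
    srun --ntasks=1 --mem=2G --time=01:00:00 --pty /bin/bash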

BioCompute Cluster:

This cluster was developed as part of the campus 2016 Cyberinfrastructure Plan Update to support bioinformatic and genomic analysis. It was built with support from the Office of Research, Graduate Studies and Economic Development; Mizzou Advantage; and a large number of BioCompute Investors, in partnership with Research Computing Support Services, the Informatics Research Core Facility, and Dell.

The cluster consists of 37 Dell R630 Broadwell nodes, each with 28 cores, 512 GB of RAM, and 3 TB of local SSD storage, for a total of 1,036 cores, 19 TB of RAM, and 104 TB of local SSD storage.

Support

We can provide grant-writing support, including equipment and service recommendations, descriptions of facilities, equipment quotes, and letters of commitment for research computing services.

Additionally, individual one-on-one consulting is available to support all phases of a problem's evolution, from inception to final results. This can include software recommendations, formulation of the problem, and optimization of single-processor performance, as well as communications support.

There is no student/end-user support for the teaching and learning cluster. All support requests should come from the instructor or TA to rcss-support@missouri.edu. Support is best effort and provided during regular business hours.

*Decimal vs. binary units of measurement:

Unit       Decimal        Decimal value              Binary         Binary value
Kilobyte   KB = 10^3      1,000                      KiB = 2^10     1,024
Megabyte   MB = 10^6      1,000,000                  MiB = 2^20     1,048,576
Gigabyte   GB = 10^9      1,000,000,000              GiB = 2^30     1,073,741,824
Terabyte   TB = 10^12     1,000,000,000,000          TiB = 2^40     1,099,511,627,776
Petabyte   PB = 10^15     1,000,000,000,000,000      PiB = 2^50     1,125,899,906,842,624

There is no charge associated with this service.

This service is available for individuals, sponsored and non-sponsored researchers, and research groups on all campuses.

The teaching cluster is provided to all UM students with official university credentials (pawprint). Students must read the Teaching Cluster Policy (http://docs.rnet.missouri.edu/Policy/tc) and be aware of the limitations of the environment. As noted above, there is no student/end-user support; all support requests should come from the instructor or TA.

Faculty and researchers can apply for accounts for themselves and their students by completing and submitting the Computing Resources Account Request Form. You will need to provide a public Secure Shell (SSH) key that is protected with a strong passphrase (see http://docs.rnet.missouri.edu/HowTo/ssh).
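
For reference, a suitable key pair can be generated with standard OpenSSH tools as sketched below; Ed25519 is one commonly accepted key type, and the comment string is just an illustrative label. Choose a strong passphrase when prompted:

    # generate an Ed25519 key pair protected by a passphrase
    ssh-keygen -t ed25519 -C "your-pawprint"

    # the contents of the .pub file are what you submit with the request form
    cat ~/.ssh/id_ed25519.pub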

The teaching cluster does not require an account; as described above, UM students with a university ID are allowed to run jobs on the head node.

For additional assistance, contact Tech Support at 573.882.5000 or rcss-support@missouri.edu.