HPC Compute

Description

The Lewis cluster is a High Performance Computing (HPC) cluster that currently consists of 232 compute nodes and 6,200 compute cores with around 1.5 PB of storage. The cluster serves on average 165 active users per month with about 3.3 million core hours of compute. Interested parties may invest in Lewis in increments of 10 cores. The investor purchases their slice of Lewis cores up front, and RCSS provides maintenance of the hardware, software, and other infrastructure for five years. Each investment also includes 3 TB of group storage. Larger-scale investments (such as at the rack level) are possible; interested parties should contact the RCSS team to discuss their specific requirements.

Example Use Cases

  1. I need to run a computational fluid dynamics simulation that requires very fast communication across different logical units.
  2. I need to analyze a large pool of gene expression data that far exceeds the processing capacity of my lab’s PCs.
  3. I want to run a simulation on drug interaction and toxicity without involving live subjects.

Policies

General Use

  • Use of this system is governed by the rules and regulations of the University of Missouri and the University of Missouri System and by the requirements of any applicable granting agencies. Those found in violation of policy are subject to account termination.
  • Users must be familiar with and abide by the UM System acceptable use policy (CRR 110.005)[1] and the UM System Data Classification System (DCL)[2]. Data on the cluster is restricted to DCL1 or DCL2.

Accounts

  • Faculty of the University of Missouri – Columbia, Kansas City, St. Louis, and S&T may request user accounts for themselves, current students, and current collaborators. Account requests require the use of a UM System email address. The exception is for researchers of the ShowMeCI consortium who are not part of the University of Missouri System; they may apply for accounts for themselves and their students using their organization email address.

Collaborator Accounts

  • Faculty requesting accounts for collaborators must first apply for their collaborator to receive a Courtesy Appointment through the faculty member’s department using the Personnel Action Form[3]. After the Courtesy Appointment is approved, faculty can submit an Account Request for their collaborator. Collaborators must submit account requests for their students using the student’s university email address. Collaborators agree to abide by the External Collaborator Policy.

External Collaborator Policy

  • Follow the University of Missouri’s rules and policies [4] as well as the policies and rules of your home institution.
  • Data on the cluster is restricted to DCL1 or DCL2 [5].
  • Data storage and computation are for academic research purposes only; no personal, commercial, or administrative use is permitted.
  • Follow the Research Computing cluster policy.
  • Under no circumstances may access to the user account be shared or granted to third parties.
  • As a collaborator, you will be assigned different priorities and limits than other users of the cluster.
  • Data is not backed up on the cluster, and Research Computing is not responsible for data integrity, data loss, or the accuracy of the calculations performed.
  • We ask that you acknowledge the University of Missouri, Division of IT, Research Computing Support Services for the use of the computing resources.
  • We ask that you provide us with citations of any publications and/or products that utilized the computing resources.

Account Sharing

  • Direct sharing of account data on the cluster should only be done via a shared group folder. A shared group folder is set up by the faculty adviser or PI. This person is the group owner and can appoint other faculty as co-owners. The owners and co-owners approve the members of the group and are responsible for all user additions and removals. The use of collaboration tools, such as Git, is encouraged for (indirect) sharing and backup of source data; a GitLab service is available for this purpose. A minimal permissions sketch follows this list.
  • Sharing of accounts and SSH keys is strictly prohibited. Sharing of SSH keys will immediately result in account suspension.
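
Purely as an illustrative sketch of how group-folder sharing typically works at the filesystem level (the path /group/mylab and the group name mylab_grp below are hypothetical; actual group folders and groups on Lewis are created by RCSS), standard Unix group permissions with the setgid bit keep new files owned by the group:

    # Hypothetical names: /group/mylab and mylab_grp are placeholders.
    chgrp -R mylab_grp /group/mylab   # assign the shared folder to the group
    chmod 2770 /group/mylab           # rwx for owner and group; setgid bit (2)
                                      # makes new files inherit the group
    ls -ld /group/mylab               # verify: drwxrws--- ... mylab_grp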

Running Jobs

  • All jobs must be run using the SLURM job scheduler. Long-running or resource-heavy processes on the login node are subject to immediate termination.
  • Normal jobs running on the cluster are limited to a run time of two days. Jobs of up to seven days may be run after consultation with the RCSS team, and long jobs may occasionally be extended upon request. The number of long jobs running on the cluster is limited to ensure that users can run jobs in a timely manner. All jobs are subject to termination for urgent security or maintenance work or for the stability of the cluster. A minimal batch script sketch follows this list.
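
As a minimal sketch of submitting work through SLURM rather than running it on the login node (the module and program names below are hypothetical; partitions, accounts, and available modules on Lewis are site-specific), a batch script that respects the two-day limit might look like:

    #!/bin/bash
    #SBATCH --job-name=example        # name shown in squeue
    #SBATCH --time=2-00:00:00         # two days, the normal job limit
    #SBATCH --ntasks=1                # one task
    #SBATCH --cpus-per-task=4         # four cores for that task
    #SBATCH --mem=8G                  # memory for the job

    module load example_app           # hypothetical module name
    srun example_app input.dat        # compute runs on a compute node

Submit the script with sbatch job.sh and monitor it with squeue -u $USER.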

Investor Policy

  • Investors purchase nodes; in exchange for the idle cycles, Research Computing Support Services provides the space, power, and cooling, as well as management of the hardware, operating system, security, and scientific applications, at no cost for five years. After five years the nodes are placed in the Bonus pool for extended life and are removed at the discretion of RCSS based on operating conditions. Investors get prioritized access to their capacity via the SLURM FairShare scheduling policy, and unused cycles are shared with the community; a brief sketch of checking FairShare standing follows. Investors also receive 3 TB of group storage and help migrating their computational research to the cluster. For large investments (rack scale) we will work with researchers and vendors to test and optimize configurations to maximize performance and value. Information on becoming an investor can be requested via rcss-support@missouri.edu.
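
As a hedged illustration of how FairShare priority can be inspected (these are standard SLURM client commands; the exact account hierarchy on Lewis is configured by RCSS), an investor can check their standing and pending jobs as follows:

    sshare -u $USER             # FairShare factors for your associations
    squeue -u $USER --start     # estimated start times for pending jobs,
                                # which reflect the computed priority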

Acknowledgements

  • We ask that, when you cite any of the RCSS clusters in a publication, you send an email to rcss-support@missouri.edu and share a copy of the publication with us. To cite the use of any of the RCSS clusters in a publication, please use:
    • The computation for this work was performed on the high performance computing infrastructure provided by Research Computing Support Services and in part by the National Science Foundation under grant number CNS-1429294 at the University of Missouri, Columbia MO.

Pricing

Service        FY2019 Rate   FY2020 Rate   Unit           Group Storage   Support
HPC Compute    $2,600        $3,500        Per 10 Cores   3 TB            5 Years


Online Order Form