The Lewis cluster is a High Performance Computing (HPC) cluster that currently consists of around 225 compute nodes and 8,544 compute cores with around 3 PB of storage. The cluster serves on average 178 active users per month, who consume about 3.5 million core hours of compute. Interested parties may invest in Lewis in increments of 12 cores. The investor purchases the slice of Lewis cores up front, and RCSS provides maintenance of the hardware, software, and other infrastructure for five years. Each investment also includes 3 TB of group storage. Larger-scale investments (such as at the rack level) are possible; interested parties should contact the RCSS team to discuss their specific requirements.
Example Use Cases
I need to run a computational fluid dynamics simulation that requires very fast communication across different logical units.
I need to analyze a large pool of gene expression data that far exceeds the processing capacity of my lab's PCs.
I want to run a simulation on drug interaction and toxicity without involving live subjects.
Use of this system is governed by the rules and regulations of the University of Missouri and the University of Missouri System and by the requirements of any applicable granting agencies. Those found in violation of policy are subject to account termination.
Users must be familiar with and abide by the UM System acceptable use policy (CRR 110.005) and the UM System Data Classification System (DCL). Data on the cluster is restricted to DCL1 or DCL2.
Faculty of the University of Missouri – Columbia, Kansas City, St. Louis, and S&T may request user accounts for themselves, current students, and current collaborators. Account requests require the use of a UM System email address. The exception is for researchers of the ShowMeCI consortium who are not part of the University of Missouri System; they may apply for accounts for themselves and their students using their organization email address.
Faculty requesting accounts for collaborators must first apply for their collaborator to have a Courtesy Appointment through the faculty member's department using the Personnel Action Form. After the Courtesy Appointment is approved, faculty can submit an Account Request for their collaborator. Collaborators must submit account requests for their students using the student's university email address. Collaborators agree to abide by the External Collaborator Policy.
External Collaborator Policy
Follow the University of Missouri's rules and policies as well as your own university's policies and rules.
Data on the cluster is restricted to DCL1 or DCL2.
Data storage and computation are for academic research purposes only; no personal, commercial, or administrative use is permitted.
Follow the Research Computing cluster policy.
Under no circumstances may access to the user account be shared or granted to third parties.
As a collaborator, you will be assigned priorities and limits that differ from those of other users on the cluster.
Data on the cluster is not backed up, and Research Computing is not responsible for data integrity, data loss, or the accuracy of the calculations performed.
We ask that you acknowledge the University of Missouri, Division of IT, Research Computing Support Services for the use of the computing resources.
We ask that you provide us with citations of any publications and/or products that utilized the computing resources.
Direct sharing of account data on the cluster should only be done via a shared group folder. A shared group folder is set up by the faculty adviser or PI. This person is the group owner and can appoint other faculty to be co-owners. The owners and co-owners approve the members of the group and are responsible for all user additions and removals. The use of collaboration tools, such as Git, is encouraged for (indirect) sharing and backup of source data. A GitLab service is available for this purpose.
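As an illustration of keeping shared data readable by your group but closed to others, the sketch below makes a result file group-readable. A temporary directory stands in for the real group folder path, which RCSS provides when the shared folder is created:

```shell
# Sketch only: a temp directory stands in for the actual shared group folder.
share_dir=$(mktemp -d)

# Place a result file in the shared folder.
echo "expression matrix" > "$share_dir/results.csv"

# Owner read/write, group read-only, no access for others.
chmod 640 "$share_dir/results.csv"

# Verify the permission bits (GNU stat, with a BSD fallback).
perms=$(stat -c '%a' "$share_dir/results.csv" 2>/dev/null || stat -f '%Lp' "$share_dir/results.csv")
echo "$perms"   # → 640
```

Group members can then read the file because the folder's group ownership (managed by the group owner and RCSS) grants them access, while other users cannot.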
Sharing of accounts and ssh-keys is strictly prohibited. Sharing of ssh-keys will immediately result in account suspension.
All jobs must be run using the SLURM job scheduler. Long-running or resource-intensive processes running on the login node are subject to immediate termination.
Normal jobs running on the cluster are limited to two days of run time. Jobs of up to seven days may be run after consultation with the RCSS team. Long jobs may occasionally be extended upon request. The number of long jobs running on the cluster is limited to ensure that users can run jobs in a timely manner. All jobs are subject to termination for urgent security or maintenance work or to preserve the stability of the cluster.
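A minimal SLURM batch script that stays within the two-day normal-job limit might look like the following. The partition name and resource requests here are assumptions for illustration, not the cluster's actual defaults; run `sinfo` to see the partitions available to your account.

```shell
#!/bin/bash
#SBATCH --job-name=example      # name shown in squeue output
#SBATCH --partition=General     # assumed partition name; check `sinfo` for real ones
#SBATCH --nodes=1               # run on a single node
#SBATCH --ntasks=1              # one task
#SBATCH --cpus-per-task=4       # four cores for that task
#SBATCH --mem=8G                # memory for the whole job
#SBATCH --time=2-00:00:00       # two days, the normal-job limit

# module load <application>    # load software through the module system
srun hostname                   # replace with your actual command
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u $USER`. Requesting `--time` beyond two days requires prior consultation with the RCSS team, as noted above.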
Investors purchase nodes and, in exchange for idle cycles, Research Computing Support Services provides the space, power, and cooling, as well as management of the hardware, operating system, security, and scientific applications, at no cost for five years. After five years, the nodes are placed in the Bonus pool for extended life and are removed at the discretion of RCSS based on operating conditions. Investors get prioritized access to their capacity via the SLURM FairShare scheduling policy, and unused cycles are shared with the community. Investors also receive 3 TB of group storage and help migrating their computational research to the cluster. For large investments (rack scale), we will work with researchers and vendors to test and optimize configurations to maximize performance and value. Information on becoming an investor can be requested via email@example.com.
||Per 12 Cores