Research Computing News, Page 7

MobaXterm Academic Site License

Jan. 22, 2016 — Research Computing Support Services is pleased to announce that we now have an academic site license for MobaXterm. There is no charge for researchers and students on the University of Missouri Columbia campus for research and teaching purposes. MobaXterm is the preferred way to access the Lewis3 cluster from a Windows desktop. You can install the software by downloading the installer from

Lewis3 Training Cancelled Today

Jan. 20, 2016 — Lewis3 Training will be cancelled today.

Cluster Maintenance

Oct. 7, 2015 — The Lewis3 cluster will have a scheduled maintenance window on October 15, to upgrade our Isilon storage system as well as to update and reboot individual nodes. The maintenance window should only last a few hours. The scheduler will make sure that all jobs complete before this time. As the window approaches, jobs may be denied or stuck in the queue if they will not complete in time.

Lewis3 Training Update

Sept. 1, 2015 — For the remainder of the semester, our ongoing training series for Lewis3 will be held on Wednesdays in Stanley 147, from 11:00 a.m. to 12:00 p.m. This series covers the basics of the new hardware, how to use the new scheduler, and how to use secure shell key-based authentication, and leaves time for any other questions you may have. Please bring your own laptop.

Intel’s Parallel Programming Training

Aug. 31, 2015 — Please join us for this all-day Parallel Programming and Optimization with Intel® Xeon Phi™ Coprocessors developer training event.

Before and After – Lewis3

Aug. 28, 2015 — Here are some before and after photos of Lewis3.

Lewis3 Scheduler Policy Update

Aug. 18, 2015 — The scheduler policy has been updated to make it more flexible for users' needs and to allow longer jobs to be run. The 48-hour time limit is now a soft limit, and a long queue has been added for users who need more time.

Lewis3 Account Migration

Aug. 4, 2015 — The Lewis3 upgrade went into production on July 1, 2015, and the first phase of the old hardware that was no longer viable has been decommissioned and removed from the legacy cluster. The remaining nodes will continue to be retired or migrated to the Lewis3 cluster, starting with the nodes that replaced the Clark cluster. The entire legacy cluster will be decommissioned sometime after July 1, 2016, and we will now provide only limited support for this environment to allow us to focus on growing the Lewis3 system. On July 31, 2016, any account that has not been migrated to Lewis3 will be closed and all data permanently removed from the cluster.

GPRS Storage Now Available

July 22, 2015 — The General Purpose Research Storage (GPRS) system is now available for use, with more than 1,000 tebibytes (TiB) of storage available for research projects. This storage is available for $10/TiB/month and is accessible from Linux (via NFS), from Windows (via SMB), or directly from the Lewis3 cluster (HPC).
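As a rough sketch, mounting GPRS on a Linux workstation might use /etc/fstab entries like those below; the server name, share paths, mount points, and credentials file are placeholders for illustration, not the actual GPRS endpoints:

```
# Hypothetical /etc/fstab entries -- server name and paths are placeholders.
# NFS mount (the Linux access method):
gprs.example.edu:/export/mylab   /mnt/gprs-nfs  nfs   defaults,_netdev                      0 0
# SMB share mounted via CIFS (the Windows-style access method):
//gprs.example.edu/mylab         /mnt/gprs-smb  cifs  credentials=/etc/gprs.cred,_netdev    0 0
```

The `_netdev` option tells the system to wait for the network before mounting; consult Research Computing for the actual server names and options for your project.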

Lewis3 Scheduler Policy

June 18, 2015 — The new Lewis3 cluster is nearing production, and the final step is to implement a scheduler policy (the scheduler is currently running without one). To provide a fair and flexible environment and to ensure that resources are used efficiently, we are implementing the Fair Share allocation policy on the cluster scheduler (SLURM). This policy encourages users to specify their resource requirements (cores, RAM, and compute time) so that each job's requirements can be met and jobs can be scheduled efficiently.

The default allocation will be 1 core, 1 GB of RAM, and 2 hours; jobs of this type have historically accounted for between 50% and 80% of the jobs run on the cluster. The maximum job length will be limited to 48 hours for non-investors (historically, 99% of jobs run in less than 50 hours) and at least 168 hours for cluster investors. Fair-share usage will decay with a half-life of 28 days. Jobs that exceed their requested resource allocation will be terminated to protect the system and the scheduler (from memory exhaustion, for example).

Investors in the cluster will receive allocation shares based on their investment, and the community resources provided by Research Computing will be divided evenly among all users, with a portion allocated for external collaboration. The policy will be revised periodically based on cluster usage patterns and user feedback.
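As an illustration, a SLURM job script that requests the default allocation explicitly might look like the sketch below; the job name and the final command are placeholders, and no actual Lewis3 partition names are assumed:

```shell
#!/bin/bash
# Sketch of a SLURM job script requesting the default allocation explicitly.
# The job name and the command at the end are placeholders.
#SBATCH --job-name=example_job
#SBATCH --ntasks=1        # 1 core (the default allocation)
#SBATCH --mem=1G          # 1 GB of RAM (the default)
#SBATCH --time=02:00:00   # 2 hours (the default); non-investor max is 48:00:00

# The #SBATCH lines above are comments to bash, so they are read only by sbatch.
echo "Job running on $(hostname)"
```

A script like this would be submitted with `sbatch script.sh`; requesting only what a job actually needs helps the fair-share scheduler place it sooner and pack the cluster efficiently.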