
Recent and Upcoming Changes to CHPC Resources

Posted: February 14th, 2020

Frisco nodes:

This week, three of the older frisco nodes (frisco3-5) were replaced with nodes having more cores and more memory; frisco2 was replaced a while ago. The specifications of the current set of frisco nodes are found at https://www.chpc.utah.edu/documentation/guides/frisco-nodes.php. Note that the new frisco3 does not have a graphics card at this time, so it does not support usage requiring VirtualGL. We are also working to identify a replacement for frisco6.

In conjunction with this, we have also increased the arbiter threshold limits for work on the frisco nodes (see https://www.chpc.utah.edu/documentation/policies/2.1GeneralHPCClusterPolicies.php for details) from just below 2 cores and 8 GB of memory to just below 4 cores and 16 GB of memory. Usage that stays above these limits for a sufficient length of time will be throttled.
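For reference, one way to see how your current usage on a frisco node compares to these limits is with standard Linux tools. The one-liner below is only an illustrative sketch; arbiter itself tracks usage via cgroups over time rather than a point-in-time ps snapshot:

    # Sum the CPU and memory use of all of your processes on this frisco node.
    ps -u $USER -o %cpu=,rss= | awk '{cpu += $1; mem += $2} END {printf "CPU: %.1f cores, memory: %.1f GB\n", cpu/100, mem/1024/1024}'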

Lonepeak general node pool expansion and node replacement:

As mentioned in the Fall 2019 CHPC newsletter, the first two phases of the lonepeak changes were completed during the fall, resulting in the addition of 96 nodes, lp[133-228], each having 12 cores and 96 GB of memory. Last week the replacement of the first set of existing nodes was completed, with lp[065-096] being replaced with 12-core, 96 GB nodes. The final 64 nodes, lp[017-064, 097-112], have a reservation in place in preparation for their replacement.

In addition, we have added 4 nodes, lp[229-232], each with 32 cores and 1 TB of memory. Again, these are not new nodes, but nodes that we are repurposing as lonepeak compute nodes.
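If you would like to verify the core counts and memory of the lonepeak nodes yourself, standard Slurm query commands will show this. A minimal sketch, assuming the partition is named lonepeak:

    # List node name, core count, and memory (MB) for each node in the lonepeak partition.
    sinfo -p lonepeak -N -o "%N %c %m"
    # Show the full configuration of one of the repurposed large-memory nodes.
    scontrol show node lp229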

Ember Retirement:

March 1 has been set as the date on which we will end service on the ember cluster. A reservation is in place to clear the nodes of running jobs by 11:00 pm on February 29. Any jobs that remain in the queue will be resubmitted on one of the other clusters.
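If you still have work on ember, standard Slurm commands can show what will be affected before the shutdown; for example, while logged into ember, or from another cluster using Slurm's multi-cluster -M flag where that is configured:

    # List your own running and pending jobs on the ember cluster.
    squeue -M ember -u $USER
    # Show the reservation that keeps jobs from running past the retirement date.
    scontrol -M ember show reservation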

Effective March 1, the kingspeak general nodes will no longer require an allocation and will run in an unallocated manner; only usage on the general nodes of the notchpeak cluster will count against general allocations.
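If you are unsure which accounts you can submit under on kingspeak or notchpeak after this change, Slurm's accounting database can list your associations. This is a generic Slurm sketch, not a CHPC-specific command:

    # List the cluster/account/partition combinations your user can submit to.
    sacctmgr show associations user=$USER format=Cluster,Account,Partition -P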

As always, please let us know of any questions or concerns by sending email to helpdesk@chpc.utah.edu.
