2.3 Adding Nodes to CHPC Clusters

Researchers with sponsored research projects may partner with CHPC to purchase nodes (compute or interactive) at CHPC-negotiated rates to add to CHPC clusters, where they may have exclusive or priority access. Although CHPC will normally provide the environmental infrastructure (racks, power, and cooling) and system administration, some costs beyond the node hardware may be bundled into the node cost.

Owners will need to sign and return a Faculty Resource User Agreement when purchasing owner nodes. This agreement outlines the terms of support. If there are any questions or concerns about any of the terms of this agreement, please contact CHPC to discuss.

Below are the policies related to participating in this program:

  1. Please work closely with CHPC to determine specifications and obtain quotes. We can often bundle together purchases from multiple groups to obtain better pricing, so please contact us with your needs. In general you may select the speed and memory for these purchases, but there may be minimum requirements on memory and disk space, as negotiated with CHPC. In addition, specialized networking cards, switch ports, and node/rack health monitoring hardware are required. Please contact CHPC to obtain the latest quotes for the configuration you are interested in.
  2. Once you have decided on a configuration, make an official request that CHPC purchase the decided-upon nodes. This request should include the quote and information about the account(s) to be charged for the cost of the nodes. Send this request by email to helpdesk@chpc.utah.edu.
  3. CHPC will handle all the paperwork so we can keep track of delivery and inventory.
  4. Once the equipment is received by CHPC, please allow up to two weeks for configuration and burn-in before you have general access to your nodes. During this burn-in period, if appropriate, members of your group can participate by running test jobs and providing feedback.
  5. Priority Policies: Priority and policies will be determined on a case-by-case basis, but usually read as follows:
    1. Users in your group will be given the highest priority for running on your nodes, along with preemptor status.
    2. Users in your group will compete against each other for priority, and CHPC will use fairshare and other techniques to determine scheduling. Parallelism will be rewarded.
    3. By default, general users can access owner compute nodes in a preemptable fashion. These jobs will be scheduled and tracked under a special guest account (see the sketch after this list).
  6. Nodes will be added to the CHPC production cluster that is active at the time of purchase, unless a newer production cluster is anticipated to enter production within a reasonable time frame, as determined by CHPC.
  7. When you contribute nodes to a CHPC cluster, CHPC is expected to maintain and house those nodes in the cluster for as long as they are under warranty.
  8. After the warranty period on your nodes expires (currently five to seven years), CHPC is not obligated to maintain that hardware but will continue to perform basic maintenance. If a hardware failure renders a node offline, CHPC will obtain a quote for the needed parts and, working with you, determine whether making the repair is the best option. Minor hardware repairs will be performed by CHPC on a best-effort basis.
  9. CHPC reserves the right to retire out-of-warranty nodes from the clusters in the event that maintaining them exceeds available resources (staff time, power, cooling, and machine room space).
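
As a rough illustration of item 5.3 above, below is a minimal sketch of how a general (guest) user might submit a preemptable job to owner nodes on a SLURM-scheduled cluster. The account and partition names shown are hypothetical placeholders, not the actual names, which CHPC assigns per cluster; this is a sketch under those assumptions, not CHPC's documented procedure.

    # Minimal sketch (Python): submit a batch script as a preemptable guest job.
    # The "owner-guest" account and "cluster-guest" partition names below are
    # placeholders; use the guest account and partition CHPC assigns for your cluster.
    import subprocess

    def submit_guest_job(script_path: str) -> str:
        """Submit a script under a guest account/partition and return sbatch's output."""
        cmd = [
            "sbatch",
            "--account=owner-guest",      # hypothetical guest account name
            "--partition=cluster-guest",  # hypothetical guest partition name
            "--requeue",                  # allow SLURM to requeue the job if an owner job preempts it
            script_path,
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        print(submit_guest_job("my_job.slurm"))

Jobs submitted this way run at lower priority than owner-group jobs and may be preempted (and, with --requeue, restarted) when owners need their nodes.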

Please contact CHPC with any questions regarding these procedures.

Last Updated: 7/5/23