2.3 Protected Environment HPC Cluster Policies

Access to resources in the protected environment (PE) HPC cluster is restricted to projects that require the extra security layers this cluster provides. To use these resources, users must demonstrate their need for access to CHPC. All users with access must be provisioned with a HIPAA account in addition to their normal CHPC account. All access to this cluster, including the interactive nodes, must be via the CHPC HIPAA VPN pool or bouncer.chpc.utah.edu, which uses Duo two-factor authentication.
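For illustration, a login through the bouncer host might look like the sketch below (uNID is a placeholder for your CHPC username; Duo will prompt for a second factor during login):

```shell
# Connect through the bouncer host to reach the protected environment.
# Duo two-factor authentication prompts during login.
# Replace uNID with your CHPC username.
ssh uNID@bouncer.chpc.utah.edu
```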

2.3.1 Redwood Job Scheduling Policy

Job Control
Jobs are controlled through the Slurm batch system.

      1. Node sharing: none.
      2. Allocations: no allocation controls.
      3. Best effort to allocate nodes of the same CPU speed.
      4. Maximum time limit for jobs is as outlined in the QOS definitions below.
      5. Scheduling is based on the current highest priority set for every job, plus backfill.
      6. Fairshare boost in priority at the user level: a minimal boost to help users who have not been running recently. The fairshare window is two weeks.
      7. Short jobs receive a priority boost based on their time in the queue relative to the wall time they have requested.
      8. Reward for parallelism, set at the global level.
      9. Special reservations: upon request.
        1. Partitions
          Partition Name      Access                 Accounts                  Node/core count                Memory          Features         Node Specification
          redwood             all with PE accounts   <pi>                      4/128; 11/308                  190GB; 128GB    chpc, c32, c28   rw[029-032,033-043]
          redwood-gpu         by request             redwood-gpu               2/64                           190GB           geforce          rw[085-086]
          redwood-freecycle   all with PE accounts   <pi>                      4/128; 11/308                  190GB; 128GB    chpc, c32, c28   rw[029-032,033-043]
          redwood-guest       all with PE accounts   owner-guest               sum of owner nodes; see <owner> core counts with sinfo          use sinfo to see details
          <owner>-rw          restricted             <owner>-rw, owner-guest   use sinfo to see details       use sinfo       use sinfo        use sinfo to see details
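As the table notes, sinfo reports live partition details. A sketch of typical queries (the flags are standard Slurm options; the partition name comes from the table above):

```shell
# List nodes, node count, CPUs, memory, features, and node names
# for the redwood partition.
sinfo -p redwood -o "%P %D %c %m %f %N"

# Summarize every partition visible to your account.
sinfo -s
```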
        2. Job priorities. The majority of a job's priority is set by a quality of service (QOS) definition. The following initial QOSs are defined:
           QOS                 Priority   Preempts            Preempt Mode   Flags       Maxwall
           redwood             100000     redwood-freecycle   cancel                     3-00:00:00
           redwood-freecycle   10000                          cancel         NoReserve   3-00:00:00
           redwood-guest       10000                          cancel         NoReserve   3-00:00:00
           <owner>-rw          100000     redwood-guest       cancel                     varies, set by owner (default 14-00:00:00)
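A minimal batch script illustrating these policies (the directives are standard Slurm options; my-pi-account is a placeholder for your PI's account name):

```shell
#!/bin/bash
#SBATCH --partition=redwood       # partition from the table above
#SBATCH --account=my-pi-account   # placeholder: your PI's account
#SBATCH --qos=redwood             # QOS governing priority and preemption
#SBATCH --nodes=1                 # no node sharing: whole nodes only
#SBATCH --time=3-00:00:00         # must not exceed the QOS Maxwall

srun hostname
```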
Last Updated: 2/13/19