
DOWNTIME: Select CHPC systems, Tuesday, March 22, 2022 starting at 7 am

Date Posted: March 8, 2022

CHPC will have a downtime on Tuesday, March 22, 2022, starting at 7 am.

This downtime will be used to:
  • update the OS on the Windows servers Beehive and Narwhal
  • update drive firmware on select storage trays
  • make a network switch change in the Science DMZ to prepare for the new home and scratch storage
  • start the process of the UPS update in the INSCC machine room
  • start the OS upgrade to Rocky Linux 8; during this downtime, the compute and interactive nodes of lonepeak and the frisco nodes will be updated
Impact:
  1. At about 7 am, for about 30 minutes, there will be no internet access (both wired and wireless) in the INSCC building.
  2. Starting at 8 am, Narwhal and Beehive will be down most of the day for OS updates.
  3. Starting at 8 am, the compute and interactive nodes of lonepeak and the frisco nodes will be moved to Rocky Linux 8. These resources will be unavailable for most of the day.
    1. A reservation is in place to drain lonepeak of running jobs before the start of the downtime.
    2. Pending jobs will remain in the queue and will run after the downtime is complete (see the sketch after this list for one way to check your queued jobs).
    3. Users should save any output and close all sessions, including ssh, srun, Open OnDemand, and FastX sessions, on these nodes before the start of the downtime.
  4. At about 9 am, the data transfer nodes (DTNs) on the Science DMZ (dtn{05-08}, sdss-dtn) will be unavailable for about 2 hours.
  5. /scratch/general/nfs1 in the general environment and /scratch/general/pe-nfs1 in the protected environment (PE) will be unavailable for a short window (expected to be less than 30 minutes). Jobs with I/O to these scratch file systems should be able to run through the outage.
  6. Group spaces on the cottonwood-vg9 through vg14 file systems will be unavailable for a short window. Again, the outage will be short, and jobs with I/O to these spaces should be able to run through it.
    1. You can determine which file system your group space(s) are on by logging in to a cluster interactive node, changing to the group space in question, and running "df -h | grep groupspacename". The output will show the file system, such as cottonwood-vg9-1-lv1; anything with vg9, vg10, vg11, vg12, vg13, or vg14 will be impacted. A sketch of this check appears below the list.
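For item 3 above, one way to see which of your lonepeak jobs are running or pending before the downtime; this is a minimal sketch assuming CHPC's standard Slurm multi-cluster setup:

    # Sketch: list your running and pending jobs on lonepeak.
    # Assumes a Slurm multi-cluster configuration; adjust the cluster name as needed.
    squeue --user=$USER --clusters=lonepeak

Pending jobs shown here do not need to be resubmitted; they will start once the downtime is complete.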
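For item 6 above, a short sketch of the file system check described in 6.1; the path and "mygroup" below are illustrative placeholders, not actual CHPC values:

    # Sketch: find which file system a group space lives on.
    # Log in to a cluster interactive node first, then:
    cd /path/to/your/group/space    # illustrative; substitute your group space path
    df -h | grep mygroup            # "mygroup" is a placeholder for the space's name
    # Output containing vg9 through vg14 (e.g., cottonwood-vg9-1-lv1) means the
    # space will be briefly unavailable during the downtime.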
Please let us know, via helpdesk@chpc.utah.edu, if you have any questions or concerns.