CHPC - Research Computing and Data Support for the University

In addition to deploying and operating high performance computational resources and providing advanced user support and training, CHPC serves as an expert team to broadly support the increasingly diverse research computing and data needs on campus. These needs include support for big data, big data movement, data analytics, security, virtual machines, Windows science application servers, protected environments for data mining and analysis of protected health information, and advanced networking.

If you are new to CHPC, the best place to start to get more information on CHPC resources and policies is our Getting Started page.

Upcoming Events:

CHPC Downtime: Tuesday March 5 starting at 7:30am

Posted February 8th, 2024


Two upcoming security related changes

Posted February 6th, 2024


Allocation Requests for Spring 2024 are Due March 1st, 2024

Posted February 1st, 2024


CHPC ANNOUNCEMENT: Change in top level home directory permission settings

Posted December 14th, 2023


CHPC Spring 2024 Presentation Schedule Now Available

CHPC PE DOWNTIME: Partial Protected Environment Downtime  -- Oct 24-25, 2023

Posted October 18th, 2023


CHPC INFORMATION: MATLAB and Ansys updates

Posted September 22, 2023


CHPC SECURITY REMINDER

Posted September 8th, 2023

CHPC is reaching out to remind our users of their responsibility to understand what the software they use is doing, especially software they download, install, or compile themselves. Read More...

News History...

How HPC Helped the Utah Symphony Keep Its Doors Open During the Pandemic

By Tony Saad and James C. Sutherland

Department of Chemical Engineering, University of Utah

On June 23, 2020, we received an email from Professor Dave Pershing, former president of the University of Utah, asking if we could help the Utah Symphony/Utah Opera (USUO) analyze the dispersion of airborne droplets emitted from wind instruments at Abravanel Hall (and later Capitol Theater). From an engineering perspective, and based on our knowledge of how viral transmission works, a virus attaches itself to a respiratory droplet that is subsequently exhaled into the air. Large droplets generally settle and lead to surface contamination, but small "aerosolized" droplets remain suspended in the air and move with it. This means that these aerosols can be modeled as a tracer in a fluid flow simulation. We were excited to help, as this aligns closely with our expertise in Computational Fluid Dynamics (CFD).
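To illustrate what "modeling aerosols as a tracer" means in practice, here is a minimal sketch of a passive scalar being advected and diffused by a prescribed airflow. This is not the Wasatch/Uintah code used in the study; the grid size, velocity, and diffusivity below are illustrative assumptions only.

import numpy as np

# Minimal sketch: a passive scalar c(x, y) standing in for aerosol
# concentration, carried by a prescribed velocity field and diffused.
nx, ny = 200, 100          # grid points (assumed)
dx = dy = 0.05             # grid spacing in meters (assumed)
u, v = 0.3, 0.0            # uniform airflow velocity in m/s (assumed)
D = 1e-4                   # tracer diffusivity in m^2/s (assumed)
dt = 0.01                  # time step in seconds, chosen for stability

c = np.zeros((ny, nx))
c[45:55, 10:20] = 1.0      # initial puff of tracer near an "instrument"

def step(c):
    # first-order upwind advection plus central-difference diffusion
    cn = c.copy()
    cn[1:-1, 1:-1] -= dt * (
        u * (c[1:-1, 1:-1] - c[1:-1, :-2]) / dx +
        v * (c[1:-1, 1:-1] - c[:-2, 1:-1]) / dy
    )
    cn[1:-1, 1:-1] += dt * D * (
        (c[1:-1, 2:] - 2 * c[1:-1, 1:-1] + c[1:-1, :-2]) / dx**2 +
        (c[2:, 1:-1] - 2 * c[1:-1, 1:-1] + c[:-2, 1:-1]) / dy**2
    )
    return cn

for _ in range(1000):
    c = step(c)

print("peak tracer concentration after 10 s:", c.max())

A production CFD code solves the full airflow (here replaced by a fixed velocity) and uses far finer grids, which is why simulations at the scale of Abravanel Hall require hundreds of thousands of CPU hours.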

Thanks to the generosity of CHPC in granting us over 600K CPU hours (~68 CPU years!), we were able to run over 25 simulations in total using our in-house code, Wasatch, a component of the Uintah Computational Framework. The first step in our analysis was to understand the baseline configuration of the airflow created by the HVAC system on Abravanel Hall's stage, along with a proposed seating arrangement for the orchestra. We found significant accumulation of respiratory droplets in the stage area, indicating an increased risk of infection. To mitigate the accumulation of droplets in the baseline configuration, our team considered two "low-cost" mitigation strategies: (1) increasing the volume of air leaving the hall, and (2) rearranging the instruments so that super-emitter and super-spreader instruments sit closer to return/exit vents. Combining these reduced particle concentrations by a factor of 100, a significant improvement over the baseline configuration.

For more information, see our Spring 2021 newsletter here.





System Status

General Environment

last update: 2024-04-18 19:13:03
General Nodes
system      cores (in use/total)   % utilization
kingspeak    943/972                97.02%
notchpeak   3124/3212               97.26%
lonepeak    3140/3140              100%

Owner/Restricted Nodes
system      cores (in use/total)   % utilization
ash         1120/1152               97.22%
notchpeak   8829/18328              48.17%
kingspeak   3284/5340               61.5%
lonepeak     100/416                24.04%

Protected Environment

last update: 2024-04-18 19:10:04
General Nodes
system      cores (in use/total)   % utilization
redwood      200/616                32.47%

Owner/Restricted Nodes
system      cores (in use/total)   % utilization
redwood      968/6200               15.61%


Cluster Utilization

Last Updated: 2/20/24