CHPC - Research Computing and Data Support for the University

In addition to deploying and operating high performance computational resources and providing advanced user support and training, CHPC serves as an expert team to broadly support the increasingly diverse research computing and data needs on campus. These needs include support for big data, big data movement, data analytics, security, virtual machines, Windows science application servers, protected environments for data mining and analysis of protected health information, and advanced networking.

If you are new to CHPC, the best place to start to get more information on CHPC resources and policies is our Getting Started page.

Upcoming Events:

Allocation Requests for Summer 2024 are Due June 1st, 2024

Posted May 1st, 2024


CHPC Downtime: Tuesday March 5 starting at 7:30am

Posted February 8th, 2024


Two upcoming security related changes

Posted February 6th, 2024


Allocation Requests for Spring 2024 are Due March 1st, 2024

Posted February 1st, 2024


CHPC ANNOUNCEMENT: Change in top level home directory permission settings

Posted December 14th, 2023


CHPC Spring 2024 Presentation Schedule Now Available

CHPC PE DOWNTIME: Partial Protected Environment Downtime, Oct 24-25, 2023

Posted October 18th, 2023


CHPC INFORMATION: MATLAB and Ansys updates

Posted September 22nd, 2023


CHPC SECURITY REMINDER

Posted September 8th, 2023

CHPC reminds users of their responsibility to understand what the software they run is doing, especially software they download, install, or compile themselves. Read More...

News History...

Identify Atypical Wind Events

Pando Object Storage Archive Supports Weather Research

By Brian K. Blaylock[1], John D. Horel[1,2], Chris Galli[1,2]

[1] Department of Atmospheric Sciences, University of Utah; [2] Synoptic Data, Salt Lake City, Utah

Terabytes of weather data are generated every day by gridded model simulations and by in situ and remotely sensed observations. With this accelerating accumulation of weather data, efficient computational solutions are needed to process, archive, and analyze these massive datasets. The Open Science Grid (OSG) is a consortium of computing resources around the United States that makes idle computer resources available to researchers in diverse scientific disciplines. The OSG is well suited to high-throughput computing, that is, many parallel computational tasks. This work demonstrates how the OSG has been used to compute a large set of empirical cumulative distributions from hourly gridded analyses of the High-Resolution Rapid Refresh (HRRR) model run operationally by the Environmental Modeling Center of the National Centers for Environmental Prediction. The data are archived in Pando, an object storage archive named after the vast stand of aspen trees in Utah. These cumulative distributions, derived from a 3-yr HRRR archive, are computed for seven variables, at over 1.9 million grid points, for each hour of the calendar year. The HRRR cumulative distributions are used to evaluate near-surface wind, temperature, and humidity conditions during two wildland fire episodes: the North Bay fires, a wildfire complex in Northern California during October 2017 that was the deadliest and costliest in California history, and the western Oklahoma wildfires during April 2018. The approach used here illustrates ways to discriminate between typical and atypical atmospheric conditions forecasted by the HRRR model. Such information may be useful for model developers and for operational forecasters assigned to provide weather support for fire management personnel.
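To make the idea concrete, the sketch below shows one way an empirical cumulative distribution could be used to flag an atypical forecast value, in the spirit of the study above. This is not the authors' code: the sample data, variable names, and the 99th-percentile threshold are illustrative assumptions only.

# Minimal sketch (assumed workflow, not the published implementation):
# given a multi-year sample of HRRR values for one grid point and one hour
# of the year, build an empirical cumulative distribution and report the
# percentile of a new forecast value.

import numpy as np


def empirical_percentile(sample, value):
    """Return the fraction of the historical sample less than or equal to value."""
    sample = np.sort(np.asarray(sample, dtype=float))
    # side="right" counts how many sample points are <= value
    rank = np.searchsorted(sample, value, side="right")
    return rank / sample.size


# Hypothetical 3-year sample of hourly 10-m wind speeds (m/s) at one grid point
rng = np.random.default_rng(0)
historical_winds = rng.gamma(shape=2.0, scale=3.0, size=3 * 365)

forecast_wind = 18.5  # m/s, a new forecast value at the same point and hour
p = empirical_percentile(historical_winds, forecast_wind)

# Flag the forecast as atypical if it falls in the upper tail of the sample
if p > 0.99:
    print(f"Atypical: forecast exceeds the {p:.1%} percentile of the 3-yr sample")
else:
    print(f"Typical: forecast at the {p:.1%} percentile")

In the study itself this comparison is done for seven variables at every HRRR grid point and hour of the year, which is what makes the high-throughput OSG approach attractive.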

Read the article in the Journal of Atmospheric and Oceanic Technology.

System Status

General Environment

last update: 2024-07-27 02:13:05
General Nodes
system cores % util.
kingspeak 828/972 85.19%
notchpeak 2896/3212 90.16%
lonepeak 1468/3060 47.97%
Owner/Restricted Nodes
system cores % util.
ash 792/1152 68.75%
notchpeak 10293/19948 51.6%
kingspeak 508/5324 9.54%
lonepeak 336/416 80.77%

Protected Environment

last update: 2024-07-27 02:10:02
General Nodes
system cores % util.
redwood 0/628 0%
Owner/Restricted Nodes
system cores % util.
redwood 654/6472 10.11%


Cluster Utilization

Last Updated: 2024-05-01