
Open OnDemand web portal

Open OnDemand is a web portal that provides access to CHPC file systems and clusters. It allows users to view, edit, upload, and download files; create, edit, submit, and monitor jobs; run GUI applications; and connect via SSH, all through a web browser and with minimal knowledge of Linux and scheduler commands.

The Ohio Supercomputer Center, which develops Open OnDemand, has a detailed documentation page on all the features, most of which are functional at CHPC as well.

Connecting to Open OnDemand

To connect, point your web browser to https://ondemand.chpc.utah.edu and authenticate with your CHPC user name and password. For classes, use https://ondemand-class.chpc.utah.edu. For the Protected Environment, go to https://pe-ondemand.chpc.utah.edu. After this, a front page is shown with a top menu like this:
OOD top menu

This menu provides access to the OnDemand tools.


XDMod integration

CHPC uses XDMoD to report job metrics, such as resource utilization, at xdmod.chpc.utah.edu in the general environment or pe-xdmod.chpc.utah.edu in the Protected Environment. The OnDemand dashboard front page provides links to user utilization and job efficiency data:

OOD XDMoD metrics


File Management and Transfer

The Files menu allows one to view and operate on files in the user's home directory and any other directories they have access to, such as group spaces and scratch file systems. OSC's File Transfer and Management help page provides details on its use. This complements opening a FastX remote desktop and using that desktop's file manager, as well as using SCP-based remote transfer tools like WinSCP or Cyberduck.

You can easily transfer files from your personal computer to CHPC through Open OnDemand. For example, to transfer files to your home directory, click the 'Files' dropdown menu at the top and click the 'Home Directory' link. Once in your home directory, click the blue 'Upload' button in the top right corner of the file manager window; a pop-up will open to browse and upload files from your personal computer.


Job Management

Jobs can be monitored, created, edited, and submitted with the job management tools under the Jobs menu. OSC's Job Management help page provides more information on their use and features. These tools serve as a GUI alternative to SLURM scheduler commands.

You can view currently running jobs by clicking Jobs -> Active Jobs. By default, all active jobs running on CHPC clusters are listed; filter the list to your own jobs by changing All Jobs to Your Jobs. You can additionally change the filtering criteria from All Clusters to the cluster of your choice.
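For comparison, the same information the Active Jobs page shows can be obtained from a cluster shell with standard SLURM commands, for example:

squeue -u $USER     # list your queued and running jobs on the current cluster
scancel <jobid>     # cancel a job from the command line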

The Open OnDemand Job Management tools also enable you to write SLURM batch scripts with the help of pre-defined templates. You can access the available templates by clicking Jobs -> Job Composer. At its most basic, the From Default Template drop-down allows you to select your cluster and resource needs, and lets you edit the command line arguments. We also have a variety of SLURM templates available for certain packages, found in the From Template drop-down menu. If you don't see a job template you would like, contact us at helpdesk@chpc.utah.edu. Lastly, if you have written SLURM scripts at CHPC in the past and would like to use one of those as a template, you can use the From Specified Path or From Selected Job drop-down menus.

Jobs can be submitted or stopped, and scripts deleted, with the available Submit, Stop, and Delete buttons.
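For reference, a script produced from a default template will generally resemble a minimal SLURM batch script along the lines of the sketch below; the account, partition, module, and program names are placeholders that need to be adjusted to your own setup:

#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --account=my-account      # placeholder account name
#SBATCH --partition=notchpeak     # placeholder partition
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

module load my_application        # placeholder module
my_program input.dat              # placeholder program and input file

Such a script can be submitted either with the Job Composer's Submit button or from a shell with sbatch, e.g. sbatch my_script.slurm.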


Shell Access

The Clusters drop-down menu at the top provides links for shell access to all CHPC clusters via interactive nodes. The shell terminal is similar to many other tools that provide terminal access.

cluster-terminal-access


Interactive Applications

The Interactive Apps drop-down menu contains items to launch certain GUI applications interactively on CHPC cluster compute nodes or the Frisco nodes. On the clusters, the apps are launched inside a SLURM scheduled job; that is, they get allocated dedicated resources on the compute nodes and run like any other SLURM job submitted via the command line. On the Frisco nodes, a new remote desktop login session is initiated, which is subject to the same resource limits as other Frisco login sessions (monitored by the Arbiter script).

The supported applications include a remote desktop, Ansys Workbench, Ansys Electronics Desktop, Abaqus, COMSOL, IDL, Lumerical, MATLAB, Mathematica, Jupyter Notebook, RELION, RStudio Server, SAS, Star-CCM+, Stata, and VS Code. Applications that require a functional X server are only available on the Frisco nodes and include Coot, IDV, Meshroom, ParaView, and VMD. Other applications may be requested via helpdesk@chpc.utah.edu.

In the general environment, all application submissions default to the notchpeak-shared-short cluster partition, which is designed to provide interactive access to compute resources with limits of 16 CPU cores and 8 hours of wall time per job. However, any cluster partition/allocation or Frisco node can be requested this way, with a few exceptions noted on the respective app's form page.

Similarly, in the Protected Environment, the redwood-shared-short account and partition are used as the default, where the maximum is 8 CPU cores for 8 hours.
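For reference, launching an Interactive App with these defaults is roughly equivalent to requesting an interactive job from the command line, for example in the general environment (the account and partition names below follow the defaults described above; adjust them to your own situation):

salloc --account=notchpeak-shared-short --partition=notchpeak-shared-short --ntasks=1 --time=1:00:00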

Most interactive applications share a common form with input fields shown and described further down.

Interactive App form

Choose the appropriate job options or leave them at their defaults of 1 CPU for 1 hour. To submit the job, push the Launch button. A new window, My Interactive Sessions, appears, informing you that the job is being staged.

Once the job starts, the window is updated with a launch button (e.g., Launch Interactive Desktop in the example below). Clicking the launch button opens a new browser tab with the application running on the compute node. Keep in mind that this interactive session stays alive even if you close the browser tab, as long as it is within the chosen run time (walltime).

To remove the session, push the red Delete button, which cancels the job and the interactive session.

If supported by the application, the View Only button provides a view-only web link that you can share with colleagues who have access to ondemand.chpc.utah.edu. You can also open an SSH session to the node running the job by clicking the blue host name box.

For troubleshooting, there is the Session ID, which links to a directory containing the job session files. If something does not work, please supply us with the Session ID link, which helps us quickly identify the correct job.

Interactive Session


Basic options

There are several required inputs: Cluster, Account and partition, Number of CPUs (per node), and Number of hours. Certain applications also include a Program version and a few application-specific inputs. Below are descriptions of these required inputs:

Cluster - the cluster to run the Interactive App on; choose one of the pull-down options. The default is notchpeak, since it houses the notchpeak-shared-short partition, which targets interactive Open OnDemand jobs.

Cluster

Account and partition - the SLURM account and partition to use. Options automatically change for the selected cluster.

Account and partition

Number of CPUs - the number of CPUs to use. Note that for a multi-node job, this is the number of CPUs per node. The default is 1 CPU; leave it at that unless you know that your application is parallelized.

Number of CPUs

Number of hours - the duration of the Interactive App session. Note that once the time expires, your session automatically ends, and any unsaved work may be lost, depending on the application.

Number of hours


Advanced options

Advanced options are listed after checking the Advanced options checkbox. If one of the advanced options has a non-default value, the form will not allow hiding the advanced options; to hide them, remove the value or set it back to the default. The advanced options include:

Number of nodes - the number of compute nodes that the job will use. This is only active for programs that support multi-node execution, such as Abaqus, Ansys Workbench, Ansys EDT, COMSOL, Lumerical, and RELION. Note that the Number of CPUs is specified per node, so the total number of CPUs for the job will be Number of nodes x Number of CPUs (for example, 2 nodes with 8 CPUs each gives 16 CPUs in total).

Number of nodes

Memory per job (in GB) - the amount of memory the job needs. This refers to the total memory for the whole job, not per CPU. The default is 2 GB or 4 GB per CPU; if left at 0, the default is used.

Memory per job

GPU type, count - the type of GPU and the number of GPU devices to use for the job. Make sure that a partition that includes this GPU is selected, based on the ownership of the node (general = owned by CHPC; owner = owned by a specific research group). The floating point precision of the GPU is also listed (SP = single precision, DP = double precision). See the GPU node list for details on GPU features, owners, and counts per node. Specify the number of GPUs requested in the GPU count field.

GPU type, count
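For reference, the GPU type and count fields correspond roughly to a SLURM generic resource request such as the sketch below; the GPU type name is only an example, see the GPU node list for the actual types available:

#SBATCH --gres=gpu:a100:1      # example: request one GPU of an example type 'a100'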

Nodelist - a list of nodes to run the job on (equivalent to the SLURM -w option). Useful for targeting specific compute nodes.

Nodelist

Additional Environment - additional environment settings to be imported into the job, such as loading modules or setting environment variables, in BASH syntax (see the example below). Note that this works only for applications that run natively on CHPC systems, not for applications that run in containers.

Additional environment
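For example, the Additional Environment field could contain BASH lines such as the following; the module name and variable value are only illustrations:

module load gcc/11.2.0       # example: load a module needed by the job
export OMP_NUM_THREADS=4     # example: set an environment variable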

Constraints - constraints for the job (the SLURM -C option). Helpful for targeting less used owner-guest nodes to lower preemption chances; see this page for more information. It can also be used to request specific CPU architecture types, e.g. only AMD or only Intel CPUs. Use the or operator to combine multiple options (see the example below).

Constraints
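For example, to allow the job to run on nodes that have either of two features, the Constraints field would combine them with the | (or) operator, equivalent to the SLURM directive sketched below; the feature names are placeholders, not actual CHPC feature names:

#SBATCH -C "featureA|featureB"    # run on nodes that have either featureA or featureB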


Interactive Desktop

The Interactive Desktop app allows users to submit an interactive job to a cluster and attach to it via a remote interactive desktop connection. One thus gets a fully functioning desktop on a cluster compute node, or a Frisco node, from which GUI applications can be run.

To open the desktop, first select the Interactive Apps -> Interactive Desktop menu item to obtain a form window just like the one shown above, where we can specify the job parameters.

After pushing the Launch button, the job gets submitted. A new window, My Interactive Sessions, appears, informing you that the job is being staged. Once the job starts, the window is updated with the Interactive Desktop launch button (Launch Interactive Desktop). Once we click on Launch Interactive Desktop, a new browser tab opens with the desktop running on the compute node. This interactive session will stay alive even if the browser tab is closed.

To remove the session, push the red Delete button, which cancels the job and the interactive session.

The View Only button provides a view-only web link to this interactive desktop that you can share with colleagues who have access to ondemand.chpc.utah.edu. You can also open an SSH session to the node running the job by clicking the blue host name box. Importantly for troubleshooting, there is the Session ID, which links to a directory containing the job session files. If something does not work, please supply us with the Session ID when you contact us for help.


MATLAB

First choose the Interactive Apps -> MATLAB menu item. A job submission window appears where we can change parameters such as the cluster, MATLAB version, CPU core count, job walltime, and optional memory and GPU requirements. MATLAB numerical calculations on vectors and matrices will benefit from parallelization up to roughly 16 CPU cores. Walltime should be chosen based on the estimated time the work will take. The default parameters are centered around the notchpeak-shared-short or redwood-shared-short limits.


Matlab

If in doubt, leave the defaults and push the blue Launch button, which submits the job. If there are available resources (there should be on notchpeak-shared-short or redwood-shared-short), the job will start, after which the interactive session will show that the job is running.


OOD Matlab job session

Pushing the blue Launch MATLAB button will open a new browser tab, which starts an interactive desktop session followed by a launch of the MATLAB GUI. One can then interact with this MATLAB GUI just as they would on their local computer:


OOD matlab session

Please keep in mind that closing the browser tab with MATLAB does not terminate it, because the remote sessions are persistent. Persistence brings the benefit of leaving a calculation running even when the browser tab is closed and re-connecting to it later. To reconnect, click on the My Interactive Sessions menu icon to view active sessions, and click on the blue Launch MATLAB to open a new browser tab with the previously started MATLAB session. To terminate the session, click the red Delete button in the interactive sessions list to cancel the job.


Ansys Workbench

To run Ansys Workbench, click on the Interactive Apps -> Ansys Workbench menu item to obtain the job submission window, where we fill in the needed parameters.


Ansys

After hitting Launch, we get the job status window, which gets updated when the job starts.


OOD Ansys 2

We open the interactive session with the Launch Ansys Workbench button to get the Ansys Workbench interface. There we can import a CFX input file, right-click on the Solution workbench item to bring up the Run menu, and hit Start Run to run the simulation. Run progress is then interactively displayed in the Solver Manager window. This provides the user with the same experience as running on a local workstation, but using one or more dedicated many-core cluster compute nodes instead.


OOD Ansys 3


Jupyter Notebook and Jupyter Lab

Jupyter notebooks allow for interactive code development. Although originally developed for Python, Jupyter notebooks now support many interpreted languages. Jupyter Lab provides a newer, more navigable user interface compared to the older Jupyter Notebook interface. Jupyter Notebook or Jupyter Lab within OnDemand runs on a cluster compute node using dedicated resources and can be launched by choosing the Interactive Apps -> Jupyter Notebook or Jupyter Lab menu item. A job submission screen will appear:


Jupyter

Set the job parameters. Unless you are using numerical libraries like NumPy, which are thread-parallelized, it is not advisable to choose more than one CPU.

In the Environment Setup, one can either specify the CHPC-owned Python installation or their own Python installation. This allows one to use their own Python stack or even someone else's Python/Anaconda/Miniconda.

Note: If you are using your own Miniconda module, make sure that you have installed the Jupyter infrastructure by running conda install jupyter before attempting to launch any Jupyter job. For Jupyter Lab, run  conda install -c conda-forge jupyterlab.

Kernels for additional languages can be installed following their appropriate instructions, e.g. for R, the IRkernel. When installing the IRkernel, make sure to put the kernel into the directory where the Miniconda is installed by using the prefix option; by default it does not go there, which may create conflicts with other Python versions. It is also a good idea to name the kernel, using the name option, so that multiple R versions can be supported. That is: IRkernel::installspec(prefix='/uufs/chpc.utah.edu/common/home/uNID/software/pkg/my_miniconda', name='ir402').

Also note that if you are using Python Virtual Environments, you need to install the  ipykernel  in each virtual environment.
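For example, a minimal sketch of preparing a Python virtual environment for Jupyter could look like the following; the environment name and location are placeholders:

python -m venv $HOME/venvs/myproject                    # create the virtual environment
source $HOME/venvs/myproject/bin/activate               # activate it
pip install ipykernel                                   # install the kernel package
python -m ipykernel install --user --name myproject     # register the kernel with Jupyter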

Once the job parameters are specified, hit the Launch button to submit the interactive job. The job gets queued; when it starts and Jupyter is provisioned, the following window appears:


Jupyter job ready

Click on the Connect to Jupyter button to open a new browser tab with the main Jupyter interface. Note that the Running Jupyter tab shows active notebooks, for example:


jupyter running notebooks

We have installed support for Python (CHPC Linux python/3.6.3 module), R (CHPC Linux R/3.6.1 module) and Matlab (R2019a). If you need other versions or support for other languages, contact helpdesk@chpc.utah.edu.

NOTE: The Jupyter server starts in the user's home directory and only files in the home directory are accessible through the Jupyter file browser. To access files on other file systems (scratch, group space), create a symbolic link from this space to your home directory, e.g.:

ln -s /scratch/general/nfs1/u0123456/my_directory $HOME/my_scratch


RStudio server

The RStudio Server runs the RStudio interactive development environment inside a browser. The OnDemand implementation allows users to set up and launch the RStudio Server on a cluster compute node with dedicated resources, allowing users to run more compute-intensive R programs in the RStudio environment. To start an RStudio Server job, first navigate to the Interactive Apps -> RStudio Server menu item. A job parameters window will appear similar to the one below. Choose the appropriate job parameters, keeping in mind that R can internally thread-parallelize vector-based data processing, for which more than one CPU can be utilized.


RStudio

After clicking the Launch button, the cluster job is submitted and after the job is allocated resources, the following window appears:


Rstudio launch

Clicking on the Connect to RStudio Server button opens a new tab with RStudio:


rstudio tab

To terminate the session, close the RStudio browser tab and push Delete to delete the running job. Note that just closing the RStudio browser tab will keep the session active.

NOTE: If you install new R packages and get an error such as g++: error: unrecognized command line option '-wd308', please modify your ~/.R/Makevars and remove all the flags that contain the '-wd308' option. We recommend these flags for building packages with the Intel compilers that are used to build CHPC's R; however, the R used in Open OnDemand is built with g++, so these flags are not valid there.

NOTE: For technical reasons, the RStudio Server currently does not work on the Frisco nodes. Please, contact us if you need this functionality.

Last Updated: 7/26/24