Open OnDemand web portal

Open OnDemand is a web portal that provides access to CHPC file systems and clusters. It allows users to view, edit, upload, and download files; create, edit, submit, and monitor jobs; run GUI applications; and connect via SSH, all through a web browser and with minimal knowledge of Linux and scheduler commands.

The Ohio Supercomputer Center (OSC), which develops Open OnDemand, has a detailed documentation page covering all the features, most of which are functional at CHPC as well.

Connecting to Open OnDemand

To connect, point your web browser to https://ondemand.chpc.utah.edu and authenticate with your CHPC user name and password. For classes, use https://ondemand-class.chpc.utah.edu. For the Protected Environment, go to https://pe-ondemand.chpc.utah.edu. After this, a front page is shown with a top menu like this:
OOD top menu

This menu provides access to the OnDemand tools.

XDMoD integration

CHPC uses XDMoD to report job metrics such as resource utilization at xdmod.chpc.utah.edu, or pe-xdmod.chpc.utah.edu in the Protected Environment. The OnDemand front dashboard provides links to the user utilization and job efficiency data:

OOD XDMoD metrics

File Management and Transfer

The Files menu allows one to view and operate on files in the user's home directory. OSC's File Transfer and Management help page provides details on its use. This is complementary to opening a FastX remote desktop and using that desktop's file manager, or to using SCP-based remote transfer tools such as WinSCP or CyberDuck.

Job Management

Jobs can be monitored, created, edited, and submitted with the job management tools under the Jobs menu. OSC's Job Management help page provides more information on the use and features. This serves as a GUI alternative to SLURM scheduler commands (Active Jobs menu item) and allows one to write SLURM batch scripts with the help of pre-defined templates (Job Composer menu item). If you don't see a job template you would like, contact us at helpdesk@chpc.utah.edu.
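
For reference, a minimal SLURM batch script of the kind the Job Composer templates produce could look like the sketch below; the account, module, and program names are placeholders to be replaced with your own values:

    #!/bin/bash
    #SBATCH --account=my-account            # placeholder: your SLURM account
    #SBATCH --partition=notchpeak-shared-short
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --time=01:00:00
    # set up the environment, e.g. load needed modules
    module load my-program-module           # placeholder module name
    # run the program
    ./my_program                            # placeholder executable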

Shell Access

The Clusters tab provides links to shell access on the interactive nodes of all CHPC clusters. The shell terminal is similar to many other tools that provide terminal access.

Interactive Applications

The Interactive Apps menu contains items to launch certain GUI applications interactively on CHPC cluster compute nodes or the Frisco nodes. On the clusters, the apps are launched in a SLURM-scheduled job, that is, they get allocated unique resources on the compute nodes, just like any other SLURM job submitted via the command line. On the Frisco nodes, a new remote desktop login session is initiated, which is subject to the same resource limits as other Frisco login sessions (monitored by the Arbiter script).

The supported applications include a remote desktop, Ansys Workbench, Ansys Electronics Desktop, Abaqus, COMSOL, IDL, Lumerical, MATLAB, Mathematica, Jupyter Notebook, RELION, RStudio Server, SAS, Star-CCM+, Stata and VS Code. Applications that require a functional X server are only available for the Frisco nodes, and include Coot, IDV, Meshroom, Paraview and VMD. Other applications may be requested via helpdesk@chpc.utah.edu.

In the general environment, all application submissions default to the notchpeak-shared-short cluster partition, which is designed to provide interactive access to compute resources, with limits of 16 CPU cores and 8 hours of wall time per job. However, any cluster partition/allocation or Frisco node can be requested this way, with a few exceptions noted in the respective app's form page.

Similarly, in the Protected Environment, the redwood-shared-short account and partition are used as the default; here the maximum is 8 CPU cores for 8 hours.
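
For illustration, requesting the general-environment partition maximums described above from the command line would look roughly like the sketch below; the account name is assumed here to mirror the partition name:

    salloc --account=notchpeak-shared-short --partition=notchpeak-shared-short \
           --ntasks=16 --time=8:00:00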

Most interactive applications share a common form with input fields shown and described further down.

Interactive App form

Choose the appropriate job options, or leave them at their defaults, which give one 1 CPU for 1 hour. Then push the Launch button, which submits the job. A new window, My Interactive Sessions, appears, informing you that the job is being staged. Once the job starts, the window is updated with the launch button (e.g. Launch Interactive Desktop). There is also the possibility of getting a view-only web link, if supported (View Only button), that one can share with colleagues (who have access to ondemand.chpc.utah.edu). One can also open an SSH session to the node with the job by clicking the blue host name box. Importantly for troubleshooting, there is the Session ID, which links to a directory where the job session files are. If something does not work, please supply us with the Session ID link, which helps us quickly identify the correct job.

Interactive Session

Once we click the Launch button, a new browser tab opens with the application running on the compute node. Keep in mind that this interactive session will stay alive even if one closes the browser tab, as long as it is within the chosen run time (walltime).

To remove the session, push the red Delete button, which cancels the job and the interactive session.

Basic options

There are several required inputs: Cluster, Account and partition, Number of CPUs (per node), and Number of hours. Certain applications also include Program version and a few specific inputs.

Cluster - the cluster to run the Interactive App on; choose one of the pull-down options. The default is notchpeak, since it houses the notchpeak-shared-short partition, which targets interactive Open OnDemand jobs.

Cluster

Account and partition - SLURM account and partition to use. Choose one that corresponds to the selected cluster from the pull-down options. In the future, the options will change automatically based on the selected cluster.

Account and partition

Number of CPUs - number of CPUs to use. Note that in the case of a multi-node job, this is the number of CPUs per node. The default is 1 CPU; leave it at that unless you know that your application is parallelized.

Number of CPUs

Number of hours - duration of the Interactive App session.

Number of hours

Advanced options

Advanced options are listed after checking the Advanced options checkbox. If one of the advanced options has a non-default value, the form will not allow hiding the options. If you want to hide the options, make sure to remove the value or set it back to the default. The advanced options include:

Number of nodes - number of compute nodes that the job will use. Only active for programs that support multi-node execution: Abaqus, AnsysWB, AnsysEDT, Comsol, Lumerical, and Relion. Note that the Number of CPUs is specified per node, so the total number of CPUs for the job will be Number of nodes x Number of CPUs.

Number of nodes

Memory per job (in GB) - amount of memory the job needs, total for the whole job, not per CPU. The default is 2 GB or 4 GB per CPU; if left at 0, the default is used.

Memory per job
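
For a single-node job, this corresponds to SLURM's per-node memory request rather than the per-CPU one, roughly as in this sketch:

    #SBATCH --mem=32G        # total memory for the job (one node)
    # as opposed to --mem-per-cpu=4G, which would be multiplied by the CPU count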

GPU type, count - type and count of GPUs to use for the job. Make sure that a partition that includes this GPU is selected, based on the ownership of the node (general or owner = specific research group). The floating point numerical precision of the GPU is also listed (SP or DP). See the GPU node list for details on GPU features, owners, and counts per node. Specify the number of GPUs requested in the GPU count field.

GPU type, count
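
For reference, the form's GPU selection corresponds to a SLURM generic resource request along these lines (the partition and GPU type below are illustrative; check the GPU node list for the actual types, owners, and counts):

    #SBATCH --partition=notchpeak-gpu    # a partition that contains the chosen GPU
    #SBATCH --gres=gpu:v100:1            # GPU type and count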

Nodelist - list of nodes to run the job on (equivalent to the SLURM -w option). Useful for targeting specific compute nodes.

Nodelist
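
Since the field maps to the SLURM -w option, the entry can be a single node or a bracketed range, for example (node names are illustrative):

    notch001
    notch[001-004]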

Additional Environment - additional environment settings to be imported into the job, such as loading modules, setting environment variables, etc., in BASH syntax. Note that this works only for applications that run natively on CHPC systems, not for applications that run in containers.

Additional environment
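
For example, the field could contain BASH lines such as the following (the module name and variable are illustrative):

    module load gcc
    export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE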

Constraints - constraints for the job (the SLURM -C option). Helpful for targeting less used owner-guest nodes to lower preemption chances; see this page for more information. It can also be used to request specific CPU architecture types, e.g. only AMD or only Intel CPUs. Use the or operator, |, to combine multiple options.

Constraints
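
For example, one could enter something along these lines (the feature names are illustrative; the actual node features are listed on the page referenced above):

    skl|csl        # either of two owner node CPU generations
    amd            # only AMD CPUs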

Interactive Desktop

The Interactive Desktop app allows one to submit an interactive job to a cluster and attach to it via a remote interactive desktop connection. One thus gets a fully functioning desktop on a cluster compute node, or a Frisco node, from which GUI applications can be run.

To open the desktop, we first select the menu Interactive Apps - Interactive Desktop, obtaining the form window shown above, where we can specify the job parameters.

After pushing the Launch button, the job gets submitted. A new window, My Interactive Sessions, appears, informing you that the job is being staged. Once the job starts, the window is updated with the Interactive Desktop launch button (Launch Interactive Desktop). There is also the possibility of getting a view-only web link to this interactive desktop (View Only button) that one can share with colleagues (who have access to ondemand.chpc.utah.edu). One can also open an SSH session to the node with the job by clicking the blue host name box. Importantly for troubleshooting, there is the Session ID, which links to a directory where the job session files are. If something does not work, please supply us with the Session ID.

Once we click the Launch Interactive Desktop button, a new browser tab opens with the desktop on the compute node. This interactive session will stay alive even if one closes the browser tab.

To remove the session, push the red Delete button, which cancels the job and the interactive session.

MATLAB

First choose the Interactive Apps - MATLAB menu item. A job submission window appears, where we can change parameters such as the cluster, MATLAB version, CPU core count, job walltime, and optionally memory and GPU requirements. MATLAB numerical calculations on vectors and matrices will benefit from parallelization up to roughly 16 CPU cores. Walltime should be chosen based on the estimated time the work will take. The default parameters are centered around the notchpeak-shared-short or redwood-shared-short limits.
Matlab

If in doubt, leave the defaults and push the blue Launch button, which submits the job. If there are available resources (there should be on notchpeak-shared-short or redwood-shared-short), the job will start, after which the interactive session will show that the job is running.
OOD Matlab job session

Pushing the blue Launch MATLAB button will open a new browser tab, which will start an interactive desktop session, followed by a launch of the MATLAB GUI. One can then interact with this MATLAB GUI as one is used to on a local computer:
OOD matlab session

Please keep in mind that closing the browser tab with MATLAB does not terminate it, because the remote sessions are persistent. The persistence brings the benefit of leaving a calculation running even when the browser tab is closed and re-connecting to it later. To reconnect, click on the My Interactive Sessions menu icon to view active sessions, and click the blue Launch MATLAB button to open a new browser tab with the previously started MATLAB session. To terminate the session, click the red Delete button in the interactive sessions list to cancel the job.

Ansys Workbench

To run, for example, Ansys Workbench, after choosing Interactive Apps - Ansys Workbench we obtain the job submission window, where we fill in the needed parameters.
Ansys

After hitting Launch we get the job status window, which gets updated when the job starts.
OOD Ansys 2

We open the interactive session with the Launch Ansys Workbench button to get the Ansys Workbench interface. There we can, for example, import a CFX input file, right-click on the Solution workbench item to bring up the Run menu, and hit Start Run to run the simulation. Run progress is then interactively displayed in the Solver Manager window. This provides a user with the same experience as if run on a local workstation, but instead using one or more dedicated many-core cluster compute nodes.
OOD Ansys 3

Jupyter Notebook and Jupyter Lab

Jupyter notebooks allow for interactive code development. Although originally developed for Python, Jupyter notebooks now support many interpreted languages. Jupyter Lab provides a newer, more navigable user interface compared to the older Jupyter Notebook interface. A Jupyter Notebook or Lab inside OnDemand, running on a cluster compute node with dedicated resources, can be launched by choosing the menu Interactive Apps -> Jupyter Notebook or Jupyter Lab. A job submission screen appears:
Jupyter

Set the job parameters. Unless you are using numerical libraries like NumPy, which are thread-parallelized, it is not advisable to choose more than one CPU.

In the Environment Setup, one can specify either a CHPC or one's own Python installation. This allows one to use their own Python stack, or even someone else's Python/Anaconda/Miniconda.
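
For example, the Environment Setup box could contain either a CHPC module load or the activation of a personal Miniconda (the paths below are illustrative):

    # CHPC-provided Python
    module load python/3.6.3

    # or a personal Miniconda installation
    source /uufs/chpc.utah.edu/common/home/uNID/software/pkg/my_miniconda/etc/profile.d/conda.sh
    conda activate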

Note: If you are using your own Miniconda module, make sure that you have installed the Jupyter infrastructure by running conda install jupyter before attempting to launch any Jupyter job. For Jupyter Lab, run conda install -c conda-forge jupyterlab.

Kernels for additional languages can be installed following their appropriate instructions, e.g. for R, the IRkernel. When installing the IRkernel, make sure to put the kernel into the directory where the Miniconda is installed with the prefix option - by default it does not go there, which may create conflicts with other Python versions. It is also a good idea to name the kernel so that multiple R versions can be supported, with the name option. That is, IRkernel::installspec(prefix='/uufs/chpc.utah.edu/common/home/uNID/software/pkg/my_miniconda', name='ir402').

Also note that if you are using Python virtual environments, you need to install the ipykernel in each virtual environment.
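
A minimal sketch of registering a virtual environment as its own Jupyter kernel (the environment name and location are illustrative):

    # create and activate the virtual environment
    python -m venv $HOME/venvs/myproject
    source $HOME/venvs/myproject/bin/activate
    # install and register the kernel for this environment
    pip install ipykernel
    python -m ipykernel install --user --name myproject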

Once the job parameters are specified, hit the Launch button to submit the interactive job. The job gets queued up; when it starts and Jupyter is provisioned, the following window appears:
Jupyter job ready

Click on the Connect to Jupyter button to open a new browser tab with the main Jupyter interface. Note that the Running Jupyter tab shows active notebooks, for example:
jupyter running notebooks

We have installed support for Python (CHPC Linux python/3.6.3 module), R (CHPC Linux R/3.6.1 module) and Matlab (R2019a). If you need other versions or support for other languages, contact helpdesk@chpc.utah.edu.

NOTE: The Jupyter server starts in the user's home directory, and only files in the home directory are accessible through the Jupyter file browser. To access files on other file systems (scratch, group space), create a symbolic link from that space to your home directory, e.g. ln -s /scratch/general/nfs1/u0123456/my_directory $HOME/my_scratch.

RStudio server

RStudio Server runs the RStudio interactive development environment inside a browser. The OnDemand implementation allows one to set up and launch RStudio Server on a cluster compute node with dedicated resources, which allows one to run more compute-intensive R programs in the RStudio environment. To start an RStudio Server job, first navigate to the menu Interactive Apps - RStudio Server. A job parameters window appears:
RStudio

Choose the appropriate job parameters, keeping in mind that R can internally thread-parallelize vector-based data processing, for which more than one CPU can be utilized. After clicking the Launch button, the cluster job is submitted, and after the job is allocated resources, the following window appears:
Rstudio launch

Clicking on the Connect to RStudio Server button opens a new tab with RStudio:
rstudio tab

To close the session, close the RStudio browser tab and push Delete to delete the running job.

NOTE: If you install new R packages and get an error g++: error: unrecognized command line option '-wd308', please modify your ~/.R/Makevars and remove all the flags that contain this option. We recommend these flags in order to build packages with the Intel compilers that are used to build CHPC's R. However, the R we use in Open OnDemand uses g++, therefore these flags are not valid.

NOTE: For technical reasons, RStudio Server currently does not work on the Frisco nodes. Please contact us if you need this functionality.

Last Updated: 7/5/23