CHPC has a set of eight nodes, friscoX.chpc.utah.edu, X=1-8, that can be used in a similar manner to the CHPC general cluster login or interactive nodes, e.g., kingspeak1.chpc.utah.edu. Each of these nodes also has the FastX2 server installed.
As the frisco nodes are not cluster interactive nodes on which users rely to submit work to the compute nodes, their usage is more relaxed than that specified in the CHPC acceptable interactive node usage policy. However, they are still shared, general resources available to all CHPC users, and as such we do monitor their usage and will notify any user who is overloading a node. These nodes should not be used in place of the batch system for long runs or for jobs that need multiple cores or large memory; for such work, use an interactive batch session as described on the Slurm documentation page.
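As a sketch of the interactive-batch alternative, the following builds a typical `salloc` command line. The account and partition names here are placeholders, not CHPC-specific values; substitute your own allocation and cluster, and see the Slurm documentation page for the options CHPC recommends.

```shell
# Hypothetical example: request an interactive session on a compute node
# instead of running heavy work on a frisco node.
CORES=4
WALLTIME=2:00:00
# "myaccount" and "kingspeak" are placeholders -- use your own account/partition.
CMD="salloc --ntasks=$CORES --time=$WALLTIME --account=myaccount --partition=kingspeak"
echo "$CMD"
```

Once the allocation is granted, `salloc` drops you into a shell on the assigned compute node, where long-running or multi-core work belongs.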
Examples of good usage include running applications with which you need to interact, such as MATLAB, RStudio, and VMD. In addition, if your application has a graphical user interface, the frisco nodes are a good resource to use. As listed below, and further described on the FastX2 page, all frisco nodes now have graphics cards, and can therefore be targeted for any application that either requires or can take advantage of VirtualGL (VMD, for example).
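As a minimal sketch, an OpenGL application on a frisco node is typically launched through VirtualGL's `vglrun` wrapper so that rendering uses the node's graphics card. The application name below is just an illustration; how the application itself is made available (e.g., via environment modules) varies, so check the FastX2 page and the node's module listing.

```shell
# Hypothetical example: launch a GUI/OpenGL application through VirtualGL
# from within a FastX2 session on a frisco node.
APP=vmd                    # placeholder application name
LAUNCH="vglrun $APP"       # vglrun redirects OpenGL rendering to the local GPU
echo "$LAUNCH"
```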
Below is a description of the individual nodes:
frisco1: 40 physical cores, 384 GB memory -- Old frisco1 was replaced with a new node on 26 August 2019!
frisco2: 16 physical cores, 96 GB memory
frisco3-4: 8 physical cores, 24 GB memory
frisco5-6: 8 physical cores, 72 GB memory
frisco7: 16 physical cores (AMD), 128 GB memory
frisco8: 28 physical cores, 128 GB memory