
New GPU nodes on kingspeak and new GPU procedures on ember

 Posted September 13th, 2016

CHPC now has four new GPU nodes on kingspeak: two nodes with four Tesla K80 cards each (kp297 and kp298), and two nodes with eight GeForce TitanX cards each (kp299 and kp300). The information about the GPU nodes given below, along with additional details, is available on a new CHPC web page: https://www.chpc.utah.edu/documentation/guides/gpus-accelerators.php

The K80 is of the Kepler architecture, released in late 2014. Each K80 card consists of two GPUs, each GPU having 12 GB of global memory, so each K80 node will show a total of eight GPUs available. Peak double precision performance of a single K80 card is 1864 GFlops. The K80 nodes have two 6-core Intel Haswell generation CPUs and 64 GB of host RAM.

The GeForce TitanX is of the next generation Maxwell architecture and also has 12 GB of global memory per card. An important difference from the K80 is that its double precision performance is poor (roughly 200 GFlops at most), while its single precision performance is excellent (~7 TFlops). The TitanX nodes should therefore be used for single precision or mixed single-double precision GPU codes. The TitanX nodes have two 6-core Intel Haswell generation CPUs and 64 GB of host RAM.

In addition, the Ember cluster has eleven nodes which have two Tesla M2090 cards each. The M2090 is of the Fermi architecture (compute capability 2.0) that was released in 2011. Each card has 6 GB of global memory. Although relatively old, each card has a peak double precision floating point performance of 666 GFlops, still making it a good performer. The GPU nodes have two 6-core Intel Westmere generation CPUs and 24 GB of host RAM.

In order to use the ember GPU nodes you must request the ember-gpu account and partition; for the kingspeak GPU nodes the account and partition is kingspeak-gpu. Users are added to these accounts upon request (email your request to issues@chpc.utah.edu). Note that even if you are already in the ember-gpu group, you will still need to request to be added to the kingspeak-gpu group. These nodes should only be used when running GPU enabled codes. Note that use of the GPU nodes does not count against any general allocation your group may have.
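For example, a batch script targeting the kingspeak GPU nodes would typically include account and partition directives along these lines (substitute ember-gpu in both directives for jobs on the ember GPU nodes):

    #SBATCH --account=kingspeak-gpu
    #SBATCH --partition=kingspeak-gpu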

Node sharing has been enabled on these GPU nodes, as many GPU codes are only able to run on a single GPU and others perform better on one or a small number of GPUs. This changes how you access the nodes. The major changes you need to make in your slurm batch script or srun line are listed below; an example batch script that puts them together follows the list:

  1. In order for your job to be assigned any of the GPUs on the node, you MUST specifically request one or more of the GPUs. This is done via a #SBATCH --gres line, where gres refers to a generic consumable resource of the node. The gres notation is a colon-separated list resource_type:resource_name:resource_count, where the resource_type is always gpu, the resource_name is either m2090, k80 or titanx, and the resource_count is the number of GPUs per node requested (1-8):
    1. #SBATCH --gres=gpu:k80:8 gives the job all eight of the k80 gpus and targets kp297 and kp298
    2. #SBATCH --gres=gpu:titanx:4 gives the job four of the titanx gpus and targets kp299 and kp300
    3. #SBATCH --gres=gpu:m2090:2 gives the job both gpus on one of the ember GPU nodes
  2. You should explicitly request memory, using #SBATCH --mem, as the default memory is 2 GB per CPU core (the lowest common amount on all CHPC compute nodes):
    1. #SBATCH --mem=0 gives the job all the memory of the node
    2. #SBATCH --mem=4G will give the job 4GB of memory
  3. You should explicitly request the number of CPUs, using the #SBATCH --ntasks option. The default is all of the CPU cores of the node(s) requested.
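Putting these pieces together, a batch script for a job that uses a single TitanX GPU might look like the following sketch. The time limit and the ./my_gpu_program executable are placeholders rather than part of the CHPC documentation; adjust them, along with the GPU type and count, core count, and memory, to fit your own job.

    #!/bin/bash
    #SBATCH --account=kingspeak-gpu      # GPU account (use ember-gpu on ember)
    #SBATCH --partition=kingspeak-gpu    # GPU partition (use ember-gpu on ember)
    #SBATCH --gres=gpu:titanx:1          # request one TitanX GPU, targeting kp299 or kp300
    #SBATCH --ntasks=2                   # explicitly request two CPU cores
    #SBATCH --mem=8G                     # explicitly request 8 GB of host memory
    #SBATCH --time=1:00:00               # walltime; placeholder value

    # Run the GPU enabled code; ./my_gpu_program is a placeholder executable
    ./my_gpu_program

An interactive test could be started the same way with srun, passing the same --account, --partition, and --gres options on the command line.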

If you have any questions or would like more information, please see the web page; if it does not answer your questions, please send a note to issues@chpc.utah.edu
