HPC Users Group


Creating a community around research computing at ILRI



@mralanorth

July 3, 2013

HUG Meetup #1

Infrastructure Hierarchy

HPC



Login node
"Master" of the slaves
Don't run things here :)

taurus



128 GB of RAM
64 CPUs ("cores")
Great for batch and interactive jobs

mammoth


384 GB of RAM
16 CPUs ("cores")
Great for high-memory jobs

Infrastructure Basics


You are everywhere.
Your data is everywhere.

SLURM

Slurm, the number-one soft drink of the future!

  • Simple Linux Utility for Resource Management
  • Manages allocation of computing resources
  • Tries to ensure fair distribution of resources
  • By default, everyone is allowed 20 CPUs or 20 jobs

Running Programs With SLURM


Your first SLURM job:

[aorth@hpc: ~]$ interactive 
salloc: Granted job allocation 1080
[aorth@taurus: ~]$ 
  • The interactive command gives you an interactive shell :)
  • In this shell you can run R, blast, julia, CLC, etc.


Check the queue status:
[aorth@taurus: ~]$ squeue
 JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)
  1080     batch     srun    aorth   R       0:02      1 taurus
  • Notice how I began on hpc and ended up on taurus!

Interactive Tips

By default interactive will give you 1 CPU...

If you want more, just ask!
[aorth@hpc: ~]$ interactive -c 5
salloc: Granted job allocation 1081

There are other options as well:
[aorth@hpc: ~]$ interactive -p highmem -J blast
jobname: blast
salloc: Granted job allocation 1082
  • -p:  selects the "partition" to run in
  • -J:  gives a name to the job (as seen in squeue)
  • -h:  print the help text
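taurus is described above as great for batch jobs too, not just interactive ones. A minimal batch-job sketch using sbatch (a standard SLURM command; the #SBATCH directives mirror the interactive options above, while the module name and input/output files here are hypothetical):

```shell
# Write a minimal SLURM batch script. The #SBATCH lines mirror the
# interactive options above: -p partition, -J job name, -c CPUs.
cat > blast.sbatch <<'EOF'
#!/usr/bin/env bash
#SBATCH -p batch              # run in the "batch" partition (taurus)
#SBATCH -J blast              # job name, as shown in squeue
#SBATCH -c 4                  # request 4 CPUs

# Load the application inside the job (module name is hypothetical),
# then run it; input/output file names are hypothetical too.
module load blast
blastn -query seqs.fasta -db nt -out results.txt
EOF
```

On the login node you would then submit it with `sbatch blast.sbatch` and watch its progress with `squeue`.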

More SLURM Commands


Show job info:
 scontrol show job <id>

Cancel a running job:
 scancel <id>

Show cluster / queue status:
[aorth@hpc: ~]$ sinfo -le
Tue Jul  2 15:11:27 2013
NODELIST NODES PARTITION     STATE CPUS    S:C:T MEMORY TMP_DISK WEIGHT FEATURES REASON
hpc          1    debug*      idle    8    8:1:1      1        0      1   (null)   none
mammoth      1   highmem      idle   16   16:1:1      1        0      1   (null)   none
taurus       1     batch      idle   64   64:1:1      1        0      1   (null)   none

What Else Can I Run?


Consult the One True List™ of installed software!


Using Modules


"modules" allow you to dynamically load support for applications into your environment...
[aorth@hpc: ~]$ R    
-bash: R: command not found
Oh noes! What do we do?
[aorth@hpc: ~]$ module avail R
--------------------------- /export/apps/modules/modulefiles ---------------------------
R/2.11.0        R/2.12.2        R/2.13.0        R/2.15.2        R/3.0.0         R/latest
[aorth@hpc: ~]$ module load R/3.0.0
[aorth@hpc: ~]$ R

R version 3.0.0 (2013-04-03) -- "Masked Marvel"
Copyright (C) 2013 The R Foundation for Statistical Computing
Platform: x86_64-unknown-linux-gnu (64-bit)
Show all available modules:
module avail

For The Adventurous


Applications are generally installed in /export/apps...
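Since the modulefiles point at these /export/apps directories, it may help to know that `module load` mostly just edits environment variables. A rough sketch of the effect (the exact directory layout under /export/apps is an assumption):

```shell
# Roughly what "module load R/3.0.0" does behind the scenes:
# prepend the application's bin directory to PATH so the shell finds it.
export PATH="/export/apps/R/3.0.0/bin:$PATH"   # hypothetical layout

# The first PATH entry is now the R 3.0.0 bin directory.
echo "$PATH" | cut -d: -f1    # → /export/apps/R/3.0.0/bin
```

`module unload` simply reverses these edits, which is why loading and unloading never touches the installed files themselves.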

Getting Help



--

Come back for HUG #2 ;)
August 7, 2013

HPC Users Group

By Alan Orth

A discussion of research computing facilities available at ILRI, including examples of how to use them!
