1 # Guide to CCM Clusters
The login node for the system is `math-alderaan.ucdenver.pvt`. There is also a legacy login node, `clas-compute`. The Alderaan cluster runs CentOS 8, while the Score and Colibri clusters and `clas-compute` run CentOS 7.
At this time, the main way of using the system is to use an SSH client to log into a terminal session on math-alderaan or clas-compute. You will need to be on the CU Denver private network (wired or Auraria secure wireless, not Auraria Guest). To connect from the internet, use the [university's VPN](https://www.ucdenver.edu/vpn) or [VMware Horizon remote access](https://remote.ucdenver.edu). For Horizon, it is highly recommended to download and use the VMware Horizon app instead of continuing in the browser. Either way, log in and click the "Complimentary" button, which will give you a Windows virtual machine on the campus network. Then open a Powershell window (press the Windows button, type `shell` in the search box, and select `Powershell`) and type `ssh math-alderaan` in it. See [here](https://www.ucdenver.edu/docs/default-source/offices-oit-documents/vpn-client-software/multi-factor-vmware-horizon-user-guide.pdf?sfvrsn=3d3a4db9_2) for more on the remote client.
11 This system uses your normal portal/email username and password, but your account must be set up before using the system. Please go to [accounts](../accounts/) to request an account; if you are a student, your faculty supervisor/project lead should request your account.
On Linux or a Mac, you can simply use the Terminal app, which is built into the operating system. It is hidden away in the Applications -> Utilities folder on a Mac and in similar places on various Linux desktops. You may want to drag it to your dock (on a Mac) or the desktop (on Linux) so that it is more conveniently available in the future.
Current Windows 10/11 has a [native ssh client](https://learn.microsoft.com/en-us/windows/terminal/tutorials/ssh) - just type `ssh` in a terminal window (also called a Powershell window or command window). The ssh client also comes with `scp` and `sftp` for file transfer.
17 [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10) is not recommended due to known issues with VPN.
19 Either way, from a terminal window, at the command line prompt type in:
21 ssh username@math-alderaan.ucdenver.pvt
Your username is your account name: a single short word which you can use instead of your email address to log into the university portal, not the firstname.lastname part of your university email. Contact us or the OIT helpdesk at [ucd-oit-helpdesk@cuanschutz.edu](mailto:ucd-oit-helpdesk@cuanschutz.edu) if you do not know what your account name is.
After connecting, ssh will ask for your CU Denver password; enter it at this point. You should then be at the `math-alderaan` prompt and in your home directory, which is `/home/username`.
27 ### Interactive use limitations
29 Using a server ‘interactively’ (a.k.a. not scheduling a job) is often needed for troubleshooting a job or just watching what it is doing in real time. After SSH’ing into a head node, you can type <code>ssh math-colibri-i01</code> or whatever interactive server you want to go to directly.
**Please do not run anything directly on compute nodes, which are reserved for jobs under the control of the scheduler, even if you may be able to ssh there. These are nodes with names like math-alderaan-c01, with something other than "i" before the number. Using compute nodes, where other people run jobs through the scheduler, will interfere with their work and make you very unpopular.** It is OK to ssh to a compute node to check on your job, but don't run anything there.
33 ### Screen virtual terminal in interactive usage
If you use `screen` and get disconnected, whatever you were running keeps going and you can connect to it again later. This is called a virtual terminal session. It is generally a good idea to use `screen` on clas-compute, math-alderaan, or on the interactive nodes.
Typing `screen` creates a new terminal session. You can give it a name if you want to juggle multiple sessions: `screen -S name` (make the name whatever you want).
If you want to disconnect from the session but leave it running, press Control-A followed by the D key. Control-A is the prefix that tells screen you want to perform an action.
When you want to reconnect to your screen session later, log back onto wherever you started the screen and type <code>screen -r</code>. If you have more than one screen, it will complain and tell you the screens you have available to reconnect to. Type <code>screen -r name</code> to reconnect to that screen.
You can't just scroll in `screen` to see your terminal history as you normally would.
Press Control-A and then Esc, and scrolling up and down will temporarily work the usual way. When you type anything, `screen` will leave the scrolling mode.
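A minimal example of a typical `screen` workflow (the session name and the program are only illustrations):

```
screen -S myjob                  # start a new named virtual terminal
./long_running_program           # start your work inside the screen
# press Control-A then D to detach; you can now log out safely
# ... later, log back in, then:
screen -ls                       # list your screen sessions
screen -r myjob                  # reattach to the named session
```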
48 **You are responsible for keeping copies of your important files elsewhere. Files and entire filesystems can be lost.**
The home directories are on a shared file server and linked as `/home/username`. Everyone can also have
a project directory `/storage/department/projects/username`
(where department may be one of the many departments who use this system). Users from some departments have
project directories created in `/data001/projects/username` instead. These are currently accessible from Alderaan nodes only,
i.e., not accessible from clas-compute or the math-colibri and math-score nodes. The location of the project directory is emailed to the user
when the directory is created, usually as a part of onboarding.
The difference between project and home directories is that home directories are backed up occasionally
while project directories are not. Please keep your home directory small to make the backups possible.
In addition, groups can request shared project directories in `/storage/department/projects` or `/data001/projects`.
Please monitor the usage of the partition you are on with `df -h`, and if it is nearing full, check that you are not using more space than you are aware of. If you need a lot of data storage, please contact us before filling all the space you can find.
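For example, to check how full the filesystem is and how much space your own directories take (the project path is a placeholder; adjust it to your own):

```
df -h .                                        # free space on the filesystem you are currently in
du -sh ~                                       # total size of your home directory
du -sh /storage/department/projects/username   # total size of a project directory
```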
On Alderaan only, you can make your own directory in `/scratch`, which is on a large, fast filesystem.
When `/scratch` starts filling up, the oldest files will be purged automatically.
**Do not keep any confidential or sensitive files on this system.** We are not equipped for the level of security this would take.
In particular, no proprietary data, health records, grades, social security numbers, and the like are allowed.
If you use ssh keys to connect elsewhere from this system
(such as Github or another computer account), it is highly recommended to make an ssh key protected by a passphrase for that. Otherwise,
the security of the account you are connecting to is only as good as the read protection of your files here.
Files and directories, including your home directory, are created with permissions which allow anyone to read them but not
write. This is the Linux default, meant to encourage collaboration. If you want to keep a file or directory private, you need to change the permissions yourself.
79 Type <code>chmod og-rwx file_or_directory_name</code> to make the file or directory not accessible by others (except system administrators, of course).
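For example, to check and tighten permissions on a directory (the directory name is just an illustration):

```
ls -ld ~/private_data              # show current permissions
chmod -R og-rwx ~/private_data     # remove all access for group and others, recursively
ls -ld ~/private_data              # verify: the listing should now show drwx------
```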
`df -h` will show you the storage arrays and how much space is available. There are different types of "empty" space in Linux, so `df -h` may say there is plenty of space yet the array is full.
86 ## Where is the software? Modules and Singularity containers
We normally do not install application software directly on the system because of software and version conflicts. Instead, we install software in *modules* or *Singularity containers*.
To see what software is available in modules, type `module avail`, which will provide a list of available software packages and their versions. The command `module load modulename/version` will change your environment (such as the `PATH` variable) temporarily so that the software and its various parts can be found.
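For example (the module name and version here are only illustrations; pick one from the `module avail` listing on the cluster you are using):

```
module load gcc        # load a module; you can pin a version, e.g. module load gcc/11.2.0
module list            # show which modules are currently loaded
which gcc              # PATH now points to the module's copy of the compiler
```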
Try that and use the `env` command to see what has changed.
You may need to load multiple modules at the same time. When you are done with a module you can `module unload` it, but it is strongly recommended to do `module purge`
and start over, loading exactly the modules you need, or simply log out and back in again.
112 Installed software and environment modules on our different clusters are generally different. See [modules](../modules) for more information.
114 ### Singularity containers
A Singularity container is a bit like a separate computer in itself which just happens to run on the same hardware. Thus, software in different containers won't conflict, and a container can provide a complete environment including a different operating system, libraries, etc. A disadvantage, however, is that you can use only the software installed in the container; software on the system outside of the container is not visible from the inside. Containers are read-only and cannot be changed. An exception is that some package managers, like conda, may allow installing software while you are inside the container. Additions made by conda this way actually reside in the `.local` directory in your home directory.
118 Using singularity is easy. Type, for example,
120 singularity shell /storage/singularity/tensorflow
and you can use many Python packages for machine learning. We have containers with statistics software, optimization, molecular chemistry,
and more. See [Singularity](../singularity/) for more details and a list of software in our containers.
126 ### Old software versions
Sometimes, you may need a specific version of some software package from a few years ago. We'll try. If the software version is not too far in the past, we may be able to install it in a module or in a singularity container. However, installing an older or more complicated package may require recreating an entire software ecosystem from a certain point in computer history, which can be overwhelming or impossible when old versions of dependencies are hard to find or no longer available.
130 ## Installing your own software packages
132 When working with software like R on our shared system, it’s important to install packages to a personal library to prevent conflicts with other users. This guide will help you set up and manage your R library effectively.
By default, user-installed packages go to a hidden directory in your home directory, `~/.local`, which is also used by other languages (e.g., Python). This can sometimes lead to conflicts, for example when packages installed under different versions of the same language end up in `~/.local` and then get picked up by another version of that language. You may want to occasionally clear out this directory to reset your personal environment.
136 > **Warning**: Running `rm -rf ~/.local` will delete **all** your user-installed packages, not just for R but also for Python and other languages. Use this only if you're comfortable reinstalling necessary packages.
138 To manage this safely:
140 - **Selective Cleanup**: Instead of wiping `~/.local`, you might choose to delete only specific folders within `.local/lib` for R or Python packages. For example:
142 rm -rf ~/.local/lib/R
143 rm -rf ~/.local/lib/python3.8
145 - **Set a custom R library path**:
146 - You can specify a custom directory for R packages instead of relying on `~/.local`.
147 - Add the following to your `~/.Rprofile` file to create and use a dedicated directory for R packages:
dir.create(file.path(Sys.getenv("HOME"), "R_packages"), showWarnings = FALSE)
151 .libPaths(c("~/R_packages", .libPaths()))
153 - This configuration will tell R to look in `~/R_packages` for user-installed packages, separate from `~/.local`.
On a Linux or Mac computer, you can use the file transfer utilities `rsync`, `scp`, and `sftp` on your computer to transfer files and entire directories between your computer and the clusters. These utilities are normally a part of the system; if not, you can install them from your Linux distribution. [Rsync](https://en.wikipedia.org/wiki/Rsync) is recommended. Typing `man rsync` should give you the manual for the system you are on. Rsync can transfer file trees recursively and resume a transfer which was interrupted.
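For example, to copy a directory to the cluster and bring results back (run these on your own computer; `username` and the paths are placeholders):

```
# copy a local directory to your home directory on the cluster
rsync -av myproject/ username@math-alderaan.ucdenver.pvt:~/myproject/
# copy results back from the cluster to your local machine
rsync -av username@math-alderaan.ucdenver.pvt:~/myproject/results/ ./results/
```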
On a current Windows PC, you can use `scp` and `sftp` from the command window (a.k.a. the Powershell window). Current Windows 10 and 11 have an OpenSSH client built in.
You can download a file from a website simply with `wget` followed by the URL of the file. You can get the URL of a file posted on the web by right-clicking it and selecting something like "Copy link address".
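For example (the URL is a placeholder):

```
wget https://example.com/path/to/dataset.tar.gz   # download the file into the current directory
tar xzf dataset.tar.gz                            # unpack it if it is a gzipped tar archive
```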
The easiest way to download files from Github is to clone the entire repository. On the repository main page, click the green "Code" button and copy the link. Then
173 git clone <the link you just copied>
You can use the https link if you want to clone the repository for reading only. If you want to push your changes to Github in the future, [you need to use ssh](https://docs.github.com/en/authentication/connecting-to-github-with-ssh). It is strongly recommended to create a separate key secured by a strong passphrase for this. Otherwise, the security of your Github account is only as good as the protection of your files here - anyone who gains administrator access here can log into your Github account.
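A minimal sketch of creating a dedicated key and cloning over ssh (the email address and repository are placeholders; the public key goes into your Github account settings under SSH keys):

```
ssh-keygen -t ed25519 -C "your_email@example.com"   # accept the default file, enter a strong passphrase
cat ~/.ssh/id_ed25519.pub                           # add this public key to your Github account
git clone git@github.com:username/repository.git   # clone over ssh so that you can push later
```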
183 Globus is a free service which can transfer large files (many GB and TB) between servers on the internet using a simple web interface and without supervision.
See the [Globus](../globus/) section for how to use Globus here.
186 ## Requesting Information about the Environment
Jobs are submitted to compute nodes through the scheduler. To see the queues (called "partitions") on the scheduler, type <code>sinfo</code>:
193 PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
194 math-alderaan up 7-00:00:00 1 drain* math-alderaan-c28
195 math-alderaan up 7-00:00:00 4 mix math-alderaan-c[01,08-10]
196 math-alderaan up 7-00:00:00 25 alloc math-alderaan-c[04-07,11-27,29-32]
197 math-alderaan up 7-00:00:00 2 idle math-alderaan-c[02-03]
198 math-alderaan-gpu up 7-00:00:00 1 mix math-alderaan-h01
199 math-alderaan-gpu up 7-00:00:00 1 alloc math-alderaan-h02
200 math-colibri-gpu up infinite 24 idle math-colibri-c[01-24]
201 math-score up infinite 5 idle math-score-c[01-05]
202 chem-xenon up infinite 6 unk* chem-xenon-c[01-06]
203 clas-interactive up infinite 1 idle math-colibri-i02
204 math-alderaan-osg up 1-00:00:00 1 drain* math-alderaan-c28
205 math-alderaan-osg up 1-00:00:00 4 mix math-alderaan-c[01,08-10]
206 math-alderaan-osg up 1-00:00:00 25 alloc math-alderaan-c[04-07,11-27,29-32]
207 math-alderaan-osg up 1-00:00:00 2 idle math-alderaan-c[02-03]
208 clas-dev up infinite 1 idle clas-devnode-c01
To see a list of all nodes, use <code>sinfo -N</code>:
214 NODELIST NODES PARTITION STATE
215 chem-xenon-c01 1 chem-xenon unk*
216 chem-xenon-c02 1 chem-xenon unk*
217 chem-xenon-c03 1 chem-xenon unk*
218 chem-xenon-c04 1 chem-xenon unk*
219 chem-xenon-c05 1 chem-xenon unk*
220 chem-xenon-c06 1 chem-xenon unk*
221 clas-rcdesktop-01 1 clas-rcdesktop down*
222 math-alderaan-c01 1 math-alderaan alloc
223 math-alderaan-c02 1 math-alderaan alloc
224 math-alderaan-c03 1 math-alderaan alloc
225 math-alderaan-c04 1 math-alderaan alloc
226 math-alderaan-c05 1 math-alderaan alloc
227 math-alderaan-c06 1 math-alderaan alloc
228 math-alderaan-c07 1 math-alderaan alloc
229 math-alderaan-c08 1 math-alderaan alloc
230 math-alderaan-c09 1 math-alderaan alloc
231 math-alderaan-c10 1 math-alderaan alloc
232 math-alderaan-c11 1 math-alderaan alloc
233 math-alderaan-c12 1 math-alderaan alloc
234 math-alderaan-c13 1 math-alderaan alloc
235 math-alderaan-c14 1 math-alderaan alloc
236 math-alderaan-c15 1 math-alderaan alloc
237 math-alderaan-c16 1 math-alderaan mix
238 math-alderaan-c17 1 math-alderaan idle
239 math-alderaan-c18 1 math-alderaan idle
240 math-alderaan-c19 1 math-alderaan idle
241 math-alderaan-c20 1 math-alderaan idle
242 math-alderaan-c21 1 math-alderaan idle
243 math-alderaan-c22 1 math-alderaan idle
244 math-alderaan-c23 1 math-alderaan idle
245 math-alderaan-c24 1 math-alderaan idle
246 math-alderaan-c25 1 math-alderaan idle
247 math-alderaan-c26 1 math-alderaan idle
248 math-alderaan-c27 1 math-alderaan idle
249 math-alderaan-c28 1 math-alderaan idle
250 math-alderaan-c29 1 math-alderaan idle
251 math-alderaan-c30 1 math-alderaan idle
252 math-alderaan-c31 1 math-alderaan idle
253 math-alderaan-c32 1 math-alderaan idle
254 math-alderaan-h01 1 math-alderaan-gpu idle
255 math-alderaan-h02 1 math-alderaan-gpu idle
256 math-colibri-c01 1 math-colibri-gpu idle
257 math-colibri-c02 1 math-colibri-gpu idle
258 math-colibri-c03 1 math-colibri-gpu idle
259 math-colibri-c04 1 math-colibri-gpu unk*
260 math-colibri-c05 1 math-colibri-gpu unk*
261 math-colibri-c06 1 math-colibri-gpu unk*
262 math-colibri-c07 1 math-colibri-gpu unk*
263 math-colibri-c08 1 math-colibri-gpu unk*
264 math-colibri-c09 1 math-colibri-gpu unk*
265 math-colibri-c10 1 math-colibri-gpu unk*
266 math-colibri-c11 1 math-colibri-gpu unk*
267 math-colibri-c12 1 math-colibri-gpu unk*
268 math-colibri-c13 1 math-colibri-gpu idle
269 math-colibri-c14 1 math-colibri-gpu idle
270 math-colibri-c15 1 math-colibri-gpu idle
271 math-colibri-c16 1 math-colibri-gpu idle
272 math-colibri-c17 1 math-colibri-gpu idle
273 math-colibri-c18 1 math-colibri-gpu idle
274 math-colibri-c19 1 math-colibri-gpu idle
275 math-colibri-c20 1 math-colibri-gpu idle
276 math-colibri-c21 1 math-colibri-gpu idle
277 math-colibri-c22 1 math-colibri-gpu idle
278 math-colibri-c23 1 math-colibri-gpu idle
279 math-colibri-c24 1 math-colibri-gpu idle
280 math-score-c01 1 math-score unk*
281 math-score-c02 1 math-score unk*
282 math-score-c03 1 math-score idle
283 math-score-c04 1 math-score idle
284 math-score-c05 1 math-score idle
It looks confusing but there is a method to the madness in the naming convention. Obviously, math-colibri and math-score are the identifiers for what cluster/building the servers are in, but the -c## and -i## stand for compute and interactive. The c## servers are usually part of the queuing system and the i## ones are for interactive use. Again, never ssh to compute nodes directly.
290 ## Submitting Jobs to the Scheduler
294 The <code>sbatch job_script</code> command is used to submit a job into a queue. Your job starts executing in the directory where it was submitted, so submit it from a directory accessible to all compute nodes, such as a subdirectory of your home directory. You can add switches to the <code>sbatch</code> command, but it is recommended to make them a part of your batch script so that you do not have to do that every time. Please do not use more cores than the number of tasks specified in your script.
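For example, assuming your batch script is named `job.sh` (the script name, directory, and job id below are placeholders):

```
cd ~/my_project          # a directory visible from all compute nodes
sbatch job.sh            # submit the batch script; prints the job id
squeue -u $USER          # check the status of your jobs
scancel 12345            # cancel a job by its id if needed
```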
296 ### Template batch job scripts
298 The template batch scripts and simple examples to run are available. Get your copy by
300 git clone https://github.com/ccmucdenver/templates.git
302 To build the examples, type <code>make</code> in the <code>examples</code> directory.
304 **Please do not request the number of nodes on Alderaan by `--nodes` or `-N`, unless you really need entire nodes for some reason. Request only the CPU cores you need by `--ntasks`, then the node or nodes you use can be shared with others.**
### A simple single-core job

This script will be sufficient for many jobs, such as those you code yourself which do not use multiprocessing.
#!/bin/bash
# A simple single core job template
313 #SBATCH --job-name=mpi_hello_single
314 #SBATCH --partition=math-alderaan
315 #SBATCH --time=1:00:00 # Max wall-clock time
316 #SBATCH --ntasks=1 # number of cores, leave at 1
317 examples/hello_world_fortran.exe # replace by your own executable
If you run an application that can use more cores, you can request the number of cores with the <code>--ntasks</code> parameter instead of 1. Your allocation will be charged for the time of all the cores you requested, regardless of whether you use them or not.
If you expect that your application will use more memory than 8GB (our nodes have 512GB memory and 64 cores each, i.e., about 8GB per core), you should request more tasks: about the expected memory usage in GB divided by 8. Otherwise the node memory may get overloaded when the machine gets busy with many jobs, and everyone's jobs may stall or crash. Note: this may change once we start allocating memory use, but at the moment we do not.
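For example, if you expect your job to need about 60 GB of memory, request 60/8 = 7.5, rounded up to 8 tasks, even if your code runs on a single core:

```
#SBATCH --partition=math-alderaan
#SBATCH --ntasks=8       # reserves about 8 x 8 GB = 64 GB of the node's memory for this job
```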
324 ### Multiple single-core jobs using arrays
#!/bin/bash
# Multiple single core jobs using array template
328 #SBATCH --job-name=mpi_hello_single
329 #SBATCH --partition=math-alderaan
330 #SBATCH --time=1:00:00 # Max wall-clock time
331 #SBATCH --ntasks=1 # number of cores, leave at 1
332 #SBATCH --array=1-5,10-11 # specifies to submit this script 7 times where array values are 1, 2, 3, 4, 5, 10, and 11.
334 examples/hello_world_fortran.exe # replace by your own executable
SLURM job arrays simplify running multiple instances of the same job script using a single batch script. The above example demonstrates running `hello_world_fortran.exe` seven times, with array values 1, 2, 3, 4, 5, 10, and 11.
338 _Helpful Directives/Variables_:
* `%a`: adds the array index to a name, for example in the job name:
342 #SBATCH --job-name=mpi_hello_single_%a
* `%` followed by a number in the array specification: limits how many array jobs run at a time. For example:
346 #SBATCH --array=1-1000%10
A SLURM array job automatically runs its tasks within your allocated resources. If you wish to leave resources free for other work, it can be advantageous to limit the number of array jobs running simultaneously. In the example above, a total of 1000 jobs are executed, with at most 10 running concurrently at any given time.
* `SLURM_ARRAY_TASK_ID`: an environment variable that holds the array value. You can use it to pass the array value to the script you intend to execute:
352 python example_script.py ${SLURM_ARRAY_TASK_ID}
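A minimal sketch of a batch script that uses the array index to pick its input file (the file names and the program are hypothetical):

```
#!/bin/bash
#SBATCH --job-name=array_example
#SBATCH --partition=math-alderaan
#SBATCH --time=1:00:00
#SBATCH --ntasks=1
#SBATCH --array=1-7
# each array task processes a different input file
INPUT="inputs/case_${SLURM_ARRAY_TASK_ID}.txt"
./my_program.exe "$INPUT" > "outputs/case_${SLURM_ARRAY_TASK_ID}.out"
```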
355 ### A simple MPI job template
#!/bin/bash
# A simple MPI job template
360 #SBATCH --job-name=mpi_hello
361 #SBATCH --partition=math-alderaan
362 #SBATCH --time=1:00:00 # Max wall-clock time
363 #SBATCH --ntasks=360 # Total number of MPI processes, no need for --nodes
364 mpirun examples/mpi_hello_world.exe # replace by your own executable, no need for -np
366 ### A more general MPI job template
368 You can request the number of nodes. The scheduler will then split the tasks over the nodes.
#!/bin/bash
# alderaan_mpi_general.sh
# A more general MPI job template
373 #SBATCH --job-name=mpi_hello
374 #SBATCH --partition=math-alderaan
375 #SBATCH --nodes=2 # Number of requested nodes
376 #SBATCH --time=1:00:00 # Max wall-clock time
#SBATCH --ntasks=10                    # Total number of tasks over all nodes, max 64*nodes
mpirun -np 10 examples/mpi_hello_world.exe # replace by your own executable and number of processes
# do not use more MPI processes than --ntasks
381 **Please do not request the number of nodes on Alderaan by `--nodes` or `-N`, unless you really need entire nodes for some reason. Request only the CPU cores you need by `--ntasks`, then the node or nodes you use can be shared with others.**
### How to run with a GPU on Alderaan
The partition math-alderaan-gpu has two high-memory/GPU nodes `math-alderaan-h[01,02]` with two NVIDIA A100 40GB GPUs and 2TB memory each. Use `--partition=math-alderaan-gpu` with `--gres=gpu:a100:1` to request one GPU or `--gres=gpu:a100:2` to request two GPUs. At the moment, Alderaan does not support explicit memory allocation by the `--mem` flag.
**Please do not use Alderaan GPUs without allocating them by `--gres` as above first. Please do not request an entire node on Alderaan by `--nodes` or `-N` unless you really need all of it; request only the CPU cores you need by `--ntasks`. Large memory jobs and GPU jobs can share the same node.**
392 An example job script:
#!/bin/bash
#SBATCH --job-name=gpu
395 #SBATCH --gres=gpu:a100:1
396 #SBATCH --partition=math-alderaan-gpu
#SBATCH --time=1:00:00               # Max wall-clock time 1 hour
398 #SBATCH --ntasks=1 # number of cores
399 singularity exec /storage/singularity/tensorflow.sif python3 yourgpucode.py
Of course, instead of Singularity you can run other GPU code on one of the GPU nodes directly. The nodes currently have CUDA 11.2 installed. You will have to install tensorflow in your account yourself; a compatible version is [tensorflow 2.4.0](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/rel_21-03.html).
403 It is recommended to use the tensorflow singularity container because it has updated CUDA (11.4) and a version of tensorflow compatible with the CUDA version.
405 ### Interactive jobs with GPU on Alderaan
407 From the command line,
409 srun -p math-alderaan-gpu --time=2:00:0 -n 1 --gres=gpu:a100:1 --pty bash -i
will give you an interactive shell on one of the GPU nodes with one GPU allocated. You can then start a Singularity shell:
413 singularity shell /storage/singularity/tensorflow.sif
415 You can also start the Singularity shell directly:
417 srun -p math-alderaan-gpu --time=2:00:0 -n 1 --gres=gpu:a100:1 singularity shell /storage/singularity/tensorflow.sif
will allocate one GPU and one core, and run an interactive Singularity shell.
421 ### How to run with GPUs on Colibri
423 To use Colibri GPUs, do not use `--gres` but reserve a whole node by `--nodes=1`. Singularity containers work on Colibri, but current versions of tensorflow do not support the CPUs on Colibri. You can use an older version instead:
#!/bin/bash
#SBATCH --job-name=gpu
428 #SBATCH --partition=math-colibri-gpu
#SBATCH --time=1:00:00               # Max wall-clock time 1 hour
430 #SBATCH --nodes=1 # number of nodes
431 singularity exec /storage/singularity/tensorflow-v1.3.sif python3 yourgpucode.py
### Interactive jobs

Remember that you should not ssh directly to a compute node because it would interfere with jobs scheduled to run on that node. For interactive access to a compute node, instead do:
439 srun -p math-alderaan --time=2:00:0 -n 1 --pty bash -i
This will request a session for you as a job in a single-core slot on a compute node in the math-alderaan partition for up to 2 hours. After the job starts, your session is transferred to the node. The job will end when you exit or the time runs out. Of course you can do the same for other partitions and add other flags, for example to request more cores or a GPU.
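For example, to get an interactive session with 8 cores for four hours:

```
srun -p math-alderaan --time=4:00:00 -n 8 --pty bash -i
```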
443 To start an interactive job on Alderaan with a GPU:
445 srun -p math-alderaan-gpu --time=2:00:0 -n 1 --gres=gpu:a100:1 --pty bash -i
448 ## Viewing Job Queues, Job Status, and System Status
450 The command <code>squeue</code> will show one line for each
451 job running on the system.
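To see only a subset of jobs, you can filter the output with standard `squeue` options, for example:

```
squeue -u $USER            # show only your own jobs
squeue -p math-alderaan    # show jobs in one partition
```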
453 The command <code>sinfo</code> will show a summary of jobs and partitions status on the system:
455 PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
456 math-alderaan up 7-00:00:00 10 mix math-alderaan-c[01-10]
457 math-alderaan up 7-00:00:00 8 alloc math-alderaan-c[11-15,29,31-32]
458 math-alderaan up 7-00:00:00 14 idle math-alderaan-c[16-28,30]
459 math-alderaan-gpu up 7-00:00:00 1 drng math-alderaan-h01
460 math-alderaan-gpu up 7-00:00:00 1 mix math-alderaan-h02
461 math-colibri-gpu up infinite 24 idle math-colibri-c[01-24]
462 math-score up infinite 5 idle math-score-c[01-05]
463 chem-xenon up infinite 6 down* chem-xenon-c[01-06]
464 clas-interactive up infinite 1 down* math-score-i01
465 clas-interactive up infinite 1 idle math-colibri-i02
466 math-alderaan-osg up 1-00:00:00 10 mix math-alderaan-c[01-10]
467 math-alderaan-osg up 1-00:00:00 8 alloc math-alderaan-c[11-15,29,31-32]
468 math-alderaan-osg up 1-00:00:00 14 idle math-alderaan-c[16-28,30]
469 clas-dev up infinite 1 idle clas-devnode-c01
Real-time system status, including temperature, load, and the partitions from `sinfo`, is available in [News and Status Updates](./updates/).
**We will be happy to install software and build containers for you; do not hesitate to ask!**
475 ## Building Your Own Software
477 Here are the best practices when you compile and link your own software:
479 * Use `math-alderaan` head node to build software for use on the Alderaan cluster. Use `module avail` to see which tools are available in [modules](./modules/). We can add other tools and package them in modules on request.
* Use `clas-compute` or `math-colibri-i02` to build software for the Colibri cluster, and `clas-compute` or `math-score-i01` for the Score cluster. You can download and build libraries and other packages in your own account.
* Alderaan runs CentOS 8, while `clas-compute` and the Colibri and Score clusters run CentOS 7. Software built on one will normally not work on the other. A minimal example of building on the Alderaan head node is sketched below.
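A minimal build sketch on the Alderaan head node (the module name and the source file are hypothetical; pick a compiler from your `module avail` listing):

```
module avail                      # see which compilers and libraries are packaged as modules
module load gcc                   # load a compiler module
gcc -O2 -o hello hello.c          # compile your code
```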