The Alderaan cluster is a resource for both campuses of the University of Colorado Denver.
Alderaan is a shared cluster running multi-core jobs using MPI and the high-speed interconnect (High-Performance Computing, HPC), as well as multiple single-core jobs (High-Throughput Computing, HTC). The Alderaan head node, `math-alderaan`, is an alternative login node for the joint scheduler, along with `clas-compute`.
All jobs need to be limited in duration. Long-running single-core jobs should be submitted to [Score cluster](../score/) or [Colibri cluster](../colibri/) partitions instead.
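For example, a minimal batch script with an explicit time limit might look like the sketch below. The partition name, resource counts, and script name are illustrative assumptions, not site defaults; check `sinfo` and the local documentation for the actual partitions.

```bash
#!/bin/bash
# Minimal sketch of a time-limited SLURM job.
# The partition name "math-alderaan" is an assumption; list real partitions with `sinfo`.
#SBATCH --job-name=example
#SBATCH --partition=math-alderaan
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=24:00:00
#SBATCH --output=example-%j.out

srun hostname
```

Submit with `sbatch example.sh`; the scheduler terminates the job when the requested time expires.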
Work consisting of a very large number of single-core jobs can also be submitted to the [Open Science Grid (OSG)](https://opensciencegrid.org): log into https://www.osgconnect.net with your CU Denver credentials and follow the instructions to be contacted by OSG staff for an onboarding meeting.
The Alderaan cluster is funded by [NSF grant 2019089 CC* Compute: Accelerating Science and Education by Campus and Grid Computing](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2019089) under the [NSF CC* program](https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf20507). Two additional GPUs were purchased by the Department of Mathematical and Statistical Sciences and by the Center for Computational Mathematics. The cluster is integrated with the [Open Science Grid (OSG)](https://opensciencegrid.org). At least 20% of core time is being contributed to the OSG as required by the NSF funding.
The cluster was built by [Atipa](https://www.atipa.com) in 2021.
* SLURM scheduler, cluster tools
* GNU compiler stack with MPI
* Intel OneAPI compilers, development tools, MPI, and MKL
* MATLAB, R, Python, TensorFlow
* Singularity containers for custom software environments
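As a rough sketch of the Singularity workflow (the image and script names below are placeholders, not site-provided defaults), a containerized environment can be pulled and used along these lines:

```bash
# Pull a container image from Docker Hub and run a script inside it.
# "python.sif" and "my_script.py" are placeholder names.
singularity pull python.sif docker://python:3.10
singularity exec python.sif python3 my_script.py
```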
### Fall semester 2020
* Advance testing of software
### Spring semester 2021
* Delivery and installation
* Testing with the vendor's software configuration
### Fall semester 2021
* Finalizing networking, access, and security
* Set up governance and allocation processes
### Spring semester 2022
* Scaling up the user base, installing custom software on request
* Installing 2 additional GPUs
* Running OSG jobs to satisfy the 20% core time funding requirement
* Allocation limits and storage quotas not in effect yet
* OnDemand web access to the cluster for JupyterHub and remote desktop
* SSH command line usage
Authorized users can submit jobs to the Alderaan cluster from the head node, reached by SSH to `math-alderaan.ucdenver.pvt`. The head node is for compilation and small tests only; please do not run any intensive or long jobs on it.
Another head node, `clas-compute.ucdenver.pvt`, is also available. SLURM jobs can be submitted and user files accessed from either head node, but the software on `clas-compute` is not the same as on the Alderaan cluster.
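A typical command-line session might look like the following sketch; the username and script name are placeholders:

```bash
# Log into the Alderaan head node (replace "username" with your account).
ssh username@math-alderaan.ucdenver.pvt

# Submit a batch script and check its status; "example.sh" is a placeholder name.
sbatch example.sh
squeue -u username
```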
An allocation system is set up and will be activated when warranted by the cluster load.
Web access including JupyterHub, remote desktop, and RStudio is pending.
* 2048 AMD cores and 16TB memory in 32 compute nodes. Each node has 2 AMD EPYC 7502 32-core processors for a total of 64 cores, 512GB DDR4 memory, and dual 960GB SSDs. The compute nodes are in eight 2U rack-mount drawers, with 4 nodes in each drawer.
* 2 high-memory GPU nodes, each with 2 AMD EPYC 7502 32-core processors for a total of 64 cores, 2 NVIDIA Tesla A100 GPUs, and 2TB DDR4 memory (see the example GPU job request after this list).
* Head node with 2 AMD EPYC 7502 32-core processors and 256GB memory.
* 816TB storage (1PB raw) in 2 storage nodes, each with 2 Intel Xeon Gold 5215 CPUs at 2.50GHz, 196GB memory, and XFS filesystems shared by NFS over InfiniBand.
* HDR100 InfiniBand interconnect
* Full 10Gb/s ethernet connectivity path from every node to Internet2
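As a sketch of how a job might target the GPU nodes described above, the directives below request a single A100. The partition name and `--gres` string are assumptions about the SLURM configuration, not confirmed site settings:

```bash
#!/bin/bash
# Hypothetical GPU job: one A100 and a few cores on a GPU node.
# The partition name "math-alderaan-gpu" and the gres string are assumptions.
#SBATCH --partition=math-alderaan-gpu
#SBATCH --gres=gpu:1
#SBATCH --ntasks=8
#SBATCH --time=04:00:00

nvidia-smi
```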
jan.mandel@ucdenver.edu