This cluster was funded by NSF grant [GPU Cluster for Computing Research](http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0958354), PI Julien Langou.
The cluster consists of 24 compute nodes, each with two Intel CPUs (16 cores each) and two [Fermi](http://www.nvidia.com/object/fermi_architecture.html) GPUs, connected by InfiniBand. It was delivered by HP in 2013. There are two interactive nodes: the original head node and a large-memory (1 TB) node added later.
In 2014, the HP software license ran out and the cluster was reinstalled with Rocks+ by StackIQ. In 2021-2022, the Colibri nodes were reinstalled with CentOS 7 and integrated here.
Colibri compute nodes are accessible from both math-alderaan and clas-compute nodes through the common Slurm scheduler.
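
For example, a job can be directed to the Colibri nodes by requesting the appropriate partition in a Slurm batch script. Below is a minimal sketch; the partition name `colibri` and the GPU GRES specification are assumptions and may differ on this system, so run `sinfo` to see the partitions actually configured.

```
#!/bin/bash
# Minimal Slurm batch script sketch for running on a Colibri node.
# The partition name "colibri" and the GPU GRES name are assumptions;
# use `sinfo` to list the partitions configured on this cluster.
#SBATCH --job-name=colibri-test
#SBATCH --partition=colibri     # assumed partition name for the Colibri nodes
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1            # request one of the node's Fermi GPUs
#SBATCH --time=00:10:00

# Report where the job landed and which GPUs are visible.
hostname
nvidia-smi
```

Submit the script with `sbatch` from math-alderaan or clas-compute and monitor it with `squeue -u $USER`.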