The HPCC maintains a large number of computing resources designed to support research.

In general, users write their programs and submit a request (a "job") to the scheduling system to run on the cluster. The job request specifies how much time and how many computing resources will be needed. Because these resources are shared, users or programs that overutilize the system and cause nodes to become unresponsive may be terminated without prior notice.
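A job request of this kind is usually expressed as a batch script. The document does not name the scheduler in use, so the following is only a minimal sketch assuming a PBS/Torque-style scheduler; the program name `my_program` is a hypothetical placeholder:

```shell
#!/bin/bash
# Illustrative job script for a PBS/Torque-style scheduler
# (the actual scheduler and directive syntax may differ).
#PBS -l walltime=02:00:00    # requested wall-clock time
#PBS -l nodes=1:ppn=28       # requested nodes and cores per node
#PBS -l mem=64gb             # requested memory
cd "$PBS_O_WORKDIR"          # run from the directory the job was submitted from
./my_program                 # hypothetical user program
```

With this style of scheduler the script would be submitted with a command such as `qsub myjob.sh`, and the scheduler enforces the requested time and resource limits.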

Compute Cluster

iCER’s HPC clusters include 596 compute nodes with more than 17,500 cores. Included in the clusters are 90 NVIDIA GPUs and 28 Xeon Phi-equipped nodes. The clusters are linked together by a high-throughput, low-latency InfiniBand network. The HPCC provides 1.6 PB of persistent storage on ZFS and a high-speed Lustre file system with 1.9 PB of temporary storage. See the User Documentation for more information.

Interactive Development Hardware

For each hardware type in the compute cluster, a single node is set aside for software development and testing. Users have a direct SSH connection to these nodes through the HPCC gateway. These nodes are shared resources and programs can run for up to two CPU hours before being terminated. Longer jobs should be submitted to the cluster.
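A session on a development node might look like the following; both hostnames are illustrative, as the document does not list them (see the User Documentation for the actual names):

```
ssh netid@hpcc.example.edu    # log in to the HPCC gateway (hostname illustrative)
ssh dev-intel16               # hop from the gateway to a development node (name illustrative)
```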

Buy-In Options

Buy-in users receive priority access to purchased cluster resources for their research group. As part of the buy-in agreement, iCER provides the cyberinfrastructure for researchers to utilize their buy-in equipment. This includes replicated high-speed file servers, monitored redundant power and cooling, management nodes, and a comprehensive software stack. Additionally, iCER provides HPCC system administration and troubleshooting support, user training, and user consultations. External buyers may be required to pay an additional overhead charge; please contact iCER about current rates.

Each node in the 2016 (Intel16) cluster is equipped with two 2.4 GHz 14-core Intel Xeon E5-2680v4 (Broadwell) processors and a 240 GB solid-state disk (SSD). An InfiniBand interconnect between nodes enables fast communication.

Users may purchase priority access to any of the following node options:

Cluster Option            | Cores | RAM    | Coprocessor             | Units/Chassis* | Unit Price
Intel16 — Standard Memory | 28    | 128 GB | none                    | 12             | $5,320
Intel16 — Medium Memory   | 28    | 256 GB | none                    | 12             | $6,432
Intel16 — Large Memory    | 28    | 512 GB | none                    | 12             | $8,943
Intel16 — NVIDIA K80 GPU  | 28    | 256 GB | NVIDIA K80 GPU (4 each) | -              | $25,250
Chassis                   | -     | -      | -                       | -              | $2,020

There are also a limited number of ‘extra large’ memory systems available, each equipped with four or eight Intel Xeon E7-8867v3 processors with sixteen 2.5 GHz cores per processor. The interconnect between nodes on this system is an EDR (100 Gb/s) network connection, and each node has four to eight 480 GB SSDs.

Cluster Option                           | Cores | RAM      | SSDs       | Units/Chassis* | Unit Price
Intel16 — Extra-Large (XL) Memory        | 64    | 3,072 GB | 4 x 480 GB | 1              | $43,236
Intel16 — Extra-Extra-Large (XXL) Memory | 128   | 6,144 GB | 8 x 480 GB | 1              | $87,007
Chassis                                  | -     | -        | -          | -              | $2,020

*Researchers need to purchase an adequate number of chassis to provide slots for the nodes ordered.
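Since each chassis holds a fixed number of node slots, the number of chassis required rounds up from the node count. A minimal sketch of the resulting cost calculation (Python; prices taken from the tables above, function name illustrative):

```python
import math

def buyin_cost(nodes, unit_price, units_per_chassis=12, chassis_price=2020):
    """Total buy-in cost: node price plus enough chassis to hold them.

    Defaults reflect the standard Intel16 options above
    (12 units per chassis, $2,020 per chassis).
    """
    chassis = math.ceil(nodes / units_per_chassis)
    return nodes * unit_price + chassis * chassis_price

# Example: 15 Standard Memory nodes at $5,320 need 2 chassis.
# 15 * 5320 + 2 * 2020 = 83,840
print(buyin_cost(15, 5320))
```

For the XL/XXL systems, `units_per_chassis` would be 1, since each chassis holds a single node.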

To make a purchase, please complete the Order form.
To contact an iCER team member, please complete the contact form.

Updated Nov. 4, 2016