The primary goal of procuring a high-performance computing system at the University of Houston (UH) is to enhance scientific research at UH and to train the next generation of scientists and engineers in scientific computing on HPC platforms with general-purpose graphics processing units (GPGPUs). To stay at the forefront of scientific computing and to further advance computationally driven research efforts, uHPC provides a flexible, state-of-the-art platform that consists of powerful compute nodes with massive parallelism, deep memory hierarchies, and a variety of cores that balance performance against power constraints. uHPC offers UH users ready access to a flexible HPC system for adapting existing codes to new architectures, optimizing code performance, training students, and ultimately addressing the science and engineering challenges of the next decade. This project is supported by an NSF award (#1531814).

The uHPC Cluster

The uHPC research instrument is an HPE Apollo r2000-series cluster. The cluster is composed of a login node, two storage arrays, and 112 compute nodes, interconnected by a single 56GbE network for storage and research traffic and by a 1GbE administrative network. The login node and storage arrays are also connected to the data center network with 40Gb Ethernet; the data center network provides connectivity to the UH campus network and the Internet.

Theoretical peak performance is 114 TFLOPS.
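
For context, a peak figure like this is typically the sum of CPU and GPU contributions. The clock and FLOPs-per-cycle assumptions behind the quoted 114 TFLOPS are not stated on this page, so the formula and the worked number below are illustrative only:

    \[
    R_{\text{peak}} = \underbrace{N_{\text{nodes}} \cdot N_{\text{sockets}} \cdot N_{\text{cores}} \cdot f \cdot \text{FLOPs/cycle}}_{\text{CPUs}} \;+\; N_{\text{GPUs}} \cdot R_{\text{GPU}}
    \]

As a rough check, a dual E5-2660v3 node at its nominal 2.6 GHz base clock, with Haswell's 16 double-precision FLOPs per cycle per core, gives 2 x 10 x 2.6 GHz x 16 = 832 GFLOPS per node, or about 93 TFLOPS across the 112 compute nodes, with the 16 Tesla K80 boards adding their double-precision peak on top.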

Login Node

        HPE DL380 Gen9 server with a 1TB internal hard drive, 48GB of memory, and dual E5-2640v3 CPUs (16 total cores)

2 Storage Servers

        HPE SL4540 servers, each with 64GB of memory, dual E5-2650v3 CPUs (20 total cores), and thirty 6TB drives

112 Compute Nodes

   16 GPU Nodes

          HPE Apollo XL190r Gen9 servers, each with an internal 240GB SSD, 128GB of memory, dual E5-2660v3 CPUs (20 total cores), and a single NVIDIA Tesla K80 GPU (a device-query sketch follows this list)

   96 CPU-Only Nodes

          HPE Apollo XL170r Gen9 servers, each with an internal 240GB SSD, 128GB of memory, and dual E5-2660v3 CPUs (20 total cores)
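
To verify that the K80 is visible from a GPU node, the minimal device-query sketch below can help. It assumes the CUDA toolkit is available on the node; the compiler invocation and file name are illustrative, and any module setup is site-specific:

    /* Minimal sketch: list the CUDA devices visible on a GPU node.
       Compile with the CUDA toolkit, e.g.: nvcc -o devquery devquery.cu
       Note: a single Tesla K80 board carries two GK210 GPUs, so a GPU
       node should report two CUDA devices. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int n = 0;
        cudaError_t err = cudaGetDeviceCount(&n);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < n; ++i) {
            struct cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            printf("Device %d: %s, %.1f GB, compute capability %d.%d\n",
                   i, p.name, p.totalGlobalMem / 1e9, p.major, p.minor);
        }
        return 0;
    }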

Network Interconnect

      6 Mellanox SX1036 Ethernet switches (RoCE enabled) in a hub-and-spoke topology; each spoke is composed of six aggregated 56Gb links (an RDMA device-listing sketch follows this list)

      7 HPE ProCurve 2920 switches for the administrative network
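
To confirm that the RoCE-enabled fabric is reachable from a node, one option is to enumerate RDMA-capable devices through libibverbs, as in the sketch below. The file name is illustrative, and it assumes the libibverbs development headers are installed:

    /* Minimal sketch: enumerate RDMA-capable devices, e.g. the RoCE
       interfaces on the 56GbE fabric.
       Compile: gcc -o rdmalist rdmalist.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int n = 0;
        struct ibv_device **devs = ibv_get_device_list(&n);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }
        for (int i = 0; i < n; ++i)
            printf("RDMA device %d: %s\n", i, ibv_get_device_name(devs[i]));
        ibv_free_device_list(devs);
        return 0;
    }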

Files in home directories and in shared storage allocated to research projects are backed up by the UIT backup service.