UConn Health



Two High-Performance Compute Clusters:

  • A High-Performance cluster running Bright Cluster Manager, with eighteen nodes: Dell C6145 chassis, each with four 12-core AMD Opteron processors and 128 GB of RAM; Dell C410x PCIe expansion chassis with 16 NVIDIA M2075 GPUs attached to four of the compute nodes; ten-gigabit interconnects; and ten-gigabit interfaces to a 10/40/100-gigabit public network.
  • A High-Performance compute cluster provisioned by StackIQ and running the Slurm Workload Manager, with twenty-five nodes: Dell R730 chassis with two 18-core Intel Xeon E5-2697 v4 processors, 256 GB of RAM, and 10-gigabit interfaces; and Dell C6145 chassis with four 8-core AMD Opteron processors, 256 GB of RAM, and 10-gigabit interfaces, all connected to 10/40/100-gigabit networks.
  • All clusters utilize 2+ PB of clustered file servers with 40-gigabit-per-second aggregate throughput and a 3.0 PB Amplidata object storage system with 30-gigabit-per-second aggregate throughput via an Avere gateway.
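Jobs on the Slurm-managed cluster are typically submitted as batch scripts via `sbatch`. A minimal sketch for one of the dual 18-core Xeon nodes might look like the following; the job name, resource requests, and application command are illustrative assumptions, and actual partition names and limits are site-specific:

```shell
#!/bin/bash
#SBATCH --job-name=example        # illustrative job name
#SBATCH --nodes=1                 # single node
#SBATCH --ntasks=36               # one task per core on a dual 18-core Xeon node
#SBATCH --mem=128G                # memory request (node has 256 GB total)
#SBATCH --time=01:00:00           # one-hour wall-clock limit

# Launch the workload under Slurm (replace with your actual application)
srun hostname
```

Submitted with `sbatch script.sh`; queue state can then be checked with `squeue -u $USER`.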

Node Configuration:

  • 2,188 CPU cores with 8+ TB of RAM
  • 7,000 GPU cores across large CPU-only and hybrid compute clusters

Virtualization Infrastructure:

  • 892 CPU cores and 7+ TB of RAM across VMware server- and desktop-virtualization hosts, running 300+ Windows/Linux virtual machines with an SSD-based high-IOPS performance cache tier

Datacenter Infrastructure:

  • UPS- and generator-backed power with redundant cooling
  • 3×40 GbE dark-fiber connection to an off-site DR location

Network (100+ GbE):

  • Full non-oversubscribed 10/40 GbE datacenter network core layer
  • BioScienceCT Research Network – 100 GbE to CEN, Internet2, Storrs
  • New HPC Science DMZ – low-latency, 80 Gb-capable firewall


Storage:

  • 5+ PB of storage, including 1.4 PB EMC Isilon and 600+ TB Qumulo QC24/QC208 scale-out clusters, along with 3+ PB AmpliStor on-premises cloud storage