Three Compute Clusters:
- A High-Performance cluster running Bright Cluster Manager with eighteen nodes consisting of Dell C6145 chassis, each with four 12-core AMD Opteron processors and 128 GB of RAM; a Dell C410x PCIe expansion chassis attaches 16 NVIDIA M2075 GPGPUs to four of the compute nodes; ten-gigabit interconnects and ten-gigabit interfaces to a 10/40/100 gigabit public network.
- A High-Throughput compute cluster running StackIQ (Rocks+) with fourteen nodes consisting of Dell C6145 chassis, each with four 8-core AMD Opteron processors and 256 GB of RAM, gigabit interconnects, and gigabit interfaces to a 10/40/100 gigabit public network.
- A High-Performance compute cluster provisioned by StackIQ and running the Slurm Workload Manager, with nine nodes consisting of Dell R730 chassis, each with two 18-core Intel Xeon E5-2697 v4 processors, 256 GB of RAM, and 10-gigabit interfaces to 10/40/100 gigabit networks (see the Slurm submission sketch following this list).
- All clusters utilize 1.0 PB of clustered file servers with 40 gigabit-per-second aggregate throughput and a 3.0 PB Amplidata object storage system with 30 gigabit-per-second aggregate throughput via an Avere gateway.
- 1,636 CPU cores with 8+ TB of RAM
- 7,000 GPU cores across large CPU-only and hybrid compute clusters
- 728 CPU cores and 5+ TB of RAM across VMware server and desktop virtualization hosts, hosting 300+ Windows/Linux virtual machines with an SSD high-IOPS performance cache tier
- UPS- and generator-backed power with redundant cooling
- 3x40 GbE dark fiber connection to an off-site DR location
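For reference, the minimal sketch below shows one way a batch job might be submitted to the Slurm-managed R730 cluster described above. The partition name "general", the specific resource requests, and the availability of sbatch on a cluster login node are illustrative assumptions, not part of the facilities description.

#!/usr/bin/env python3
# Minimal sketch: submit a one-node job to the Slurm-managed R730 cluster.
# The partition name "general" is hypothetical; core and memory requests
# mirror the R730 node description above (two 18-core Xeons, 256 GB RAM).
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=general      # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=36     # two 18-core Xeon E5-2697 v4 per node
#SBATCH --mem=250G               # leave headroom under 256 GB per node
#SBATCH --time=01:00:00
srun hostname
"""

# sbatch reads the job script from stdin and prints the new job ID.
result = subprocess.run(["sbatch"], input=job_script,
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())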
Network (100+ GbE):
- Fully non-oversubscribed 10/40 GbE datacenter network core layer
- BioScienceCT Research Network – 100 GbE to CEN, Internet2, Storrs
- New HPC Science DMZ – low-latency, 80 Gb-capable firewall
- 4.0+ PB of storage including 400 TB of EMC Isilon, 600 TB of Qumulo QC24/QC208 scale-out clusters, and 3.8 PB of AmpliStor on-premises cloud storage