Datura Hardware

Computing Components:

    1. daturamon queue – parallel computing queue
      The daturamon.q consists of 200 compute nodes with 12 cores and 24GB RAM per node, for a total of 2400 (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz) CPU cores and 4800GB main memory, and is equipped with an InfiniBand QDR communication network.
    2. corrosive queue – parallel computing/debugging/data analysis queue
      The corrosive.q consists of 10 compute nodes with 12 cores and 24GB RAM per node, for a total of 120 (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz) CPU cores and 240GB main memory, and is equipped with an InfiniBand QDR communication network. The corrosive.q can be used for both batch and interactive jobs.
    3. nxserver queue – remote desktop queue
      The nxserver.q consists of 198 compute nodes with 4 cores and 8GB RAM per node, for a total of 792 (Intel(R) Xeon(R) 5160 (Woodcrest) @ 3.00GHz) CPU cores and 1584GB main memory, and is equipped with an InfiniBand DDR communication network.
    4. gpu queue – GPU/visualization queue
      The gpu.q consists of two nodes:
      gpu-01: 12GB main memory, 4 (Intel(R) Xeon(R) E5504 @ 2.00GHz) CPU cores, 4 Tesla C2050 cards with 1792 CUDA cores and 2687MB global memory, and
      Erich: 130GB main memory, 16 (Intel(R) Xeon(R) CPU E5-2670 @ 2.60GHz) CPU cores, 1 NVIDIA GTX TITAN card with 2688 CUDA cores, CUDA compute capability 3.5 and 6143MB global memory.
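The queue totals above follow directly from the per-node figures. As a quick sanity check (an illustrative sketch only; the node counts and per-node values are copied from the list above), the aggregate core and memory numbers can be recomputed as nodes × per-node resources:

```python
# Recompute queue totals from the per-node specs listed above.
queues = {
    # name: (compute nodes, cores per node, RAM per node in GB)
    "daturamon.q": (200, 12, 24),
    "corrosive.q": (10, 12, 24),
    "nxserver.q":  (198, 4, 8),
}

for name, (nodes, cores, ram) in queues.items():
    total_cores = nodes * cores   # e.g. 200 * 12 = 2400 for daturamon.q
    total_ram = nodes * ram       # e.g. 200 * 24 = 4800 GB for daturamon.q
    print(f"{name}: {total_cores} cores, {total_ram}GB main memory")
```

Running this reproduces the stated totals: 2400 cores / 4800GB for daturamon.q, 120 cores / 240GB for corrosive.q and 792 cores / 1584GB for nxserver.q.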

Storage Components:

The main storage system consists of six Lustre storage servers and 15 Lustre storage targets.

Object Storage Servers (6x): 2 Xeon E5620 2.4GHz hexa-core CPUs (Westmere), 24GB DDR3 ECC reg. main memory, QDR InfiniBand 40Gbit/s, 2 dual-port 8Gbit/s Fibre Channel cards, IPMI

Object Storage Targets:
DotHill RAID 3730+ (6x), redundant controllers, fans and PSUs, 2TB and 3TB SAS hard disks
DotHill JBOD 3130 (14x), redundant controllers, fans and PSUs, 2TB and 3TB SAS hard disks

Meta Data Targets:
DotHill JBOD 3130 (1x), redundant controllers, fans and PSUs, 600GB SAS hard disks

Network Components:

The main communication and storage network is based on a non-blocking InfiniBand QDR 40Gbit/s fabric to benefit from low latency and high bandwidth. The Grid Director™ 4700 connects all compute nodes, the remote desktops and the storage system. The switch can handle up to 51.8Tb/s of non-blocking bandwidth and also serves as the bridge to the older InfiniBand DDR 20Gbit/s infrastructure.