Lochness

This very heterogeneous cluster is a mix of manufacturers, components, and capacities, as it was built up through incremental purchases spanning several years.

Lochness was decommissioned in March 2024. Many of the Lochness nodes will be incorporated into the Wulver cluster in Q2 2024.

Specifications:

| Partition     | Nodes | Processor Family | Cores per Node | Total Cores | GPUs per Node | Total GPUs | GPU Model            |
|---------------|-------|------------------|----------------|-------------|---------------|------------|----------------------|
| bader         | 1     | Sky Lake         | 20             | 20          | 4             | 4          | Titan V              |
| cld           | 11    | Sandy Bridge     | 20             | 220         | 0             | 9          |                      |
| cld-gpu       | 2     | Sandy Bridge     | 20             | 40          | 2             | 4          | K20Xm                |
| datasci       | 7     | Sandy Bridge     | 20             | 140         | 2             | 14         | P100                 |
| datasci       | 1     | Sky Lake         | 20             | 20          | 4             | 4          | Titan RTX            |
| datasci3      | 1     | Sky Lake         | 20             | 20          | 4             | 4          | Titan RTX            |
| datasci4      | 1     | Sky Lake         | 20             | 20          | 4             | 4          | Titan RTX            |
| davidsw       | 1     | Cascade Lake     | 32             | 32          | 2             | 2          | A100                 |
| ddlab         | 1     | Sandy Bridge     | 20             | 20          | 0             | 0          |                      |
| ddlab         | 1     | Cascade Lake     | 20             | 20          | 2             | 2          | V100                 |
| esratoy       | 2     | Cascade Lake     | 32             | 64          | 0             | 0          |                      |
| fahmadpo      | 14    | Cascade Lake     | 64             | 896         | 0             | 0          |                      |
| fahmadpo-gpu  | 1     | Cascade Lake     | 24             | 24          | 4             | 4          | Titan RTX            |
| gor           | 28    | Broadwell        | 20             | 560         | 0             | 0          |                      |
| gperry        | 3     | Cascade Lake     | 36             | 108         | 0             | 0          |                      |
| hrjin         | 1     | Sky Lake         | 32             | 32          | 4             | 3          | Titan XP             |
| jyoung        | 3     | Cascade Lake     | 96             | 288         | 0             | 0          |                      |
| phan          | 1     | Cascade Lake     | 20             | 20          | 4             | 4          | 1 Titan X, 3 GeForce |
| project       | 1     | Cascade Lake     | 32             | 32          | 0             | 0          |                      |
| public        | 37    | Cascade Lake     | 32             | 1184        | 0             | 0          |                      |
| samaneh       | 16    | Cascade Lake     | 40             | 640         | 0             | 0          |                      |
| shakib        | 20    | Cascade Lake     | 24             | 480         | 0             | 0          |                      |
| singhp        | 1     | Ice Lake         | 56             | 56          | 0             | 0          |                      |
| smarras       | 4     | Cascade Lake     | 48             | 192         | 0             | 0          |                      |
| smarras       | 2     | Cascade Lake     | 48             | 96          | 2             | 2          | A100                 |
| solarlab      | 1     | Cascade Lake     | 48             | 48          | 2             | 2          | A100                 |
| solarlab      | 4     | Cascade Lake     | 48             | 192         | 0             | 0          |                      |
| solarlab      | 1     | Ice Lake         | 48             | 48          | 1             | 1          | A100                 |
| solarlab      | 2     | Ice Lake         | 48             | 96          | 0             | 0          |                      |
| solarlab      | 1     | Ice Lake         | 48             | 48          | 0             | 0          |                      |
| xt3           | 11    | Cascade Lake     | 32             | 352         | 0             | 0          |                      |
| xye           | 1     | Cascade Lake     | 16             | 16          | 5             | 5          | 3 types of GPUs      |
| zhiwei        | 1     | Ice Lake         | 56             | 56          | 4             | 4          | A40                  |
| **Total**     | 183   |                  |                | 6080        |               | 72         |                      |
  • All nodes have:
    • Ethernet network interface (1GigE or 10GigE)
    • Infiniband network interface (mix of HDR100, EDR, and FDR speeds)
    • 1TB local storage (mostly SSD, with a few HDDs)
  • All nodes have network-accessible storage:
    • /home/: 26 TB
    • /research/: 97 TB
    • /afs/cad/: 50 TB

The cluster also features:

  • "CentOS Linux 7 (Core)" operating system
  • Virtualized login and control nodes
  • SLURM job scheduler
  • Warewulf stateless node provisioning
  • Managed entirely by NJIT personnel
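Work on the partitions listed above was submitted through the SLURM scheduler. A minimal batch script sketch follows; the job name, partition choice, and resource values are illustrative assumptions, not confirmed settings for this cluster:

```shell
#!/bin/bash
# Minimal SLURM batch script sketch for a Lochness-style cluster.
# Partition and resource values below are illustrative assumptions.
#SBATCH --job-name=example
#SBATCH --partition=public        # CPU-only partition from the table above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32      # public nodes list 32 cores per node
#SBATCH --time=01:00:00
#SBATCH --output=%x-%j.out

srun hostname
```

Scripts like this are submitted with `sbatch job.sh`; `squeue -u $USER` shows queue state and `sinfo` shows partition and node availability.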

Roughly half of the nodes are under warranty; the rest are serviced on a time-and-materials basis.