
Lochness

This very heterogeneous cluster was a mix of manufacturers, components, and capacities, built up through incremental purchases spanning several years.

Lochness was decommissioned in March 2024. Many Lochness nodes were incorporated into the Wulver cluster in Q2 2025 for HPC course purposes.

Specifications:

| Partition | Nodes | Processor Family | Cores per Node | Total Cores | GPUs per Node | Total GPUs | GPU Model |
|---|---|---|---|---|---|---|---|
| bader | 1 | Sky Lake | 20 | 20 | 4 | 4 | Titan V |
| cld | 11 | Sandy Bridge | 20 | 220 | 0 | 9 | |
| cld-gpu | 2 | Sandy Bridge | 20 | 40 | 2 | 4 | K20Xm |
| datasci | 7 | Sandy Bridge | 20 | 140 | 2 | 14 | P100 |
| datasci | 1 | Sky Lake | 20 | 20 | 4 | 4 | Titan RTX |
| datasci3 | 1 | Sky Lake | 20 | 20 | 4 | 4 | Titan RTX |
| datasci4 | 1 | Sky Lake | 20 | 20 | 4 | 4 | Titan RTX |
| davidsw | 1 | Cascade Lake | 32 | 32 | 2 | 2 | A100 |
| ddlab | 1 | Sandy Bridge | 20 | 20 | 0 | 0 | |
| ddlab | 1 | Cascade Lake | 20 | 20 | 2 | 2 | V100 |
| esratoy | 2 | Cascade Lake | 32 | 64 | 0 | 0 | |
| fahmadpo | 14 | Cascade Lake | 64 | 896 | 0 | 0 | |
| fahmadpo-gpu | 1 | Cascade Lake | 24 | 24 | 4 | 4 | Titan RTX |
| gor | 28 | Broadwell | 20 | 560 | 0 | 0 | |
| gperry | 3 | Cascade Lake | 36 | 108 | 0 | 0 | |
| hrjin | 1 | Sky Lake | 32 | 32 | 4 | 3 | Titan XP |
| jyoung | 3 | Cascade Lake | 96 | 288 | 0 | 0 | |
| phan | 1 | Cascade Lake | 20 | 20 | 4 | 4 | 1 Titan X, 3 GeForce |
| project | 1 | Cascade Lake | 32 | 32 | 0 | 0 | |
| public | 37 | Cascade Lake | 32 | 1184 | 0 | 0 | |
| samaneh | 16 | Cascade Lake | 40 | 640 | 0 | 0 | |
| shakib | 20 | Cascade Lake | 24 | 480 | 0 | 0 | |
| singhp | 1 | Ice Lake | 56 | 56 | 0 | 0 | |
| smarras | 4 | Cascade Lake | 48 | 192 | 0 | 0 | |
| smarras | 2 | Cascade Lake | 48 | 96 | 2 | 2 | A100 |
| solarlab | 1 | Cascade Lake | 48 | 48 | 2 | 2 | A100 |
| solarlab | 4 | Cascade Lake | 48 | 192 | 0 | 0 | |
| solarlab | 1 | Ice Lake | 48 | 48 | 1 | 1 | A100 |
| solarlab | 2 | Ice Lake | 48 | 96 | 0 | 0 | |
| solarlab | 1 | Ice Lake | 48 | 48 | 0 | 0 | |
| xt3 | 11 | Cascade Lake | 32 | 352 | 0 | 0 | |
| xye | 1 | Cascade Lake | 16 | 16 | 5 | 5 | 3 types of GPUs |
| zhiwei | 1 | Ice Lake | 56 | 56 | 4 | 4 | A40 |
| Total | 183 | | | 6080 | | 72 | |
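
Because the cluster was managed by the SLURM scheduler (listed below), the per-partition node, core, and GPU counts in the table could be checked directly from a login node. The commands below are a minimal sketch using standard sinfo options; the datasci partition name is taken from the table above as an example.

```bash
# Summarize partition name, node count, CPUs per node, and generic resources (GPUs)
# for one partition, e.g. datasci
sinfo -p datasci -o "%P %D %c %G"

# One-line summary per partition across the whole cluster
sinfo -s
```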
  • All nodes have:
    • Ethernet network interface (1GigE or 10GigE)
    • Infiniband network interface (mix of HDR100, EDR, and FDR speeds)
    • 1 TB local storage (mostly SSD, a few HDD)
  • All nodes have network-accessible storage:
    • /home/: 26 TB
    • /research/: 97 TB
    • /afs/cad/: 50 TB

The cluster also features:

  • "CentOS Linux 7 (Core)" operating system
  • Virtualized login and control nodes
  • SLURM job scheduler (an example job script follows this list)
  • Warewulf stateless node provisioning
  • Managed entirely by NJIT personnel
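
As a concrete illustration of how the SLURM partitions and GPUs above were used, here is a minimal batch-script sketch. The choice of the datasci partition, the one-hour walltime, and the cuda module name are illustrative assumptions, not a recorded Lochness configuration; the #SBATCH directives themselves are standard SLURM options.

```bash
#!/bin/bash
#SBATCH --job-name=gpu-example        # arbitrary job name
#SBATCH --partition=datasci           # any partition from the table above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1                  # request one GPU on a GPU-equipped node
#SBATCH --time=01:00:00               # illustrative one-hour walltime

module load cuda                      # module name is an assumption; available modules varied
nvidia-smi                            # report the GPU(s) allocated to this job
```

A script like this would be submitted with `sbatch script.sh`; for GPU-less partitions such as public or gor, the --gres line would simply be omitted.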

Roughly half of the nodes were under warranty; the rest were supported on a time-and-materials basis.