Computing Resources Available to Collaborators


Kivid cluster

Cluster kivid.ucsd.edu is in production as of July 2015.

This cluster is used for software development and for running interactive and batch computational jobs, mainly with GPU access; a minimal GPU-visibility check is sketched after the hardware list below.

Frontend Exxact server:
  • 2 × 16-core Intel(R) Xeon(R) E5-2640 v3 processors @ 2.60 GHz
  • 64 GB memory
  • 50 TB disk (available via NFS mount to all nodes)
  • 10 GbE interconnect
9 Exxact compute nodes, each with:
  • 32-core AMD Opteron(TM) 6274 @ 1400 MHz
  • 32 GB memory
  • 1 TB disk
  • 8 × NVIDIA GeForce TITAN GPUs
2 Exxact compute nodes, each with:
  • 32-core AMD Opteron(TM) 6274 @ 1400 MHz
  • 758 GB memory
  • 1 TB disk
  • 8 × NVIDIA GeForce TITAN GPUs
1 GbE ethernet network to all nodes
10 GbE network to all nodes
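
As a quick orientation for new users, the sketch below shows one way to confirm that the GPUs on a compute node are visible once you are logged in. It assumes only that the standard NVIDIA utility nvidia-smi is installed on the node (typical for GPU nodes); it is not specific to this cluster's scheduler or software stack.

    import subprocess

    def list_gpus():
        """Query the NVIDIA driver for visible GPUs via nvidia-smi."""
        try:
            out = subprocess.run(["nvidia-smi", "-L"],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip().splitlines()
        except (OSError, subprocess.CalledProcessError):
            # nvidia-smi missing or the driver is not loaded on this node
            return []

    if __name__ == "__main__":
        gpus = list_gpus()
        print(f"{len(gpus)} GPU(s) visible")
        for line in gpus:
            print(" ", line)

On the GPU nodes listed above this should report one line per GeForce TITAN card; an empty result usually means the job did not land on a GPU node.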


Rocce cluster

Cluster rocce.ucsd.edu is in production as of May 2010.

This cluster is used for software development and for running interactive and batch computational jobs.

Frontend Dell PowerEdge R410 server:
  • 32 GB memory
  • 2 quad-core Intel Xeon E5620 2.4 GHz processors
  • 500 GB disk
32 Dell PowerEdge R410 compute nodes, each with:
  • 24 GB memory
  • 2 quad-core Intel Xeon E5620 2.4 GHz processors
  • 500 GB disk
4 Dell PowerEdge R610 virtual container nodes, each with:
  • 64 GB memory
  • 2 quad-core Intel Xeon E5650 2.66 GHz processors
  • 2 TB disk
8 Rackform iServ R350.v2 GPU nodes, each with:
  • 48 GB memory
  • dual 6-core Intel Xeon X5650 2.66 GHz processors
  • 4 TB disk
1 Rackform iServ R422 high-memory node with:
  • 256 GB memory
  • 4 six-core Intel Xeon E7530 1.86 GHz processors
  • 4 TB disk
3 storage servers:

2 Aberdeen Stirling X538 servers, each with:
  • dual quad-core Intel Xeon E5530 2.4 GHz processors
  • 12 GB memory
  • 48 TB disk

1 SunFire X4540 server with:
  • dual 6-core AMD Opteron 2435 2.6 GHz processors
  • 32 GB memory
  • 48 TB disk
6 Dell GPU nodes, each with:
  • 384 GB memory
  • 2 × NVIDIA Tesla M2075 GPUs
  • 16 CPUs
  • 150 GB disk

1 high-CPU/high-memory node with:
  • 64 Intel(R) Xeon(R) E7-8830 CPUs
  • 2 × NVIDIA Tesla C2075 GPUs
  • 1 TB memory
  • 1 TB disk
1 GbE ethernet network: 2 SMC TigerStack II SMC8848M 48-port stackable switches
10 GbE network: Myricom switch, including:
  • 1 7U enclosure for Clos networks with up to 128 host ports
  • 2 center spine switch cards
  • 3 line cards with 16 Myrinet-protocol front-panel ports
  • 1 line card with 16 Ethernet-protocol front-panel ports
  • 2 SFP+ transceivers (optics for the switch)
  • 4 blank line cards (covering unused slots)
  • PCIe cards:
    • 46 PCI-Express x8 NICs with a single QSFP port (for the frontend, high-memory, 32 compute, 8 GPU, and 4 VM container nodes)
    • 3 PCI-Express x8 NICs with a single SFP+ port (for the 3 storage servers)

Arista 52-port 10 GbE switch:
  • IB cards (for the Dell GPU nodes, the high-CPU/high-memory node, and the sten cluster)

Sten cluster

Cluster sten.ucsd.edu is in production as of May 2012.

This cluster hosts scientific applications that are accessible via web services. There are no interactive logins.

Please see http://nbcr-222.ucsd.edu
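
Since there are no interactive logins, the sten applications are reached over HTTP. As a minimal illustration only, the sketch below fetches the portal listed above and reports the response status as a basic reachability check; it uses only the Python standard library, and any service-specific endpoints or parameters must be taken from the portal itself, not from this sketch.

    from urllib.request import urlopen

    # Base URL of the sten services portal (from the link above); specific
    # service paths are documented on the portal and are not assumed here.
    PORTAL = "http://nbcr-222.ucsd.edu"

    def check_portal(url=PORTAL, timeout=10):
        """Fetch the portal page and return the HTTP status and content type."""
        with urlopen(url, timeout=timeout) as resp:
            return resp.status, resp.headers.get("Content-Type", "")

    if __name__ == "__main__":
        status, ctype = check_portal()
        print(f"{PORTAL} -> HTTP {status} ({ctype})")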


Frontend EclipseA64 2U AMD server:
  • 16-core AMD Opteron(TM) 6212 @ 1400 MHz
  • 32 GB memory
  • 500 GB disk
  • IB + 10 GbE interconnect
6 EclipseA64 2U AMD compute nodes, each with:
  • 32-core AMD Opteron(TM) 6274 @ 1400 MHz
  • 64 GB memory
  • 500 GB disk
  • IB + 10 GbE interconnect
1 EclipseA64 2U AMD compute node with:
  • 16-core AMD Opteron(TM) 6212 @ 1400 MHz
  • 32 GB memory
  • 500 GB disk
  • IB + 10 GbE interconnect
1 storage node with:
  • 24 GB memory
  • 16 Intel(R) Xeon(R) E5620 CPUs @ 2.40 GHz
  • 10 TB disk
IB network: Mellanox 18-port QDR switch
10 GbE network: Arista 52-port 10 GbE switch (also used for some nodes of the rocce cluster)
