Systems

Systems Overview Table

A summary table of NERSC systems.

Hopper Cray XE6

Hopper is NERSC's first petaflop system, a Cray XE6 with 153,216 compute cores, 217 TB of memory, and 2 PB of disk. Hopper ranked number 5 on the November 2010 Top500 supercomputer list.

Edison Cray XC30

Edison is NERSC's newest supercomputing system. A Cray XC30, Edison features the Cray Aries high-speed interconnect, fast Intel processors, 64 GB of memory per node, and a multi-petabyte local scratch file system. When fully installed, the phase 2 system will have a peak performance of more than 2 petaflops.

Carver IBM iDataPlex

Carver, named in honor of American scientist George Washington Carver, is an IBM iDataPlex system with 1,202 compute nodes. Each node contains two Intel Nehalem quad-core processors (9,984 processor cores total). The system's theoretical peak performance is 106.5 Tflop/s.

PDSF

PDSF is a networked, distributed computing environment used to meet the detector simulation and data analysis requirements of large-scale investigations in high energy physics, astrophysics, and nuclear science.

Genepool

The Genepool system is a cluster dedicated to the JGI's computing needs. Phoebe is a smaller test system for Genepool.

HPSS data archive

The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998.
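
For illustration only, here is a minimal Python sketch of archiving data to HPSS by driving the htar client through subprocess. The archive name and source directory are hypothetical, and it is assumed that htar is in PATH with HPSS authentication already configured.

    import subprocess

    # Hypothetical example: bundle a local results directory into an
    # HPSS-resident tar archive with htar, then list its contents.
    archive = "project_results.tar"   # archive name in HPSS (assumed)
    source_dir = "results/"           # local directory to archive (assumed)

    # -c create, -v verbose, -f archive name
    subprocess.run(["htar", "-cvf", archive, source_dir], check=True)

    # -t list the contents of the archive stored in HPSS
    subprocess.run(["htar", "-tvf", archive], check=True)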

Data Transfer Nodes

The data transfer nodes are NERSC servers dedicated to performing transfers between NERSC data storage resources, such as HPSS and the NERSC Global Filesystem (NGF), and storage resources at other sites, including the Leadership Computing Facility at Oak Ridge National Laboratory (ORNL). These nodes are managed, and monitored for performance, as part of a collaborative effort between ESnet, NERSC, and ORNL to enable high-performance data movement over the high-bandwidth 10 Gb/s ESnet wide-area network (WAN).
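
As a rough sketch of how a transfer through a data transfer node might be scripted, the Python snippet below shells out to scp. The DTN hostname, username, and file paths are assumptions for illustration, not prescribed values.

    import subprocess

    # Hypothetical example: pull a file from NERSC storage to a local machine
    # by way of a data transfer node.  Hostname, username, and paths are
    # placeholders; substitute the DTN and paths appropriate for your project.
    user = "username"
    remote = f"{user}@dtn01.nersc.gov:/path/to/simulation_output.h5"
    local = "./simulation_output.h5"

    # Routing large copies through a DTN keeps WAN traffic off the login nodes.
    subprocess.run(["scp", remote, local], check=True)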

NERSC Global Filesystem

The NERSC Global Filesystem, known as NGF, is a collection of GPFS-based file systems mounted across all systems at NERSC.

Dirac: GPU Computing

Dirac is a testbed GPU cluster funded through the DOE/ASCR Computer Science Research Testbeds program (DOE Contract Number DE-AC02-05CH11231), in collaboration with the Computational Research Division at Berkeley Lab. The cluster consists of 48 nodes with attached NVIDIA Tesla Graphics Processing Units (GPUs).
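
To give a flavor of what a job on one of these GPU nodes might look like, here is a minimal PyCUDA sketch (element-wise vector addition). It is not a NERSC-specific recipe; it simply assumes PyCUDA and the CUDA toolkit are available on the node.

    import numpy as np
    import pycuda.autoinit              # creates a CUDA context on the default GPU
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # Compile a trivial element-wise addition kernel for the attached GPU.
    mod = SourceModule("""
    __global__ void add(float *dest, float *a, float *b)
    {
        int i = threadIdx.x + blockIdx.x * blockDim.x;
        dest[i] = a[i] + b[i];
    }
    """)
    add = mod.get_function("add")

    n = 256
    a = np.random.randn(n).astype(np.float32)
    b = np.random.randn(n).astype(np.float32)
    dest = np.zeros_like(a)

    # One block of n threads; drv.In/drv.Out handle the host/device copies.
    add(drv.Out(dest), drv.In(a), drv.In(b), block=(n, 1, 1), grid=(1, 1))

    assert np.allclose(dest, a + b)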

History of Systems

Established in 1974 at Lawrence Livermore National Laboratory, NERSC was moved to Berkeley Lab in 1996 with a goal of increased interactions with the UC Berkeley campus.

NERSC-8 Procurement

Update: Draft Technical Requirements were released to the vendor community on December 17, 2012.

NERSC-8 Procurement Overview: The U.S. Department of Energy (DOE) Office of Science (SC) requires a high performance production computing system in the 2015/2016 timeframe to support the rapidly increasing computational demands of the entire spectrum of DOE SC computational research. The system needs to provide a significant upgrade in computational capabilities, with a target increase between 10-30…

Trinity / NERSC-8 RFP

NERSC and the Alliance for Computing at Extreme Scale (ACES), a collaboration between Los Alamos National Laboratory and Sandia National Laboratories, are partnering to release a joint Request for Proposal (RFP) for two next-generation systems, Trinity and NERSC-8, to be delivered in the 2015 time frame. Interested Offerors are advised to monitor this web site and the LANL website for potential Trinity / NERSC-8 RFP amendments and other updates. Interested Offerors who have…