NERSC: Powering Scientific Discovery Since 1974

HPC Seminars

HPC seminars are held at NERSC's Oakland Scientific Facility. Some are open to NERSC users and some are restricted to NERSC staff. Public events will be listed on the NERSC events calendar.


System Monitoring

June 25, 2014

I will describe the monitoring processes used in CSG: what is being collected, where the data is stored, and how to access it. I will also talk a bit about future data plans and how I would like to see it… Read More »


XRootD in Perspective

November 7, 2013

Read More »


Sequencing Technologies and Computational Pipelines at the JGI

September 17, 2013

Read More »


Introduction to High Performance Computing

June 10, 2013

Introduction to High Performance Computing presented to Berkeley Lab Summer Interns. Read More »


Trillion Particles, 120,000 cores, and 350 TBs: Lessons Learned from a Hero I/O Run on Hopper

May 23, 2013

Modern petascale applications can present a variety of configuration, runtime, and data management challenges when run at scale. In this paper, we describe our experiences in running VPIC, a large-scale plasma physics simulation, on the NERSC production Cray XE6 system Hopper. Read More »
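The heart of a collective "hero" write like the one described above is partitioning one shared file among many ranks. A minimal sketch of that offset arithmetic, with illustrative numbers (the particle counts and record size here are hypothetical, not VPIC's):

```python
def write_offsets(counts, bytes_per_particle=32):
    """Byte offset for each rank: exclusive prefix sum of payload sizes.

    Each rank writes its particles into a disjoint byte range of one
    shared file, so no two ranks ever touch the same region.
    """
    offsets, running = [], 0
    for n in counts:
        offsets.append(running)
        running += n * bytes_per_particle
    return offsets, running  # per-rank byte offsets and total file size

counts = [1000, 1200, 800, 1000]        # particles held by each of 4 ranks
offsets, total = write_offsets(counts)
print(offsets, total)  # [0, 32000, 70400, 96000] 128000
```

In a real run each rank would pass its offset to a collective write call (e.g. MPI-IO or parallel HDF5); the prefix sum itself is typically computed with an MPI exclusive scan rather than serially.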


The Materials Project: Combining Density Functional Theory Calculations with Supercomputing Centers for New Materials Discovery

May 2, 2013

New materials can potentially reduce the cost and improve the efficiency of solar photovoltaics, batteries, and catalysts, leading to broad societal impact. This talk describes a computational approach to materials design in which density functional theory (DFT) calculations are performed over very large computing resources. Because DFT calculations accurately predict many properties of new materials, this approach can screen tens of thousands of potential materials in short time frames. Read More »
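The screening step described above can be pictured as a filter over DFT-computed properties. A hypothetical sketch, with illustrative property names and thresholds (not the Materials Project's actual criteria or API):

```python
# Hypothetical high-throughput screening loop: each candidate carries
# properties precomputed by DFT; we filter for a target application.
# Field names and cutoffs below are illustrative assumptions.

def screen(materials, min_gap=1.1, max_gap=1.7, max_e_above_hull=0.05):
    """Keep candidates whose band gap suits a solar absorber and whose
    energy above the convex hull suggests thermodynamic stability."""
    return [
        m for m in materials
        if min_gap <= m["band_gap_eV"] <= max_gap
        and m["e_above_hull_eV"] <= max_e_above_hull
    ]

candidates = [
    {"formula": "GaAs", "band_gap_eV": 1.43, "e_above_hull_eV": 0.00},
    {"formula": "Si",   "band_gap_eV": 1.12, "e_above_hull_eV": 0.00},
    {"formula": "NaCl", "band_gap_eV": 8.50, "e_above_hull_eV": 0.00},
    {"formula": "X2O3", "band_gap_eV": 1.50, "e_above_hull_eV": 0.30},
]

hits = screen(candidates)
print([m["formula"] for m in hits])  # ['GaAs', 'Si']
```

The point of the approach is that the expensive part, computing each candidate's properties with DFT on large compute resources, happens once up front; the screen itself is then cheap enough to run over tens of thousands of materials.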


Optimizing the FLASH code: preparing for Mira BG/Q and improving the laser ray trace

April 30, 2013

FLASH is a multi-physics, component-based scientific code which has been used on the largest HPC platforms over the last decade. It has been cumulatively used by over a thousand researchers to investigate problems in astrophysics, cosmology, and in some areas of basic physics, such as turbulence. The core capabilities in FLASH include Adaptive Mesh Refinement (AMR) and solvers for hydrodynamics and magneto-hydrodynamics. There are several other more specialized physics solvers which are included in the distribution. Read More »


OpenMSI: A Mass Spectrometry Imaging Science Gateway

April 11, 2013

Metabolite and protein analysis is vital to understanding the phenotype of a biological sample. Specifically, metabolite levels vary dynamically in response to energy demands, diet, disease, and environment. Typical analysis of metabolite levels begins with homogenization of a sample, so the spatial relationships of the biological material are lost. Mass spectrometry imaging of metabolite and protein levels overcomes this limitation by directly measuring the relative abundance of biomolecules and mapping their position. An "image" constitutes a relative abundance map for a given biomolecule, and large numbers of molecules can be imaged simultaneously. While this technique is certainly revolutionary, the in-depth analysis of these datasets often presents a barrier to many researchers. OpenMSI provides a gateway for the management and storage of these data files (where each file is the size of a typical hard drive), the visualization of the hyper-dimensional contents of the data, and the statistical analysis of the data. Read More »
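The "image per biomolecule" idea above has a simple data-model reading: an MSI dataset is a 3-D cube of spectra, and an ion image is the per-pixel intensity summed over a narrow m/z window. A minimal sketch with synthetic data (this is an illustration of the concept, not OpenMSI's actual API or file format):

```python
import numpy as np

# Synthetic MSI cube: one full mass spectrum (1000 m/z bins) per pixel
# of a 64 x 48 raster. Shapes and the m/z range are illustrative.
rng = np.random.default_rng(0)
cube = rng.random((64, 48, 1000))
mz_axis = np.linspace(100.0, 1000.0, 1000)  # m/z value of each bin

def ion_image(cube, mz_axis, mz, tol=0.5):
    """Relative-abundance map for one biomolecule: sum intensities
    within +/- tol of the target m/z at every pixel."""
    mask = np.abs(mz_axis - mz) <= tol
    return cube[:, :, mask].sum(axis=2)

img = ion_image(cube, mz_axis, mz=500.0)
print(img.shape)  # (64, 48): one abundance value per pixel
```

Real datasets are far too large to slice in memory like this, which is why OpenMSI pairs the gateway with server-side storage and visualization rather than requiring researchers to download the files.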


Domain-specific abstractions and compiler transformations

March 4, 2013

Recent trends in architecture are making multicore parallelism as well as heterogeneity ubiquitous. This creates significant challenges for application developers as well as compiler implementers. Currently it is virtually impossible to achieve performance portability of high-performance applications, i.e., to develop a single version of source code for an application that achieves high performance on different parallel computer platforms. Different implementations of compute-intensive core functions are generally needed for different target platforms, e.g., for multicore CPUs versus GPUs. Read More »
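The performance-portability problem above can be made concrete with a toy example: one mathematical kernel, several platform-specific implementations, and a dispatch table choosing among them. The variant names and the dispatch mechanism here are illustrative, not the speaker's system:

```python
import numpy as np

def dot_naive(a, b):
    """Scalar loop: portable baseline, slow on every platform."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_vectorized(a, b):
    """Library-backed variant, standing in for a tuned CPU or GPU kernel."""
    return float(np.dot(a, b))

# In a real code the variant would be selected per target platform
# (multicore CPU vs. GPU), typically at build or launch time.
KERNELS = {"baseline": dot_naive, "tuned": dot_vectorized}

a = np.arange(4.0)
b = np.ones(4)
assert KERNELS["baseline"](a, b) == KERNELS["tuned"](a, b) == 6.0
print("both variants agree")
```

Maintaining such variant sets by hand is exactly the burden that domain-specific abstractions and compiler transformations aim to remove: the developer writes the kernel once at a high level, and the compiler generates the tuned per-platform versions.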


Should NERSC use Cloud Storage?

September 28, 2012

In this presentation, the NERSC Storage Systems Group will examine how three different cloud storage solutions, each aiming to provide durable, scalable storage to consumers, might handle the current HPSS workload. The goal of the talk is to weigh the benefits and risks of each solution and determine whether further investigation is warranted. Read More »
