NESAP for Data Projects
NESAP for Data explicitly addresses data-intensive science applications that rely on processing and analyzing massive datasets acquired from experimental and observational sources. These sources include spectrographs like the Dark Energy Spectroscopic Instrument, telescopes studying the cosmic microwave background, genome sequencers at the Joint Genome Institute, synchrotron sources like the Advanced Light Source, and particle detectors at the Large Hadron Collider. The objective of this program is to enable applications processing data from such sources to take full advantage of the many-core Intel Knights Landing (KNL) processors on Cori.
Many algorithms currently used for data analysis in experimental science are not optimized for many-core architectures. Some are written in “productivity” languages like Python rather than “performance” languages like C, C++, or Fortran. In this project, we will employ the latest advances in computer science to develop highly scalable, distributed parallel algorithms that overcome these challenges.
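As a toy illustration of the gap between naive productivity-language code and code that hands work to vectorized, compiled libraries (this sketch is not taken from any NESAP project; the function names are hypothetical), compare a pure-Python reduction with its NumPy equivalent:

```python
import numpy as np

def mean_loop(values):
    # Naive pure-Python reduction: one interpreted iteration per element,
    # with no vectorization -- a poor fit for wide-vector chips like KNL.
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

def mean_vectorized(values):
    # NumPy dispatches the same reduction to compiled, SIMD-capable code,
    # which is the kind of kernel that can exploit many-core hardware.
    return float(np.mean(values))

data = np.arange(1_000_000, dtype=np.float64)
assert mean_loop(data) == mean_vectorized(data) == 499999.5
```

Real NESAP for Data work goes well beyond single-node vectorization, to distributed parallel algorithms spanning many KNL nodes, but the same principle applies: move the hot loops out of the interpreter and into scalable, optimized kernels.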
The six projects included in the NESAP for Data partnership with NERSC, Cray, and Intel are:
- Tomographic Reconstruction in Python (TomoPy), Doga Gursoy (Argonne National Laboratory), Basic Energy Sciences
- Time Ordered Astrophysics Scalable Tools (TOAST), Julian Borrill (Lawrence Berkeley Laboratory), High Energy Physics
- Dark Energy Spectroscopic Instrument Codes (DESI), Stephen Bailey (Lawrence Berkeley Laboratory), High Energy Physics
- ATLAS Data Processing (ATLAS), Steve Farrell (Lawrence Berkeley Laboratory), High Energy Physics
- CMS Data Processing (CMS), Dirk Hufnagel (Fermi National Accelerator Laboratory), High Energy Physics
- Union of Intersections (UoI), Kris Bouchard (Lawrence Berkeley Laboratory), Biological and Environmental Research