NERSC Summer Internships

NERSC is a global leader in high performance computing (HPC) and data science. We empower researchers with the tools needed to tackle some of the world’s most pressing scientific challenges. 

Every summer, we offer paid internships to graduate students, as well as undergraduate juniors and seniors, allowing them to collaborate with NERSC staff on various science and technology research projects.

How to apply

Although NERSC participates in the Computing Sciences Area summer student program, prospective interns must apply directly to project mentors. There are several pathways to a summer internship, but NERSC mentors make the final selection for their individual projects.

NERSC staff generally begin posting projects in January for the upcoming summer. Mentors continue posting more opportunities into the late spring.

Projects are organized by their primary science or technology focus. Select a project title to view the description and a link for full details.

Quantum computing

Quantum computing is evolving from static logical qubits to their active use in computation. While experiments validate “quantum memory,” entangled operations introduce challenges often overlooked in benchmarks. This project aims to analyze how selected CSS codes (e.g., Steane, Bacon-Shor) execute non-trivial Clifford circuits, quantifying trade-offs in syndrome extraction overhead, circuit depth, and logical fidelity under realistic noise conditions.
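
As a concrete illustration of the CSS construction mentioned above, the following sketch (illustrative only, not project code) builds the six stabilizer generators of the [[7,1,3]] Steane code from the [7,4] Hamming parity-check matrix and checks that they pairwise commute:

```python
# The parity-check matrix of the classical [7,4] Hamming code.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

# CSS construction: each row yields one X-type and one Z-type generator,
# acting with X (or Z) on every qubit where the row has a 1.
# A Pauli string is represented as an (x-part, z-part) pair of bit vectors.
x_stabilizers = [(row, [0] * 7) for row in H]
z_stabilizers = [([0] * 7, row) for row in H]
generators = x_stabilizers + z_stabilizers

def commute(p, q):
    """Two Pauli strings commute iff their symplectic product is 0 mod 2."""
    (px, pz), (qx, qz) = p, q
    sym = sum(a * b for a, b in zip(px, qz)) + sum(a * b for a, b in zip(pz, qx))
    return sym % 2 == 0

# A valid stabilizer group is abelian: every pair of generators commutes.
assert all(commute(p, q) for p in generators for q in generators)
print("all", len(generators), "generators commute")
```

Benchmarking logical Clifford circuits on such a code then amounts to tracking how these generators transform, and at what cost in syndrome-extraction overhead and depth.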

This internship offers a unique opportunity to contribute to the rapidly evolving field of quantum computing, specifically focusing on the critical area of quantum resource estimation (QRE). QRE analyzes the key resources, such as qubit counts, gate depths, and execution time, required for solving scientifically relevant problems on future quantum computers.

This internship offers a unique opportunity to contribute to the rapidly evolving field of quantum computing, specifically focusing on developing and validating scalable bounds on the performance of quantum computers.

Neutral-atom quantum processors are emerging as a technological platform for quantum information processing, offering potential advantages in scalability and qubit connectivity. This internship offers a unique opportunity to develop and test applications on digital neutral-atom quantum computers from QuEra Computing.

Quantum computing is transitioning from maintaining logical qubits to performing logical computations. Qudit technology is an active area of exploration because many physical systems have more than two levels available. Understanding logical qudit operations and qubit-qudit operations can help further advance the field.   

The purpose of this project is to simulate qudit circuits and evaluate codes capable of representing qubit/qudit operations.
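
To make the qudit setting concrete, here is a small sketch (illustrative, not project code) of the generalized Pauli operators for a qutrit (d = 3), the elementary gates a qudit circuit simulator must handle:

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)  # primitive d-th root of unity

# X |j> = |j+1 mod d> (cyclic shift); Z |j> = omega**j |j> (phase gate).
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))

# The defining commutation relation of the qudit Pauli group: Z X = omega X Z.
assert np.allclose(Z @ X, omega * (X @ Z))

# Both operators are unitary, as any gate in a qudit circuit must be.
assert np.allclose(X @ X.conj().T, np.eye(d))
assert np.allclose(Z @ Z.conj().T, np.eye(d))
```

For d = 2 these reduce to the familiar qubit Pauli X and Z; a simulator supporting mixed qubit/qudit operations must handle both cases and the entangling gates between them.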

AI & machine learning

With the rise of AI, the energy cost of conventional computation is becoming unsustainable. One promising way to reduce this cost is thermodynamic computing: while thermal noise must be suppressed at great energy cost in digital or quantum computing, thermodynamic computers are instead powered by it. The goal of this internship is to perform large-scale simulations of thermodynamic computers at NERSC, with the aim of better understanding different topologies, their energy landscapes, and training methods.
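
As a toy illustration of the idea (not the project's actual simulation code), the sketch below runs overdamped Langevin dynamics on a one-dimensional quadratic energy landscape; the thermal noise term is what drives the system toward its Boltzmann distribution:

```python
import math
import random

def simulate(k=1.0, temperature=1.0, dt=1e-3, steps=200_000, seed=0):
    """Overdamped Langevin dynamics on E(x) = (k/2) * x**2."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    noise_scale = math.sqrt(2.0 * temperature * dt)
    for _ in range(steps):
        drift = -k * x * dt                       # deterministic pull downhill
        kick = noise_scale * rng.gauss(0.0, 1.0)  # thermal noise term
        x += drift + kick
        samples.append(x)
    return samples

samples = simulate()
# At equilibrium the variance of x should approach temperature / k.
var = sum(s * s for s in samples) / len(samples)
print(f"sample variance ~ {var:.2f}")
```

Scaling this picture up from one degree of freedom to large coupled networks, and studying how topology shapes the resulting energy landscape, is where HPC resources come in.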

NERSC is seeking an enthusiastic intern for a term of up to six months to investigate integrating the Enzyme Automatic Differentiator into the Gordon-Bell award-winning PIC code, WarpX, and its sibling code ImpactX.

The results of this internship will contribute to production-quality, daily-use open source modeling software with a direct impact on next-generation particle accelerators and fusion energy science research. 

Scientific AI workloads are growing rapidly in scale and complexity, especially with the rise of foundation models and large-scale inference/training pipelines.

This internship project will benchmark and analyze the performance of representative scientific AI workloads (such as materials characterization or weather forecasting) on NERSC systems, with emphasis on Perlmutter and relevance to future Doudna-class platforms.

The project outcome will be actionable benchmark results and analysis that help guide workload readiness, optimization priorities, and system design decisions for next-generation supercomputing.

High performance computing environments are increasingly complex, and scientific AI workloads demand faster, more reliable operations support. 

This internship project will design and prototype agentic AI capabilities that improve HPC operational efficiency at NERSC, including natural-language interfaces for operational data, intelligent assistance for troubleshooting and incident triage, and automated synthesis of system-health and performance insights.

This work will support the HPC ecosystem that underpins the DOE Genesis Mission AI-for-Science workloads.

Application performance

NERSC is seeking enthusiastic summer interns to investigate ways to improve the overall performance of large-scale simulations by advancing load-balancing (LB) algorithms. Load balancing is extremely important for large-scale, massively parallel simulations. Current LB algorithms are generally simplistic, as calculations must be performed at runtime and depend on the reduced data users choose to collect and pass to them. However, as simulations become more complex and Moore’s Law draws to a close, having the best possible LB is an increasing priority for the large-scale performance of an HPC code and its future research possibilities.
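
To ground the discussion, here is a minimal sketch of one classic LB heuristic, longest-processing-time-first (LPT), compared with naive round-robin distribution. The per-item costs are hypothetical, and real LB algorithms must also weigh data locality and the cost of moving work, which this toy ignores:

```python
import heapq

def round_robin(costs, n_ranks):
    """Naive distribution: deal work items out in order."""
    loads = [0.0] * n_ranks
    for i, c in enumerate(costs):
        loads[i % n_ranks] += c
    return max(loads)

def lpt(costs, n_ranks):
    """Greedy LPT: hand the largest remaining item to the least-loaded rank."""
    heap = [(0.0, r) for r in range(n_ranks)]
    heapq.heapify(heap)
    for c in sorted(costs, reverse=True):
        load, r = heapq.heappop(heap)
        heapq.heappush(heap, (load + c, r))
    return max(load for load, _ in heap)

# Hypothetical per-box work estimates, as a runtime LB might collect them.
costs = [9, 7, 7, 6, 5, 4, 3, 2, 2, 1]
print("round-robin makespan:", round_robin(costs, 4))
print("LPT makespan:        ", lpt(costs, 4))
```

The makespan (load on the busiest rank) bounds the step time of a bulk-synchronous simulation, which is why even modest LB improvements translate directly into large-scale performance.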

In this exciting internship, you will study the performance of math libraries on CPUs and GPUs using performance modeling tools, and you will develop interactive, browser-based labs that teach users how to model application performance on NERSC’s GPU-accelerated systems.
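
One standard performance-modeling tool such labs might cover is the roofline model, sketched below. The peak-compute and bandwidth numbers here are illustrative placeholders, not official specs for any NERSC system:

```python
def roofline(ai_flops_per_byte, peak_gflops=19500.0, bandwidth_gbs=1555.0):
    """Attainable GFLOP/s is capped by either peak compute or
    memory bandwidth times arithmetic intensity (FLOPs per byte)."""
    return min(peak_gflops, ai_flops_per_byte * bandwidth_gbs)

# Hypothetical kernels at different arithmetic intensities.
for name, ai in [("stream triad", 0.08), ("stencil", 0.5), ("dgemm", 60.0)]:
    attainable = roofline(ai)
    bound = "memory-bound" if attainable < 19500.0 else "compute-bound"
    print(f"{name:12s} AI={ai:5.2f} -> {attainable:8.1f} GFLOP/s ({bound})")
```

Plotting attainable performance against arithmetic intensity makes the "ridge point" visible, where a kernel crosses from memory-bound to compute-bound, which is the core intuition the interactive labs would teach.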

Data science

NERSC is looking for a knowledgeable summer 2026 intern to apply social network analysis to our allocations data and report on its potential to identify and respond to community needs and to build a strong NERSC community of practice.
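
One simple starting point for such an analysis, sketched here with made-up membership data, is to treat allocations as a bipartite network (users and projects) and project it onto users, weighting edges by the number of shared projects:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical allocation membership: project -> set of users.
membership = {
    "m0001": {"alice", "bob", "carol"},
    "m0002": {"bob", "carol", "dave"},
    "m0003": {"carol", "erin"},
}

# Project the bipartite network onto users: edge weight counts shared projects.
edges = defaultdict(int)
for users in membership.values():
    for u, v in combinations(sorted(users), 2):
        edges[(u, v)] += 1

# Weighted degree is a crude centrality: who bridges the most collaborations?
degree = defaultdict(int)
for (u, v), w in edges.items():
    degree[u] += w
    degree[v] += w

print(max(degree, key=degree.get))  # carol, who appears in all three projects
```

On real allocations data, richer centrality and community-detection measures could then point to clusters of shared need that a community of practice should serve.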

HPC systems & software

Spin is NERSC’s container service platform for running persistent services that integrate with NERSC systems and storage. Users typically build container images, publish them to a registry, and deploy workloads in project namespaces, where services can be operated and maintained for scientific collaborations. For NERSC’s next-generation HPC system, Doudna, we expect a Spin-like capability on the HPC system itself and many similar user workloads.

This project will analyze current Spin workloads, starting with storage I/O behavior and possibly extending to network I/O, CPU, and memory utilization. The outcome will be an evidence-based characterization of workload patterns, bottlenecks, and resource requirements to guide capacity planning, platform design, and operational readiness for Doudna.

Containers provide major benefits for HPC applications, including portability across systems, reproducibility of software stacks, and faster onboarding for users who need consistent build and runtime environments. These benefits are especially important as HPC workflows become more complex, combining simulation, data analysis, and AI components. At the same time, HPC programming environments are inherently complex, with multiple compiler families, MPI variants, math libraries, and toolchain version combinations that users currently access through module stacks.

This project aims to build a curated set of container images that package the NERSC programming environment directly inside the image, with software built primarily using Spack.

In large HPC environments, scheduling often prioritizes large block allocations, allowing HPC centers to increase system utilization. This can leave smaller work in the backfill queue, running only when resources would otherwise sit idle.

For projects with many smaller tasks, workflow tools can often be used to allocate large blocks of resources for the project, and then use those blocks to run many smaller tasks as part of the workflow.

The project will focus on determining the fastest and most efficient way to schedule and launch high-throughput jobs on large supercomputers, using real bioinformatics workflows from the Joint Genome Institute’s JAWS workflow tool.
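
The pilot-job pattern described above can be sketched as follows; a local thread pool stands in for a large batch allocation, and `small_task` is a placeholder for a short bioinformatics step:

```python
from concurrent.futures import ThreadPoolExecutor

def small_task(i):
    """Placeholder for a short per-sample analysis step."""
    return i * i

# One "allocation" (the pool) amortizes scheduler overhead across many
# small tasks, instead of submitting each task to the batch queue.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(small_task, range(100)))

print("completed", len(results), "tasks inside one allocation")
```

A real workflow tool like JAWS dispatches such tasks across the nodes of a batch job rather than local threads, but the trade-off under study is the same: per-task launch overhead versus the utilization of the block allocation.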
