The team behind the simulation: (From left) Andy Nonaka, Staff Scientist and CCSE Group Lead; Yingheng Tang, Postdoc; Candice Kang, UC GSRA; Katherine Klymko, NERSC Computer Systems Engineer; Zhi (Jackie) Yao, Computational Research Scientist/Engineer; Kan-Heng Lee, Research Scientist; Christopher Spitzer, Program Manager; and Johannes Blaschke, NERSC HPC Workflow Performance Expert. - Credit: Robinson Kuntz, Berkeley Lab
A broad association of researchers from across Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California, Berkeley, has collaborated to perform an unprecedented simulation of a quantum microchip, a key step forward in perfecting the chips required for this next-generation technology. The simulation used more than 7,000 NVIDIA GPUs on the Perlmutter supercomputer at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy (DOE) user facility.
Modeling quantum chips allows researchers to understand their function and performance before they’re fabricated, ensuring that they work as intended and spotting any problems that might come up. Quantum Systems Accelerator (QSA) researchers Zhi Jackie Yao and Andy Nonaka of the Applied Mathematics and Computational Research (AMCR) Division at Berkeley Lab develop electromagnetic models to simulate these chips, a key step in the process of producing better quantum hardware.
This simulation captures electromagnetic wave propagation across a multi-layered quantum microchip measuring just 10 millimeters square and 0.3 millimeters thick with etchings a mere one micron wide. It used almost all of the Perlmutter system’s 7,168 NVIDIA GPUs over a 24-hour period. - Credit: Zhi Jackie Yao, Berkeley Lab
“The computational model predicts how design decisions affect electromagnetic wave propagation in the chip,” said Nonaka, “to make sure proper signal coupling occurs and avoid unwanted crosstalk.”
Here, they used their exascale modeling tool, ARTEMIS, to model and optimize a chip designed in a collaboration of Irfan Siddiqi’s Quantum Nanoelectronics Laboratory at the University of California, Berkeley, and Berkeley Lab’s Advanced Quantum Testbed (AQT). This work will be featured in a technical demonstration by Yao at the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC25).
Designing quantum chips draws on traditional microwave engineering in addition to advanced low-temperature physics. This makes a classical electromagnetic modeling tool like ARTEMIS, which was developed as part of the DOE’s Exascale Computing Project initiative, a natural choice for this type of modeling.
A large simulation for a tiny chip
Not every quantum chip simulation calls for so much computing capacity, but modeling the minuscule details of this tiny, extremely complex chip required nearly all of Perlmutter’s power. The researchers used almost all of its 7,168 NVIDIA GPUs over a period of 24 hours to capture the structure and function of a multi-layered chip measuring just 10 millimeters square and 0.3 millimeters thick, with etchings of just one micron in width.
“I’m not aware of anybody who's ever done physical modeling of microelectronic circuits at full Perlmutter system scale. We were using nearly 7,000 GPUs,” said Nonaka. “We discretized the chip into 11 billion grid cells. We were able to run over a million time steps in seven hours, which allowed us to evaluate three circuit configurations within a single day on Perlmutter. These simulations would not have been possible in this time frame without the full system.”
It’s this level of detail that makes this simulation unique. Where other simulations tend to treat chips as “black boxes” due to constraints on modeling capability, using Perlmutter’s massively parallel GPUs gave Yao and Nonaka the compute power to lean into the physical details and show the chip’s mechanism at work.
“We do full-wave physical-level simulation, meaning that we care about what material you use on the chip, the layout of the chip, how you wire the metal – the niobium or other type of metal wires – how you build the resonators, what's the size, what's the shape, what material you use,” said Yao. “We care about those physical details, and we include that in our model.”
In addition to its fine-grained view of the chip, the simulation reproduced what researchers observe in lab experiments – how qubits communicate with each other and with other parts of the quantum circuit.
Combining these qualities – a focus on the physical chip design and the ability to simulate in real time – is part of what made the simulation unique, said Yao: “The combination is instrumental, because we use the partial differential equation, Maxwell's equation, and we do it in the time domain so we can incorporate nonlinear behavior. All this adds up to give us one-of-a-kind capability.”
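The time-domain system Yao describes is, in its standard curl form (a sketch of the governing equations only; ARTEMIS's actual formulation also includes material and superconducting models beyond this):

$$
\begin{aligned}
\frac{\partial \mathbf{B}}{\partial t} &= -\nabla \times \mathbf{E},\\[4pt]
\frac{\partial \mathbf{E}}{\partial t} &= \frac{1}{\varepsilon}\left(\nabla \times \frac{\mathbf{B}}{\mu} - \mathbf{J}\right),
\end{aligned}
$$

where $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic fields, $\varepsilon$ and $\mu$ the material permittivity and permeability, and $\mathbf{J}$ the current density. Marching these equations forward step by step in time, rather than solving for a single frequency, is what allows nonlinear material behavior to enter the model directly.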
NERSC has supported many quantum information science projects through the Quantum Information Science @ Perlmutter program, which grants Director’s Discretionary Reserve hours on Perlmutter to promising quantum projects. Still, staff say tackling a simulation of this size was an exciting challenge.
“This effort stands out as one of the most ambitious quantum projects on Perlmutter to date, using ARTEMIS and NERSC’s computing capabilities to capture quantum hardware detail over more than four orders of magnitude,” said Katie Klymko, a NERSC quantum computing engineer who worked on the project.
Modeling the next step
Next, the team plans to do more simulations to strengthen their quantitative understanding of the chip’s design and see how it functions as part of a larger system.
“We’d like to do a more quantitative simulation so that we can do a post-process and quantify the spectral behavior of the system,” said Yao. “We’d like to see how the qubit is resonating with the rest of the circuit. In the frequency domain, we’d like to benchmark it with other frequency-domain simulations to give us greater confidence that, quantitatively, the simulation is correct.”
Eventually, the simulation will take the ultimate test: comparison with the physical world. When the chip is fabricated and put through its paces, Yao and Nonaka will see how their model measured up and make adjustments from there.
Nonaka and Yao emphasized that a successful simulation of this technology at this level of detail would not have been possible without strong collaboration across the Berkeley community – from AMCR to QSA, AQT, and NERSC, which supported the simulation with staff expertise in addition to compute power. The collaboration has yielded important results for the advancement of science, said de Jong. “This unprecedented simulation, made possible by a broad partnership among scientists and engineers, is a critical step forward to accelerate the design and development of quantum hardware,” he said. “More powerful, more performant quantum chips will unlock new capabilities for researchers and open up new avenues in science.”
About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is the mission computing facility for the U.S. Department of Energy Office of Science, the nation’s single largest supporter of basic research in the physical sciences.
Located at Lawrence Berkeley National Laboratory (Berkeley Lab), NERSC serves 11,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials sciences, physics, chemistry, computational biology, and other disciplines. An average of 2,000 peer-reviewed science results a year rely on NERSC resources and expertise, which have also supported the work of seven Nobel Prize-winning scientists and teams.
NERSC is a U.S. Department of Energy Office of Science User Facility.
Media contact: Email our communications team