
Using HIP and GPU Libraries with OpenMP, December 14, 2022

Introduction

This online training session is part of the OLCF’s Preparing for Frontier Training Series, and is open to NERSC users. 

Date and Time: 10:00 - 11:30 am (Pacific time), Wednesday, December 14, 2022

Attendees are encouraged to review in advance the materials from the previous training sessions: OpenMP Offload Basics, OpenMP Optimization and Data Movement, and Introduction to HIP Programming.

Overview

This training is designed for Fortran and C/C++ users who are using OpenMP, or considering it, for their applications on Frontier and Perlmutter. The focus will be on showing how to augment an OpenMP program with GPU kernels and libraries written in HIP.

For Fortran OpenMP programmers, we will demonstrate how to build or use C-interoperability interfaces to launch HIP kernels and call ROCm libraries (for example, rocBLAS and rocFFT), while using OpenMP to manage data allocation and movement. We will walk through a concrete example of a Fortran + OpenMP program that exhibits all of these features: managing data with OpenMP, offloading OpenMP kernels, launching a HIP kernel, and calling a ROCm library. A similar example program in C will also be presented.
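To give a flavor of the C/C++ side of that example, the minimal sketch below (illustrative only, not the training's actual code) shows the basic pattern: OpenMP maps an array to the GPU, and a nested target data region with use_device_ptr hands the device pointer to a HIP kernel launch. The kernel name and problem sizes are made up, and the sketch assumes a toolchain that accepts HIP and OpenMP offload in a single source file; in practice the HIP kernel is often compiled separately and linked.

#include <hip/hip_runtime.h>
#include <cstdio>

// Illustrative HIP kernel: scale a vector by a constant.
__global__ void scale_kernel(double *x, double a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main()
{
    const int n = 1 << 20;
    double *x = new double[n];
    for (int i = 0; i < n; ++i) x[i] = 1.0;

    // OpenMP owns the device allocation and the host<->device transfers.
    #pragma omp target data map(tofrom: x[0:n])
    {
        // use_device_ptr exposes the OpenMP-managed device address of x,
        // so the HIP kernel operates on the same GPU buffer.
        #pragma omp target data use_device_ptr(x)
        {
            hipLaunchKernelGGL(scale_kernel, dim3((n + 255) / 256), dim3(256),
                               0, 0, x, 2.0, n);
            hipDeviceSynchronize();
        }
    }   // x is copied back to the host when the outer region ends

    printf("x[0] = %g (expected 2)\n", x[0]);
    delete[] x;
    return 0;
}

The Fortran version of this pattern uses ISO_C_BINDING interfaces to call a C wrapper around the kernel launch, with the same OpenMP directives managing the data.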

Additionally, we will show how to use hardware-supported GPU-aware MPI from OpenMP to enable faster MPI communication on Frontier. The techniques described in this training also apply to the corresponding CUDA libraries on Summit and NERSC Perlmutter, so users can put what they learn to immediate use.
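The same use_device_ptr mechanism is what lets a GPU-aware MPI library transfer OpenMP-managed device buffers directly. The sketch below is a rough illustration (not taken from the training materials) of a ring exchange on device memory; it assumes an MPI build with GPU support enabled, which on Frontier and Perlmutter typically means Cray MPICH with MPICH_GPU_SUPPORT_ENABLED=1 set at run time.

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1024;
    double *buf = new double[n];
    for (int i = 0; i < n; ++i) buf[i] = rank;

    int dest = (rank + 1) % size;        // neighbor to send to
    int src  = (rank - 1 + size) % size; // neighbor to receive from

    // Keep the buffer resident on the GPU for the duration of the exchange.
    #pragma omp target data map(tofrom: buf[0:n])
    {
        // Pass the device address straight to MPI; a GPU-aware MPI can move
        // the data GPU-to-GPU without staging through host memory.
        #pragma omp target data use_device_ptr(buf)
        {
            MPI_Sendrecv_replace(buf, n, MPI_DOUBLE, dest, 0, src, 0,
                                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

    printf("rank %d: buf[0] = %g (value sent by rank %d)\n", rank, buf[0], src);
    delete[] buf;
    MPI_Finalize();
    return 0;
}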

The format of this event will be online only.

Registration

Registration is required for remote participation. Please find more information and register on the OLCF event page.

Presentation Materials