
Advanced SYCL Techniques and Best Practices, May 30, 2023

Introduction

The SYCL programming model makes heterogeneous programming in C++ more accessible than ever. SYCL uses modern standard C++ and lets developers target a wide variety of devices (CPUs, GPUs, FPGAs, and more) from a single code base. The growing popularity of this programming model means that developers are eager to understand how to use all the features of SYCL and how to achieve great performance for their code.
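
As a minimal sketch of that single-source model (assuming a SYCL 2020 implementation such as DPC++; the vector size and kernel are illustrative only), the same kernel code below runs on whatever device the default selector picks:

    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
      constexpr size_t N = 1024;
      std::vector<int> data(N, 1);

      // The queue targets whatever device the default selector finds
      // (GPU, CPU, FPGA, ...); the kernel source is the same either way.
      sycl::queue q{sycl::default_selector_v};
      std::cout << "Running on: "
                << q.get_device().get_info<sycl::info::device::name>() << "\n";

      {
        // A buffer hands data-movement decisions to the SYCL runtime.
        sycl::buffer<int> buf{data.data(), sycl::range<1>{N}};
        q.submit([&](sycl::handler& cgh) {
          sycl::accessor acc{buf, cgh, sycl::read_write};
          // Double every element in parallel.
          cgh.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
            acc[i] *= 2;
          });
        });
      } // buffer destruction copies the results back into 'data'

      std::cout << "data[0] = " << data[0] << "\n"; // expect 2
      return 0;
    }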

While the tutorial assumes existing knowledge of SYCL and some experience using it to develop code for accelerators such as GPUs, video recordings of more introductory SYCL trainings that may help you prepare for this training are available on this YouTube Playlist.

Concepts covered in this training include strategies for optimizing code, managing data flow, using different memory access patterns, understanding work-group sizes, using vectorization, the importance of ND-ranges, and making the most of the multiple devices available on your system.
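
To give a flavor of two of these concepts, ND-ranges and work-group sizes, the sketch below launches a kernel over an explicit nd_range; the work-group size of 128 is a hypothetical value chosen for illustration and would normally be tuned per device.

    #include <sycl/sycl.hpp>
    #include <vector>

    int main() {
      constexpr size_t N = 1024;   // global iteration space
      constexpr size_t WG = 128;   // hypothetical work-group size; tune per device
      std::vector<float> in(N, 1.0f), out(N, 0.0f);

      sycl::queue q;
      {
        sycl::buffer<float> in_buf{in.data(), sycl::range<1>{N}};
        sycl::buffer<float> out_buf{out.data(), sycl::range<1>{N}};

        q.submit([&](sycl::handler& cgh) {
          sycl::accessor a_in{in_buf, cgh, sycl::read_only};
          sycl::accessor a_out{out_buf, cgh, sycl::write_only};

          // An nd_range pairs the global range with an explicit
          // work-group size, giving control over how work-items
          // are grouped on the device.
          cgh.parallel_for(
              sycl::nd_range<1>{sycl::range<1>{N}, sycl::range<1>{WG}},
              [=](sycl::nd_item<1> item) {
                const size_t gid = item.get_global_id(0);
                a_out[gid] = 2.0f * a_in[gid];
              });
        });
      } // results are written back to 'out' here
      return 0;
    }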

ALCF and OLCF users are welcome to attend this training. NERSC training accounts will be provided if needed.

Workshop Leader: Hugh Delaney, Codeplay Software

Workshop Instructors: Thomas Applencourt and Abhishek Bagusetty, Argonne National Laboratory

Course Outline

  • Brief introduction
  • In-order queues (see the sketch following this outline)
  • Multiple devices
  • Using key SYCL features
  • Image convolution introduction
  • Coalesced global memory
  • Vectorization
  • Local memory tiling

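The sketch below illustrates the first outline item, in-order queues, assuming a SYCL 2020 implementation with USM support; the array size and values are illustrative only. Because the queue is constructed with the in_order property, the three submissions execute one after another without explicit event dependencies.

    #include <sycl/sycl.hpp>
    #include <cstdio>

    int main() {
      constexpr size_t N = 1 << 20;

      // An in-order queue executes submitted commands in submission order,
      // so no explicit event dependencies are needed between the steps below.
      sycl::queue q{sycl::property::queue::in_order{}};

      float* data = sycl::malloc_device<float>(N, q);
      float* host = sycl::malloc_host<float>(N, q);

      // Step 1: initialise on the device.
      q.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) { data[i] = 1.0f; });
      // Step 2: update; runs after step 1 because the queue is in-order.
      q.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) { data[i] += 2.0f; });
      // Step 3: copy back, then wait for everything to finish.
      q.memcpy(host, data, N * sizeof(float)).wait();

      std::printf("host[0] = %f\n", host[0]); // expect 3.0

      sycl::free(data, q);
      sycl::free(host, q);
      return 0;
    }
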
Date and Time: 9:00 am - 1:00 pm (Pacific time), Tuesday, May 30, 2023

The format of this event will be online only.


Other Information

Please join Slack for Q&A (#general channel):

https://tinyurl.com/sycl-training-slack

Please help us by answering the survey:
https://tinyurl.com/sycl-training-survey

Course Materials

Welcome slides

Lesson materials and code exercises

Video recordings