
[Users] NERSC Weekly Email, Week of May 4, 2020

Author: Rebecca Hartman-Baker <rjhartmanbaker_at_lbl.gov>
Date: 2020-05-04 21:08:16

# NERSC Weekly Email, Week of May 4, 2020 <a name="top"></a> #

## Contents ##

- [Summary of Upcoming Events and Key Dates](#dates)

## [NERSC Status](#section1) ##

- [Normal NERSC Operations Continue During California Shelter-in-Place Period](#curtailment)

## [This Week's Events & Deadlines](#section2) ##

- [Next Meeting for NERSC Users Group Special Interest Group on Experimental Facilities this Wednesday!](#nugsig)
- [Vote Today for Your NUGEX Representatives; Election Ends Friday!](#nugexelect)

## [Updates at NERSC](#section3) ##

- [Please Take the Machine Learning at NERSC Survey!](#mlsurvey)
- [Save the Date: NERSC User Group Annual Meeting Set for August 17](#nugmtg)
- [Join the NERSC Users Slack Sponsored by NUG Today!](#slack)
- [Share Your Research, Images, Movies, and Journal Covers with NERSC!](#share)
- [VTune Profiler Default Upgraded to 2020 & All Older Versions Deprecated](#vtune)

## [Upcoming Training Events](#section4) ##

- [Register Now for May 21 Online Hands-On Training on Variable-Time Jobs!](#vtjobstrain)
- [IDEAS-ECP Webinar on Accelerating Numerical Libraries with Multi-Precision Algorithms on May 13](#ecpwebinar)
- [Registration Open for New User Training on June 10](#newusertrain)
- [CUDA Training Series Continues May 13](#cudatrain)
- [OpenACC Training Series Continues May 28](#openacc)

## [Calls for Participation](#section5) ##

- [Submissions for INFOCOMP 2020 Due May 18](#infocomp)

## [NERSC News](#section6) ##

- [No New "NERSC User News" Podcast this Week](#nopodcast)
- [Come Work for NERSC!](#careers)
- [Upcoming Outages](#outages)
- [About this Email](#about)

## Summary of Upcoming Events and Key Dates <a name="dates"/></a> ##

1. **May 6, 2020**: [NUG SIG Experimental Facilities Meeting](#nugsig)
2. **May 8, 2020**: [NUGEX Election Ends](#nugexelect)
3. **May 13, June 18, and July 21, 2020**: [NVIDIA CUDA Training Series](#cudatrain)
4. **May 13, 2020**: [IDEAS-ECP Monthly Webinar](#ecpwebinar)
5. **May 20, 2020**: Cori monthly maintenance
6. **May 21, 2020**: [Variable-time Jobs training](#vtjobstrain)
7. **May 25, 2020**: Memorial Day Holiday (No Consulting or Account Support)
8. **May 28 and June 23, 2020**: [OpenACC Training](#openacc)
9. **June 10, 2020**: [NERSC New User Training](#newusertrain)
10. **July 3, 2020**: Independence Day Holiday (No Consulting or Account Support)

All times are **Pacific Time zone**.

- **Upcoming Planned Outage Dates** (see [Outages section](#outages) for more details)
    - **May 6, 2020**: HPSS Archive (user)
    - **May 20, 2020**: Cori
    - **May 20, 2020**: ProjectA (retirement)
- **Other Significant Dates**
    - **July 6-12, 2020**: [SciPy2020](https://www.scipy2020.scipy.org/) Conference
    - **August 17, 2020**: [NERSC User Group Meeting](#nugmtg) (save the date)
    - **September 7, 2020**: Labor Day Holiday (No Consulting or Account Support)
    - **November 26-27, 2020**: Thanksgiving Holiday (No Consulting or Account Support)
    - **December 24, 2020-January 1, 2021**: Christmas/New Year Holiday (Limited Consulting or Account Support)

([back to top](#top))

---

## NERSC Status <a name="section1"/></a> ##

### Normal NERSC Operations Continue During California Shelter-in-Place Period <a name="curtailment"/></a>

Berkeley Lab, where NERSC is housed, is located in California, which is under a statewide Public Health Shelter-in-Place Order for an indefinite period. Under this Order, only essential businesses may remain open. NERSC is considered an essential business due to its national importance, so we remain in operation, but with the majority of NERSC staff working remotely and only a skeleton crew onsite.

During this period, you can expect regular online consulting and account support, but no telephone support. All trainings will be held online, or postponed if holding them online is infeasible. Regular maintenance on the systems will continue to be performed while minimizing onsite staff presence, which could result in longer downtimes than would occur under normal circumstances.

Because onsite staffing is so minimal, we request that you refrain from calling NERSC Operations except to report urgent system issues.

([back to top](#top))

---

## This Week's Events & Deadlines <a name="section2"/></a> ##

### Next Meeting for NERSC Users Group Special Interest Group on Experimental Facilities this Wednesday! <a name="nugsig"/></a>

The next meeting of the NERSC Users Group (NUG) Special Interest Group (SIG) on Experimental Facilities will be held this Wednesday, May 6, at 10 am Pacific time. David Lawrence will present the NERSC workflow for the Jefferson Lab GlueX experiment. For more information, including how to connect to the meeting and an archive of all resources from previous meetings, please see <https://www.jlab.org/indico/event/383/>.

The inaugural meeting of the SIG on Experimental Facilities was held last month, with a discussion of plans and goals for the SIG and a presentation on best practices for using NERSC. Outcomes of the meeting included:

- The creation of a web page covering the presented best-practices advice: <https://docs.nersc.gov/science-partners/bestpractices-eod/>. (Please note that NERSC documentation is hosted from an open [git repository](https://gitlab.com/NERSC/nersc.gitlab.io) and contributions are encouraged, so please feel free to share your own best practices!)
- A new channel on the NERSC Users Slack workspace, `#user-facilities`, which anyone interested in the SIG is welcome to join. (See the next entry for information on how to join the NERSC Users Slack workspace.)

The SIG was formed to provide a forum for NERSC users who process data from experimental and observational facilities sponsored by the DOE Office of Science to exchange best practices, knowledge, and tools.
The group will also provide feedback to NERSC staff on how to improve support for these workflows, and provide input on policies that affect this growing workload at NERSC. To get involved, please join the channel and take a look at <https://www.nersc.gov/users/NUG/sig-for-experimental-facility-users/> for regular updates.

### Vote Today for Your NUGEX Representatives; Election Ends Friday! <a name="nugexelect"/></a>

Voting is now open for six open positions on the NERSC User Group Executive Committee (NUGEX). NUGEX is the voice of the user community to NERSC and DOE, shaping and guiding important NERSC user policies. Please follow [this link](https://nersc.servicenowservices.com/nav_to.do?uri=%2Fassessment_take2.do%3Fsysparm_assessable_type%3D3f0e20161b9c14102548ea82f54bcbc5) to vote for up to six candidates (NERSC login required). Hurry, voting ends this Friday, May 8!

([back to top](#top))

---

## Updates at NERSC <a name="section3"/></a> ##

### Please Take the Machine Learning at NERSC Survey! <a name="mlsurvey"/></a>

NERSC is conducting a [survey](https://bit.ly/3d28ZdM) of scientific researchers who are developing and using machine-learning (ML) models for scientific problems. We want to better understand users' current and future ML ecosystem and computational needs for development, training, and deployment of models. Your feedback is critical: it will help us optimize both Cori and Perlmutter for ML capability and performance. Please take the survey at <https://bit.ly/3d28ZdM>.

### Save the Date: NERSC User Group Annual Meeting Set for August 17 <a name="nugmtg"/></a>

Mark your calendars for the annual meeting of the NERSC User Group (NUG), which will be held online on Monday, August 17. More details on the schedule of events are forthcoming.

### Join the NERSC Users Slack Sponsored by NUG Today! <a name="slack"/></a>

Are you interested in discussing NERSC happenings and issues with your fellow NERSC users? Do you want to get or give some advice from a user perspective about something in your workflow? If so, please join the NERSC Users Slack workspace. For more information and a link to join, please see <https://www.nersc.gov/users/NUG/nersc-users-slack/> (login required).

**Please note that this Slack workspace is not an official NERSC staff-supported platform.** While NERSC staff may sometimes join the NERSC Users Slack, the best way to reach NERSC is still through the online help desk at <https://help.nersc.gov>.

### Share Your Research, Images, Movies, and Journal Covers with NERSC! <a name="share"/></a>

NERSC is always looking for stories, images, movies, and journal cover stories related to research conducted at NERSC. Please tell us about your research using the [NERSC Science Highlights Submission Form](https://www.nersc.gov/science/science-highlight-submit/), and we could feature your work in our [news stories](https://www.nersc.gov/news-publications/nersc-news/science-news/), [science highlight presentations](https://www.nersc.gov/science/science-highlights-presentations/), and/or other NERSC presentations and reports.

### VTune Profiler Default Upgraded to 2020 & All Older Versions Deprecated <a name="vtune"/></a>

Last month, the default version of the Intel VTune profiler was upgraded to version 2020. Version 2020 includes significant upgrades in functionality and performance, and renames the executables to simpler, easier-to-understand strings (for example, the `amplxe-cl` command-line executable is now simply `vtune`).

While the older versions (2018.up3 and 2019.up3) remain on Cori, we encourage you to upgrade your workflow to `vtune/2020`. The deprecated versions will no longer collect performance data, but they can continue to be used to read and analyze existing performance data collected with those versions. For more information on using VTune at NERSC, please see the NERSC VTune documentation page at <https://docs.nersc.gov/programming/performance-debugging-tools/vtune/>.

([back to top](#top))

---

## Upcoming Training Events <a name="section4"/></a> ##

### Register Now for May 21 Online Hands-On Training on Variable-Time Jobs! <a name="vtjobstrain"/></a>

NERSC will host a two-hour, online, hands-on user training on variable-time jobs on Thursday, May 21, from 10:00 am to noon (Pacific time). The training begins with a 30-minute presentation, followed by a 90-minute hands-on session.

Variable-time jobs can greatly improve queue turnaround by automatically exploiting opportunities for backfill in Slurm, allowing the job to start sooner than it would otherwise. Applications need to be capable of checkpoint/restart, either through application-level checkpointing or via external checkpointing tools (e.g., DMTCP).

For more information and to register, please see <https://www.nersc.gov/users/training/events/online-hands-on-user-training-on-variable-time-jobs-on-thursday-may-21-2020/>.
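
To give a sense of the mechanics, here is a minimal sketch of a Slurm batch script in the variable-time style. The Slurm options shown (`--time-min`, `--signal`, `--requeue`) are standard, but the QOS name, time values, application name (`my_app`), and checkpoint logic are illustrative placeholders rather than NERSC's recommended template; the training and the NERSC documentation cover the supported workflow in detail.

```bash
#!/bin/bash
# Illustrative variable-time job sketch (placeholder values, not an official NERSC template).
#SBATCH --qos=regular
#SBATCH --nodes=1
#SBATCH --time=48:00:00          # the total walltime you ultimately want
#SBATCH --time-min=2:00:00       # the smallest chunk Slurm may backfill the job into
#SBATCH --signal=B:USR1@300      # warn the batch shell with USR1 five minutes before the limit
#SBATCH --requeue                # allow the job to be requeued after it stops
#SBATCH --open-mode=append       # keep appending to the same stdout/stderr files

# When the early-warning signal arrives, trigger a checkpoint and requeue the job.
requeue_job() {
    echo "Time limit approaching: checkpointing and requeueing"
    touch checkpoint.flag                 # stand-in for your application's checkpoint trigger
    scontrol requeue "${SLURM_JOB_ID}"
}
trap requeue_job USR1

# my_app is a hypothetical application that resumes from its own checkpoint files on restart.
srun ./my_app &
wait
```

Each time the job is requeued it waits in the queue again, but because `--time-min` is short, Slurm can slot it into backfill gaps that a full-length request could never use.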

### IDEAS-ECP Webinar on Accelerating Numerical Libraries with Multi-Precision Algorithms on May 13 <a name="ecpwebinar"/></a>

The May webinar in the Best Practices for HPC Software Developers series is entitled "Accelerating Numerical Software Libraries with Multi-Precision Algorithms" and will take place next Wednesday, May 13, 2020, at 10:00 am Pacific time. This webinar, presented by Hartwig Anzt (Karlsruhe Institute of Technology) and Piotr Luszczek (University of Tennessee), will explain how low-precision, high-speed special function units in modern processors can be leveraged to create fast numerical libraries for production computing. For more information and to register (there is no cost, but registration is required), please see <https://www.exascaleproject.org/event/multprec/>.

### Registration Open for New User Training on June 10 <a name="newusertrain"/></a>

Registration is now open for the NERSC New User Training that will be held online on Wednesday, June 10. The purpose of the training is to provide users new to NERSC with the basics on our computational systems; accounts and allocations; programming environment, tools, and best practices; and data ecosystem. **The training will be presented online only**, using Zoom technology. Please see <https://www.nersc.gov/users/training/events/new-user-training-june-10-2020/> for the agenda and to register.

### CUDA Training Series Continues May 13 <a name="cudatrain"/></a>

NVIDIA is presenting a 9-part CUDA training series intended to help new and existing GPU programmers understand the main concepts of the CUDA platform and its programming model. Each part will include a 1-hour presentation and example exercises. The exercises are meant to reinforce the material from the presentation and can be completed during a 1-hour hands-on session following each lecture (via teleconference) or on your own. **This event will be held exclusively online.**

The fifth training in the series covers CUDA atomics, reductions, and warp shuffle. These operations can help in cases where exposing parallelism is not immediately obvious because many threads contend for the same data. Following the presentation will be a hands-on session where participants can complete example exercises meant to reinforce the presented concepts.

**Registration for part 5 closes this Wednesday, May 6.** For more information (including registration information), please see <https://www.nersc.gov/users/training/events/cuda-atomics-reductions-and-warp-shuffle-part-5-of-9-cuda-training-series/>.

Other scheduled dates in the series:

- June 18: Part 6, [Managed Memory](https://www.nersc.gov/users/training/events/managed-memory-part-6-of-9-cuda-training-series-june-18-2020/)
- July 21: Part 7, [CUDA Concurrency](https://www.nersc.gov/users/training/events/cuda-concurrency-part-7-of-9-cuda-training-series-july-21-2020/)
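
For a flavor of the part 5 topics, below is a small, self-contained CUDA sketch that sums an array by combining a warp-shuffle reduction within each warp with a single `atomicAdd` per warp. It is an illustration of the general technique, not material from the course, and the kernel name, launch configuration, and array size are arbitrary choices.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: sum an array using warp shuffle for the intra-warp
// reduction and one atomicAdd per warp for the final accumulation.
__global__ void warp_shuffle_sum(const float *in, float *out, int n) {
    // Grid-stride loop: each thread accumulates a private partial sum.
    float val = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x) {
        val += in[i];
    }
    // Reduce within the warp: each step halves the number of live partial sums.
    for (int offset = warpSize / 2; offset > 0; offset /= 2) {
        val += __shfl_down_sync(0xffffffff, val, offset);
    }
    // Lane 0 holds the warp's total, so only one thread per warp touches the atomic.
    if ((threadIdx.x % warpSize) == 0) {
        atomicAdd(out, val);
    }
}

int main() {
    const int n = 1 << 20;  // 1M elements (arbitrary size)
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;
    *out = 0.0f;

    warp_shuffle_sum<<<256, 256>>>(in, out, n);
    cudaDeviceSynchronize();

    std::printf("sum = %.1f (expected %d)\n", *out, n);  // prints 1048576.0
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

Performing one `atomicAdd` per warp rather than one per thread keeps contention on the shared accumulator low, which is the usual motivation for mixing warp shuffles with atomics.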

### OpenACC Training Series Continues May 28 <a name="openacc"/></a>

OpenACC is a directive-based approach to parallel programming for heterogeneous architectures, in which developers specify regions of code (written in C, C++, or Fortran) to be offloaded from a host CPU to a GPU. This approach is meant to reduce the amount of programming effort required of developers relative to low-level models such as CUDA.

NVIDIA is presenting a [3-part OpenACC training series](https://www.olcf.ornl.gov/openacc-training-series/) intended to help new and existing GPU programmers learn to use the OpenACC API. Each part will include a one-hour presentation and example exercises. The exercises are meant to reinforce the material from the presentation and can be completed during a one-hour hands-on session following each lecture (via teleconference) or on your own. **The May training will be online only.**

The May training covers data management in OpenACC. For more information and to register, please see <https://www.nersc.gov/users/training/events/openacc-data-management-part-2-of-3-openacc-training-series-may-28-2020/>.

Remaining in the series:

- [Loop Optimizations with OpenACC (June 23)](https://www.nersc.gov/users/training/events/loop-optimizations-with-openacc-part-3-of-3-openacc-training-series-june-23-2020/)

Previously held sessions:

- [Introduction to OpenACC (April 17)](https://www.nersc.gov/users/training/events/introduction-to-openacc-part-1-of-3-openacc-training-series-april-17-2020/)
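
To make the directive-based model concrete, here is a minimal sketch of a saxpy-style loop offloaded with OpenACC, wrapped in an explicit `data` region of the kind the data-management session addresses. This is our own illustration under simple assumptions (arbitrary array names and sizes), not material from the course.

```cpp
#include <cstdio>

// Illustrative only: offload a saxpy-style loop with OpenACC, using an
// explicit data region so the arrays move between host and GPU only once.
int main() {
    const int n = 1 << 20;
    float *x = new float[n];
    float *y = new float[n];
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // copyin: host -> device at region entry only.
    // copy:   host -> device at entry and device -> host at exit,
    //         so the updated y comes back to the host automatically.
    #pragma acc data copyin(x[0:n]) copy(y[0:n])
    {
        #pragma acc parallel loop present(x[0:n], y[0:n])
        for (int i = 0; i < n; ++i) {
            y[i] = 2.0f * x[i] + y[i];
        }
    }

    std::printf("y[0] = %.1f (expected 4.0)\n", y[0]);
    delete[] x;
    delete[] y;
    return 0;
}
```

An OpenACC-capable compiler (for example, `nvc++ -acc` from the NVIDIA HPC SDK) builds this; without the enclosing `data` region, the compiler would be free to move `x` and `y` around every compute region, which is exactly the kind of traffic the data-management techniques aim to eliminate.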

([back to top](#top))

---

## Calls for Participation <a name="section5"/></a> ##

### Submissions for INFOCOMP 2020 Due May 18 <a name="infocomp"/></a>

INFOCOMP 2020, The Tenth International Conference on Advanced Communications and Computation, is soliciting academic, research, and industrial contributions presenting research and practical results, position papers addressing the pros and cons of specific proposals, survey papers, and panel proposals. The conference will be held in **Lisbon, Portugal** from September 27 to October 1, 2020 -- please note the location change. The submission deadline is **May 18, 2020**.

Please see the [INFOCOMP website](http://www.iaria.org/conferences2020/INFOCOMP20.html) for information about the conference and the [submission page](http://www.iaria.org/conferences2020/SubmitINFOCOMP20.html) for submission details.

([back to top](#top))

---

## NERSC News <a name="section6"/></a> ##

### No New "NERSC User News" Podcast this Week <a name="nopodcast"/></a>

There will be no new episode of the "NERSC User News" podcast this week as we try to adapt to social distancing. We encourage you to instead enjoy some of our most recent episodes and greatest hits:

- [RAPIDS](https://anchor.fm/nersc-news/episodes/The-RAPIDS-Library-Nick-Becker-Interview-eb0h5a): In this interview with NVIDIA RAPIDS senior engineer Nick Becker, learn about the RAPIDS library, how it can accelerate your data science, and how to use it.
- [IO Middleware](https://anchor.fm/nersc-news/episodes/IO-Middleware-Quincey-Koziol-Interview-eaf5r3/a-a1c7plt): NERSC Principal Data Architect Quincey Koziol talks about I/O middleware: what it is, how you can benefit from using it in your code, and how it is evolving to support data-intensive computing and future supercomputing architectures.
- [NERSC 2019 in Review and Looking Forward](https://anchor.fm/nersc-news/episodes/NERSC-2019-in-Review-and-Looking-Forward--Sudip-Dosanjh-Interview-ea5d5t/a-a1a6cpd): NERSC director Sudip Dosanjh reflects upon the accomplishments of NERSC and its users in 2019, and what he's looking forward to in 2020 at NERSC.
- [Community File System](https://anchor.fm/nersc-news/episodes/Community-File-System-Kristy-Kallback-Rose--Greg-Butler--and-Ravi-Cheema-Interview-e9d88q/a-a149hf5): NERSC Storage Systems Group staff Kristy Kallback-Rose, Greg Butler, and Ravi Cheema talk about the new Community File System and the migration timeline.
- [Monitoring System Performance](https://anchor.fm/nersc-news/episodes/Monitoring-System-Performance-Eric-Roman-Interview-e5g20m/a-aobd6p): NERSC Computational Systems Group's Eric Roman discusses how NERSC monitors system performance, what we're doing with the data right now, and how we plan to use it in the future.
- [The Superfacility Concept](https://anchor.fm/nersc-news/episodes/The-Superfacility-Concept-Debbie-Bard-Interview-e5a5th/a-amoglk): Join NERSC Data Science Engagement Group Lead Debbie Bard in a discussion about the concept of the superfacility: what it means, how facilities interact, and what NERSC and partner experimental facilities are doing to prepare for the future of data-intensive science.
- [Optimizing I/O in Applications](https://anchor.fm/nersc-news/episodes/Optimizing-IO-in-Applications-Jialin-Liu-Interview-e50nvm): Listen to an I/O optimization success story in this interview with NERSC Data and Analytics Services Group's Jialin Liu.
- [NESAP Postdocs](https://anchor.fm/nersc-news/episodes/NESAP-Postdocs--Laurie-Stephey-Interview-e2lsg0): Learn from NESAP postdoc Laurie Stephey what it's like working as a postdoc in the NESAP program at NERSC.

The NERSC User News podcast, produced by the NERSC User Engagement Group, is available at <https://anchor.fm/nersc-news> and syndicated through iTunes, Google Play, Spotify, and more. Please give it a listen and let us know what you think, via a ticket at <https://help.nersc.gov>.

### Come Work for NERSC! <a name="careers"/></a>

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- **NEW** [HPC Storage Infrastructure Engineer](https://jobs.lbl.gov/jobs/hpc-storage-infrastructure-engineer-2697): Support and optimize hundreds of petabytes of parallel storage that is served to thousands of clients at terabytes per second.
- [Software/DevOps/API Engineer](https://jobs.lbl.gov/jobs/software-devops-api-engineer-2611): Work on the system that tracks and manages resource usage, and help build an API to automate the use of supercomputing resources.
- [Storage Systems Group Leader](https://jobs.lbl.gov/jobs/storage-systems-group-leader-2596): Lead the group responsible for supporting NERSC's large-scale parallel file systems and archival storage systems, with an eye towards balancing performance, stability, and usability for NERSC's more than 7,000 users.
- [HPC Network Engineer](https://jobs.lbl.gov/jobs/hpc-network-engineer-2580): Be part of the team that shares in the design, implementation, and ongoing maintenance of NERSC's high-performance networks.
- [NESAP Engineer](https://jobs.lbl.gov/jobs/nesap-engineer-2476): Work as part of a multidisciplinary team composed of computational and domain scientists working together to produce mission-relevant science that pushes the limits of HPC in simulation, data, or learning.
- [HPC Architecture and Performance Engineer](https://jobs.lbl.gov/jobs/hpc-architecture-and-performance-engineer-2427): Evaluate global technology trends and combine them with the needs of NERSC users, with the goal of architecting the supercomputing ecosystem of the future.
- [Application Performance Specialists (for ECP)](https://jobs.lbl.gov/jobs/application-performance-specialist-1010): Help prepare large-scale scientific codes for next-generation high-performance computing (HPC) systems.
- [NESAP for Simulations Postdoctoral Fellow](https://jobs.lbl.gov/jobs/nesap-for-simulations-postdoctoral-fellow-2004): Work in multidisciplinary teams to transition simulation codes to NERSC's new Perlmutter supercomputer and produce mission-relevant science that truly pushes the limits of high-end computing.
- [NESAP for Data Postdoctoral Fellow](https://jobs.lbl.gov/jobs/nesap-for-data-postdoctoral-fellow-2412): Work in multidisciplinary teams to transition data-analysis codes to NERSC's new Perlmutter supercomputer and produce mission-relevant science that truly pushes the limits of high-end computing.
- [NESAP for Learning Postdoctoral Fellow](https://jobs.lbl.gov/jobs/nesap-for-learning-postdoctoral-fellow-1964): Work in multidisciplinary teams to develop and implement cutting-edge machine learning/deep learning solutions in codes that will run on NERSC's new Perlmutter supercomputer and produce mission-relevant science that truly pushes the limits of high-end computing.
- [HPC Storage Systems Analyst](https://jobs.lbl.gov/jobs/hpc-storage-systems-analyst-1851): Help architect, deploy, and manage NERSC's storage hierarchy (including Burst Buffer, Lustre, and Spectrum Scale filesystems, and HPSS archives).

(**Note:** We have received reports that the URLs for the jobs change without notice, so if you encounter a page indicating that a job is closed or not found, please check by navigating to <https://jobs.lbl.gov/>, scrolling down to the 9th picture, which says "All Jobs", and clicking on it. Then, under "Business," select "View More" and scroll down until you find the checkbox for "NE-NERSC" and select it.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

### Upcoming Outages <a name="outages"/></a>

- **Cori**
    - 05/20/20 5:00-19:00 PDT, Scheduled Maintenance
- **HPSS Archive (User)**
    - 05/06/20 9:00-13:00 PDT, Scheduled Maintenance *Firmware updates*
- **ProjectA**
    - 05/20/20 5:00-5:00 PDT, Retired *ProjectA is being retired; all data has been copied to the Community File System (CFS)*

Visit <http://my.nersc.gov/> for the latest status and outage information.

### About this Email <a name="about"/></a>

You are receiving this email because you are the owner of an active account at NERSC.
This mailing list is automatically populated with the email addresses associated with active NERSC accounts. In order to remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.
