
Email Announcement Archive

[Users] NERSC Weekly Email, Week of February 10, 2020

Author: Rebecca Hartman-Baker <rjhartmanbaker_at_lbl.gov>
Date: 2020-02-10 14:15:00

# NERSC Weekly Email, Week of February 10, 2020 #

## Contents ##

- [Summary of Upcoming Events and Key Dates](#dates)
- [Call for Applications: Deep Learning for Science School](#dl4sci)
- [Women in HPC Summit Student Program Applications Due February 21!](#whpcstudent)
- [President's Day Holiday Next Monday; No Consulting or Account Support](#presday)
- [Sitewide Outage for Power Upgrade Next Weekend, February 21-24](#powwork)
- [Submissions to SciPy2020 Conference Close Tomorrow!](#scipy2020)
- [Training: Introduction to GPUs, Feb 28](#gputrain)
- [Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Feb 11](#spinup)
- [NERSC User Survey for AY 2019 Begins this Week!](#usersurvey)
- [Argonne Training Program on Extreme-Scale Computing Now Accepting Applications!](#atpesc)
- [CUDA Training Series Resumes Next Wednesday](#cudatrain)
- [Need Help? Check out NERSC Documentation or Send in a Ticket!](#gettinghelp)
- [User Dotfile Changes Planned for Upcoming Maintenance](#dotfiles)
- [This Week on "NERSC User News" Podcast: Slurm](#podcast)
- [Come Work for NERSC!](#careers)
- [Upcoming Outages](#outages)
- [About this Email](#about)

## Summary of Upcoming Events and Key Dates <a name="dates"/></a> ##

          February 2020
    Su Mo Tu We Th Fr Sa
                       1
     2  3  4  5  6  7  8         11 Feb        SciPy2020 Submissions Due [1]
     9 10 *11**12* 13 14 15      12 Feb        DL4Sci School Apps Open [2]
    16 *17* 18 *19* 20 *21*-22-  17 Feb        Presidents Day Holiday [3]
    -23--24-*25* 26 27 *28* 29   19 Feb        CUDA Shared Mem Training [4]
                                 21 Feb        WHPC Student Program Deadline [5]
                                 21-24 Feb     Building Power Upgrade
                                               (all systems offline) [6]
                                 25 Feb        DL4Sci School Apps Due [7]
                                 28 Feb        Intro to GPU [8]

          March 2020
    Su Mo Tu We Th Fr Sa
     1 *2* 3  4  5  6  7         2 Mar         ATPESC Applications Due [9]
     8  9 10 11 12 13 14         18 Mar        CUDA Optimization Training [4]
    15 16 17 *18* 19 20 21
    22 23 24 25 26 27 28
    29 30 31

          April 2020
    Su Mo Tu We Th Fr Sa
              1  2  3  4         13 Apr        SpinUp Workshop [10]
     5  6  7  8  9 10 11
    12 *13* 14 15 16 17 18
    19 20 21 22 23 24 25
    26 27 28 29 30

Notes:

1. **February 11, 2020**: [SciPy2020 submissions due](#scipy2020)
2. **February 12, 2020**: [Deep Learning for Science School Applications Open](#dl4sci)
3. **February 17, 2020**: Presidents Day Holiday (No Consulting or Account Support)
4. **February 19 & March 18, 2020**: [NVIDIA CUDA Training Series](#cudatrain)
5. **February 21, 2020**: [Women in HPC Summit Student Program Deadline](#whpcstudent)
6. **February 21-24, 2020**: [Building Power Upgrade (all systems offline)](#powwork)
7. **February 25, 2020**: [Deep Learning for Science School Applications Due](#dl4sci)
8. **February 28, 2020**: [Intro to GPU training](#gputrain)
9. **March 2, 2020**: [ATPESC applications due](#atpesc)
10. **April 13, 2020**: [SpinUp workshop](#spinup)
11. All times are **Pacific Time zone**
### Other Significant Dates ###

- **April 7-9, 2020**: [Performance, Portability, and Productivity in HPC Forum](https://p3hpcforum2020.alcf.anl.gov/)
- **April 16 & May 13, 2020**: Additional CUDA Training dates
- **April 29-May 1, 2020**: [Women in HPC Summit](https://womeninhpc.org/events/summit-2020)
- **May 25, 2020**: Memorial Day Holiday (No Consulting or Account Support)
- **July 4, 2020**: Independence Day Holiday (No Consulting or Account Support)
- **July 6-12, 2020**: [SciPy2020](https://www.scipy2020.scipy.org/) Conference
- **July 20-24, 2020**: [Deep Learning for Science School](https://dl4sci-school.lbl.gov/)
- **September 7, 2020**: Labor Day Holiday (No Consulting or Account Support)
- **November 26-27, 2020**: Thanksgiving Holiday (No Consulting or Account Support)
- **December 24, 2020-January 1, 2021**: Christmas/New Year Holiday (Limited Consulting or Account Support)

## Call for Applications: Deep Learning for Science School <a name="dl4sci"/></a> ##

Are you interested in applying deep learning to your science problem? Would you like to learn more about the capabilities of deep learning and how to use it in high-performance computing environments, and to connect with other scientists working on similar problems?

NERSC, in collaboration with the Computing Sciences area at Berkeley Lab, is hosting the second Deep Learning for Science School on July 20-24, 2020. The school will bring together researchers and engineers for lectures and tutorials on state-of-the-art deep learning methods and best practices for running deep learning on high-performance computing systems, and will provide opportunities to connect with fellow scientists who share an interest in how the latest advances in learning algorithms can be applied to their science.

For more information, including how to apply, please visit <https://dl4sci-school.lbl.gov>. Applications open this Wednesday, February 12, and close on February 25.

## Women in HPC Summit Student Program Applications Due February 21! <a name="whpcstudent"/></a> ##

Applications are now open for the Women in HPC Summit Student Program. The program is aimed at students of all levels who are interested in high-performance computing. All students are required to submit a poster on an HPC topic they work on; posters may present new or existing work.

Application materials include two documents:

- A student-prepared document containing a brief bio (including HPC background), a statement about why you wish to attend the summit, your career aspirations, and a 1-2 page extended abstract describing your poster contents.
- A letter of support from an academic advisor supporting your attendance.

For more information on the agenda for the student track, please see the [WHPC Summit](https://womeninhpc.org/events/summit-2020#!/submissions) website (scroll down to "Student Program"); to submit, please go to the [submissions](https://easychair.org/conferences/?conf=whpcsummit2020) website. Applications close on February 21!

## President's Day Holiday Next Monday; No Consulting or Account Support <a name="presday"/></a> ##

Consulting and account support will be unavailable next Monday, February 17, due to the Berkeley Lab-observed President's Day holiday. Regular consulting and account support services will resume on Tuesday, February 18. Operations staff are available for urgent queries via 1-800-66-NERSC, Option 1, at all times.
## Sitewide Outage for Power Upgrade Next Weekend, February 21-24 <a name="powwork"/></a> ##

Over the weekend of February 22-23, there will be a facilities upgrade to the systems that supply power to NERSC, to increase the amount of power available in the machine room. This work is required to provide enough power for both Cori and Perlmutter (scheduled to arrive at the very end of this year). **During the outage, no NERSC systems will be available, including all computing, file systems, and auxiliary services.**

In preparation for the power being turned off so that the electrical work can be performed, NERSC will begin shutting down computer systems at 7:00 am (Pacific time) next Friday, February 21. On Cori, a system reservation will be in place to prevent jobs from running; any errant jobs will be killed at that time, and users will be logged off the machine. All systems will be offline by 5:00 pm.

After the electrical work is complete, we will begin powering the center back up. Work on Cori will begin at 7:00 am on Monday, February 24, and a maintenance will be performed at that time. We expect to have all systems returned to users before midnight.

## Submissions to SciPy2020 Conference Close Tomorrow! <a name="scipy2020"/></a> ##

Are you running large Python jobs at NERSC? Have you worked hard to make your Python code run fast? Are you exploring options for running your Python code on Perlmutter's GPUs? If so, we invite you and your colleagues to consider submitting to the new [High-Performance Python track](https://www.scipy2020.scipy.org/talk-poster-presentations) at [SciPy2020](https://www.scipy2020.scipy.org/), which will be held July 6-12 in Austin, Texas.

Submissions for talks and posters are being accepted through **tomorrow, February 11.** Successful talk submissions are eligible to submit a paper to be published in the conference proceedings. We hope to see you in Austin!

## Training: Introduction to GPUs, Feb 28 <a name="gputrain"/></a> ##

NERSC will host a one-day "Introduction to GPU" training event for users on Friday, February 28, 2020. Topics covered will include:

- Why GPUs
- GPU architecture
- DOE roadmap
- Types of applications likely to benefit from GPU acceleration
- How to use Cori GPU
    - where to find documentation
    - available compilers
    - how to compile and run
- GPU libraries
- Programming for GPUs using directives (OpenACC and OpenMP)
- Debugging and profiling on GPUs

Attendees may participate in person or remotely, and will be given Cori GPU access to run and profile some simple GPU codes.

For more information and to register, see <https://www.nersc.gov/users/training/events/introduction-to-gpu-february-28-2020/>.
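If you would like to try the Cori GPU nodes before the training, the session below is a minimal sketch of a compile-and-run workflow there. It is an illustration based on the Cori GPU setup, not material from the event; the module names and Slurm flags are assumptions, and the account name `m0000` is a placeholder.

    # Sketch: build and run a simple CUDA code on the Cori GPU nodes
    # (module names and flags are assumptions; m0000 is a placeholder account).
    module load cgpu                          # gain access to the GPU software environment
    module load cuda                          # CUDA toolkit, providing nvcc
    nvcc -O2 -o hello hello.cu                # compile a simple CUDA source file
    salloc -C gpu -N 1 -G 1 -t 30 -A m0000    # request one node with one GPU, interactively
    srun ./hello                              # run on the allocated GPU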
## Learn to Use Spin to Build Science Gateways at NERSC: Next SpinUp Workshop Feb 11 <a name="spinup"/></a> ##

Spin is a new service platform at NERSC based on Docker container technology. It can be used to deploy science gateways, workflow managers, databases, and all sorts of other services that can access NERSC systems and storage on the back end. To learn more about how Spin works and what it can do, please listen to the NERSC User News podcast on Spin: <https://anchor.fm/nersc-news/episodes/Spin--Interview-with-Cory-Snavely-and-Val-Hendrix-e1pa7p>.

Attend an upcoming SpinUp workshop to learn to use Spin for your own science gateway projects! **Applications for sessions that begin [Monday, April 13](https://www.nersc.gov/users/data-analytics/spin/#toc-anchor-3) are now open.**

SpinUp is hands-on and interactive, so space is limited. Participants will attend two instructional sessions and a hack-a-thon to learn about the platform, create running services, and learn maintenance and troubleshooting techniques. Local and remote participants are welcome.

See a video of Spin in action, check the workshop schedule, and apply at the [NERSC Spin](https://www.nersc.gov/users/data-analytics/spin/) page.

## NERSC User Survey for AY 2019 Begins this Week! <a name="usersurvey"/></a> ##

Users who were active in 2019 should have received a note today from NBRI, the company conducting the upcoming annual NERSC User Survey for AY 2019, which opens on Wednesday. You will receive a personalized link to the survey on Wednesday.

The survey helps NERSC judge the quality of our services and identify areas we could improve, and is important for our annual reporting to DOE, so please take some time to complete the survey when it arrives.

## Argonne Training Program on Extreme-Scale Computing Now Accepting Applications! <a name="atpesc"/></a> ##

Are you a doctoral student, postdoc, or computational scientist looking for advanced training on the key skills, approaches, and tools needed to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future? If so, consider applying to the Argonne Training Program on Extreme-Scale Computing (ATPESC).

The core of the two-week program focuses on programming methodologies that are effective across a variety of supercomputers and applicable to exascale systems. Additional topics include computer architectures, mathematical models and numerical algorithms, approaches to building community codes for HPC systems, and methodologies and tools relevant for Big Data applications. There is no cost to attend.

For more information and to apply, please see <https://extremecomputingtraining.anl.gov/>. The application deadline is March 2, 2020.

## CUDA Training Series Resumes Next Wednesday <a name="cudatrain"/></a> ##

NVIDIA is presenting a 9-part CUDA training series intended to help new and existing GPU programmers understand the main concepts of the CUDA platform and its programming model. Each part will include a 1-hour presentation and example exercises. The exercises are meant to reinforce the material from the presentation and can be completed during a 1-hour hands-on session following each lecture (for in-person participants) or on your own (for remote participants). OLCF and NERSC will both be holding in-person events for each part of the series.

The second training in the series, on CUDA shared memory, will take place next Wednesday, February 19, 2020, from 10 am to 12 pm (Pacific time). This training will introduce users to CUDA shared memory and how to use it to manage data caches, speed up high-performance cooperative parallel algorithms, and facilitate global memory coalescing in cases where it would otherwise not be possible. Following the presentation will be a hands-on session where in-person participants can complete example exercises meant to reinforce the presented concepts.

For more information (including registration information), please see <https://www.nersc.gov/users/training/events/cuda-shared-memory-part-2-of-9-cuda-training-series/>.

Other scheduled dates in the series:

- March 18: [3. Fundamental CUDA Optimization (Part 1)](https://www.nersc.gov/users/training/events/fundamental-cuda-optimization-part-1-part-3-of-9-cuda-training-series/)
- April 16: [4. Fundamental CUDA Optimization (Part 2)](https://www.nersc.gov/users/training/events/fundamental-cuda-optimization-part-2-part-4-of-9-cuda-training-series/)
- May 13: [5. CUDA Atomics, Reductions, and Warp Shuffle](https://www.nersc.gov/users/training/events/cuda-atomics-reductions-and-warp-shuffle-part-5-of-9-cuda-training-series/)
## Need Help? Check out NERSC Documentation or Send in a Ticket! <a name="gettinghelp"/></a> ##

Are you confused about setting up your MFA token? Is there something not quite right with your job script that causes the job submission filter to reject it? Are you struggling to understand the performance of your code on the KNL nodes? There are many ways that you can get help with issues at NERSC:

- First, we recommend the NERSC documentation (<https://docs.nersc.gov/>). The answers to simpler issues, such as setting up your MFA token using Google Authenticator, can usually be found there. (Often the answers to more complex issues are in the documentation too!)
- For more complicated issues, or issues that leave you unable to work, submitting a ticket is a good way to get help fast. NERSC's consulting team will respond within four business hours (8 am - 5 pm Pacific, Monday through Friday, except holidays). To submit a ticket, log in to <https://help.nersc.gov> (or, if the issue is stopping you from logging in, send an email to <accounts@nersc.gov>).
- Sometimes a colleague can figure out the issue faster than NERSC can, because they already understand your workflow. It's possible that they know what flag you need to add to your Makefile for better performance, or how to set up your job submission script just so.

## User Dotfile Changes Planned for Upcoming Maintenance <a name="dotfiles"/></a> ##

Currently, by default, the `.bashrc`/`.bash_profile`/`.cshrc`/`.login` files in your `$HOME` are symlinks to read-only, NERSC-supplied startup files. You may have customized your starting environment by adding `.bashrc.ext`/`.bash_profile.ext`/`.cshrc.ext`/`.login.ext` files.

To reduce some shell startup overhead, and to bring NERSC in line with most other HPC centers, we will migrate away from this arrangement during the scheduled maintenance that begins February 21. After the change is made, you will be able to edit `.bashrc` (etc.) directly. During the change, your `.bashrc` (etc.), which is currently a symlink, will be replaced by a template `.bashrc` (etc.) that simply sources your `.bashrc.ext` (etc.). For most users this should have no other impact, but some non-default environments and workflows might see changes to their environment.

You can test the changes now by using the `dotmgr` command and logging in to cori12 or dtn12, which already have the new configuration:

    dotmgr -l    # list my current dotfiles
    dotmgr -s    # save my current dotfiles, and print the location
    dotmgr -e    # replace my existing dotfiles with the new arrangement

You can then log in to cori12 and/or dtn12 to check whether this affected your environment. Check that things still look the same and that your aliases still work:

    ssh cori12

You can then return your dotfiles to the current configuration with:

    dotmgr -r <directory-that-the-save-step-returned>

Note that `dotmgr -e` and `dotmgr -r` **don't affect your current environment** -- they affect the contents of your dotfiles. For the changes to take effect, you must log out and log back in.

For detailed help, please see <https://docs.nersc.gov/environment/>. Please let us know of any problems you encounter by filing a ticket at <https://help.nersc.gov>.
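To make the new arrangement concrete: based on the description above, the template that replaces the `.bashrc` symlink should look roughly like the minimal sketch below. This is an illustration of the behavior described in this announcement, not the exact file NERSC will install.

    # ~/.bashrc -- sketch of the post-migration template (illustrative only).
    # After the migration you can edit this file directly; initially it just
    # sources your existing .bashrc.ext, so your environment is unchanged.
    if [ -f "$HOME/.bashrc.ext" ]; then
        . "$HOME/.bashrc.ext"
    fi

The `.bash_profile`, `.cshrc`, and `.login` templates would follow the same pattern for their respective `.ext` files.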
## This Week on "NERSC User News" Podcast: Slurm <a name="podcast"/></a> ##

In this week's NERSC User News podcast, learn how Slurm works from NERSC HPC Systems Engineer Chris Samuel: the life cycle of a job submitted to the batch queue, how Slurm decides which jobs should run when, and tips and tricks for making Slurm work for you.

The NERSC User News podcast, produced by the NERSC User Engagement Group, is available at <https://anchor.fm/nersc-news> and syndicated through iTunes, Google Play, Spotify, and more. A direct link to this week's podcast is <https://anchor.fm/nersc-news/episodes/How-Slurm-Works-Chris-Samuel-Interview-eaoevp/a-a1ecvrg>.

Please give it a listen, and let us know what you think via a ticket at <https://help.nersc.gov>.
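For a hands-on companion to the episode, here is a minimal batch script illustrating the job life cycle the podcast discusses: the script enters the queue via `sbatch`, Slurm decides when it runs, and `srun` launches it. The flags are plausible settings for Cori's KNL nodes, and `m0000` is a placeholder account; none of this comes from the episode itself.

    #!/bin/bash
    # Minimal Cori batch script (illustrative; m0000 is a placeholder account).
    #SBATCH --qos=debug          # the QOS/queue in which the job waits
    #SBATCH --constraint=knl     # request a KNL node
    #SBATCH --nodes=1
    #SBATCH --time=00:10:00
    #SBATCH --account=m0000
    srun -n 68 ./my_app          # launch one task per KNL core

Submit it with `sbatch job.sh`, then watch it move through the pending and running states with `squeue -u $USER`.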
## Come Work for NERSC! <a name="careers"/></a> ##

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- [HPC Consultant/Software Integration Specialist](https://jobs.lbl.gov/jobs/hpc-consultant-software-integration-specialist-2491): Provide courteous and expert advice to NERSC users, and help develop continuous integration at NERSC.
- [NESAP Engineer](https://jobs.lbl.gov/jobs/nesap-engineer-2476): Work as part of a multidisciplinary team composed of computational and domain scientists working together to produce mission-relevant science that pushes the limits of HPC in simulation, data, or learning.
- [HPC Architecture and Performance Engineer](https://jobs.lbl.gov/jobs/hpc-architecture-and-performance-engineer-2427): Evaluate global technology trends and combine them with the needs of NERSC users with the goal of architecting the supercomputing ecosystem of the future.
- Application Performance Specialists (for [APG](https://jobs.lbl.gov/jobs/application-performance-specialist-2312) and [ECP](https://jobs.lbl.gov/jobs/application-performance-specialist-1010)): Help prepare large-scale scientific codes for next-generation high performance computing (HPC) systems.
- [High Performance Computing Security Developer](https://jobs.lbl.gov/jobs/high-performance-computing-security-developer-2295): Protect Exascale-class systems in an open science environment and enhance network and host intrusion prevention as we migrate from 100G to Terabit networks.
- [NESAP for Simulations Postdoctoral Fellow](https://jobs.lbl.gov/jobs/nesap-for-simulations-postdoctoral-fellow-2004): Work in multidisciplinary teams to transition simulation codes to NERSC's new Perlmutter supercomputer and produce mission-relevant science that truly pushes the limits of high-end computing.
- [NESAP for Data Postdoctoral Fellow](https://jobs.lbl.gov/jobs/nesap-for-data-postdoctoral-fellow-2412): Work in multidisciplinary teams to transition data-analysis codes to NERSC's new Perlmutter supercomputer and produce mission-relevant science that truly pushes the limits of high-end computing.
- [NESAP for Learning Postdoctoral Fellow](https://jobs.lbl.gov/jobs/nesap-for-learning-postdoctoral-fellow-1964): Work in multidisciplinary teams to develop and implement cutting-edge machine learning/deep learning solutions in codes that will run on NERSC's new Perlmutter supercomputer and produce mission-relevant science that truly pushes the limits of high-end computing.
- [HPC Storage Systems Analyst](https://jobs.lbl.gov/jobs/hpc-storage-systems-analyst-1851): Help architect, deploy, and manage NERSC's storage hierarchy (including Burst Buffer, Lustre, and Spectrum Scale filesystems, and HPSS archives).

(**Note:** We have received reports that the URLs for the jobs change without notice, so if you encounter a page indicating that a job is closed or not found, please check by navigating to <https://jobs.lbl.gov/>, scrolling down to the ninth picture, labeled "All Jobs", and clicking on it. Then, under "Business," select "View More" and scroll down until you find the checkbox for "NE-NERSC" and select it.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

## Upcoming Outages <a name="outages"/></a> ##

- **NERSC Center**
    - **02/21/20 7:00 - 02/25/20 23:59 PST, Scheduled Maintenance**: NERSC will have a planned facility outage from Friday, Feb 21, 2020 at 07:00 through Tuesday, Feb 25 at 23:59. This is part of the facility upgrade to prepare for Perlmutter. This is a complete shutdown of the facility; no NERSC resources will be available during this time.

Visit <http://www.nersc.gov/users/live-status/> for the latest status and outage information.

## About this Email <a name="about"/></a> ##

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. In order to remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.

_______________________________________________
Users mailing list
Users@nersc.gov
