
Email Announcement Archive

[Users] NERSC Weekly Email, Week of January 6, 2020

Author: Rebecca Hartman-Baker <rjhartmanbaker_at_lbl.gov>
Date: 2020-01-06 15:49:56

# NERSC Weekly Email, Week of January 6, 2020 #

## Contents ##

- [Summary of Upcoming Events and Key Dates](#dates)
- [Happy New Year from NERSC!](#happyny)
- [New Allocation Year Begins January 14, 2020](#neway)
- [Need Help Switching to Cori KNL Nodes? KNL Office Hours on Fridays All Month!](#knlofficehrs)
- [CUDA Training Series Begins January 15](#cudatrain)
- [Women in HPC Summit Call for Submissions; Paper Submissions Deadline this Friday!](#whpc)
- [New Community File System to Replace Project File System in New Allocation Year](#cfs)
- [Will Your Science Gateway Be Affected by the Community File System Sync?](#scigateway)
- [New Programming Environments Now Available on Cori](#corisw)
- [Dynamic Linking to Become Default on Jan 14, 2020; Test Now!](#dynamic)
- [User Dotfile Changes Planned for February 2020](#dotfiles)
- [Call For Papers: Performance, Portability, and Productivity in HPC Forum (P3HPC)](#p3hpc)
- [NERSC Will Support Only Python3 in New Allocation Year](#python2)
- [No New "NERSC User News" Podcast this Week](#nopodcast)
- [Come Work for NERSC!](#careers)
- [Upcoming Outages](#outages)
- [About this Email](#about)

## Summary of Upcoming Events and Key Dates <a name="dates"/></a> ##

```
        January 2020
Su Mo Tu We Th Fr Sa
           1  2  3  4
 5  6  7  8  9 *10* 11      10 Jan  KNL Office Hours [1]
                            10 Jan  WHPC Paper Submissions due [2]
12 13 *14**15* 16 *17* 18   14 Jan  AY2020 Begins [3]
                            15 Jan  CUDA C++ Training [4]
                            17 Jan  KNL Office Hours [1]
19 *20* 21 22 23 *24* 25    20 Jan  MLK Holiday [5]
                            24 Jan  P3HPC Submissions due [6]
                            24 Jan  KNL Office Hours [1]
26 27 28 29 30 *31*         31 Jan  WHPC Poster Submissions due [7]
                            31 Jan  KNL Office Hours [1]

        February 2020
Su Mo Tu We Th Fr Sa
                   1
 2 *3*  4  5  6  7  8        3 Feb  ALCC Due Date [8]
 9 10 11 12 13 14 15
16 *17* 18 *19* 20 21 22    17 Feb  Presidents Day Holiday [9]
                            19 Feb  CUDA Shared Mem Training [4]
23 24 25 26 27 28 29

          March 2020
Su Mo Tu We Th Fr Sa
 1  2  3  4  5  6  7
 8  9 10 11 12 13 14
15 16 17 *18* 19 20 21      18 Mar  CUDA Optimization Training [4]
22 23 24 25 26 27 28
29 30 31
```

Notes:

1. **January 10, 17, 24, and 31, 2020**: [KNL Office Hours](#knlofficehrs)
2. **January 10, 2020**: [Women in HPC Summit paper submissions due](#whpc)
3. **January 14, 2020**: First day of Allocation Year 2020
4. **January 15, February 19, & March 18, 2020**: [NVIDIA CUDA Training Series](#cudatrain)
5. **January 20, 2020**: Martin Luther King Jr. Day Holiday (No Consulting or Account Support)
6. **January 24, 2020**: [P3HPC Submissions due](#p3hpc)
7. **January 31, 2020**: [Women in HPC Summit poster submissions due](#whpc)
8. **February 3, 2020**: ALCC Proposals due
9. **February 17, 2020**: Presidents Day Holiday (No Consulting or Account Support)
10. All times are **Pacific Time zone**.

### Other Significant Dates ###

- **April 7-9, 2020**: [Performance, Portability, and Productivity in HPC Forum](https://p3hpcforum2020.alcf.anl.gov/)
- **April 16 & May 13, 2020**: Additional CUDA Training dates
- **April 29-May 1, 2020**: [Women in HPC Summit](https://womeninhpc.org/events/summit-2020)
- **May 25, 2020**: Memorial Day Holiday (No Consulting or Account Support)
- **July 4, 2020**: Independence Day Holiday (No Consulting or Account Support)
- **September 7, 2020**: Labor Day Holiday (No Consulting or Account Support)
- **November 26-27, 2020**: Thanksgiving Holiday (No Consulting or Account Support)
- **December 24, 2020-January 1, 2021**: Christmas/New Year Holiday (Limited Consulting or Account Support)
<a name="happyny"/></a> ## Happy New Year and welcome back! We're glad to have you as a user and look forward to working with you in 2020. May this be the most productive year yet! ## New Allocation Year Begins January 14, 2020 <a name="neway"/></a> ## The 2020 Allocation Year (AY) begins on January 14, 2020. There will be several changes that will take effect in the new AY. Of note: - The software environment defaults will change, as outlined in the [NERSC CDT Policy](https://docs.nersc.gov/programming/Cray-PE-CDT-policy/); - The default Python module will point to a version of Python 3; - The new Community File System (CFS) will replace the Project File System; - There will be new charge factors announced this week; - NERSC will have a new software policy, providing more clarity on how we support software on Cori and future machines. More details about the AY transition can be found at: <https://www.nersc.gov/news-publications/announcements/allocation-year-transition-2019-to-2020/>. ## Need Help Switching to Cori KNL Nodes? KNL Office Hours on Fridays All Month! <a name="knlofficehrs"/></a> ## NERSC will hold virtual office hours over Zoom from 9:00 am to 3:00 pm Pacific Time for every Friday in January starting this Friday (January 10), to help users get their codes running efficiently on the Cori KNL nodes. For many users, running efficiently on the KNL nodes is as simple as making sure that their job script is set to request the proper thread affinity on the node, and their executable is compiled correctly to exploit the KNL architecture. We have seen a performance gap shrink by a factor of 7 just with these two simple steps. Other user codes may require some relatively straightforward code changes (for example, a loop reordering to exploit vectorization). Profiling the code is the first step towards finding these hot spots or bottlenecks. During the KNL Office Hours, NERSC experts will be on hand to help you take these steps. Please (virtually) drop by for help with - Setting up your job script for proper thread affinity - Compiling your code with the best optimization flags - Getting started with profiling your code - Interpreting the results of profiling, and advice on how to proceed A [podcast](https://anchor.fm/nersc-news/episodes/KNL-Office-Hours-Jack-Deslippe-Interview-e3uk9f/a-aee631) from May provides additional information about the office hours. View the event on the [NERSC Public Events calendar](https://calendar.google.com/calendar/embed?src=lbl.gov_ls0gdtgi7b93jredles0ibl0u4%40group.calendar.google.com&ctz=America%2FLos_Angeles) for connection information. ## CUDA Training Series Begins January 15 <a name="cudatrain"/></a> ## NVIDIA will present a 9-part CUDA training series intended to help new and existing GPU programmers understand the main concepts of the CUDA platform and its programming model. Each part will include a 1-hour presentation and example exercises. The exercises are meant to reinforce the material from the presentation and can be completed during a 1-hour hands-on session following each lecture (for in-person participants) or on your own (for remote participants). OLCF and NERSC will both be holding in-person events for each part of the series. The first training in the series will take place on January 15, 2020, from 10 am to 12 pm (Pacific time). This training will introduce participants to CUDA C++, an extension of C++ that allows developers to program GPUs with a familiar programming language and simple APIs. 
## CUDA Training Series Begins January 15 <a name="cudatrain"/></a> ##

NVIDIA will present a 9-part CUDA training series intended to help new and existing GPU programmers understand the main concepts of the CUDA platform and its programming model. Each part will include a 1-hour presentation and example exercises. The exercises are meant to reinforce the material from the presentation and can be completed during a 1-hour hands-on session following each lecture (for in-person participants) or on your own (for remote participants). OLCF and NERSC will both be holding in-person events for each part of the series.

The first training in the series will take place on January 15, 2020, from 10 am to 12 pm (Pacific time). This training will introduce participants to CUDA C++, an extension of C++ that allows developers to program GPUs with a familiar programming language and simple APIs. Participants will learn the basic concepts, syntax, and APIs needed to transfer data to and from GPUs, write GPU kernels, and manage GPU thread groups. Following the presentation will be a hands-on session where in-person participants can complete example exercises meant to reinforce the presented concepts.

For more information (including registration information) please see <https://www.nersc.gov/users/training/events/introduction-to-cuda-c-part-1-of-9-cuda-training-series/>.

Other scheduled dates in the series:

- February 19: [2. CUDA Shared Memory](https://www.nersc.gov/users/training/events/cuda-shared-memory-part-2-of-9-cuda-training-series/)
- March 18: [3. Fundamental CUDA Optimization (Part 1)](https://www.nersc.gov/users/training/events/fundamental-cuda-optimization-part-1-part-3-of-9-cuda-training-series/)
- April 16: [4. Fundamental CUDA Optimization (Part 2)](https://www.nersc.gov/users/training/events/fundamental-cuda-optimization-part-2-part-4-of-9-cuda-training-series/)
- May 13: [5. CUDA Atomics, Reductions, and Warp Shuffle](https://www.nersc.gov/users/training/events/cuda-atomics-reductions-and-warp-shuffle-part-5-of-9-cuda-training-series/)

## Women in HPC Summit Call for Submissions; Paper Submissions Deadline this Friday! <a name="whpc"/></a> ##

Submissions for papers and posters are now being accepted for the first Women in HPC Summit, to be held April 29-May 1, 2020 in Vancouver, British Columbia, Canada.

Papers and posters are solicited on a diverse range of technical and diversity, inclusion, and leadership topics, including but not limited to:

- Programming models and applications for HPC, Big Data, and AI;
- Architectures and accelerators on high-performance platforms;
- Computational models and algorithms for HPC, Big Data, and AI;
- Using machine learning to analyze large-scale systems;
- Performance modeling, analysis, and benchmarking of HPC, Big Data, and AI applications/architectures;
- Methods and techniques to create a diverse workforce;
- Inclusive leadership and retention strategies;
- Building diversity advocates and allies;
- Dealing with unconscious bias and sexism in the workplace;
- Fostering creativity through diversity.

The tutorial submission deadline has passed. The paper submission deadline has been extended to this Friday, January 10, 2020, AOE, and poster submissions are due January 31, 2020, AOE. For more information and to submit, please see <https://womeninhpc.org/events/summit-2020>.

## New Community File System to Replace Project File System in New Allocation Year <a name="cfs"/></a> ##

The new "Community" File System (CFS) will replace the Project file system in the new allocation year (AY). No action is required from users; NERSC will transfer your data from Project to CFS. Each active repository will have a directory on CFS with the path structure `/global/cfs/cdirs/<repo_name>`, but existing `/global/project/projectdirs/<repo_name>` paths will redirect to the corresponding CFS path until mid-2020 (see the sketch below for locating hard-coded paths).

During the period from January 14 to January 21, the Project file system will be set to read-only to allow a final synchronization between the two file systems. Once this operation is complete, CFS will be made read/write and become available for use, and Project will be removed from service.
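If your job scripts or workflow configuration files hard-code the old Project path, it may be worth locating and updating them before the redirect is retired in mid-2020. A minimal sketch, assuming a hypothetical repository name `m1234` and a `~/scripts` directory of your own:

```
# Hypothetical repository name; substitute your own.
OLD=/global/project/projectdirs/m1234
NEW=/global/cfs/cdirs/m1234

# Find files under ~/scripts that still reference the old Project path
grep -rl "$OLD" ~/scripts

# After the transition, confirm the new CFS directory is accessible
ls -ld "$NEW"
```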
For more details, please see:

- The December 9 [email](https://www.nersc.gov/REST/announcements/message_text.php?id=4280) announcing the new file system;
- Slides from the December 12 [NUG meeting](https://www.nersc.gov/users/NUG/teleconferences/nug-webinar-dec-12-2019/); or
- The December 9 NERSC User News [podcast](https://anchor.fm/nersc-news/episodes/Community-File-System-Kristy-Kallback-Rose--Greg-Butler--and-Ravi-Cheema-Interview-e9d88q/a-a149hf5) on the topic of CFS.

## Will Your Science Gateway Be Affected by the Community File System Sync? <a name="scigateway"/></a> ##

Developers of science gateways should be aware of the upcoming replacement of the NERSC Project File System (Project) with the new Community File System (CFS) and the effect this may have on science gateways. Files on Project will be automatically transferred to CFS in a multiday sync, which is scheduled for January 14-21. During that time, Project will be set to read-only, so any science gateway that needs to write to Project will be broken during the sync. We recognize that this will be an inconvenience and that some gateway sites will temporarily lose functionality; unfortunately, this is unavoidable. If your gateway's inability to write to Project during the sync will cause serious problems, please file a ServiceNow ticket with details about your gateway **immediately**, so that the science gateways team can help you find a workaround.

Once the data transfer is complete, we will immediately update the web server so that URLs like `https://portal.nersc.gov/project/myprojectdir` will again work as before but will point to the new location on CFS. We will also enable a similar request-handling mechanism so that requests for URLs like `https://portal.nersc.gov/cfs/myprojectdir` will retrieve files from directories like `/global/cfs/cdirs/myprojectdir`. We recommend directing your traffic to the new `/cfs` URLs, since users will more readily recognize them as data on the Community File System.

If any of this needs further clarification, please do not hesitate to reach out, and we will work with you to ensure a smooth migration path.

## New Programming Environments Now Available on Cori <a name="corisw"/></a> ##

NERSC installed the new Cray Programming Environment software release CDT/19.11 during the most recent maintenance. No default software versions were changed during the maintenance, but CDT/19.11 will become the default in the new allocation year, and [dynamic linking](#dynamic) will become the default as well. Please see the detailed list of new software at <https://docs.nersc.gov/systems/cori/timeline/default_PE_history/2019Dec-2020Jan>.

Cray compiler users should note the important information about the all-new CCE 9.0 on the above webpage; in particular, the CCE 9.0 C/C++ compiler is based on Clang instead of the classic Cray compiler. Key consequences of these changes include:

- CCE 9.0 compilers are not compatible with pre-CDT-19.06 library versions (such as MPI);
- The OpenMP flag is no longer turned on by default (see the sketch below).
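For example, a build that previously relied on CCE enabling OpenMP by default now needs the flag passed explicitly. A minimal sketch using the Cray compiler wrappers under PrgEnv-cray (the source file names are placeholders; `-fopenmp` is the Clang-style flag accepted by the Clang-based CCE 9 C/C++ compiler and `-homp` is the Cray Fortran flag -- check the CCE 9.0 notes linked above for the authoritative option list):

```
module load cdt/19.11                    # the release that becomes default on Jan 14

# OpenMP is no longer enabled by default under CCE 9, so request it explicitly.
cc  -fopenmp -O2 -o myapp.x  myapp.c     # Clang-based CCE 9 C/C++
ftn -homp    -O2 -o myfort.x myfort.f90  # Cray Fortran
```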
## Dynamic Linking to Become Default on Jan 14, 2020; Test Now! <a name="dynamic"/></a> ##

We plan to set the new CDT/19.11 as the default at the time of the Allocation Year transition on **January 14, 2020**. When this happens, the **default linking mode on Cori will change from static to dynamic**. We encourage users to test this change now and [let us know](https://help.nersc.gov) if you encounter any issues.

To start testing dynamic linking, run:

```
% module load cdt/19.11
% export LD_LIBRARY_PATH=$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH
```

and then compile and run your code as usual.

If, after the default changes, you prefer to keep static linking as your default (e.g., for workflow or performance reasons), you can set:

```
% export CRAYPE_LINK_TYPE=static
```

to retain static linking.
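A quick, generic way to confirm which linking mode a given build actually used (standard Linux tools, not a NERSC-specific utility; `myapp.x` is a placeholder):

```
# A dynamically linked executable lists its shared-library dependencies;
# a statically linked one reports "not a dynamic executable".
ldd ./myapp.x

# "file" also reports "dynamically linked" or "statically linked".
file ./myapp.x
```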
## User Dotfile Changes Planned for February 2020 <a name="dotfiles"/></a> ##

Currently, by default, the `.bashrc`/`.bash_profile`/`.cshrc`/`.login` files in your `$HOME` are symlinks to read-only, NERSC-supplied startup files, and you may have customized your starting environment by adding `.bashrc.ext`/`.bash_profile.ext`/`.cshrc.ext`/`.login.ext` files. To reduce shell startup overhead, and to bring NERSC in line with most other HPC centers, we will migrate away from this arrangement during the scheduled maintenance in **February 2020**. After the change is made, you will be able to edit `.bashrc` (etc.) directly.

During the change, your `.bashrc` (etc.), which is currently a symlink, will be replaced by a template `.bashrc` (etc.) that simply sources your `.bashrc.ext` (etc.). For most users this should have no other impact, but some non-default environments and workflows might see changes to their environment.

You can test the changes now by using the `dotmgr` command and logging in to cori12 or dtn12, which already have the new configuration:

- `dotmgr -l` # list my current dotfiles
- `dotmgr -s` # save my current dotfiles, and print the location
- `dotmgr -e` # replace my existing dotfiles with the new arrangement

You can then log in to cori12 and/or dtn12 (e.g., `ssh cori12`) to check whether this affected your environment: check that things still look the same and that your aliases still work. You can return your dotfiles to the current configuration with:

- `dotmgr -r <directory-that-the-save-step-returned>`

Note that `dotmgr -e` and `dotmgr -r` **don't affect your current environment** -- they change the contents of your dotfiles. For the changes to take effect, you must log out and log back in.

For detailed help, please see <https://docs.nersc.gov/environment/>. Please let us know of any problems you encounter by filing a ticket at <https://help.nersc.gov>.

## Call For Papers: Performance, Portability, and Productivity in HPC Forum (P3HPC) <a name="p3hpc"/></a> ##

The call for papers for the Performance, Portability, and Productivity in HPC Forum (P3HPC) is now open. This workshop is an opportunity for researchers to share ideas, practical experiences, and methodologies for tackling the compelling problems that lie at the intersection of performance, portability, and productivity. We are particularly interested in research that addresses the complexities of real-life applications and/or realistic workloads, the composability challenges arising from the use of bespoke solutions, and the desire to "future-proof" applications in the long term.

Submissions close January 24, 2020. For more information and to submit a paper, please see <https://p3hpcforum2020.alcf.anl.gov/>.

## NERSC Will Support Only Python3 in New Allocation Year <a name="python2"/></a> ##

Python 2 reached its end of life on [January 1, 2020](https://devguide.python.org/#status-of-python-branches), so there will be no more development, bug fixes, or patches. Therefore, at the start of the 2020 Allocation Year, the following changes to Python support will occur at NERSC:

- At the AY rollover, the default Python module will become a module based on a Python 3 distribution.
- The old Python 2 module will remain available for use, but users must specify the version suffix (see the sketch at the end of this section).
- No new installations of Python 2 packages or modules will be performed.
- During the next Cori operating system upgrade, which could occur sometime in 2020, the Python 2 module will be retired.

NERSC will actively support only Python 3 (or future Python versions, should Python 3 become deprecated) on Perlmutter and future systems. Please let us know your questions via a ticket at <https://help.nersc.gov>.
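To see how your workflows will behave after the rollover, you can check which Python the default module provides and which other versions are installed. A minimal sketch (the Python 2 module name in the comment is hypothetical; use `module avail python` to see the versions actually installed on Cori):

```
module avail python       # list the Python modules actually installed
module load python        # after January 14 the default will be a Python 3 distribution
python --version

# To keep using Python 2 for now, load a module with an explicit version suffix, e.g.:
# module load python/2.7-anaconda-2019.10   # hypothetical name; check "module avail python"
```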
## No New "NERSC User News" Podcast this Week <a name="nopodcast"/></a> ##

There will be no new episode of the "NERSC User News" podcast this week. We encourage you to instead enjoy some of our most recent episodes and greatest hits:

- [Community File System](https://anchor.fm/nersc-news/episodes/Community-File-System-Kristy-Kallback-Rose--Greg-Butler--and-Ravi-Cheema-Interview-e9d88q/a-a149hf5): NERSC Storage Systems Group staff Kristy Kallback-Rose, Greg Butler, and Ravi Cheema talk about the new Community File System and the migration timeline.
- [May Quarterly Maintenance & James Botts Interview](https://anchor.fm/nersc-news/episodes/May-Quarterly-Maintenance--James-Botts-Interview-e1ec2g/a-a3cfi7): The first-ever NERSC User News podcast, in which James Botts from NERSC's Computational Systems Group describes the process of rebooting Cori after an outage.
- [Monitoring System Performance](https://anchor.fm/nersc-news/episodes/Monitoring-System-Performance-Eric-Roman-Interview-e5g20m/a-aobd6p): NERSC Computational Systems Group's Eric Roman discusses how NERSC monitors system performance, what we're doing with the data right now, and how we plan to use it in the future.
- [The Superfacility Concept](https://anchor.fm/nersc-news/episodes/The-Superfacility-Concept-Debbie-Bard-Interview-e5a5th/a-amoglk): Join NERSC Data Science Engagement Group Lead Debbie Bard in a discussion about the concept of the superfacility: what it means, how facilities interact, and what NERSC and partner experimental facilities are doing to prepare for the future of data-intensive science.
- [Optimizing I/O in Applications](https://anchor.fm/nersc-news/episodes/Optimizing-IO-in-Applications-Jialin-Liu-Interview-e50nvm): Listen to an I/O optimization success story in this interview with NERSC Data and Analytics Services Group's Jialin Liu.
- [ERCAP Allocation Requests](https://anchor.fm/nersc-news/episodes/ERCAP-Allocation-Requests-Clayton-Bagwell-Interview-e4u09l): Learn how to get compute and storage allocations on NERSC resources for next year in this interview with NERSC User Engagement Group's Clayton Bagwell.
- [Roofline Model for Application Performance](https://anchor.fm/nersc-news/episodes/Roofline-Model-for-Application-Performance-Charlene-Yang-Interview-e4osl1): NERSC Application Performance Specialist Charlene Yang discusses the roofline model for application performance: what it is and how it works, how to use it to improve your application's performance, and future directions in roofline model research.
- [Parallelware Trainer; Manuel Arenaz Interview](https://anchor.fm/nersc-news/episodes/Parallelware-Trainer-Manuel-Arenaz-Interview-e4g46r): Learn about the Appentra Parallelware Trainer tool: how it can help you learn to code with OpenMP and OpenACC, the features of the tool, and how to use it on Cori.
- [Profiling Codes with Cray Performance Tools](https://anchor.fm/nersc-news/episodes/Profiling-Codes-with-Cray-Performance-Tools-Heidi-Poxon-Interview-e42veg): Learn why you would want to profile your codes, and about the profiling tools provided by Cray, from Cray senior principal engineer and senior manager Heidi Poxon.
- [Energy Efficiency and Environmental Consciousness at NERSC](https://anchor.fm/nersc-news/episodes/Energy-Efficiency-and-Environmental-Consciousness-at-NERSC--Norm-Bourassa-Interview-e35tfp): Learn about all the energy efficiency work going on at NERSC from building energy efficiency expert Norm Bourassa.
- [Getting a Machine from Contract to Reality](https://anchor.fm/nersc-news/episodes/Getting-a-Machine-from-Contract-to-Reality--Tina-Declerck-Interview-e307eg/a-a9521c): Listen to Systems Department Head Tina Declerck describe the complex process of going from a contract with a vendor to a supercomputer on the floor in production for users.
- [A Day in the Control Room](https://anchor.fm/nersc-news/episodes/A-Day-in-the-Control-Room--Interview-with-Owen-James-e2uh9v/a-a8rppe): In this interview, NERSC Site Reliability Engineer Owen James talks about what it's like in the NERSC control room and how they ensure that the systems stay up for you.
- [NESAP Postdocs](https://anchor.fm/nersc-news/episodes/NESAP-Postdocs--Laurie-Stephey-Interview-e2lsg0): Learn from NESAP postdoc Laurie Stephey what it's like working as a postdoc in the NESAP program at NERSC.

The NERSC User News podcast, produced by the NERSC User Engagement Group, is available at <https://anchor.fm/nersc-news> and syndicated through iTunes, Google Play, Spotify, and more. Please give it a listen and let us know what you think, via a ticket at <https://help.nersc.gov>.

## Come Work for NERSC! <a name="careers"/></a> ##

NERSC currently has several openings for postdocs, system administrators, and more! If you are looking for new opportunities, please consider the following openings:

- **NEW** [HPC Architecture and Performance Engineer](https://jobs.lbl.gov/jobs/hpc-architecture-and-performance-engineer-2427): Evaluate global technology trends and combine them with the needs of NERSC users with the goal of architecting the supercomputing ecosystem of the future.
- [Computer Systems Engineer](https://jobs.lbl.gov/jobs/computer-systems-engineer-2357): Help prepare Exascale Computing Project (ECP) codes for the next-generation pre-exascale and exascale high performance computing (HPC) systems.
- Application Performance Specialists for [NESAP](https://jobs.lbl.gov/jobs/application-performance-consultant-1010) and [ECP](https://jobs.lbl.gov/jobs/application-performance-specialist-2312): Help prepare large-scale scientific codes for next-generation high performance computing (HPC) systems.
- [High Performance Computing Security Developer](https://jobs.lbl.gov/jobs/high-performance-computing-security-developer-2295): Protect exascale-class systems in an open science environment and enhance network and host intrusion prevention as we migrate from 100G to Terabit networks.
- [Software Engineer (Storage and I/O)](https://jobs.lbl.gov/jobs/software-engineer-storage-and-i-o-2275): Enable DOE researchers and the broader science community to benefit from improvements to HDF5 and other leading high-performance computing (HPC) storage and I/O software.
- [Data Management Engineer](https://jobs.lbl.gov/jobs/data-management-engineer-2129): Provide a variety of engineering support services to manage a data warehouse and notification infrastructure for the NERSC computational facility.
- [NESAP for Simulations Postdoctoral Fellow](https://jobs.lbl.gov/jobs/nesap-for-simulations-postdoctoral-fellow-2004): Work in multidisciplinary teams to transition simulation codes to NERSC's new Perlmutter supercomputer and produce mission-relevant science that truly pushes the limits of high-end computing.
- [NESAP for Data Postdoctoral Fellow](https://jobs.lbl.gov/jobs/nesap-for-data-postdoctoral-fellow-2412): Work in multidisciplinary teams to transition data-analysis codes to NERSC's new Perlmutter supercomputer and produce mission-relevant science that truly pushes the limits of high-end computing.
- [NESAP for Learning Postdoctoral Fellow](https://jobs.lbl.gov/jobs/nesap-for-learning-postdoctoral-fellow-1964): Work in multidisciplinary teams to develop and implement cutting-edge machine learning/deep learning solutions in codes that will run on NERSC's new Perlmutter supercomputer and produce mission-relevant science that truly pushes the limits of high-end computing.
- [HPC Storage Systems Analyst](https://jobs.lbl.gov/jobs/hpc-storage-systems-analyst-1851): Help architect, deploy, and manage NERSC's storage hierarchy (including Burst Buffer, Lustre, and Spectrum Scale filesystems, and HPSS archives).

(**Note:** We have received reports that the URLs for the jobs change without notice, so if you encounter a page indicating that a job is closed or not found, please check by navigating to <https://jobs.lbl.gov/>, scrolling down to the ninth picture, which says "All Jobs", and clicking on it. Then, under "Business," select "View More" and scroll down until you find the checkbox for "NE-NERSC" and select it.)

We know that NERSC users can make great NERSC employees! We look forward to seeing your application.

## Upcoming Outages <a name="outages"/></a> ##

- **Cori**: 01/14/20 7:00 - 01/15/20 7:00 PST, Scheduled Maintenance
- **HPSS Regent (Backup)**: 01/08/20 9:00-11:00 PST, Scheduled Maintenance
- **Community File System (CFS)**: 01/21/20 0:00-23:00 PST, Scheduled Maintenance. CFS will go live at 7 pm on 1/21/2020, or at the completion of data migration from Project, whichever is sooner.
- **Project**: 01/14/20 7:00-13:00 PST, Scheduled Maintenance. All NGF global filesystems will be unavailable on all systems. After the maintenance, Project will be read-only for data migration purposes and will be removed on 1/21/2020.
- **ProjectA**: 01/14/20 7:00-13:00 PST, Scheduled Maintenance. All NGF global filesystems will be unavailable on all systems.
- **ProjectB**: 01/14/20 7:00-13:00 PST, Scheduled Maintenance. All NGF global filesystems will be unavailable on all systems.
- **SeqFS**: 01/14/20 7:00-13:00 PST, Scheduled Maintenance. All NGF global filesystems will be unavailable on all systems.
- **DNA**: 01/14/20 7:00-13:00 PST, Scheduled Maintenance. All NGF global filesystems will be unavailable on all systems.
- **Global Common**: 01/14/20 7:00-13:00 PST, Scheduled Maintenance. All NGF global filesystems will be unavailable on all systems.
- **Global Homes**: 01/14/20 7:00-13:00 PST, Scheduled Maintenance. All NGF global filesystems will be unavailable on all systems.

Visit <http://www.nersc.gov/users/live-status/> for the latest status and outage information.

## About this Email <a name="about"/></a> ##

You are receiving this email because you are the owner of an active account at NERSC. This mailing list is automatically populated with the email addresses associated with active NERSC accounts. To remove yourself from this mailing list, you must close your account, which can be done by emailing <accounts@nersc.gov> with your request.

_______________________________________________
Users mailing list
Users@nersc.gov
