NERSC: Powering Scientific Discovery Since 1974

2014 User Survey Text

Section 1: Overall Satisfaction with NERSC

For each item you use, please indicate both your satisfaction and its importance to you.
Please rate: How satisfied are you? How important is this to you?
Overall satisfaction with NERSC
NERSC services
NERSC computing resources
NERSC data resources
HPC software
How long have you used NERSC?
Less than 1 year / 1 year - 3 years / More than 3 years

Section 2: HPC Resources

Please rate the NERSC systems or resources you have used.

Cray XC30: Edison
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Batch wait time
Batch queue structure
Ability to run interactively
Scratch configuration and I/O performance
Cray XE6: Hopper
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Batch wait time
Batch queue structure
Ability to run interactively
Scratch configuration and I/O performance
IBM iDataPlex Linux Cluster: Carver
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Batch wait time
Batch queue structure
Ability to run interactively
Parallel Distributed Systems Facility: PDSF
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Batch queue structure
Ability to run interactively
Disk configuration and I/O performance
Connection to external data repositories
JGI Cluster: Genepool
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Batch wait time
Batch queue structure
Ability to run interactively (gpints, genepool login nodes, qlogin)
File system configuration and I/O performance
Data storage, archiving, and retrieval
High Performance Storage System: HPSS
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Reliability (data integrity)
Data transfer rates
Data access time
User interface (hsi, pftp, ftp)
NERSC /project File System
The NERSC "Project" file system is globally accessible from all NERSC computers. Space in /project is allocated upon request for the purpose of sharing data among members of a research group.
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Reliability (data integrity)
I/O bandwidth
File and directory (metadata) operations
NERSC Global /projectb File System
The NERSC "Projectb" file system is accessible from all NERSC systems except PDSF. Space in /projectb is dedicated to serve the JGI Bioinformatics community.
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Reliability (data integrity)
I/O bandwidth
File and directory (metadata) operations
Global Scratch File System
Carver uses the Global Scratch file system as its only $SCRATCH file system. Global scratch is also accessible from Hopper and Edison as $GSCRATCH.
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Reliability (data integrity)
I/O bandwidth
File and directory (metadata) operations
NERSC Network
Please rate: How satisfied are you?
Network performance within NERSC (e.g. Hopper to HPSS)
Remote network performance to/from NERSC (e.g. Hopper to your home institution)
NX Server (X-Accelerator)
Please rate: How satisfied are you?
Overall satisfaction

Section 3: Software

Please rate software on NERSC systems. For each of the software categories below, consider availability, usability, and robustness of the software.

NERSC Software
Please rate: How satisfied are you?
Programming environment
Applications software
Programming libraries
Performance and debugging tools
Visualization software
Data analysis software
Workflow software
Data transfer software

Section 4: HPC Services

HPC Consulting
For each item you use, please indicate both your satisfaction and its importance to you.
Please rate: How satisfied are you?
Consulting overall
Quality of technical advice
Response time
Time to solution
Special requests (e.g., disk quota increases)
On-Line Help Desk
Accounts and Allocations
For each item you use, please indicate both your satisfaction and its importance to you.
Please rate: How satisfied are you?
Account support and passwords
NIM web accounting interface
Allocations process
Communications
The following questions relate to how you keep informed of NERSC changes and current issues.
Please rate: How useful is this to you?
MOTD (Message of the Day) on the computers
E-mail announcements
Live Status
Do you feel you are adequately informed about NERSC changes?
Yes / No
Training
For each item you use, please indicate both your satisfaction and how useful it is to you.
Please rate: How satisfied are you? How useful is this to you?
Web tutorials
Getting Started Guide
Training Events
Video Tutorials
Archived Presentation Slides
NERSC Web Site
Please rate: How satisfied are you?
NERSC web site overall
Ease of navigation
Timeliness of information
Accuracy of information
Live Status
My NERSC
Searching
Ease of use with mobile devices
Mobile Web Site (m.nersc.gov)
What additional web services would you like NERSC to provide?

Security
Please rate: How satisfied are you?
NERSC Security

Section 5: Application Readiness

 
NERSC recently announced that its next supercomputer, to be named Cori, will be based on the Intel Xeon Phi many-core processor. It is anticipated that codes will need to use OpenMP threads to run efficiently on Cori's processors, which will have 60+ cores each. Applications will also need to take advantage of the Phi's vector units and on-package high-bandwidth memory.
Code Readiness
Please rate: How ready are you?
Your codes' readiness for Cori
Your codes' readiness to effectively use OpenMP
Your codes' readiness to use vectorization
Your codes' readiness to use on-package memory
Accelerators and Many-Core
Do your codes use any of the following (anywhere, not just at NERSC)? Check all that apply.
Technologies
Intel Xeon Phi (MIC) / GPUs / IBM Cell / FPGAs / Hardware Multi-Threading
Programming Models
OpenMP / CUDA / CUDA Fortran / OpenCL / OpenACC / CAPS HMPP / Coarray Fortran / UPC / Intel TBB / Intel Cilk / pthreads / thrust
other
What are your plans for transitioning your code(s) to run on Cori? How much of your code can use vector units or processors? Do you know if your code is compute or memory bound?

How can NERSC help you get your code(s) ready for Cori?


Section 6: Comments

What does NERSC do well?
How can NERSC serve you better?
If there is anything important to you that is not covered in this survey, please tell us about it here.