2014 NERSC User Survey Results
Executive Summary
Satisfaction with NERSC remained extremely high in all major categories in 2014, according to 673 users who responded to the 2014 NERSC user survey. The survey satisfaction scores mirrored those achieved in 2013, a year that saw record or near-record scores across the board. The 673 responses were a new high, surpassing the previous mark of 613 in 2013. Those users accounted for 72 percent of all computing hours used at NERSC in 2014.
The overall satisfaction rating of 6.50 on a 7-point scale equaled the best ever recorded, the same as in 2013. The average of all satisfaction scores was 6.33, also the same as the previous year. Only three of the 105 satisfaction questions showed a decrease compared to the previous year.
Survey Format
NERSC conducts its yearly survey of users to gather feedback on the quality of its services and computational resources. The survey helps both DOE and NERSC staff judge how well NERSC is meeting the needs of users and points to areas where NERSC can improve.
The survey is conducted on the web; in 2014 it consisted of 105 satisfaction questions that are scored numerically. In addition, we solicited free-form feedback from the users. In December 2014, 5,000 authorized users (those with registered accounts who have signed the computer use policy form) were invited by email to take the 2014 user survey. The survey was open through January 19, 2015.
7-Point Survey Satisfaction Scale
The survey uses a seven-point rating scale, where “1” is “very dissatisfied” and “7” indicates “very satisfied.” For each question the average score and standard deviation are computed.
Text Value | Numerical Value |
---|---|
Very Satisfied | 7 |
Mostly Satisfied | 6 |
Somewhat Satisfied | 5 |
Neutral | 4 |
Somewhat Dissatisfied | 3 |
Mostly Dissatisfied | 2 |
Very Dissatisfied | 1 |
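The per-question statistics are straightforward to reproduce. Below is a minimal sketch in Python with made-up responses, since the raw survey data is not reproduced in this report; whether the sample or population standard deviation was used is not stated, so the sample form is assumed:

```python
from statistics import mean, stdev

# Hypothetical responses to one question on the 7-point scale;
# the actual survey data is not included in this report.
responses = [7, 7, 6, 7, 5, 6, 7, 4, 7, 6]

avg = mean(responses)   # average satisfaction score
sd = stdev(responses)   # sample standard deviation

print(f"average = {avg:.2f}, std dev = {sd:.2f}")  # average = 6.20, std dev = 1.03
```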
3-Point Usefulness and Importance Scale
Questions that asked if a system or service was useful or important used the following scale.
Text Value | Numerical Value |
---|---|
Very Useful (or Important) | 3 |
Somewhat Useful (or Important) | 2 |
Not Useful (or Important) | 1 |
Changes from one year to the next were considered significant if they passed the t-test criteria at the 90% confidence level.
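The report does not spell out the exact test used, but a Welch-style two-sample t statistic with a normal-approximation critical value (reasonable at these sample sizes) gives the general idea. The function below is an illustrative sketch, not NERSC's actual analysis code:

```python
from math import sqrt
from statistics import mean, stdev

def significant_change(prev_year, this_year, t_crit=1.645):
    """Welch two-sample t statistic for a year-over-year score change.

    1.645 approximates the two-sided 90% critical value for large
    samples, where the t distribution is close to normal.
    """
    na, nb = len(prev_year), len(this_year)
    va, vb = stdev(prev_year) ** 2, stdev(this_year) ** 2
    t = (mean(this_year) - mean(prev_year)) / sqrt(va / na + vb / nb)
    return t, abs(t) > t_crit
```

Under this sketch, a change would be reported in the results tables only when the second return value is true.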
Overall Satisfaction
The average response to the item "Please rate your overall satisfaction with NERSC" was 6.50 on the seven-point satisfaction scale. This was the highest rating ever (to within statistical error) since the survey was created in its current form in 2003.
The following figure shows the overall satisfaction rating from 2003-2014. The red line labeled "Target" is the minimum acceptable DOE target for NERSC.
Overall Satisfaction Questions
Satisfaction was the same as in 2013 in all five areas surveyed in the first section of the survey: "Overall Satisfaction." As with the "Overall Satisfaction with NERSC" score, none of the five areas has ever received a statistically significantly higher rating. (Results shown here include responses from all NERSC users – including JGI and PDSF users – for 2012, 2013, and 2014.)
Questions Asked on the "Overall Satisfaction" Survey Page
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
NERSC Overall | 666 | 6.50 | 0.80 | - |
Services | 640 | 6.58 | 0.78 | - |
Computing Resources | 660 | 6.37 | 0.87 | - |
Data Resources | 563 | 6.31 | 1.02 | - |
HPC Software | 554 | 6.26 | 1.02 | - |
Total for 5 Questions | | 6.41 | 0.89 | - |
The most common rating in all five categories was "Very Satisfied (7)". The distributions for the Overall Satisfaction categories are shown below.
Other Overall Satisfaction Questions
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Security | 358 | 6.71 | 0.69 | - |
Consulting | 479 | 6.64 | 0.80 | - |
Account Support | 521 | 6.71 | 0.74 | +0.09 |
PDSF | 51 | 6.43 | 1.14 | - |
Project Global File System | 278 | 6.61 | 0.71 | - |
HPSS | 295 | 6.45 | 0.89 | - |
Web Site | 520 | 6.53 | 0.72 | - |
Global Scratch File System | 367 | 6.53 | 0.84 | - |
Hopper | 416 | 6.27 | 0.89 | -0.19 |
Carver | 265 | 6.36 | 0.87 | - |
Projectb File System | 108 | 6.42 | 0.86 | - |
Genepool | 59 | 6.17 | 0.81 | - |
Edison* | 390 | 6.31 | 0.88 | - |
NX | 135 | 6.10 | 1.15 | - |
Highest Rated Items
The 10 highest-rated items (of all 105 satisfaction questions) on the survey involved data storage systems (6 items), consulting and account support (2 items), and NERSC networking and cybersecurity (2 items). This tells us that users think NERSC takes good care of their data and makes it readily available, and that it provides excellent consulting, account support, networking, and cybersecurity.
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Project File System Data Integrity | 265 | 6.77 | 0.59 | +0.12 |
Project File System Availability | 265 | 6.75 | 0.55 | - |
Account Support | 521 | 6.71 | 0.74 | +0.09 |
NERSC Cybersecurity | 443 | 6.71 | 0.65 | - |
HPSS Data Integrity | 263 | 6.71 | 0.64 | - |
HPSS Availability | 269 | 6.70 | 0.65 | - |
Global Scratch Availability | 350 | 6.66 | 0.73 | - |
Consulting Overall | 479 | 6.64 | 0.80 | - |
Internal NERSC Network | 371 | 6.63 | 0.73 | - |
Global Project File System Overall | 278 | 6.61 | 0.71 | - |
Lowest Rated Items
The 10 lowest-rated items (of all 105 satisfaction questions) on the survey involved the batch queues and their associated wait times (5 items) and data policies and analysis software (5 items). Batch wait times on Hopper and Edison were rated below the NERSC minimum target of 5.25, reflecting the strong demand for access to NERSC systems.
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Data Analysis Software | 258 | 5.98 | 1.26 | - |
Hopper Batch Queue Structure | 397 | 5.97 | 1.11 | - |
Long Term Data Retention | 277 | 5.96 | 1.27 | - |
Workflow Software | 200 | 5.91 | 1.32 | - |
Visualization Software | 200 | 5.88 | 1.23 | - |
Edison Batch Queue Structure | 377 | 5.76 | 1.24 | - |
Carver Batch Wait Time | 160 | 5.63 | 1.28 | - |
Hopper Batch Wait Time | 404 | 5.17 | 1.52 | - |
Edison Batch Wait Time | 383 | 4.87 | 1.56 | -0.50 |
Demographic Responses
Large, Medium, and Small MPP Users
For purposes of survey analysis, users were divided into those who used more than 3 million MPP hours (Large), between 500,000 and 3 million MPP hours (Medium), and less than 500,000 MPP hours (Small). Overall, there was not a great difference among the groups, in contrast to the previous year, when larger users expressed greater satisfaction with NERSC.
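As a sketch, this grouping can be written as a simple binning function; the handling of exact boundary values is an assumption, since the report gives only the rough ranges:

```python
def mpp_category(hours):
    """Bucket a user by MPP hours used, per the survey's grouping.

    Boundary handling (exactly 500,000 or 3,000,000 hours) is assumed;
    the report states only the approximate ranges.
    """
    if hours > 3_000_000:
        return "Large"
    elif hours >= 500_000:
        return "Medium"
    return "Small"
```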
Satisfaction by NERSC Experience
Users who had been computing at NERSC longer reported higher satisfaction in most areas; the exception was data resources, where new users were the happiest.
Scientific Domains
While users in all scientific domains rated NERSC highly, there were some differences among the groups. The plots below show differences from the average of all user responses. Researchers in accelerators, applied math, chemistry, geosciences, lattice gauge theory, and fusion research ranked NERSC higher than average, while scores were lower for users in astrophysics and biosciences. An explanation for these variations is not immediately clear.
Satisfaction by Allocation Type
Projects receive allocations via a number of methods. "Production" accounts are allocated by DOE program managers through the ERCAP allocations process; "ALCC" (ASCR Leadership Computing Challenge) accounts are allocated by DOE's Office of Advanced Scientific Computing Research; the NERSC Director has a reserve of time to allocate; NERSC awards small "Startup" accounts; and there was a "Data Pilot" program in 2013.
Production accounts make up the vast majority of accounts (and thus survey responses), so their responses define the average and that group shows little variation from it. ALCC and Startup users rated NERSC higher than average in all categories, while Director's Reserve and Data Pilot users had mixed responses.
Hopper
Hopper has been a stable, productive system over the last three years, and its satisfaction scores remained high, though slightly down from 2013. The addition of Edison as a production system in 2014 helped relieve some of the demand on Hopper; the rating for queue wait times nevertheless fell to 5.17, though this remained above the pre-Edison value of 4.90.
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Hopper Overall | 416 | 6.27 | 0.89 | -0.19 |
Uptime | 405 | 6.47 | | -0.19 |
Disk Configuration and I/O | 385 | 6.18 | 1.07 | - |
Ability to run interactively | 299 | 6.11 | 1.13 | - |
Batch queue structure | 397 | 5.97 | 1.11 | - |
Batch queue wait times | 404 | 5.17 | 1.52 | - |
Total for 6 Questions | | 6.02 | 1.08 | |
Representative User Comments
" 1. Hopper and Edison were clearly oversubscribed during at least the third quarter of this last calendar year. Turnaround became unworkable for a number of projects. Only more hardware or smaller allocations can really fix this, although the situation has improved following queue adjustments. Given the planned machine move and periods with lower machine availability, some careful consideration should be given to queue structures (etc.) to minimize the crush in the second half of 2015. 2. Some (more) careful expectations management may be needed for Cori/KL. For many applications it might be a lot of work to get performance not significantly better than a similar era Xeon. (Energy usage should be much better, but the allocations are in hours, not joules.) "
" The queue time last year for hopper is quite long, compared to the years before 2013, as far as I remember. This seems odd as more computational resources such as Edison are also available. Maybe some of the queue policies can be further improved. "
" Small to medium jobs are sometimes on the queue for a few consecutive days - and this is extremely inconvenient! "
" Please provide more capabilities for high-throughput jobs. It is very difficult with many users competing for time in a single high-throughput queue (thruput on hopper), especially when small projects need to compete with larger collaborations, which already have their own dedicated resources. I would hope that if NERSC provides dedicated resources to these large collaborations, then the other computing environments would be accessible to smaller projects. Currently, this is not the case for the thruput queue, the only high throughput queue available at NERSC "
" Something needs to be done about the queue wait times on Hopper and Edison. I often have to wait 1 to 2 weeks for a job to start requesting 1000 - 5000 processors. "
" Decrease wait times for short to medium jobs on hopper. "
" Reduce the batch queue wait times on Hopper and Edison, perhaps by altering the priorities or lowering the limits on the run times of jobs (e.g., 12 hrs for jobs with more than 512 cores, etc). Every code should be able to checkpoint. Also, use a 'fair share' system so that users lose priority as they run more jobs. "
" The queue time is very important for me. Now it will take me several days to finish one job on Hopper, compared with the very short waiting time (less than 1 day) in early 2014. "
" Currently, the batch wait times on both Hopper and Edison are terrible. I wish this could be improved going forward in future. "
" I have had an extremely positive experience with NERSC. Edison and hopper have been indispensable for my research over the past 2 years. NERSC's consultants have been extremely helpful and responsive. "
" Excellent programming environment, excellent hardware, good queueing policies on hopper; the higi-priority debug queue is extremely useful in particular (not all supercomputing centers have them, regrettably) "
" I have been very impressed with the stability of the computing systems I have used (Hopper and Edison). Any issues I had were resolved very quickly and thoroughly. Planned outages of systems were communicated well in advance of when they would occur, allowing me to schedule my tasks appropriately. "
" It is very easy to install and run the simulation I need to. Considering I do not have no experience using super computer, it took just few hours to run my case in Hopper. "
Edison
Edison entered full production in 2014 and showed improved stability and uptime compared to its pre-production period in 2013. However, long queue waits caused some user dissatisfaction.
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Edison Overall | 390 | 6.31 | 0.88 | - |
Uptime | 383 | 6.40 | 0.84 | +1.04 |
Disk Configuration and I/O | 352 | 6.25 | 1.07 | +0.23 |
Ability to run interactively | 273 | 5.98 | 1.26 | - |
Batch queue structure | 377 | 5.76 | 1.24 | - |
Batch queue wait times | 383 | 4.87 | 1.56 | -0.50 |
Total for 6 Questions | | 5.92 | 1.14 | |
Representative User Comments
" I believe that edison is over subscribed. The queue wait times for say 5000 cores for the full 48 hours can approach two weeks. There also seem to be users requesting large numbers of cores (50,0000 or so) for ridiculously short periods of time (like 2 hours or less). These jobs may limit the ability of the batch system to backfill effectively and so should probably not be allowed (or given much lower priority). "
" Edison is awesome. "
" I am particularly interested in the performance stability of my applications. Recently, the performance of my codes on edison becomes unpredictable. Different runs reported widely different timings. I am wondering how NERSC will address these issues ? "
" Edison is one of the best machines in the world. "
" Job waiting time on Edison is too long, particularly for regular_small queue, for which the priority should be increased. I know it is a policy to encourage users to use a large number of cores. However, the regular_small queue is a very important way to use the limited Edison resource most efficiently and productively. Saying that, for many applications with available NERSC resource (allocation), running high-core_number jobs can quickly burn out yearly allocations, and produce less scientific results than running low-core-number jobs. "
" Something needs to be done about the queue wait times on Hopper and Edison. I often have to wait 1 to 2 weeks for a job to start requesting 1000 - 5000 processors. Our codes would also benefit from having some machines with more memory per processor. 8GB/processor would be ideal. "
Carver
Satisfaction scores for Carver remained high as it continued to be a reliable resource for users who prefer a standard Linux computing environment. As with other systems, the most dissatisfaction was with queue wait times.
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Carver Overall | 171 | 6.36 | 0.87 | - |
Uptime | 184 | 6.54 | 0.88 | - |
Ability to run interactively | 141 | 6.16 | 1.21 | - |
Batch queue structure | 157 | 6.12 | 1.26 | - |
Batch queue wait times | 160 | 5.63 | 1.28 | - |
Total for 5 Questions | | 6.17 | 1.06 | |
Representative User Comments
Wait times on small jobs can be very long (especially on Carver). Given that most of what I do is in the form of small jobs, rectifying this would help a lot.
We often need to run a short job on thousands of nodes and many people use the debug queue to do that. In fact some people submit several debug queue jobs. Many times I need to run I/O intensive jobs, which don't scale well.
Genepool
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Genepool Overall | 59 | 6.17 | 0.81 | - |
Uptime | 52 | 6.36 | 1.08 | - |
Ability to run interactively | 52 | 6.27 | 1.01 | - |
Batch queue structure | 49 | 5.98 | 1.11 | - |
Batch queue wait times | 51 | 5.86 | 1.11 | - |
Filesystem configuration and I/O performance | 52 | 5.81 | 1.37 | - |
Data storage, archiving, and retrieval | 54 | 5.72 | 1.46 | - |
Total for 7 Questions | | 6.03 | 1.13 | |
PDSF
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
PDSF Overall | 51 | 6.43 | 1.14 | - |
Uptime | 49 | 6.65 | 0.80 | - |
Ability to run interactively | 48 | 6.13 | 1.63 | - |
Batch queue structure | 44 | 6.25 | 1.10 | - |
Filesystem configuration and I/O performance | 47 | 6.26 | 1.21 | - |
Connection to external data repositories | 39 | 6.28 | 1.02 | +0.28 |
Total for 6 Questions | | 6.46 | 0.82 | |
HPSS
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
HPSS Overall | 281 | 6.45 | 0.69 | - |
Reliability (Data Integrity) | 263 | 6.71 | 0.64 | - |
Availability (Uptime) | 269 | 6.70 | 0.65 | - |
Data Transfer Rates | 265 | 6.36 | 0.96 | - |
Data Access Time | 260 | 6.35 | 0.95 | - |
User Interface | 266 | 6.02 | 1.32 | - |
Total for 6 Questions | | 6.43 | 0.90 | |
Representative Comments
" My main complaint is that HPSS is confusing. There is information on how to use it on the web site, but for users (like me) who are not necessarily Linux experts it is not very clear. The Globus GUI is nice, but I usually need to use the command-line htar since I have lots of files, and when using this command it's not very clear when I have successfully backed up my data and how I can check what I've stored on HPSS and what I haven't. "
" Transferring large files to HPSS is not easy - very time consuming and hence inconvenient. It would be great if we could simply use sftp to store/retrieve archived data. "
" Proactive backup from HPSS. I hate it when I forget stuff on scratch and its purged. "
" I realize that for good reasons we are not able to log-in directly to HPSS. But some ways to display the stored files there --- without relying on the memory of the user to remember all the files the user stored --- can be very helpful. "
Project File System
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Project Overall | 278 | 6.61 | 0.71 | - |
Reliability (Data Integrity) | 265 | 6.77 | 0.59 | +0.12 |
Availability (Uptime) | 265 | 6.75 | 0.55 | - |
File and Directory Operations | 249 | 6.50 | 0.86 | - |
I/O Bandwidth | 262 | 6.40 | 0.97 | +0.22 |
Total for 5 Questions | | 6.61 | 0.73 | |
Software
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Software Overall | 554 | 6.26 | 1.02 | - |
Programming Libraries | 511 | 6.38 | 0.91 | - |
Programming Environment | 509 | 6.47 | 0.82 | - |
Applications Software | 499 | 6.32 | 0.96 | - |
Performance and Debugging Tools | 393 | 6.13 | 1.06 | - |
Data Analysis Software | 258 | 5.98 | 1.23 | - |
Visualization Software | 255 | 5.88 | 1.23 | - |
Total for 6 Questions | | 6.22 | 1.02 | |
Consulting
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Consulting Overall | 479 | 6.64 | 0.80 | - |
Response Time | 474 | 6.59 | 0.88 | - |
Quality of Technical Advice | 470 | 6.55 | 0.86 | - |
Response to Special Requests | 320 | 6.49 | 0.97 | - |
Time to Solution | 465 | 6.43 | 0.97 | - |
On-line Help Desk | 300 | 6.54 | 0.95 | - |
Data Analysis and Visualization Assistance | 104 | 6.00 | 1.25 | - |
Total for 7 Questions | | 6.52 | 0.91 | |
Representative User Comments
You are doing excellent in 1. providing consult services to help our daily works; 2. providing a stable computing environment that we can depend on to complete our work; 3. very friendly and helpful online informations to help us to make our work done. Anyway, many thanks.
Helpful and knowledgeable consultants.
I've been quite happy with the computing and consulting services.
Consulting is excellent. An overall spirit of helpfulness is evident in all interactions from HPSS to disks to computing.
Communication is also very good. Between the "live status", MOTD, and emails with long-range plans, I feel that NERSC lets the users know what to expect and what is happening.
Consulting team is awesome! Thanks for everything!
[NERSC should] 1) Better communication of outages (planned and unplanned): Generally these are good, but they are occasionally incomplete or work gets added in at the last minute without informing the users. Generally the consultants have a great handle of what work is planned, but there has been occasions where work has been added in without informing the consultants or stake-holders. Also during the outages, there should be somewhat regular status updates on the progress of work (e.g., what is complete, what is behind, etc). This will help us plan better for restoration of services. This will also be critical for the planned move for NERSC from OSF to the hill. 2) Stakeholder notifications: Occasionally individual systems are rebooted or serviced. There seems to be no mechanism for informing users of those systems that an event has occurred that may disrupt their services or processes. Notifications should extend to all NERSC hardware (such as tape and DTNs) and not just the big hardware. 3) Business continuity: Provide a means of archiving critical data off-site. There is archival data at NERSC that should be protected should a catastrophic event occur (local and/or regional). While it is desirable to have co-located archiving, there should be a means for externally storing a copy of mission and science critical data outside of the bay area.
Accounts and Allocations
Topic | Number of Responses | Average Rating | Standard Deviation | Statistically Significant Changes 2013 to 2014 |
---|---|---|---|---|
Account Support and Passwords | 521 | 6.71 | 0.74 | +0.09 |
NIM Web Accounting Interface | 504 | 6.53 | 0.86 | - |
Allocations Process | 469 | 6.43 | 0.93 | - |
Comments
The NERSC home page is great. But I cannot figure out how to get from that to my personal home page"NIM home". This page is great for monitoring account usage and other administrative tasks related to my account, which is the primary reason I vist the NERSC web site. As a result, I have my browser configured to open up to my "NIM home" page when I login. The needed links for these tasks - My Stuff, Reports, Actions, Account Usage, and so on are right there. But if I wish to find another page, say the hardware of Cory, I just cannot manage to get to that from my home page. None of the links on the "NIM home" page can get me to the NERSC home page. In particular, the Search link only offers links for administrative functions. So I always have to open a new tab on the browser and open up "NERSC home" to find this other type of info. It is a pain...
Communications
Topic | Number of Responses | Average Usefulness Rating (1-3) | Standard Deviation |
---|---|---|---|
| 502 | 2.73 | 0.47 |
Center Status on Web | 488 | 2.74 | 0.52 |
MOTD | 462 | 2.57 | 0.61 |
Are you adequately informed about NERSC changes?
Yes: 388 (98.2%), No: 7 (1.8%)
Training
Topic | Number of Responses | Usefulness Rating (1-3) | Satisfaction Rating (1-7) | Standard Deviation | Statistically Significant Change 2013-2014 |
---|---|---|---|---|---|
Web Tutorials | 252 | 2.63 | 6.38 | 0.93 | - |
New Users Guide | 315 | 2.79 | 6.58 | 0.77 | - |
Training Presentations on Web | 183 | 2.44 | 6.27 | 0.95 | - |
Classes | 150 | 2.30 | 6.17 | 1.20 | - |
Video Tutorials | 123 | 2.30 | 6.23 | 1.15 | - |
Web
Topic | Number of Responses | Satisfaction Rating (1-7) | Standard Deviation | Statistically Significant Change 2013-2014 |
---|---|---|---|---|
Web Site Overall (www.nersc.gov) | 504 | 6.53 | 0.72 | - |
System Status Info | 444 | 6.60 | 0.74 | - |
Accuracy of Information | 473 | 6.56 | 0.80 | - |
MyNERSC (my.nersc.gov) | 439 | 6.51 | 0.89 | - |
Timeliness of Information | 467 | 6.50 | 0.82 | +0.11 |
Ease of Finding Information | 485 | 6.31 | 0.95 | - |
Searching | 372 | 6.04 | 1.19 | - |
Ease of Use From Mobile Devices | 111 | 5.99 | 1.32 | - |
Mobile Web Site | 103 | 6.09 | 1.33 | - |
Data Analytics and Visualization
Topic | Number of Responses | Importance Rating (1-3) | Satisfaction Rating (1-7) | Standard Deviation | Statistically Significant Change 2013-2014 |
---|---|---|---|---|---|
Data Analysis and Visualization Assistance | 104 | | 6.00 | 1.25 | - |
Ability to perform data analysis | 173 | 2.69 | 6.22 | 1.05 | - |
NERSC Databases | 101 | 2.50 | 6.15 | 1.17 | - |
NERSC Science Gateways | 187 | 2.45 | 6.22 | 1.10 | - |
Where do you perform analysis and visualization of data produced at NERSC?
Response | Number of Responses | Percent |
---|---|---|
All at NERSC | 67 | 13.1% |
Most at NERSC | 95 | 18.6% |
Half at NERSC | 104 | 20.4% |
Most elsewhere | 132 | 25.9% |
All elsewhere | 91 | 17.8% |
I don't need data analysis or visualization | 21 | 4.1% |
Data
How important are each of the following to you?
Topic | Number of Responses | Average Importance Rating (1-3) | Standard Deviation |
---|---|---|---|
Scratch Directory Quotas | 411 | 2.80 | 0.41 |
Project Directory Quotas | 313 | 2.72 | 0.48 |
Archival storage space | 267 | 2.70 | 0.50 |
Scratch File System Purge Policies | 282 | 2.69 | 0.51 |
I/O bandwidth to local disk | 301 | 2.69 | 0.48 |
Ability to checkpoint jobs | 230 | 2.67 | 0.56 |
Data Transfer Nodes | 258 | 2.67 | 0.50 |
Analytics and visualization assistance | 113 | 2.62 | 0.63 |
Long-term data retention | 236 | 2.59 | 0.57 |
Data management tools | 166 | 2.55 | 0.57 |
Access to Databases at NERSC | 162 | 2.51 | 0.63 |
Science gateways (see portal.nersc.gov) | 172 | 2.45 | 0.65 |
Interactive Jobs | 373 | 2.27 | 0.76 |
Serial Jobs | 381 | 2.15 | 0.82 |
Programming Models
We asked users to let us know if they had experience with the programming models shown in the chart below. By far, users are most familiar with OpenMP. This will be important for programming on next-generation systems, where mixed MPI and OpenMP is expected to be the main programming model, with other languages and paradigms vying for acceptance.
Application Readiness
We asked users how ready their codes were for the Intel Xeon Phi processors that would be in Cori.
Item | Number of Responses | I Do Not Know | Not at All Ready | Somewhat Ready | Very Ready |
---|---|---|---|---|---|
Overall Readiness | 396 | 155 (39.1%) | 93 (23.5%) | 104 (26.3%) | 44 (11.1%) |
On package memory (MCDRAM) Readiness | 378 | 219 (57.9%) | 67 (17.7%) | 59 (15.6%) | 33 ( 8.7%) |
Vectorization Readiness | 390 | 143 (36.7%) | 67 (17.2%) | 100 (25.6%) | 80 (20.5%) |
OpenMP (threading) Readiness | 408 | 95 (23.3%) | 85 (20.8%) | 108 (26.5%) | 120 (29.4%) |