
2008/2009 User Survey Results

Response Summary

Many thanks to the 421 users who responded to this year's User Survey. The response rate is comparable to last year's, and both are significantly higher than in previous years:

  • 77.4 percent of users who had used more than 250,000 XT4-based hours when the survey opened responded
  • 36.6 percent of users who had used between 10,000 and 250,000 XT4-based hours responded
  • The overall response rate for the 3,134 authorized users during the survey period was 13.4 percent (see the arithmetic check below)
  • The MPP hours used by the survey respondents represent 70.2 percent of total NERSC MPP usage as of the end of the survey period
  • The PDSF hours used by the PDSF survey respondents represent 36.8 percent of total NERSC PDSF usage as of the end of the survey period
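
As a quick check on the arithmetic, a minimal sketch (Python) reproducing the overall response rate from the counts above:

```python
# Overall response rate: 421 respondents out of 3,134 authorized users.
respondents = 421
authorized_users = 3134

rate = 100 * respondents / authorized_users
print(f"Overall response rate: {rate:.1f}%")  # -> 13.4%
```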

The respondents represent all six DOE Science Offices and a variety of home institutions: see Respondent Demographics.

The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey results are listed below.

You can see the 2008/2009 User Survey text, in which users rated us on a 7-point satisfaction scale. Some areas were also rated on a 3-point importance scale or a 3-point usefulness scale.

Satisfaction Score   Meaning                 Number of Times Selected
7                    Very Satisfied          8,053
6                    Mostly Satisfied        6,219
5                    Somewhat Satisfied      1,488
4                    Neutral                 1,032
3                    Somewhat Dissatisfied     366
2                    Mostly Dissatisfied       100
1                    Very Dissatisfied          88
Importance Score   Meaning
3                  Very Important
2                  Somewhat Important
1                  Not Important

Usefulness Score   Meaning
3                  Very Useful
2                  Somewhat Useful
1                  Not at All Useful

The average satisfaction scores from this year's survey ranged from a high of 6.68 (very satisfied) to a low of 4.71 (somewhat satisfied). Across 94 questions, users chose the Very Satisfied rating 8,053 times and the Very Dissatisfied rating 88 times. The scores for all questions averaged 6.15, and the average score for overall satisfaction with NERSC was 6.21. See All Satisfaction Ratings.
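
The report does not state how the averages and standard deviations were computed, but a population-weighted calculation over each question's rating histogram reproduces the tabulated values. A minimal sketch (Python), using the "HPSS: Reliability (data integrity)" counts from the highest-satisfaction table below:

```python
from math import sqrt

# {rating: number of respondents}; ratings 1 and 2 received no responses.
counts = {3: 1, 4: 3, 5: 1, 6: 36, 7: 116}

n = sum(counts.values())                                   # 157 responses
mean = sum(r * c for r, c in counts.items()) / n
var = sum(c * (r - mean) ** 2 for r, c in counts.items()) / n
print(f"n={n}  avg={mean:.2f}  std={sqrt(var):.2f}")       # n=157  avg=6.68  std=0.65
```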

For questions that spanned previous surveys, the change in rating was tested for significance (using the t test at the 90% confidence level). Significant increases in satisfaction are shown in blue; significant decreases in satisfaction are shown in red.

Significance of Change
significant increase (change from 2007)
significant decrease (change from 2007)
not significant
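
A minimal sketch of the significance test described above, assuming a two-sided, two-sample t test on summary statistics (the report does not say whether the pooled or Welch variant was used; Welch's is shown here). The 2008/2009 figures below are from the "OVERALL: Consulting and Support Services" row; the 2007/2008 figures are placeholders, not values from that survey:

```python
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=6.56, std1=0.76, nobs1=386,  # 2008/2009: OVERALL Consulting (from the tables below)
    mean2=6.63, std2=0.70, nobs2=400,  # 2007/2008: hypothetical placeholder values
    equal_var=False,                   # Welch's t test
)
print(f"t = {t:.2f}, p = {p:.3f}, significant at 90%: {p < 0.10}")
```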

Highlights of the 2009 user survey responses include:

  • Areas with Highest User Satisfaction
  • Areas with Lowest User Satisfaction
  • Largest Increases in Satisfaction
  • Largest Decreases in Satisfaction
  • Satisfaction Patterns for Different MPP Respondents
  • Changes in Satisfaction for Active MPP Respondents
  • Changes in Satisfaction for PDSF Respondents
  • Survey Results Lead to Changes at NERSC
  • Users Provide Overall Comments about NERSC

The complete survey results are listed below.

Areas with Highest User Satisfaction

Areas with the highest user satisfaction are HPSS reliability and uptime, account and consulting support, grid job monitoring, NERSC Global Filesystem uptime and reliability, and network performance within the NERSC center.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
HPSS: Reliability (data integrity)     1 3 1 36 116 157 6.68 0.65 0.01
SERVICES: Account support 1     3 8 87 248 347 6.66 0.64 -0.05
HPSS: Uptime (Availability)       2 4 44 107 157 6.63 0.60 0.09
CONSULT: Timely initial response to consulting questions     2 4 10 89 221 326 6.60 0.67 0.05
GRID: Job Monitoring       2 2 17 41 62 6.56 0.72 0.48
OVERALL: Consulting and Support Services 1   3 7 11 108 256 386 6.56 0.76 -0.07
NGF: Uptime       4   19 46 69 6.55 0.78 -0.12
NGF: Reliability     1 2 1 19 46 69 6.55 0.80 -0.13
CONSULT: Overall   1 2 7 13 98 212 333 6.53 0.77 -0.04
NETWORK: Network performance within NERSC (e.g. Franklin to HPSS) 1   1 5 5 56 117 185 6.51 0.83 -0.08

 

Areas with Lowest User Satisfaction

Areas with the lowest user satisfaction are Bassi queue wait times and Franklin uptime. This year only two questions received average scores lower than 5.5, and none were lower than 4.5. Last year, one average score was below 4.5 (Bassi batch wait time) and nine were between 4.5 and 5.5.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
Franklin: Uptime (Availability) 11 15 46 25 71 89 45 302 4.91 1.62 -0.13
Bassi: Batch wait time 7 9 21 11 27 38 16 129 4.71 1.72 0.25

 

Largest Increases in Satisfaction

The largest increases in satisfaction over last year's survey are for PDSF interactive services, grid job monitoring, Franklin I/O performance, the PDSF and Jacquard batch queue structures, and network connectivity.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
PDSF: Ability to run interactively 1   1 2 3 23 23 53 6.15 1.13 0.60
GRID: Job Monitoring       2 2 17 41 62 6.56 0.72 0.48
Franklin: Disk configuration and I/O performance 7 5 13 35 29 112 81 282 5.60 1.43 0.46
PDSF: Batch queue structure     1 1 7 20 23 52 6.21 0.89 0.33
Jacquard: Batch queue structure     1 3 11 44 36 95 6.17 0.83 0.25
OVERALL: Network connectivity 1 1 10 13 30 135 205 395 6.28 0.99 0.15

 

Largest Decreases in Satisfaction

The largest decreases in satisfaction from last year's survey are for Franklin batch wait time, 24x7 computer and network operations support, and the NERSC web site.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
Franklin: Batch wait time 4 5 20 24 57 119 70 299 5.55 1.32 -0.30
SERVICES: Computer and network operations support (24x7)   3 10 20 30 111 172 346 6.17 1.09 -0.17
WEB SERVICES: www.nersc.gov overall   1 3 9 20 167 148 348 6.28 0.80 -0.10

 

Satisfaction Patterns for Different MPP Respondents

The MPP respondents were classified as "large" (usage over 250,000 hours), "medium" (usage between 10,000 and 250,000 hours), and "small" (usage under 10,000 hours). Satisfaction differences between these three groups are shown in the table below. Comparing each group's scores with the scores of all the 2007/2008 respondents, this year's smaller users were the most satisfied and the larger users the least satisfied.
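
Before the table, a minimal sketch of the classification (Python); how the exact boundary values were assigned is an assumption, since the text gives only the ranges:

```python
def mpp_class(hours: float) -> str:
    """Classify an MPP respondent by XT4-based hours used when the survey opened."""
    if hours > 250_000:
        return "large"
    if hours >= 10_000:   # boundary handling assumed
        return "medium"
    return "small"

print(mpp_class(300_000), mpp_class(50_000), mpp_class(2_000))  # -> large medium small
```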

Item Large MPP Users: Medium MPP Users: Small MPP Users:
Num Resp Avg Score Change 2007 Num Resp Avg Score Change 2007 Num Resp Avg Score Change 2007
GRID: Job Monitoring 13 6.54 -0.04 26 6.54 0.46 11 6.64 0.56
SERVICES: Account support 67 6.54 -0.17 130 6.63 -0.07 77 6.79 0.09
OVERALL: Security 72 6.12 -0.23 145 6.44 0.07 82 6.55 0.19
WEB SERVICES: NIM web interface 71 6.35 0.07 135 6.44 0.16 76 6.49 0.21
OVERALL: Network connectivity 74 6.08 -0.05 147 6.35 0.22 84 6.40 0.28
SERVICES: Computer and network operations support (24x7) 67 5.96 -0.39 128 6.14 -0.21 68 6.37 0.02
Jacquard: Batch queue structure 14 5.50 -0.42 36 6.17 0.25 31 6.39 0.47
NETWORK: Remote network performance to/from NERSC 67 5.94 -0.12 90 6.19 0.13 51 6.37 0.32
Jacquard: Disk configuration and I/O performance 13 5.31 -0.67 33 6.30 0.32 31 5.97 -0.01
HPSS: User interface 44 5.82 -0.14 53 6.02 0.06 29 6.38 0.42
OVERALL: Available Computing Hardware 73 5.62 -0.51 151 5.98 -0.14 86 6.20 0.07
OVERALL: Hardware management and configuration 72 5.64 -0.34 142 5.75 -0.23 79 5.89 -0.09
Franklin: Ability to run interactively 56 5.75 0.17 108 5.67 0.09 46 5.93 0.36
Bassi: Batch queue structure 18 5.17 -0.40 58 5.53 -0.03 33 5.94 0.37
OVERALL: Data analysis and visualization facilities 42 5.40 -0.08 75 5.51 0.03 43 6.00 0.50
Franklin: Disk configuration and I/O performance 70 5.41 0.27 133 5.60 0.46 56 5.71 0.57
Jacquard: Batch wait time 15 4.60 -0.87 38 5.37 -0.10 33 5.91 0.44
Franklin: Batch wait time 73 5.45 -0.40 142 5.49 -0.36 61 5.70 -0.14
Bassi: Batch wait time 18 3.61 -0.85 64 4.48 0.03 35 5.23 0.77

 

Changes in Satisfaction for Active MPP Respondents

The table below includes only those users who have run batch jobs on the MPP systems. It does not include interactive-only users or project managers who do not compute. This group of users showed an increase in satisfaction for the NERSC Information Management (NIM) web interface, which did not show up in the pool of all respondents. This group also showed a decrease in satisfaction for available computing hardware and hardware management and for two of the Jacquard questions.

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
GRID: Job Monitoring       2 2 12 34 62 6.56 0.76 0.48
Franklin: Disk configuration and I/O performance 7 5 13 31 26 105 72 259 5.58 1.46 0.43
OVERALL: Network connectivity   1 6 11 23 105 159 305 6.30 0.94 0.17
WEB SERVICES: NIM web interface     3 4 15 106 154 282 6.43 0.75 0.15
OVERALL: Available Computing Hardware 2 4 7 17 42 129 109 310 5.95 1.13 -0.17
SERVICES: Computer and network operations support (24x7)   3 10 14 21 84 131 263 6.15 1.14 -0.20
Jacquard: Uptime (availability) 1   1 2 4 36 45 89 6.28 0.86 -0.21
OVERALL: Hardware management and configuration 3 1 11 18 57 129 74 294 5.76 1.13 -0.22
Jacquard: Overall 1 1 2 6 5 46 29 90 5.97 1.15 -0.29
Franklin: Batch wait time 4 5 18 21 55 112 61 276 5.53 1.32 -0.32

 

Changes in Satisfaction for PDSF Respondents

The PDSF users are clearly less satisfied with NERSC's web services than the MPP users are.

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
PDSF: Ability to run interactively 1   1 1 3 19 15 39 6.08 1.20 0.53
PDSF: Batch queue structure     1   5 14 17 37 6.24 0.89 0.36
WEB SERVICES: NIM web interface     2 4 2 17 14 39 5.95 1.15 -0.33
WEB SERVICES: www.nersc.gov overall     1 1 7 15 10 34 5.94 0.95 -0.44
WEB SERVICES: Ease of finding information     3 2 7 14 6 32 5.56 1.16 -0.49
SERVICES: Allocations process     4 1 4 11 8 28 5.64 1.34 -0.53
TRAINING: Web tutorials     1 4 3 7 4 19 5.47 1.22 -0.67

 

Survey Results Lead to Changes at NERSC

Every year we institute changes based on the previous year's survey. In 2008 and early 2009 NERSC took a number of actions in response to suggestions from the 2007/2008 user survey.

  1. 2007/2008 user survey: On the 2007/2008 survey Franklin's Disk configuration and I/O performance received the third lowest average score (5.15).

    NERSC response: In the past year NERSC and Cray staff worked extensively on benchmarking and profiling collective I/O performance on Franklin, conducting a detailed exploration into the source of the low performance (less than 1 GB/s write bandwidth) reported by several individual researchers.

    A number of issues were explored at various levels of the system/software stack, from the high-level NetCDF calls to MPI-IO optimizations and hints, block and buffer size allocations on individual nodes, Lustre striping parameters, and the underlying I/O hardware.
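
    For illustration, one of the knobs named above, MPI-IO striping hints on a Lustre file system, can be expressed with mpi4py as below. This is a sketch only; the hint values are arbitrary examples, not the settings NERSC and Cray arrived at:

```python
from mpi4py import MPI

# Pass Lustre striping hints to the MPI-IO layer when creating a shared file.
info = MPI.Info.Create()
info.Set("striping_factor", "48")     # stripe across 48 OSTs (example value)
info.Set("striping_unit", "1048576")  # 1 MiB stripe size, in bytes (example value)

fh = MPI.File.Open(MPI.COMM_WORLD, "checkpoint.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY, info)
fh.Close()
```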

    The benchmark results were instrumental in making the case for increased I/O hardware and for software and configuration changes. Once implemented, the cumulative effect of the hardware, software, and middleware improvements is that a class of applications is now able to achieve I/O bandwidths in the 6 GB/s range.

    On the 2009 survey Franklin's Disk configuration and I/O performance received an average score of 5.60, a statistically significant increase of 0.46 points over the previous year.

  2. 2007/2008 user survey: On the 2007/2008 survey Franklin uptime received the second lowest average score (5.04).

    NERSC response: In the past year NERSC and Cray assembled a team of about 20 people to thoroughly analyze system component layouts, cross interactions and settings; to review and analyze past causes of failures; and to propose and test software and hardware changes. Intense stabilization efforts took place between March and May, with improvements implemented throughout April and May.

    As a result of these efforts, Franklin's overall availability went from an average of 87.6 percent in the six months prior to April to an average of 94.97 percent in the April through July 2009 period. Over the same period, Mean Time Between Interrupts improved from an average of 1 day 22 hours 39 minutes to 3 days 20 hours 36 minutes.

    The Franklin uptime score in the 2009 survey (which opened in May) did not reflect these improvements. NERSC anticipates an improved score on next year's survey.

  3. 2007/2008 user survey: On the 2007/2008 survey the two lowest PDSF scores were "Ability to run interactively" and "Disk configuration and I/O performance".

    NERSC response: In 2008 NERSC upgraded the interactive PDSF nodes to more powerful, larger-memory nodes. In early 2009 we reorganized the user file systems on PDSF to allow for failover, reducing the impact of hardware failures on the system. We also upgraded the network connectivity to the file system server nodes to allow for greater bandwidth. In addition, NERSC added a queue for short debug jobs.

    On the 2009 survey the PDSF "Ability to run interactively" score increased significantly by 0.60 points and moved into the "mostly satisfied - high" range. The PDSF "Disk configuration and I/O performance" score increased by 0.41 points, but this increase was not statistically significant (at the 90 percent confidence level).

 

Users Provide Overall Comments about NERSC

130 users answered the question "What does NERSC do best? How does NERSC distinguish itself from other computing centers you have used?"

  • 65 respondents mentioned good consulting, staff support and communications;
  • 50 users mentioned computational resources or HPC resources for science;
  • 20 highlighted good software support;
  • 15 pointed to good queue management or job turnaround;
  • 15 were generally happy;
  • 14 mentioned good documentation and web services;
  • 9 were pleased with data services (HPSS, large disk space, data management);
  • 8 complimented good networking, access and security.

Some representative comments are:

Organization is top notch. Queuing is excellent.
Nersc is good at communicating with its users, provides large amounts of resources, and is generally one of the most professional centers I've used.
EVERYTHING !!! From the computing centers that I have used NERSC is clearly a leader.
Very easy to use. The excellent website is very helpful as a new user. Ability to run different jobsizes, not only 2048*x as on the BG/P. In an ideal world I'd only run at NERSC!
NERSC tends to be more attuned to the scientific community than other computer centers. Although it has taken years of complaining to achieve, NERSC is better at providing 'permanent' disk storage on its systems than other places.
NERSC's documentation is very good and the consultants are very helpful. A nice thing about NERSC is that they provide a number of machines of different scale with a relatively uniform environment which can be accessed from a global allocation. This gives NERSC a large degree of flexibility compared to other computational facilities.
As a user of PDSF, I have at NERSC all the resources to analyze the STAR data in a speedy and reliable way, knowing that NERSC keep the latest version of data analysis software like ROOT. Thank you for the support.
NERSC has very reliable hardware, excellent administration, and a high throughput. Consultants there have helped me very much with projects and problems and responded with thoughtful messages for me and my problem, as opposed to terse or cryptic pointers to information elsewhere. The HPSS staff helped me set up one of the earliest data sharing archives in 1998, now part of a larger national effort toward Science Gateways. (see: http://www.lbl.gov/cs/Archive/news052609b.html) This archive has a venerable place in the lattice community and is known throughout the community as "The NERSC Archive". In fact until recently, the lingua franca for exchanging lattice QCD data was "NERSC format", a protocol developed for the archive at NERSC.
The quality of the technical staff is outstanding. They are competent, professional, and they can answer questions ranging from the trivial to the complex.
Getting users started! it can take months on other systems.
Very good documentation of systems and available software. Important information readily available on single web page that also contains links to the original documentation.

113 users responded to "What can NERSC do to make you more productive?"

The top two areas of concern were Franklin stability and performance, and the need for more computing resources. Users made suggestions in the areas of data storage, job scheduling, software and allocations support, services, PDSF support and networking.

Some of the comments from this section are:

A few months ago I would have said "Fix Franklin please!!" but this has been done since then and Franklin is a LOT more stable. Thanks...
For any users needing multiple processors, Franklin is the only system. The instability, both planned and unplanned downtimes, of Franklin is *incredibly* frustrating. Add in the 24 hour run time limit, it is amazing that anyone can get any work done.
have more machines of different specialties to reduce the queue (waiting) time
Highly reliable, very stable, high performance architectures like Bassi and Jacquard.
When purchasing new systems, there are obviously many factors to consider. I believe that more weight should be given to continuity of architecture and OS. For example, the transition from Seaborg to Bassi was almost seemless for me, whereas the transition from Bassi to Franklin is causing a large drop in productivity, ie porting codes and learning how to work with the new system. I estimate my productivity has dropped by 50% for 6 months. To be clear, this is NOT a problem with Franklin, but rather the cost of porting, and learning how to work on a different architecture.
Put more memory per core on large-scale machines (>8 GB/core). Increase allowed wall clock times to 48 or 96 hours.
Enhance the computing power to meet the constrained the needs of high performance computation.
Save scratch files still longer
Get the compute nodes on Franklin to see NGF or get a new box.
Make more permanent disk space available on Franklin. It needs something line the project disk space to be visible to the compute nodes. The policies need to be changed to be more friendly to the user whose jobs use 10's or 100's pf processors, and stop making those of us who can't allocate 1000's of processors to a single job feel like second-class users. It should be at least as easy to run 100 50 CPU jobs as one 5000 CPU job. The current queue structure makes it difficult if not impossible for some of us to use our allocations.
Enable longer interactive jobs on Franklin login nodes. Some compile jobs require more than 60 minutes, making building a large code base -- or diagnosing problems with the build process -- difficult. Also, it would be useful to be able to run longer visualization jobs without copying large data sets from one systems /scratch to another. This would be for running visualization code that can't be run on compute nodes; for instance, some python packages require shared libraries.
it would be useful if it was easier to see why a job crashed. I find the output tends to be a little terse.
NERSC does an excellent job in adding new software as it becomes available. It is important to continue doing so.
Allocate more time!
We can always use man power to improve the performance and scaling of our codes.
Keep doing what you are doing. I'm particularly interested in the development of the Science Gateways.

23 users responded to "If there is anything important to you that is not covered in this survey, please tell us about it."

Respondent Demographics

Number of respondents to the survey: 421

  • Respondents by DOE Office and User Role
  • Respondents by Organization
  • How long have you used NERSC?
  • What desktop systems do you use to connect to NERSC?
  • Web Browser Used to Take Survey
  • Operating System Used to Take Survey

 

Respondents by DOE Office and User Role:

Office Respondents Percent
ASCR 39 9.3%
BER 57 13.5%
BES 151 35.9%
FES 59 14.0%
HEP 47 11.1%
NP 65 15.4%
User Role Number Percent
Principal Investigators 54 12.8%
PI Proxies 62 14.7%
Project Managers 4 1.0%
Users 301 71.5%

 

Respondents by Organization:

Organization Type Number Percent
Universities 266 63.2%
DOE Labs 116 27.6%
Industry 20 4.8%
Other Govt Labs 16 3.8%
Private labs 3 0.7%
Organization Number Percent
Berkeley Lab 52 12.4%
UC Berkeley 36 8.6%
U. Wisconsin 14 3.3%
Oak Ridge 13 3.1%
PNNL 11 2.6%
U. Washington 10 2.4%
NREL 7 1.7%
SLAC 7 1.7%
Tech-X Corp 7 1.7%
Texas A&M 7 1.7%
U. Maryland 7 1.7%
UC Santa Barbara 7 1.7%
Argonne 6 1.4%
Auburn University 6 1.4%
Brookhaven 6 1.4%
Shanghai Institute of Applied Physics 6 1.4%
Stanford University 6 1.4%
University of Colorado 6 1.4%
UCLA 6 1.4%
Vanderbilt University 6 1.4%
Organization Number Percent
George Washington University 5 1.2%
NCAR 5 1.2%
PPPL 5 1.2%
University of Illinois 5 1.2%
University of Oklahoma 5 1.2%
UC Davis 5 1.2%
Colorado State University 4 1.0%
North Carolina State University 4 1.0%
Northwestern University 4 1.0%
University of Texas 4 1.0%
Creighton University 3 0.7%
General Atomics 3 0.7%
Harvard University 3 0.7%
Iowa State University 3 0.7%
Michigan State University 3 0.7%
Ohio State University 3 0.7%
Sandia Lab - California 3 0.7%
University of Central Florida 3 0.7%
University of Missouri 3 0.7%
UC San Diego 3 0.7%
West Virginia University 3 0.7%
William & Mary 3 0.7%
Other Universities 89 21.1%
Other Industry 10 2.4%
Other DOE Labs 6 1.4%
Private labs 3 0.7%
Other Gov. Labs 1 0.2%

 

How long have you used NERSC?

Time Number Percent
less than 6 months 64 15.4%
6 months - 3 years 192 46.3%
more than 3 years 159 38.3%

 

What desktop systems do you use to connect to NERSC?

System Responses
Unix Total 325
PC Total 230
Mac Total 177
Linux 295
OS X 176
Windows XP 160
Windows Vista 63
Sun Solaris 13
IBM AIX 8
Other Unix 8
Windows 2000 4
Other PC 3
HP HPUX 2
SGI IRIX 2
MacOS 1

 

Web Browser Used to Take Survey:

Browser Number Percent
Firefox 3 1351 55.01
Safari 414 16.86
MSIE 7 214 8.71
Firefox 2 180 7.33
MSIE 6 114 4.64
Mozilla 95 3.87
Netscape 4 46 1.87
Opera 24 0.98
Firefox 1 18 0.73

 

Operating System Used to Take Survey:

OS Number Percent
Mac OS X 799 32.53
Linux 716 29.15
Windows XP 676 27.52
Windows Vista 211 8.59
SunOS 24 0.98
Windows Server 2003 18 0.73
Windows 2000 12 0.49

Overall Satisfaction and Importance

  • Legend
  • Overall Satisfaction with NERSC
  • How important to you is ...?
  • General Comments about NERSC

 

Legend:

Satisfaction Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied - High 6.00 - 6.49
Mostly Satisfied - Low 5.50 - 5.99
Importance Average Score
Very Important 2.50 - 3.00
Somewhat Important 1.50 - 2.49
Significance of Change
significant increase
not significant

 

Overall Satisfaction with NERSC

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Num Resp Average Score Std. Dev. Change from 2007 Change from 2006 Change from 2003
1 2 3 4 5 6 7
OVERALL: Consulting and Support Services 1   3 7 11 108 256 386 6.56 0.76 -0.07 0.11 0.19
OVERALL: Security 5   2 25 13 94 246 385 6.39 1.07 0.04 0.05  
OVERALL: Network connectivity 1 1 10 13 30 135 205 395 6.28 0.99 0.15 0.01 0.05
OVERALL: Satisfaction with NERSC 2 2 7 3 38 190 171 413 6.21 0.92 -0.08 -0.10 -0.14
OVERALL: Available Software   1 2 29 33 142 181 388 6.21 0.95 -0.01 0.23 0.16
SERVICES: Computer and network operations support (24x7)   3 10 20 30 111 172 346 6.17 1.09 -0.17 0.14  
OVERALL: Software management and configuration   1 5 24 30 142 156 358 6.16 0.97 -0.02 0.11 0.12
OVERALL: Mass storage facilities     8 37 34 103 153 335 6.06 1.10 0.00 -0.09 -0.06
OVERALL: Available Computing Hardware 2 4 9 21 52 159 147 394 6.00 1.10 -0.12 -0.19 -0.13
OVERALL: Hardware management and configuration 3 2 11 23 63 160 108 370 5.85 1.12 -0.13 -0.22 -0.22
OVERALL: Data analysis and visualization facilities   2 2 61 20 71 68 224 5.61 1.25 0.13 0.24  

 

How important to you is ...?

3=Very, 2=Somewhat, 1=Not important

Item Num who rated this item as: Total Responses Average Score Std. Dev.
1 2 3
OVERALL: Satisfaction with NERSC 1 53 339 393 2.86 0.35
OVERALL: Available Computing Hardware 2 68 307 377 2.81 0.41
OVERALL: Network connectivity 1 78 289 368 2.78 0.42
OVERALL: Consulting and Support Services 4 87 283 374 2.75 0.46
OVERALL: Hardware management and configuration 8 125 220 353 2.60 0.53
SERVICES: Computer and network operations support (24x7) 19 110 223 352 2.58 0.59
OVERALL: Available Software 22 123 221 366 2.54 0.61
OVERALL: Software management and configuration 20 135 183 338 2.48 0.61
OVERALL: Security 34 144 180 358 2.41 0.66
OVERALL: Mass storage facilities 44 130 172 346 2.37 0.70
OVERALL: Data analysis and visualization facilities 114 80 91 285 1.92 0.85

All Satisfaction and Importance Ratings

  • Legend
  • All Satisfaction Topics - by Score
  • All Satisfaction Topics - by Number of Responses
  • All Importance Topics

 

Legend

Satisfaction Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied - High 6.00 - 6.49
Mostly Satisfied - Low 5.50 - 5.99
Somewhat Satisfied 4.50 - 5.49
Importance Average Score
Very Important 2.50 - 3.00
Somewhat Important 1.50 - 2.49
Significance of Change
significant increase
significant decrease
not significant

 

All Satisfaction Topics - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
HPSS: Reliability (data integrity)     1 3 1 36 116 157 6.68 0.65 0.01
SERVICES: Account support 1     3 8 87 248 347 6.66 0.64 -0.05
HPSS: Uptime (Availability)       2 4 44 107 157 6.63 0.60 0.09
CONSULT: Timely initial response to consulting questions     2 4 10 89 221 326 6.60 0.67 0.05
GRID: Job Monitoring       2 2 17 41 62 6.56 0.72 0.48
OVERALL: Consulting and Support Services 1   3 7 11 108 256 386 6.56 0.76 -0.07
NGF: Uptime       4   19 46 69 6.55 0.78 -0.12
NGF: Reliability     1 2 1 19 46 69 6.55 0.80 -0.13
CONSULT: Overall   1 2 7 13 98 212 333 6.53 0.77 -0.04
NETWORK: Network performance within NERSC (e.g. Franklin to HPSS) 1   1 5 5 56 117 185 6.51 0.83 -0.08
GRID: Job Submission       3 2 18 38 61 6.49 0.79 0.19
CONSULT: Quality of technical advice     2 9 14 104 195 324 6.48 0.76 -0.00
Bassi: Uptime (Availability)   1   3 8 42 76 130 6.45 0.82 -0.09
DaVinci: Uptime (Availability)       1 1 18 21 41 6.44 0.67  
CONSULT: Followup to initial consulting questions   3 4 8 17 87 194 313 6.44 0.93 -0.04
HPSS: Overall satisfaction     3 3 5 63 93 167 6.44 0.80 -0.02
GRID: Access and Authentication   1 1 3 2 16 44 67 6.43 1.03 0.10
PDSF SW: Software environment       2 3 26 35 66 6.42 0.72 0.18
NERSC SW: Fortran compilers 1 1 3 10 10 88 155 268 6.40 0.93  
OVERALL: Security 5   2 25 13 94 246 385 6.39 1.07 0.04
CONSULT: Amount of time to resolve your issue   1 5 10 19 104 183 322 6.39 0.89 0.03
WEB SERVICES: NIM web interface     5 8 18 137 184 352 6.38 0.80 0.10
PDSF SW: C/C++ compilers 1     1 5 17 34 58 6.38 1.02 0.11
NERSC SW: Software environment     2 8 14 145 150 319 6.36 0.74  
PDSF: Uptime (availability)     1   5 22 28 56 6.36 0.80 0.23
HPSS: Data access time     1 6 11 56 81 155 6.35 0.83 0.05
CONSULT: On-line help desk     2 11 10 50 99 172 6.35 0.93 0.06
Jacquard: Uptime (Availability) 1   1 2 5 40 53 102 6.35 0.93 -0.19
NERSC SW: C/C++ compilers     1 18 10 77 130 236 6.34 0.91  
DaVinci: Disk configuration and I/O performance       3 2 14 22 41 6.34 0.88  
PDSF SW: Programming libraries       4 2 23 30 59 6.34 0.84 0.13
NERSC SW: Programming libraries     3 11 19 98 140 271 6.33 0.86  
WEB SERVICES: Accuracy of information 1 2 1 5 26 132 157 324 6.32 0.85 -0.08
CONSULT: Response to special requests (e.g. disk quota increases, etc.)     4 20 12 48 134 218 6.32 1.05 0.03
DaVinci: Ability to run interactively 1     3   14 24 42 6.31 1.18  
NGF: Overall 1     4 3 28 39 75 6.31 1.01 -0.13
GRID: File Transfer     4 1 2 22 34 63 6.29 1.07 0.17
PDSF: Overall satisfaction     2   2 28 24 56 6.29 0.85 0.19
WEB SERVICES: www.nersc.gov overall   1 3 9 20 167 148 348 6.28 0.80 -0.10
OVERALL: Network connectivity 1 1 10 13 30 135 205 395 6.28 0.99 0.15
HPSS: Data transfer rates   2 5 4 12 52 83 158 6.25 1.06 -0.14
NGF: I/O Bandwidth     1 2 8 26 31 68 6.24 0.88 0.17
PDSF SW: Applications software       5 3 18 25 51 6.24 0.95 0.29
OVERALL: Satisfaction with NERSC 2 2 7 3 38 190 171 413 6.21 0.92 -0.08
DaVinci: Overall 1   1 2   21 22 47 6.21 1.16  
PDSF: Batch queue structure     1 1 7 20 23 52 6.21 0.89 0.33
OVERALL: Available Software   1 2 29 33 142 181 388 6.21 0.95 -0.01
WEB SERVICES: Timeliness of information 1   6 9 29 143 137 325 6.21 0.91 -0.07
NGF: File and Directory Operations     3 2 6 24 33 68 6.21 1.03 -0.03
PDSF SW: STAR     1 5 3 5 23 37 6.19 1.22  
SERVICES: Computer and network operations support (24x7)   3 10 20 30 111 172 346 6.17 1.09 -0.17
Jacquard: Batch queue structure     1 3 11 44 36 95 6.17 0.83 0.25
OVERALL: Software management and configuration   1 5 24 30 142 156 358 6.16 0.97 -0.02
Jacquard: Ability to run interactively       9 9 23 40 81 6.16 1.02 0.23
PDSF: Ability to run interactively 1   1 2 3 23 23 53 6.15 1.13 0.60
NETWORK: Remote network performance to/from NERSC (e.g. Franklin to your home institution)   2 6 11 21 90 104 234 6.15 1.04 0.09
TRAINING: New User's Guide 1   5 11 23 92 98 230 6.14 1.00 -0.07
PDSF SW: General tools and utilities     1 5 4 22 25 57 6.14 1.01 0.14
NERSC SW: General tools and utilities       21 22 96 97 236 6.14 0.92  
PDSF SW: CHOS     1 4 7 14 24 50 6.12 1.06  
NERSC SW: Applications software 2   3 19 20 87 100 231 6.10 1.08  
Bassi: Disk configuration and I/O performance   1 1 13 6 40 50 111 6.10 1.10 0.04
OVERALL: Mass storage facilities     8 37 34 103 153 335 6.06 1.10 0.00
Jacquard: Overall 1 1 2 6 6 50 38 104 6.05 1.11 -0.21
TRAINING: Web tutorials 1   7 10 23 83 77 201 6.04 1.07 -0.10
Jacquard: Disk configuration and I/O performance 1 1 1 8 7 33 38 89 6.03 1.20 0.05
SERVICES: Allocations process 2 1 10 20 24 109 117 283 6.03 1.16 -0.14
HPSS: User interface (hsi, pftp, ftp) 3 1 6 8 17 50 73 158 6.02 1.30 0.06
OVERALL: Available Computing Hardware 2 4 9 21 52 159 147 394 6.00 1.10 -0.12
CONSULT: Software bug resolution 1 1 2 30 26 55 97 212 5.98 1.20 -0.05
PDSF SW: Performance and debugging tools 1     6 5 18 20 50 5.96 1.23 -0.04
WEB SERVICES: Ease of finding information 1 4 12 10 38 177 98 340 5.95 1.05 -0.10
PDSF: Disk configuration and I/O performance 1   2 5 3 20 21 52 5.94 1.30 0.41
NERSC SW: ACTS Collection       13 5 17 27 62 5.94 1.17  
SERVICES: Data analysis and visualization services 1   1 10 10 36 32 90 5.93 1.14  
Bassi: Overall 1 3 6 4 14 60 47 135 5.93 1.23 -0.04
NERSC SW: Visualization software 1   3 20 8 44 51 127 5.91 1.23  
Franklin: Batch queue structure 2 2 9 25 33 120 100 291 5.90 1.16 -0.12
NERSC SW: Performance and debugging tools   1 2 27 34 79 70 213 5.87 1.07  
OVERALL: Hardware management and configuration 3 2 11 23 63 160 108 370 5.85 1.12 -0.13
NERSC SW: Data analysis software 2   1 19 9 39 42 112 5.84 1.28  
SERVICES: Data analysis and visualization consulting       14 6 17 24 61 5.84 1.19  
Franklin: Ability to run interactively 2 1 10 32 31 72 83 231 5.76 1.29 0.18
Franklin: Overall 4 8 12 10 54 133 84 305 5.74 1.27 0.04
Bassi: Ability to run interactively 2 2 2 13 13 33 36 101 5.73 1.39 0.11
WEB SERVICES: Searching 1 4 6 21 48 100 54 234 5.68 1.14 -0.03
Bassi: Batch queue structure 2   5 12 25 48 28 120 5.62 1.22 0.05
OVERALL: Data analysis and visualization facilities   2 2 61 20 71 68 224 5.61 1.25 0.13
Franklin: Disk configuration and I/O performance 7 5 13 35 29 112 81 282 5.60 1.43 0.46
TRAINING: NERSC classes       20 11 22 21 74 5.59 1.17 0.20
Jacquard: Batch wait time 2 4 6 5 15 42 26 100 5.57 1.46 0.10
Franklin: Batch wait time 4 5 20 24 57 119 70 299 5.55 1.32 -0.30
Franklin: Uptime (Availability) 11 15 46 25 71 89 45 302 4.91 1.62 -0.13
Bassi: Batch wait time 7 9 21 11 27 38 16 129 4.71 1.72 0.25

 

All Satisfaction Topics - by Number of Responses

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
OVERALL: Satisfaction with NERSC 2 2 7 3 38 190 171 413 6.21 0.92 -0.08
OVERALL: Network connectivity 1 1 10 13 30 135 205 395 6.28 0.99 0.15
OVERALL: Available Computing Hardware 2 4 9 21 52 159 147 394 6.00 1.10 -0.12
OVERALL: Available Software   1 2 29 33 142 181 388 6.21 0.95 -0.01
OVERALL: Consulting and Support Services 1   3 7 11 108 256 386 6.56 0.76 -0.07
OVERALL: Security 5   2 25 13 94 246 385 6.39 1.07 0.04
OVERALL: Hardware management and configuration 3 2 11 23 63 160 108 370 5.85 1.12 -0.13
OVERALL: Software management and configuration   1 5 24 30 142 156 358 6.16 0.97 -0.02
WEB SERVICES: NIM web interface     5 8 18 137 184 352 6.38 0.80 0.10
WEB SERVICES: www.nersc.gov overall   1 3 9 20 167 148 348 6.28 0.80 -0.10
SERVICES: Account support 1     3 8 87 248 347 6.66 0.64 -0.05
SERVICES: Computer and network operations support (24x7)   3 10 20 30 111 172 346 6.17 1.09 -0.17
WEB SERVICES: Ease of finding information 1 4 12 10 38 177 98 340 5.95 1.05 -0.10
OVERALL: Mass storage facilities     8 37 34 103 153 335 6.06 1.10 0.00
CONSULT: Overall   1 2 7 13 98 212 333 6.53 0.77 -0.04
CONSULT: Timely initial response to consulting questions     2 4 10 89 221 326 6.60 0.67 0.05
WEB SERVICES: Timeliness of information 1   6 9 29 143 137 325 6.21 0.91 -0.07
CONSULT: Quality of technical advice     2 9 14 104 195 324 6.48 0.76 -0.00
WEB SERVICES: Accuracy of information 1 2 1 5 26 132 157 324 6.32 0.85 -0.08
CONSULT: Amount of time to resolve your issue   1 5 10 19 104 183 322 6.39 0.89 0.03
NERSC SW: Software environment     2 8 14 145 150 319 6.36 0.74  
CONSULT: Followup to initial consulting questions   3 4 8 17 87 194 313 6.44 0.93 -0.04
Franklin: Overall 4 8 12 10 54 133 84 305 5.74 1.27 0.04
Franklin: Uptime (Availability) 11 15 46 25 71 89 45 302 4.91 1.62 -0.13
Franklin: Batch wait time 4 5 20 24 57 119 70 299 5.55 1.32 -0.30
Franklin: Batch queue structure 2 2 9 25 33 120 100 291 5.90 1.16 -0.12
SERVICES: Allocations process 2 1 10 20 24 109 117 283 6.03 1.16 -0.14
Franklin: Disk configuration and I/O performance 7 5 13 35 29 112 81 282 5.60 1.43 0.46
NERSC SW: Programming libraries     3 11 19 98 140 271 6.33 0.86  
NERSC SW: Fortran compilers 1 1 3 10 10 88 155 268 6.40 0.93  
NERSC SW: C/C++ compilers     1 18 10 77 130 236 6.34 0.91  
NERSC SW: General tools and utilities       21 22 96 97 236 6.14 0.92  
NETWORK: Remote network performance to/from NERSC (e.g. Franklin to your home institution)   2 6 11 21 90 104 234 6.15 1.04 0.09
WEB SERVICES: Searching 1 4 6 21 48 100 54 234 5.68 1.14 -0.03
Franklin: Ability to run interactively 2 1 10 32 31 72 83 231 5.76 1.29 0.18
NERSC SW: Applications software 2   3 19 20 87 100 231 6.10 1.08  
TRAINING: New User's Guide 1   5 11 23 92 98 230 6.14 1.00 -0.07
OVERALL: Data analysis and visualization facilities   2 2 61 20 71 68 224 5.61 1.25 0.13
CONSULT: Response to special requests (e.g. disk quota increases, etc.)     4 20 12 48 134 218 6.32 1.05 0.03
NERSC SW: Performance and debugging tools   1 2 27 34 79 70 213 5.87 1.07  
CONSULT: Software bug resolution 1 1 2 30 26 55 97 212 5.98 1.20 -0.05
TRAINING: Web tutorials 1   7 10 23 83 77 201 6.04 1.07 -0.10
NETWORK: Network performance within NERSC (e.g. Franklin to HPSS) 1   1 5 5 56 117 185 6.51 0.83 -0.08
CONSULT: On-line help desk     2 11 10 50 99 172 6.35 0.93 0.06
HPSS: Overall satisfaction     3 3 5 63 93 167 6.44 0.80 -0.02
HPSS: Data transfer rates   2 5 4 12 52 83 158 6.25 1.06 -0.14
HPSS: User interface (hsi, pftp, ftp) 3 1 6 8 17 50 73 158 6.02 1.30 0.06
HPSS: Reliability (data integrity)     1 3 1 36 116 157 6.68 0.65 0.01
HPSS: Uptime (Availability)       2 4 44 107 157 6.63 0.60 0.09
HPSS: Data access time     1 6 11 56 81 155 6.35 0.83 0.05
Bassi: Overall 1 3 6 4 14 60 47 135 5.93 1.23 -0.04
Bassi: Uptime (Availability)   1   3 8 42 76 130 6.45 0.82 -0.09
Bassi: Batch wait time 7 9 21 11 27 38 16 129 4.71 1.72 0.25
NERSC SW: Visualization software 1   3 20 8 44 51 127 5.91 1.23  
Bassi: Batch queue structure 2   5 12 25 48 28 120 5.62 1.22 0.05
NERSC SW: Data analysis software 2   1 19 9 39 42 112 5.84 1.28  
Bassi: Disk configuration and I/O performance   1 1 13 6 40 50 111 6.10 1.10 0.04
Jacquard: Overall 1 1 2 6 6 50 38 104 6.05 1.11 -0.21
Jacquard: Uptime (Availability) 1   1 2 5 40 53 102 6.35 0.93 -0.19
Bassi: Ability to run interactively 2 2 2 13 13 33 36 101 5.73 1.39 0.11
Jacquard: Batch wait time 2 4 6 5 15 42 26 100 5.57 1.46 0.10
Jacquard: Batch queue structure     1 3 11 44 36 95 6.17 0.83 0.25
SERVICES: Data analysis and visualization services 1   1 10 10 36 32 90 5.93 1.14  
Jacquard: Disk configuration and I/O performance 1 1 1 8 7 33 38 89 6.03 1.20 0.05
Jacquard: Ability to run interactively       9 9 23 40 81 6.16 1.02 0.23
NGF: Overall 1     4 3 28 39 75 6.31 1.01 -0.13
TRAINING: NERSC classes       20 11 22 21 74 5.59 1.17 0.20
NGF: Uptime       4   19 46 69 6.55 0.78 -0.12
NGF: Reliability     1 2 1 19 46 69 6.55 0.80 -0.13
NGF: File and Directory Operations     3 2 6 24 33 68 6.21 1.03 -0.03
NGF: I/O Bandwidth     1 2 8 26 31 68 6.24 0.88 0.17
GRID: Access and Authentication   1 1 3 2 16 44 67 6.43 1.03 0.10
PDSF SW: Software environment       2 3 26 35 66 6.42 0.72 0.18
NERSC SW: ACTS Collection 1     13 5 17 27 63 5.86 1.32  
GRID: File Transfer     4 1 2 22 34 63 6.29 1.07 0.17
GRID: Job Monitoring       2 2 17 41 62 6.56 0.72 0.48
GRID: Job Submission       3 2 18 38 61 6.49 0.79 0.19
SERVICES: Data analysis and visualization consulting       14 6 17 24 61 5.84 1.19  
PDSF SW: Programming libraries       4 2 23 30 59 6.34 0.84 0.13
PDSF SW: C/C++ compilers 1     1 5 17 34 58 6.38 1.02 0.11
PDSF SW: General tools and utilities     1 5 4 22 25 57 6.14 1.01 0.14
PDSF: Overall satisfaction     2   2 28 24 56 6.29 0.85 0.19
PDSF: Uptime (availability)     1   5 22 28 56 6.36 0.80 0.23
PDSF: Ability to run interactively 1   1 2 3 23 23 53 6.15 1.13 0.60
PDSF: Batch queue structure     1 1 7 20 23 52 6.21 0.89 0.33
PDSF: Disk configuration and I/O performance 1   2 5 3 20 21 52 5.94 1.30 0.41
PDSF SW: Applications software       5 3 18 25 51 6.24 0.95 0.29
PDSF SW: Performance and debugging tools 1     6 5 18 20 50 5.96 1.23 -0.04
PDSF SW: CHOS     1 4 7 14 24 50 6.12 1.06  
DaVinci: Overall 1   1 2   21 22 47 6.21 1.16  
DaVinci: Ability to run interactively 1     3   14 24 42 6.31 1.18  
DaVinci: Uptime (Availability)       1 1 18 21 41 6.44 0.67  
DaVinci: Disk configuration and I/O performance       3 2 14 22 41 6.34 0.88  
PDSF SW: STAR     1 5 3 5 23 37 6.19 1.22  

 

All Importance Topics

Importance Ratings: 3=Very important, 2=Somewhat important, 1=Not important
Satisfaction Ratings: 7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses for Importance Average Importance Score Std. Dev. Total Responses for Satisfaction Average Satisfaction Score Std. Dev. Change from 2007
1 2 3
OVERALL: Satisfaction with NERSC 1 53 339 393 2.86 0.35 413 6.21 0.92 -0.08
OVERALL: Available Computing Hardware 2 68 307 377 2.81 0.41 394 6.00 1.10 -0.12
OVERALL: Network connectivity 1 78 289 368 2.78 0.42 395 6.28 0.99 0.15
OVERALL: Consulting and Support Services 4 87 283 374 2.75 0.46 386 6.56 0.76 -0.07
SERVICES: Account support 5 75 239 319 2.73 0.48 347 6.66 0.64 -0.05
OVERALL: Hardware management and configuration 8 125 220 353 2.60 0.53 370 5.85 1.12 -0.13
SERVICES: Allocations process 12 82 171 265 2.60 0.58 283 6.03 1.16 -0.14
SERVICES: Computer and network operations support (24x7) 19 110 223 352 2.58 0.59 346 6.17 1.09 -0.17
OVERALL: Available Software 22 123 221 366 2.54 0.61 388 6.21 0.95 -0.01
OVERALL: Software management and configuration 20 135 183 338 2.48 0.61 358 6.16 0.97 -0.02
WEB SERVICES: NIM web interface 11 161 151 323 2.43 0.56 352 6.38 0.80 0.10
OVERALL: Security 34 144 180 358 2.41 0.66 385 6.39 1.07 0.04
OVERALL: Mass storage facilities 44 130 172 346 2.37 0.70 335 6.06 1.10 0.00
SERVICES: Data analysis and visualization services 42 43 51 136 2.07 0.83 90 5.93 1.14  
OVERALL: Data analysis and visualization facilities 114 80 91 285 1.92 0.85 224 5.61 1.25 0.13
SERVICES: Data analysis and visualization consulting 43 42 30 115 1.89 0.79 61 5.84 1.19  

Hardware Resources

  • Legend
  • Hardware Satisfaction - by Score
  • Hardware Satisfaction - by Platform

 

Legend:

Satisfaction Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied - High 6.00 - 6.49
Mostly Satisfied - Low 5.50 - 5.99
Somewhat Satisfied 4.50 - 5.49
Significance of Change
significant increase
significant decrease
not significant

 

Hardware Satisfaction - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
HPSS: Reliability (data integrity)     1 3 1 36 116 157 6.68 0.65 0.01
HPSS: Uptime (Availability)       2 4 44 107 157 6.63 0.60 0.09
GRID: Job Monitoring       2 2 17 41 62 6.56 0.72 0.48
NGF: Uptime       4   19 46 69 6.55 0.78 -0.12
NGF: Reliability     1 2 1 19 46 69 6.55 0.80 -0.13
NETWORK: Network performance within NERSC (e.g. Franklin to HPSS) 1   1 5 5 56 117 185 6.51 0.83 -0.08
GRID: Job Submission       3 2 18 38 61 6.49 0.79 0.19
Bassi: Uptime (Availability)   1   3 8 42 76 130 6.45 0.82 -0.09
DaVinci: Uptime (Availability)       1 1 18 21 41 6.44 0.67  
HPSS: Overall satisfaction     3 3 5 63 93 167 6.44 0.80 -0.02
GRID: Access and Authentication   1 1 3 2 16 44 67 6.43 1.03 0.10
PDSF: Uptime (availability)     1   5 22 28 56 6.36 0.80 0.23
HPSS: Data access time     1 6 11 56 81 155 6.35 0.83 0.05
Jacquard: Uptime (Availability) 1   1 2 5 40 53 102 6.35 0.93 -0.19
DaVinci: Disk configuration and I/O performance       3 2 14 22 41 6.34 0.88  
DaVinci: Ability to run interactively 1     3   14 24 42 6.31 1.18  
NGF: Overall 1     4 3 28 39 75 6.31 1.01 -0.13
GRID: File Transfer     4 1 2 22 34 63 6.29 1.07 0.17
PDSF: Overall satisfaction     2   2 28 24 56 6.29 0.85 0.19
HPSS: Data transfer rates   2 5 4 12 52 83 158 6.25 1.06 -0.14
NGF: I/O Bandwidth     1 2 8 26 31 68 6.24 0.88 0.17
DaVinci: Overall 1   1 2   21 22 47 6.21 1.16  
PDSF: Batch queue structure     1 1 7 20 23 52 6.21 0.89 0.33
NGF: File and Directory Operations     3 2 6 24 33 68 6.21 1.03 -0.03
Jacquard: Batch queue structure     1 3 11 44 36 95 6.17 0.83 0.25
Jacquard: Ability to run interactively       9 9 23 40 81 6.16 1.02 0.23
PDSF: Ability to run interactively 1   1 2 3 23 23 53 6.15 1.13 0.60
NETWORK: Remote network performance to/from NERSC (e.g. Franklin to your home institution)   2 6 11 21 90 104 234 6.15 1.04 0.09
Bassi: Disk configuration and I/O performance   1 1 13 6 40 50 111 6.10 1.10 0.04
Jacquard: Overall 1 1 2 6 6 50 38 104 6.05 1.11 -0.21
Jacquard: Disk configuration and I/O performance 1 1 1 8 7 33 38 89 6.03 1.20 0.05
HPSS: User interface (hsi, pftp, ftp) 3 1 6 8 17 50 73 158 6.02 1.30 0.06
PDSF: Disk configuration and I/O performance 1   2 5 3 20 21 52 5.94 1.30 0.41
Bassi: Overall 1 3 6 4 14 60 47 135 5.93 1.23 -0.04
Franklin: Batch queue structure 2 2 9 25 33 120 100 291 5.90 1.16 -0.12
Franklin: Ability to run interactively 2 1 10 32 31 72 83 231 5.76 1.29 0.18
Franklin: Overall 4 8 12 10 54 133 84 305 5.74 1.27 0.04
Bassi: Ability to run interactively 2 2 2 13 13 33 36 101 5.73 1.39 0.11
Bassi: Batch queue structure 2   5 12 25 48 28 120 5.62 1.22 0.05
Franklin: Disk configuration and I/O performance 7 5 13 35 29 112 81 282 5.60 1.43 0.46
Jacquard: Batch wait time 2 4 6 5 15 42 26 100 5.57 1.46 0.10
Franklin: Batch wait time 4 5 20 24 57 119 70 299 5.55 1.32 -0.30
Franklin: Uptime (Availability) 11 15 46 25 71 89 45 302 4.91 1.62 -0.13
Bassi: Batch wait time 7 9 21 11 27 38 16 129 4.71 1.72 0.25

 

Hardware Satisfaction - by Platform

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
Bassi - IBM POWER5 p575
Bassi: Uptime (Availability)   1   3 8 42 76 130 6.45 0.82 -0.09
Bassi: Disk configuration and I/O performance   1 1 13 6 40 50 111 6.10 1.10 0.04
Bassi: Overall 1 3 6 4 14 60 47 135 5.93 1.23 -0.04
Bassi: Ability to run interactively 2 2 2 13 13 33 36 101 5.73 1.39 0.11
Bassi: Batch queue structure 2   5 12 25 48 28 120 5.62 1.22 0.05
Bassi: Batch wait time 7 9 21 11 27 38 16 129 4.71 1.72 0.25
DaVinci - SGI Altix
DaVinci: Uptime (Availability)       1 1 18 21 41 6.44 0.67  
DaVinci: Disk configuration and I/O performance       3 2 14 22 41 6.34 0.88  
DaVinci: Ability to run interactively 1     3   14 24 42 6.31 1.18  
DaVinci: Overall 1   1 2   21 22 47 6.21 1.16  
Franklin - Cray XT4
Franklin: Batch queue structure 2 2 9 25 33 120 100 291 5.90 1.16 -0.12
Franklin: Ability to run interactively 2 1 10 32 31 72 83 231 5.76 1.29 0.18
Franklin: Overall 4 8 12 10 54 133 84 305 5.74 1.27 0.04
Franklin: Disk configuration and I/O performance 7 5 13 35 29 112 81 282 5.60 1.43 0.46
Franklin: Batch wait time 4 5 20 24 57 119 70 299 5.55 1.32 -0.30
Franklin: Uptime (Availability) 11 15 46 25 71 89 45 302 4.91 1.62 -0.13
Jacquard - Opteron/Infiniband Linux Cluster
Jacquard: Uptime (Availability) 1   1 2 5 40 53 102 6.35 0.93 -0.19
Jacquard: Batch queue structure     1 3 11 44 36 95 6.17 0.83 0.25
Jacquard: Ability to run interactively       9 9 23 40 81 6.16 1.02 0.23
Jacquard: Overall 1 1 2 6 6 50 38 104 6.05 1.11 -0.21
Jacquard: Disk configuration and I/O performance 1 1 1 8 7 33 38 89 6.03 1.20 0.05
Jacquard: Batch wait time 2 4 6 5 15 42 26 100 5.57 1.46 0.10
PDSF - Physics Linux Cluster
PDSF: Uptime (availability)     1   5 22 28 56 6.36 0.80 0.23
PDSF: Overall satisfaction     2   2 28 24 56 6.29 0.85 0.19
PDSF: Batch queue structure     1 1 7 20 23 52 6.21 0.89 0.33
PDSF: Ability to run interactively 1   1 2 3 23 23 53 6.15 1.13 0.60
PDSF: Disk configuration and I/O performance 1   2 5 3 20 21 52 5.94 1.30 0.41
Grid Services
GRID: Job Monitoring       2 2 17 41 62 6.56 0.72 0.48
GRID: Job Submission       3 2 18 38 61 6.49 0.79 0.19
GRID: Access and Authentication   1 1 3 2 16 44 67 6.43 1.03 0.10
GRID: File Transfer     4 1 2 22 34 63 6.29 1.07 0.17
HPSS - Mass Storage System
HPSS: Reliability (data integrity)     1 3 1 36 116 157 6.68 0.65 0.01
HPSS: Uptime (Availability)       2 4 44 107 157 6.63 0.60 0.09
HPSS: Overall satisfaction     3 3 5 63 93 167 6.44 0.80 -0.02
HPSS: Data access time     1 6 11 56 81 155 6.35 0.83 0.05
HPSS: Data transfer rates   2 5 4 12 52 83 158 6.25 1.06 -0.14
HPSS: User interface (hsi, pftp, ftp) 3 1 6 8 17 50 73 158 6.02 1.30 0.06
NERSC Network
NETWORK: Network performance within NERSC (e.g. Franklin to HPSS) 1   1 5 5 56 117 185 6.51 0.83 -0.08
NETWORK: Remote network performance to/from NERSC (e.g. Franklin to your home institution)   2 6 11 21 90 104 234 6.15 1.04 0.09
NGF - NERSC Global Filesystem
NGF: Uptime       4   19 46 69 6.55 0.78 -0.12
NGF: Reliability     1 2 1 19 46 69 6.55 0.80 -0.13
NGF: Overall 1     4 3 28 39 75 6.31 1.01 -0.13
NGF: I/O Bandwidth     1 2 8 26 31 68 6.24 0.88 0.17
NGF: File and Directory Operations     3 2 6 24 33 68 6.21 1.03 -0.03

 

Software

  • Legend
  • Software Satisfaction - by Score
  • Software Satisfaction - NERSC and PDSF

 

Legend:

Satisfaction Average Score
Mostly Satisfied - High 6.00 - 6.49
Mostly Satisfied - Low 5.50 - 5.99
Significance of Change
not significant

 

Software Satisfaction - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
PDSF SW: Software environment       2 3 26 35 66 6.42 0.72 0.18
NERSC SW: Fortran compilers 1 1 3 10 10 88 155 268 6.40 0.93  
PDSF SW: C/C++ compilers 1     1 5 17 34 58 6.38 1.02 0.11
NERSC SW: Software environment     2 8 14 145 150 319 6.36 0.74  
NERSC SW: C/C++ compilers     1 18 10 77 130 236 6.34 0.91  
PDSF SW: Programming libraries       4 2 23 30 59 6.34 0.84 0.13
NERSC SW: Programming libraries     3 11 19 98 140 271 6.33 0.86  
PDSF SW: Applications software       5 3 18 25 51 6.24 0.95 0.29
PDSF SW: STAR     1 5 3 5 23 37 6.19 1.22  
PDSF SW: General tools and utilities     1 5 4 22 25 57 6.14 1.01 0.14
NERSC SW: General tools and utilities       21 22 96 97 236 6.14 0.92  
PDSF SW: CHOS     1 4 7 14 24 50 6.12 1.06  
NERSC SW: Applications software 2   3 19 20 87 100 231 6.10 1.08  
PDSF SW: Performance and debugging tools 1     6 5 18 20 50 5.96 1.23 -0.04
NERSC SW: Visualization software 1   3 20 8 44 51 127 5.91 1.23  
NERSC SW: Performance and debugging tools   1 2 27 34 79 70 213 5.87 1.07  
NERSC SW: ACTS Collection 1     13 5 17 27 63 5.86 1.32  
NERSC SW: Data analysis software 2   1 19 9 39 42 112 5.84 1.28  

 

Software Satisfaction - NERSC and PDSF

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
NERSC Software
NERSC SW: Fortran compilers 1 1 3 10 10 88 155 268 6.40 0.93  
NERSC SW: Software environment     2 8 14 145 150 319 6.36 0.74  
NERSC SW: C/C++ compilers     1 18 10 77 130 236 6.34 0.91  
NERSC SW: Programming libraries     3 11 19 98 140 271 6.33 0.86  
NERSC SW: General tools and utilities       21 22 96 97 236 6.14 0.92  
NERSC SW: Applications software 2   3 19 20 87 100 231 6.10 1.08  
NERSC SW: Visualization software 1   3 20 8 44 51 127 5.91 1.23  
NERSC SW: Performance and debugging tools   1 2 27 34 79 70 213 5.87 1.07  
NERSC SW: ACTS Collection 1     13 5 17 27 63 5.86 1.32  
NERSC SW: Data analysis software 2   1 19 9 39 42 112 5.84 1.28  
PDSF Software
PDSF SW: Software environment       2 3 26 35 66 6.42 0.72 0.18
PDSF SW: C/C++ compilers 1     1 5 17 34 58 6.38 1.02 0.11
PDSF SW: Programming libraries       4 2 23 30 59 6.34 0.84 0.13
PDSF SW: Applications software       5 3 18 25 51 6.24 0.95 0.29
PDSF SW: STAR     1 5 3 5 23 37 6.19 1.22  
PDSF SW: General tools and utilities     1 5 4 22 25 57 6.14 1.01 0.14
PDSF SW: CHOS     1 4 7 14 24 50 6.12 1.06  
PDSF SW: Performance and debugging tools 1     6 5 18 20 50 5.96 1.23 -0.04

HPC Consulting

Legend:

Satisfaction Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied - High 6.00 - 6.49
Mostly Satisfied - Low 5.50 - 5.99
Significance of Change
not significant

Satisfaction with HPC Consulting

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item Num who rated this item as: Total Responses Average Score Std. Dev. Change from 2007
1 2 3 4 5 6 7
CONSULT: Timely initial response to consulting questions     2 4 10 89 221 326 6.60 0.67 0.05
CONSULT: Overall   1 2 7 13 98 212 333 6.53 0.77 -0.04
CONSULT: Quality of technical advice     2 9 14 104 195 324 6.48 0.76 -0.00
CONSULT: Followup to initial consulting questions   3 4 8 17 87 194 313 6.44 0.93 -0.04
CONSULT: Amount of time to resolve your issue   1 5 10 19 104 183 322 6.39 0.89 0.03
CONSULT: On-line help desk     2 11 10 50 99 172 6.35 0.93 0.06
CONSULT: Response to special requests (e.g. disk quota increases, etc.)     4 20 12 48 134 218 6.32 1.05 0.03
CONSULT: Software bug resolution 1 1 2 30 26 55 97 212 5.98 1.20 -0.05

 

Comments about Consulting:   32 respondents

  • Good service: 14 responses
  • Mixed evaluation / requests and suggestions: 14 responses
  • Unhappy: 6 responses

 

Good service:   14 responses

My experience with NERSC consulting has been absolutely fantastic--the best of any of the many supercomputing centers I have used.

NERSC consulting/support is the flag ship of DOE supercomputing. Excellent model.

The consulting support at NERSC is very good in comparison with other DOE supercomputer centers. I like this very much!

Excellent consulting service. The best among all HPC centers that i know. Thank you and thanks in particular to Zhengji, Katie and Woo-Sun!

I am extremely satisfied with the consulting support - namely Woo-Sun Yang and Zhengji Zhao were very helpful in resolving all my problems with my model runs.

Very satisfied. Keep up the good work.

keep up the longstanding excellent work

Thank you very much for your great job. ... Thanks again your great job to maintain supercomputers so that I can most concentrate on my research project.

keep up the good work!

Excellent job. Keep up the high-quality work.

A great organization devoted to provide SERVICE to the users. Just keep up the excellent work that you are doing.

No suggestions for improvement. I've always thought the NERSC consulting is great!

As the STAR/RNC liaison, I deal with the NERSC consulting services on almost a daily basis. I couldn't be happier with the professionalism, helpfulness, and dedication of the NERSC consulting staff. ...

No. The consulting support team does a great job!

 

Mixed evaluation / requests and suggestions:   14 responses

The consultants are always friendly and do their best, but it is a tough job.

Hire more consultants...

Rewrite the "New Users Guide." Make sure it is up to date and highly accessible. Having recently started using NERSC systems, I found that I was expected to know a lot of things that I didn't necessarily realize I needed to know. Include better sample programs and scripts.

Consultant support quality varies with the person involved. Some people are excellent and most try to respond quickly. In general response is good, but I have also had important questions sit for months until a general software change, maybe unrelated to the problem, finally solved the problem. Some of the MPP problems/software bugs are larger than just NERSC, so some of the dissatisfaction can't be helped. The lack of a usable interactive debugger on Franklin is a big problem, both for communicating with the consultants and for developing MPP code.

twice this past year i have been stymied, something not working. i explain it clearly. the consultant has been unable to figure it out, suggest things to try to isolate or identify the problem. "must be your system is preventing things," they say. these were both connection problems, from LLNL, behind a firewall. there was no further help the consultants could offer. then, both times, help arrived from someone here who had dealt with the same problem. in both cases there were simple solutions. it would have been a major negative if those two problems had not been solved. my opinion is you need a consultant who is much more savvy in this area. apart from these cases, i am very satisfied with the quality of technical advice.

I think it would be nice if someone could be on duty on holidays and weekends. Normally, I found the queues are not that busy. So I can run quite a few jobs. The problem is that if nobody is on duty, if problems occur, I can not solve it. The computer time is waste.

... The online help desk could be more useful if the answers to all (or most) questions - not just mine - were available to me.

Maybe this suggestion is not fair, since it might be a Cray problem, but I do need your help. I always have problems when I try to use OpenMP and PETSc together.

more computing time

Please reduce the amount of maintenance time and increase the time available to users.

... In general, I mostly use Franklin. Although I sometimes experienced unexpected problems, I really like it. I felt very satisfied. One thing that concerns me is the limited 15 GB space; it is too small for me to run a nanocluster calculation. Hopefully there will be a chance to enlarge this limited space in the future.

Please install GROMACS.

Make sure the hard disks in PDSF are safe.

Again, I would suggest using rsync instead of HSI for backup.

 

Unhappy:   6 responses

Sometimes tickets seem to get lost after the initial response. It would be good if issues were better followed to resolution.

My impression is that at least some of the consultants are not super-strong in Fortran issues. Also, it appears you no longer have any experts in HDF5 I/O library support (but I may be wrong there).

Consultants were not able to help with setting up AMBER jobs. Someone must have done it at NERSC. How did they do it?

I often get answers that put me off. For example, if I'm having trouble with X, they'll ask me if I've tried Y.
It often takes a week to answer my question. If you could have more consultants with climate expertise, that would help.
The Franklin error messages are misleading at best. Upgrading them would work wonders toward helping me fix my own problems. For example, I get "out of memory" errors for everything from a memory leak to missing restart files to an inability to write output data files.

Consulting support asked me to recompile my software. I asked for help, and my request was ignored. I later received an email asking if I had been able to get my software recompiled, as CS wanted to get some performance data. I replied yes, I recompiled my software, but only after spending much of my time trying to find where certain libraries were located, which CS would have known quite easily. This software (AMBER) was the most recent version (10 - available April 2009), yet NERSC still does not offer that version.

It has been several months since I used NERSC computers. The main reason I stopped was frequent crashes of my simulations, which to me seemed unrelated to my software -- i.e. hardware-related. In past experience with NERSC support, resolution of issues was so slow that I either abandoned the project temporarily, or found another computing center to run at.

Services and Communications

  • Legend
  • Satisfaction with NERSC Services
  • How Important are NERSC Services to You?
  • How useful are NERSC Services to You?
  • Where do you perform data analysis and visualization of data produced at NERSC?
  • Are you well informed of changes?
  • Comments about Services

 

Legend:

Satisfaction              Average Score
Very Satisfied            6.50 - 7.00
Mostly Satisfied - High   6.00 - 6.49
Mostly Satisfied - Low    5.50 - 5.99

Importance                Average Score
Very Important            2.50 - 3.00
Somewhat Important        1.50 - 2.49

Usefulness                Average Score
Very Useful               2.50 - 3.00
Somewhat Useful           1.50 - 2.49

Significance of Change
significant decrease (change from 2007)
not significant

 

Satisfaction with NERSC Services

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item    Num who rated this item as 1 / 2 / 3 / 4 / 5 / 6 / 7    Total Responses    Average Score    Std. Dev.    Change from 2007
SERVICES: Account support 1     3 8 87 248 347 6.66 0.64 -0.05
WEB SERVICES: NIM web interface     5 8 18 137 184 352 6.38 0.80 0.10
WEB SERVICES: Accuracy of information 1 2 1 5 26 132 157 324 6.32 0.85 -0.08
WEB SERVICES: www.nersc.gov overall   1 3 9 20 167 148 348 6.28 0.80 -0.10
WEB SERVICES: Timeliness of information 1   6 9 29 143 137 325 6.21 0.91 -0.07
TRAINING: New User's Guide 1   5 11 23 92 98 230 6.14 1.00 -0.07
TRAINING: Web tutorials 1   7 10 23 83 77 201 6.04 1.07 -0.10
SERVICES: Allocations process 2 1 10 20 24 109 117 283 6.03 1.16 -0.14
WEB SERVICES: Ease of finding information 1 4 12 10 38 177 98 340 5.95 1.05 -0.10
SERVICES: Data analysis and visualization services 1   1 10 10 36 32 90 5.93 1.14  
SERVICES: Data analysis and visualization consulting       14 6 17 24 61 5.84 1.19  
WEB SERVICES: Searching 1 4 6 21 48 100 54 234 5.68 1.14 -0.03
TRAINING: NERSC classes       20 11 22 21 74 5.59 1.17 0.20
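
For readers who want to check the summary columns: the Average Score in these tables is the count-weighted mean of the ratings, and the Std. Dev. is consistent with the sample standard deviation of the same counts. Below is a minimal Python sketch (the helper name is ours, purely illustrative, not a NERSC tool) that reproduces the Account support row:

    import math

    def summarize(counts):
        """Summarize a rating histogram; counts[i] respondents chose rating i+1."""
        total = sum(counts)
        mean = sum(n * (i + 1) for i, n in enumerate(counts)) / total
        # Sample (n-1) variance, which matches the Std. Dev. column
        var = sum(n * ((i + 1) - mean) ** 2 for i, n in enumerate(counts)) / (total - 1)
        return total, mean, math.sqrt(var)

    # "SERVICES: Account support" row above: ratings 1..7 chosen
    # 1, 0, 0, 3, 8, 87, and 248 times respectively
    total, mean, sd = summarize([1, 0, 0, 3, 8, 87, 248])
    print(total, round(mean, 2), round(sd, 2))  # -> 347 6.66 0.64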

 

How Important are NERSC Services to You?

3=Very important, 2=Somewhat important, 1=Not important

Item    Num who rated this item as 1 / 2 / 3    Total Responses    Average Score    Std. Dev.
SERVICES: Account support 5 75 239 319 2.73 0.48
SERVICES: Allocations process 12 82 171 265 2.60 0.58
WEB SERVICES: NIM web interface 11 161 151 323 2.43 0.56
SERVICES: Data analysis and visualization services 42 43 51 136 2.07 0.83
SERVICES: Data analysis and visualization consulting 43 42 30 115 1.89 0.79

 

How useful are NERSC Services to You?

3=Very useful, 2=Somewhat useful, 1=Not useful

Item    Num who rated this item as 1 / 2 / 3    Total Responses    Average Score    Std. Dev.
TRAINING: New User's Guide 13 46 169 228 2.68 0.58
SERVICES: E-mail lists 4 115 240 359 2.66 0.50
TRAINING: Web tutorials 19 49 143 211 2.59 0.65
MOTD (Message of the Day) 24 116 200 340 2.52 0.63
TRAINING: NERSC classes 35 53 38 126 2.02 0.76
Phone calls from NERSC 91 88 57 236 1.86 0.78

 

Where do you perform data analysis and visualization of data produced at NERSC?

Location    Responses    Percent
All at NERSC 23 6.4%
Most at NERSC 44 12.2%
Half at NERSC, half elsewhere 57 15.8%
Most elsewhere 107 29.6%
All elsewhere 106 29.4%
I don't need data analysis or visualization 24 6.6%

 

Are you well informed of changes?

Do you feel you are adequately informed about NERSC changes?

Answer    Responses    Percent
Yes 362 97.6%
No 9 2.4%

Are you aware of major changes at least one month in advance?

Answer    Responses    Percent
Yes 334 91.5%
No 31 8.5%

Are you aware of planned outages 24 hours in advance?

Answer    Responses    Percent
Yes 347 94.0%
No 22 6.0%

 

Comments about Services:   18 respondents

Analytics Comments and Suggestions:   4 responses

NERSC visualization tutorials would be nice.

Many years back I felt NERSC was somewhat weak in providing information concerning movie making. My impression is that things have improved, but I do not know the current situation. In the last couple of years I have used QuickTime Pro to create movies, but I wonder if tools at NERSC provide more or better capability. Again, I am ignorant of the current situation and whether there are good tutorials on the NERSC website.

Improved visualization and analysis for large-scale data. I would like to see parallel AVS support. I use the existing AVS Express on davinci extensively, but it is already becoming too small and too slow for the size of runs (M3D nonlinear plasma simulations) on Franklin, even for jobs that are small by Franklin standards. One reason I am not pushing too hard to increase the size of my Franklin jobs beyond a few hundred CPUs is that it will be very difficult to visualize the data at the higher resolution. I don't really know VisIt, but from what I have seen the experts do with it, it just does not compare in quality to a developed AVS Express interface.

I need to catch up by using more analysis at NERSC.

Software Comments and Suggestions:   4 responses

It would be great to have Python, NumPy and sm all linked on the NERSC computers.

Please install GROMACS.

The MATLAB installed by NERSC does not provide complete math support compared to the MATLAB installed on my own personal computer.

The latest version of NCL (5.1.1) *correctly installed* on davinci ...

Hardware Comments and Suggestions:   3 responses

Many core SMP machines are extremely useful for many of the theories we use, which are necessarily very communication-heavy.

... A new davinci with more standard hardware (not Itanium) and much more of it

clusters with lots of memory and fast turn-around for short jobs

No additional services needed:   3 responses

Again, NERSC is the flag ship of DOE supercomputing. A great and valuable resource.

I cannot think of anything more than what NERSC offers now.

None right now.

Data Storage Comments and Suggestions:   2 responses

When I analyze my data, I need a place to do it and computer time to do it with. In theory, davinci would be the place. However davinci has very small disk allocations, so I cannot store the data on davinci for long enough to finish a run. I've been using /scratch, but I keep getting messages about needing to reduce my holdings there.
Alternatively, I can work on franklin:/project. But that's not accessible from batch jobs, and it doesn't accomplish your goal of having me process my data on your data processing machine.

larger space

Other Suggestions:   3 responses

Lots more help porting code to new platforms. Codes like NIMROD have 5-10 different versions, so realize that 5-10 versions need to be ported to new platforms. I think lots of consulting time needs to be allocated to porting code when a new system is brought up.

It would be useful to me if the MOTD were available as a news feed (RSS/ATOM).
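The RSS/ATOM suggestion above would be simple to consume on the user's side. The sketch below uses only Python's standard library; the feed URL is hypothetical (no such feed is confirmed to exist), and it assumes a standard RSS 2.0 channel layout:

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://www.nersc.gov/motd.rss"  # hypothetical feed location

    with urllib.request.urlopen(FEED_URL) as resp:
        tree = ET.parse(resp)

    # Standard RSS 2.0 layout: <rss><channel><item><title>, <pubDate>, ...
    for item in tree.getroot().iterfind("./channel/item"):
        title = item.findtext("title", default="(no title)")
        date = item.findtext("pubDate", default="")
        print(f"{date}  {title}")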

I appreciate the generous allocation to our project, but I just found out that our allocation was reduced from 1,000,000 to 750,000. I guess this is due to the policy that if some percentage is not used, it will be returned, which I can understand. I also understand that NERSC may return some portion to me if I request it.
However, I would suggest that NERSC consider this carefully. For instance, I normally test and develop my codes during the first few months of the year, and I teach classes; but in the summer I have lots of time. Secondly, we are fortunate to have access to NERSC computers, so we use the computing time very carefully and save some time for later big runs. In some cases the referees will ask us to provide more data, but then we do not have time to run it. This is the caution that we have to build in when we use NERSC computers. I have more to say about this, but I suggest you consider it. One option would be to let users choose the time frame during the year when their peak usage period would be.

Comments


What does NERSC do best? How does NERSC distinguish itself from other computing centers you have used?

In their comments:

  • 65 users mentioned ease of use, good consulting, good staff support and communications;
  • 50 users mentioned computational resources or HPC resources for science;
  • 20 mentioned good software support;
  • 15 mentioned queue management or job turnaround;
  • 15 expressed overall satisfaction with NERSC;
  • 14 mentioned good documentation and web services;
  • 9 mentioned data services (HPSS, large disk space, data management);
  • 8 mentioned good networking, access and security.

Their responses have been grouped for display as follows:

  • NERSC's hardware and services are good / is overall a good center
  • Provides good machines and cycles
  • Good support services and staff
  • Good web documentation
  • Good software / easy to use environment
  • Good networking and security
  • Other comments

What can NERSC do to make you more productive?

 

40: Improve Franklin stability and performance / less down time
37: Provide faster turnaround / more computing resources / architecture suggestions
16: Data Storage suggestions
16: Job scheduling suggestions
13: Software suggestions
11: Allocations suggestions
10: More or Better Services
 9: PDSF suggestions
 4: Network suggestions

If there is anything important to you that is not covered in this survey, please tell us about it

 

6: Areas not covered by the survey
6: Additional feedback - Franklin
4: Additional feedback - allocations
7: Additional feedback - other

 


What does NERSC do best? How does NERSC distinguish itself from other computing centers you have used?   132 respondents

  NERSC's hardware and services are good / is overall a good center

The software and hardware are top notch

Very easy to use. The excellent website is very helpful as a new user. Ability to run different job sizes, not only 2048*x as on the BG/P. In an ideal world I'd only run at NERSC!

Excellent support, hardware and software geared toward scientific applications.

Organization is top notch. Queuing is excellent.

NERSC is good at communicating with its users, provides large amounts of resources, and is generally one of the most professional centers I've used.

EVERYTHING !!! From the computing centers that I have used NERSC is clearly a leader.

NERSC has very good machines and very good staff; both set it apart from other computing centers

Overall the service is very good.

Keep doing what you are doing!

Availability of HPC resources, and application software management and performance optimization.

pdsf interactive with supported software

NERSC tends to be more attuned to the scientific community than other computer centers. Although it has taken years of complaining to achieve, NERSC is better at providing 'permanent' disk storage on its systems than other places.

NERSC researches & supports high-performance networking & data storage in addition to pure number-crunching.

NERSC's strengths are quick responses from consulting, quick network connection (to Berkeley campus), and no onerous security procedures like SecureID tokens.

The machines are very well-run and well documented. There is a wealth of chemistry software available and compiling our own is easy; the support is great. Allocations are both fair and simple, and we are given plenty of hours to support our projects. The large pool of memory and CPUs per node on Bassi makes it a great machine for the software we use.

Franklin is a superior machine, with lots of cycles for its users. That is, given you have time on the machine, the wait queue is reasonable.
The consultant staff is almost always available during their stated time frame, is courteous and evidently aims to please. In my opinion, this is very important for the success of the institution.

Provides a stable long-term environment with hassle-free continuation of the allocation from year to year.

Writing as the PI of a moderate sized repo, NERSC provides a vital computational resource with lightweight admin/management overhead: we are able to get on with our science. User support is very good compared to other centers.

Enable massively-parallel computing with easy-to-learn, transparent procedures.

NERSC's documentation is very good and the consultants are very helpful. A nice thing about NERSC is that they provide a number of machines of different scale with a relatively uniform environment which can be accessed from a global allocation. This gives NERSC a large degree of flexibility compared to other computational facilities.

As a user of PDSF, I have at NERSC all the resources to analyze the STAR data in a speedy and reliable way, knowing that NERSC keeps the latest versions of data analysis software like ROOT. Thank you for the support.

Speed, both in terms of computing performance and in terms of technical support

Fair and balanced queuing on a robust platform (Bassi), and the support for technical questions is good.

Customer support is the best. And NERSC has much more resources for access than other computing centers.

NERSC has very reliable hardware, excellent administration, and a high throughput. Consultants there have helped me very much with projects and problems and responded with thoughtful messages for me and my problem, as opposed to terse or cryptic pointers to information elsewhere. The HPSS staff helped me set up one of the earliest data sharing archives in 1998, now part of a larger national effort toward Science Gateways. (see: http://www.lbl.gov/cs/Archive/news052609b.html) This archive has a venerable place in the lattice community and is known throughout the community as "The NERSC Archive". In fact until recently, the lingua franca for exchanging lattice QCD data was "NERSC format", a protocol developed for the archive at NERSC.

Resources and software are superior.

I mostly used Franklin for my computing. Franklin was stable most of the time except for the period when it changed from dual core to quad core. I think NERSC has done a great job keeping the supercomputers stable 24x7, which is very important to increase our production. Also, NERSC consulting support is great in comparison with other computing centers.

I have been using NERSC facilities for over a decade and I acknowledge gratefully that the NERSC facility is sine qua non for my research in the investigation of the physics and chemistry of superheavy elements. The relativistic coupled-cluster calculations carried out by us at NERSC for the atomic and molecular systems of the superheavy elements (SHE) Rutherfordium (Z=104) through the eka-plutonium element 126 are well nigh impossible to perform at any other computing facility. This is due to extraordinary demands not only on CPU but also on disk storage and memory.
We have published some of our recent results on the various SHE, and this has been possible only due to the untiring efforts, help and advice of David Turner and the most generous grants of additional CPU time by Dr. Sid Coon and currently by Dr. Ted Barnes. Ms. Francesca Verdier has been a tower of strength, always willing and ready to iron things out when we ran into problems. Last but not least, I am most grateful to my Principal Investigator and distinguished colleague Prof. Walter Loveland, who has most generously supported my theoretical research in the SHE. It is impossible for me to repay my debt to Prof. Loveland except by expressing once again my sincerest thanks to him for his guidance, advice and encouragement throughout our research supported by the US DOE Division of Theory of Nuclear Physics.
In conclusion, I express my sincerest thanks especially to all those mentioned above and the other very kind and helpful men and women who have made NERSC a most user-friendly place to work in. I look forward to continuing to use the state-of-the-art, second-to-none NERSC supercomputing facility for my research for many years.

NERSC generally provides a reliable computing environment with expert consultants. The hardware is more reliable than NCCS and the consultants are more informed.

Provides state-of-the-art computing facilities and the necessary scientific software for conducting frontier research.

  Provides good machines and cycles

Top of the line production cycle provider in a high performance supercomputing environment

Unbelievably fast and sincere maintenance of systems dedicated to scientific users.

access to a range of systems (Bassi, Jacquard, Franklin) suitable for relatively small jobs (a handful of cores) to large jobs that need (tens of) thousands of cores.

I had a very pleasant computing experience at NERSC, especially on Franklin. I admire how well and reliably I can run both small and large (several thousand procs) jobs on Franklin. A good thing is that it is also convenient to run smaller jobs (8-128 procs), which is advantageous for development and testing or for running lots of small jobs, each with very good parallel performance. Also, the time a job can spend in the queue varies on a reasonably large scale. There is practically no limit on the number of jobs I can submit for consecutive execution, each taking a relatively short time, utilizing temporarily available processors.

NERSC provides resources that would not otherwise be available.

The NERSC machines are more reliable in terms of uptime.

Size of the clusters.

I like HPSS.

Providing me with the computational resources I need. NERSC is the best managed supercomputing center I know.

I am mostly satisfied with NERSC. Please keep on running the servers well.

NERSC has the most powerful computers I have access to; therefore my research works can't be done without NERSC.

I mostly use PDSF. There, the focus is on data analysis/production, and the computing emphasis reflects this: availability and uptime, which (in my opinion) are excellent.

NERSC provides exceptional computing power and remote data storage. However, these resources still (over the last year) have not reached an acceptable level of reliability. I have not used other computing centers.

Providing large amount of computer power difficult to find elsewhere in a relatively stable fashion. I think the queues work very efficiently, at least compared to other systems I've used.

It's quicker than other computers I have used.

With NERSC I have access to larger machines (franklin) than anywhere else.

This is the only computing center I use. I am pleased with the resources I can use, although uptime on Franklin can be an issue.

Excellent management of the Franklin computing system along with rapid turnaround on medium to large jobs. Scratch files are saved longer than on comparable computers elsewhere.

The connection with PDSF and the disk space seem best.

I like Bassi most; it is very good for my shared-memory parallel jobs with some MPI.

The best thing is the power of clustering in terms of numbers of processors, resources, ...

allow me to run jobs that would be impossible to run on a local machine

convenience of getting an allocation if one works for DOE

NERSC provides accessible large-scale (>2000 core) machines.

The software I am using is well optimized to help fast calculations for my project. It is also much faster than local resource available so that I can get results soon.

Short queue times! Teragrid queues are at least 3+ days. I've never waited more than 12 hours for a job to start on Jacquard.

NERSC has high quality machines and plenty of option for interactive debugging and development.

a fantastic system!!!

I have been very pleased with the queue times on Franklin.

For me, the main distinguishing feature is that big jobs (thousands of processors) go through the queue much faster at NERSC than at other centers.

Support and turnaround times for 'medium size' MPP jobs of a few hundred to a few thousand CPUs. (So far, I have just used a few hundred.) Since understanding the physics requires parameter scans, this is much more useful than one very large job. Also, since I run highly nonlinear fluid-based simulations, the time step is closely related to the spatial resolution, and a medium-resolution job of this size runs in a reasonable wall clock time. A large job would require proportionately more time to cover the same simulated interval (changing from a few weeks for a fairly complete run to several months). This is not really affordable. Running several smaller, faster jobs that are designed to be compared against each other also means that software bugs and other problems are more quickly recognized and solved. This is important for the continuing improvement of MPP computer systems. This is a very important computational service that NERSC should continue to support.

  Good support services and staff

Very competent and timely user support.

Very helpful, knowledgeable support staff and consultants.

NERSC consulting is the most responsive of any computing center I have used.

NERSC is very responsive to both individual questions and problems and system issues. I get the feeling that there is a team of people trying very hard to keep the computers up and running and the users able to use them.

The best technical and consulting support !

Consulting. Advice. And software updates.

user support

The quality of the technical staff is outstanding. They are competent, professional, and they can answer questions ranging from the trivial to the complex.

Local resource. In general very adaptive to specific needs.

NERSC is user-friendly, its web-site is good (though not great), its staff is very knowledgeable.

NERSC is a great example for user support and outreach.

Getting users started! It can take months on other systems.

Very good at providing access to HPC. Very helpful staff.

Far and away it is the people that work for NERSC and the service they provide, from data analytics to the help desk and everything in between.

It seems like the account support is very helpful and quick to respond.

NERSC is doing a super job on supporting the users. It is this user-friendly environment that keeps me with them all the time. I should add that their action is all for the users, even if that means more complications for them. I appreciate what they are doing.

The support team at NERSC is great, far better than other computing centers I have used!

Mostly satisfied. By its quality of service.

The consultants are very friendly and very helpful.

I feel like the response time is very quick and professional. The fact that it's in the same timezone probably helps on the quick response.

The support is the best that I've experienced, absolutely fantastic. Keep up the good work.

Consulting support! The best among all HPC centers that I know.

better consulting services than others. Generally easier to use.

Resource management. Advance messages about any updates or downtime of the systems. The MOTD is important and useful. One can easily find the status of all systems in one click.

I am grateful that it is easy to get accounts for new users quickly, even if they are not U.S. citizens.
I have always found that NERSC responds very quickly to all requests for assistance, including help desk requests and also requests to Francesca Verdier for information on how to get additional allocations.

Serves a range of users.

NERSC is doing an excellent job on account support!

NERSC has been very responsive to comments.

NERSC is very user friendly and the staff is excellent, in striking contrast to most other computing centers.

NERSC excels in support, and in active engagement with users. I have not only received responses to my questions but have been called by technical staff actively looking for ways to streamline our computing process which has been very helpful. We have a very productive collaboration with the visualization staff.

MOTD
Consulting
Keeping users informed
More than one batch queue

NERSC is excellent at responding to service requests and being flexible about dealing with problems. They are better at communicating with users than other centers.

NERSC provides quick feedback on issues, drawing on information from a team of experts.

NERSC is very good at responding when I email them with concerns or questions.

good user support, good on-line documents

Your technical support staff is really on the ball!

good technical support
good user support

service/consulting is helpful and prompt;
appears to be efficient at solving and dealing with problems (e.g., system failures)

  Good web documentation

The information on the website and the reliability of that information.

Great tutorials and user guides on the web pages

A nice and clear website.
A strong and quick response team of support.

Very good documentation of systems and available software. Important information is readily available on a single web page that also contains links to the original documentation.

Things seem to be well organized in NERSC. The web interface is very user-friendly and well maintained.

Web page is nice compared to other computer centers I have used.

user-friendly website service and comprehensive information;
efficient use and allocation of computer resources

NERSC provides excellent information on its website on how to use its resources. Further, whenever I've called for help, the staff have been fantastic at helping me track down problems. Both of these features help NERSC to distinguish itself from other computing facilities I've used.

  Good software / easy to use environment

NERSC has great tech support and supports their software well. Other computing centers don't install anything and leave you to suffer through building MPI libraries and ScaLAPACK and all kinds of painful things like that. NERSC always has that nasty-to-build, nasty-to-install stuff already built and installed for you, which is very helpful. They are also impressively available at all hours of the day and night for account issues.

NERSC does an excellent job providing state-of-the-art popular application software for the most common uses, such as quantum chemistry and materials simulations.
NERSC also does a good job communicating upgrades/problems with users.
Account support is very customer-oriented.

Software support
Disk space management
Consultant support

Nearby. Good for code development.

NERSC has consistently been a more stable place for development than other computing centers I use. Unfortunately, NERSC is a victim of their own success because then a lot of people try to use the resources, which results in slow turn-around.

Good maintenance of software.

more useful software and STAR environment.

Well-compiled quantum mechanics codes, stable math libraries.

Programs can be easily compiled.

NERSC is a mature system and is relatively easy to use.

 

  Good networking and security

It is the convenience of access that distinguishes NERSC from other centers. While still maintaining overall security, NERSC provides excellent points of access that make users more comfortable with their use of computing facilities. An excellent institute indeed is NERSC.

The non-firewalled network configuration at NERSC is extremely valuable. I can always use scp on my laptop to get results from PDSF disks. Compare this to, e.g., the BNL cluster: user home or data disks are not visible from the "gateway" nodes that are the only externally accessible ones. If a laptop is also behind a firewall, there is no easy way to get data from BNL to the laptop.

Ease of login without a SecureID or equivalent makes using NERSC machines much more enjoyable. It also greatly simplifies data transfers when the home institution (PNNL) has very tight security that can get in the way if both sides have very tight security (such as when doing transfers to/from NCAR).

Network.

NERSC is much easier to use than other centers, in particular because of the absence of key fobs.

Relatively open and easy to use. No crazy security hoops to jump through, which is nice.

 

  Other comments

Not spelling. Its spelling is undistinguished.
I use only one other large computing center, at LLNL, and that is not enough to draw a meaningful comparison.

I am ok with current status.

One of the things that NERSC has been doing extremely well is the emphasis on the scientific aspects of the research projects the center supports.

NERSC has been the best computing center I have used. However, the I/O issues on Franklin and the fact that the Fortran compilers on Franklin are not fully F95 compliant make life difficult.

It's the only one I have!

In the past I have found franklin to be unreliable (crashes). In addition, before I stopped using Franklin, my jobs would sit in queues for days. I think that queues/allocation should be such that most jobs begin within 1 day.

The pending time is a little bit longer than other computing centers.

 


What can NERSC do to make you more productive?   113 respondents

  Improve Franklin stability and performance / less down time:   40 comments

improve stability / up time:

Fewer problems with Franklin. Speedier resolution when it goes offline.

Improve stability in Franklin.

If Franklin could be more stable and require much less frequent hardware maintenance, my efficiency would be much improved.

more stable system and ...

[A batch queue with less QC (< 64 nodes, < 256 processors) and a larger max wallclock (3~7 days).] Of course, firstly, the system should be stable enough.

Improve Franklin's uptime! It's incredibly unreliable; there is at least one shutdown per week... It might be a very fast machine, but you can't trust it, because it goes down unexpectedly so often... The bad functioning of Franklin has seriously affected the performance of my work, and of many other users of Franklin that I've talked to.

Make the system more stable.

Less downtime.

More stability of the Franklin system

Franklin uptime may be improved.

Improve Franklin uptime, ...

keep improving machine stability and decreasing down time.

Less downtime and ...

The beginning of the year had a lot of down time that got in the way of productivity.

Job failure rates on franklin have been crippling. I know you're doing what you can to mitigate this, but I'm still seeing very high failure rates. ...

Improved reliability of leadership machines (this seems to have improved lately). ...

It would be nice if there were less downtime on the computers.

Make Franklin more stable, [increase memory per processor.]

A few months ago I would have said "Fix Franklin please!!" but this has been done since then and Franklin is a LOT more stable. Thanks...

Franklin up-time has been a bit of a stumbling block, but that's obviously not a NERSC-only problem.

Stable machine up time

Continue `hardening' Franklin (I probably did not have to write this.)

The only problem I have is that Franklin was often down when I needed to use it, but that has gotten better.

... better uptimes ...

[Increase the memory size on franklin] and improve her stability

Franklin stability has been largely improved, which is most critical to the productivity.

For any users needing multiple processors, Franklin is the only system. The instability, both planned and unplanned downtimes, of Franklin is *incredibly* frustrating. Add in the 24 hour run time limit, it is amazing that anyone can get any work done.

1) Stop jobs from crashing (maybe you already did this) ...

avoid system and hardware crashes

Avoid node failures!

Franklin is a terrible computer. I often have jobs die and the solution is to resubmit them with no changes. ...

... Better MPP computational reliability. Although it has improved since the worst levels earlier this year, I still regularly have jobs fail for unknown and irreproducible reasons. I write restart files very frequently, particularly for large jobs, which is probably not very efficient even with parallel I/O. This is roughly 4x more often than in Seaborg/Bassi days (every 500 time steps versus every 2000), while 6-8 hr wallclock jobs now run 4000-6000 time steps at higher resolution, instead of 2000-4000.

improve stability, I/O, and performance:

Improve uptime and file I/O performance of Franklin, [and make these top priorities for the next supercomputer procurement.]

It would be nice to see higher stability and scalability

Fix the I/O issues on Franklin ...

... The login nodes are very underpowered, I had issues in April with two htars overloading the node. I have often found myself waiting for an 'ls' to complete. I put htars into batch scripts because they will exceed the interactive time limit.

scheduled downtime issues:

Keep scheduled maintenance to a minimum. It's nice that Franklin is getting more stable finally.

... much less frequent hardware maintenance

... and having maintenance on Monday instead. Thank you

NERSC is about to retire Bassi and Jacquard, but Franklin is under maintenance much of the time, so the only reliable computer left will be Davinci. Can you do something to fix the Franklin maintenance schedule? The maintenance frequency is too high... it happens too often, and this is not good in the long run.

  Provide faster turnaround / more computing resources / architecture suggestions:   37 comments

Improve queue turnaround times:

Shorten the queue time on Bassi

... Larger number of jobs in queue [Bassi user]

The main problem was the long queue wait time, especially on Bassi; faster turnaround in the queue would increase our productivity.

Reduce the waiting time of the scheduled jobs. [Bassi user]

The wait time in the queue is too long. ...[Bassi user]

The chief limit for me is allocation and batch wait time. I do not see how you can make improvements here. [Bassi user]

PLEASE!!!! Change the Queue system on Bassi. It is not only slow, but I can't put enough jobs into the queue to make working there at all useful. I much prefer the system on Franklin, which allows me to run more jobs more quickly.

... Faster queue throughput is always appreciated! [Franklin / Bassi user]

Shorter queues [Jacquard / Franklin user]

have more machines of different specialties to reduce the queue (waiting) time [Franklin / Jacquard / DaVinci user]

As the number of users inevitably increases, I hope that the queuing time goes inversely proportional with the increasing user number counterintuitively. [Franklin / Jacquard / Bassi user]

decrease the queue time per job. [Franklin user]

... and somehow reduce queue wait time for the average user on Franklin.

Fix the batch and queue system. The queues in the past have been absurdly long, forcing me to use the debug queue over and over and limiting what I can run at NERSC. [Franklin user]

... faster turnaround ... [Franklin user]

... 2) Decrease pending job time [Franklin user]

Architecture suggestions:

Highly reliable, very stable, high performance architectures like Bassi and Jacquard.

Provide more resources that have 95+% uptime.

[Improve uptime and file I/O performance of Franklin,] and make these top priorities for the next supercomputer procurement.

Keep BASSI.

Most of our codes will port seamlessly to Franklin, but decommissioning Bassi will inevitably hit our projects hard.

have more machines of different specialties to reduce the queue (waiting) time

The majority of our cpu cycles are spent on ab initio electronic structure calculations. In principle Jacquard and Franklin would be very attractive systems for us to run on. Unfortunately, these applications are very I/O intensive. The global scratch space on these clusters makes running these electronic structure codes on them very inefficient. We have attempted to run these codes (primarily Molpro) in parallel across more than one node on Franklin and Jacquard, and this has proved to be extremely inefficient on Franklin. Trying to do this on Jacquard crashed the compute nodes. This makes running big jobs at NERSC largely counterproductive.

When purchasing new systems, there are obviously many factors to consider. I believe that more weight should be given to continuity of architecture and OS. For example, the transition from Seaborg to Bassi was almost seamless for me, whereas the transition from Bassi to Franklin is causing a large drop in productivity, i.e., porting codes and learning how to work with the new system. I estimate my productivity has dropped by 50% for 6 months. To be clear, this is NOT a problem with Franklin, but rather the cost of porting and learning how to work on a different architecture.

At the moment, my group has shifted most of our supercomputing to NASA Ames, where the available systems (Columbia, Pleiades, and Schirra) and the visualization hardware and staff are better suited to our needs. I hope that NERSC will upgrade to more powerful systems like these soon.

Get some data processing machines [and tools] that actually work

NERSC needs a large vector processor machine to go with the Cray XT multi-core machine

provide more memory:

Larger memory quota for HOME directory. ... [Jacquard user]

Increase the per-core memory of the machines.

Put more memory per core on large-scale machines (>8 GB/core). ...

... , increase memory per processor. [Franklin / Bassi user]

Increase the memory size on franklin ...

provide more cycles:

Enhance the computing power to meet the needs of high-performance computation.

... Have a bigger computer!!! [Franklin user]

Build more Franklin type machines. ...

Get more computers.

keep Franklin running, more computer hardware

  Data Storage suggestions:   16 comments

more quota / more disk capacity / better stability:

Larger disk quota on scratch directory. ... [Bassi user]

More quota. [Bassi / Franklin user]

... Sometimes I need TBs of disk space; quick availability of extended disk space for a limited time would be good (when needed). ... [Franklin user]

More Scratch Space. ... [Franklin user]

Increased disk capacity.

[Larger memory quota for HOME directory.] Stable SCRATCH system. [Jacquard user]

more stable scratch file systems [Franklin / Jacquard user]

Save scratch files still longer [Franklin user]

Franklin access to NGF:

Make more permanent disk space available on Franklin. It needs something like the project disk space to be visible to the compute nodes. ...

Get the compute nodes on Franklin to see NGF or get a new box.

Improved Franklin/NGF integration. [Better remote download services]

... I am very much looking forward to universal home directories and to having franklin:/project accessible during batch job runs.

Improve HPSS interfaces:

... better interface to the mass storage system [Franklin user]

The slowing of network access to NERSC may be understandable as the result of increased usage, but the denial of service attack by archival storage that has recently interfered with my work is not so readily explained.
Apparently, archive now refuses service for any more than one hsi session (to my UID). This eliminates (as though by design) the option of uploads to archive from multiple UIUC machines. This also potentially eliminates NERSC archival usability for access to outputs from our projects, whether or not generated on NERSC.
The failure of hsi on data transfer nodes dtn0[1,2].nersc.gov is an additional unpleasant surprise from NERSC. The Web pages indicate that this should work, but it does not.

... Data storage tools: htar does not work on longer file names, which are the easiest way to transparently index different simulations. The old pipeline commands from hsi to tar no longer seem to work for reading files out of HPSS; the entire tar file is read out to disk, so I have had to cut down the size of tar files and try to avoid accessing some of the larger old files. Different computers (e.g., Franklin and davinci) have problems with each other's hsi/tar utilities, so a file has to be read out on the computer it was stored on and then transferred. This requires both computers to be up simultaneously. ...

  Job scheduling suggestions:   16 comments

more support for mid range jobs / longer wall times:

... The policies need to be changed to be more friendly to users whose jobs use 10s or 100s of processors, and stop making those of us who can't allocate 1000s of processors to a single job feel like second-class users. It should be at least as easy to run 100 50-CPU jobs as one 5000-CPU job. The current queue structure makes it difficult if not impossible for some of us to use our allocations. [Franklin / Bassi user]

A batch queue with less QC (< 64 nodes, < 256 processors) and a larger max wallclock (3~7 days). .... [Franklin user]

It would be nice if a subset of nodes allowed wallclock times up to 3 or 4 days. [Franklin / Jacquard user]

Since Franklin is now stabilized, longer wall-time limits for queues will attract more jobs.

[Put more memory per core on large-scale machines (>8 GB/core).] Increase allowed wall clock times to 48 or 96 hours.

... Add in the 24 hour run time limit, it is amazing that anyone can get any work done. [Franklin user]

Many of the cases I simulate have to run for a longer time (several days) but do not use a tremendous amount of nodes (say 500). I wonder whether it is feasible to have a queue for such long-time runs. [Franklin user]

More simultaneous aprun commands. [Franklin user]

more interactive / debug support:

It can be difficult to get interactive time for tests & debugging, particularly if more than a few nodes are needed. The 30 minute limit is fine, but more nodes should be available. ... [Franklin / Bassi user]

... Also, it would be useful to be able to run longer visualization jobs without copying large data sets from one system's /scratch to another. This would be for running visualization code that can't be run on compute nodes; for instance, some Python packages require shared libraries. [Franklin user]

... Continue efforts to allow large - memory and long-running serial analysis tasks without undue load on launch nodes (e.g. IDL). [Franklin user]

Enable longer interactive jobs on Franklin login nodes. Some compile jobs require more than 60 minutes, making building a large code base -- or diagnosing problems with the build process -- difficult. ...

The login nodes are very underpowered, ... I put htars into batch scripts because they will exceed the interactive time limit. [Franklin user]

better job information:

it would be useful if it was easier to see why a job crashed. I find the output tends to be a little terse. [Franklin user]

Queue wait times are not always consistent. I suggest that an estimated wait time be given after a job is queued, either on the website queue list or with the qstat command. Also, it would make my life easier if the job list invoked by the qstat command on franklin showed the number of cores for each job. Right now that column is blank. [Franklin / Jacquard user]

some more featured job/queue monitors [Franklin user]

  Software suggestions:   13 comments

... and more support for various computational chemistry codes. [Franklin user]

NERSC does an excellent job in adding new software as it becomes available. It is important to continue doing so.

I would like for NERSC to add Gaussview 4. [Bassi / Franklin / Jacquard user]

Install more development tools, like Git and Valgrind.

I would like to be able to use NumPy, Python and sm all together at NERSC. [DaVinci user]

... Better remote download services

I'd like to see a fully developed gfortran environment. This would be compatible with the Linux, open-software systems many of us use, and I think it could be more stable and responsive than what is available from some of the for-profits. gcc is at the heart of a great deal of our OSs; it seems like gfortran might satisfy our scientific computing needs. [Franklin user]

The whole compiling paradigm is not as productive as it could be (as it is in other computing centers I have used). The compilers themselves are great, but module loading should be easier and more effective. [Bassi / Jacquard user]

Continue and resolve work with HDF5 developers on parallel I/O issues with flexible domains. ...

My only complaint so far is the lack of distributed version control software, such as Mercurial: http://www.selenic.com/mercurial/wiki/ It's pretty ridiculous that the best repository option you have is subversion. [Jacquard user]

Get some data processing [machines and] tools that actually work

... and install the Intel Compilers on Franklin

(All of these are also more general limitations of MPP computation.)
Better interactive debugging tools for MPP codes!!!! The only really usable method on Franklin is print statements, since much of the code development requires testing on multiple processors. DDT, especially recent versions (the last year or so), is not very informative. Totalview was impossible over the web. (NX tunneling works very well on davinci to speed up interactive GUIs; consider installing it on the MPP computers.)
Better visualization and data analysis for larger MPP jobs. The tools I use now are at their limits. I have a lot more data than I am able to digest for presentation. For example, I would like to make movies of a number of quantities from my simulations, but I would have to extract and match the frames by hand from many different files, one for each time slice. ...

  Allocations suggestions:   11 comments

need a larger allocation:

the chief limit for me is allocation and ...

More allocation and more effective use of the allocation.

Increase my allocation... Seriously, you're very good.

Larger allocation?

Please allocate more CPU time to me.

provide larger allocations

The only thing NERSC could do is give me unlimited time, but I know that is impossible, and I'm very satisfied with NERSC

Allocate more time!

Faster allocation: Currently, we do not have enough available computational time on NERSC, and we need a new allocation. NERSC provides excellent computational resources, and we will be happy to use them as soon as the allocation process is successfully finished.

improve allocation management:

Get me a grant :^). More flexibility in the allocation use would be nice. Sometimes we front-load our research and other times it's towards the end. We are punished for not using the resource at a constant rate and sometimes research just doesn't work that way.

For long-term users with proven productivity, make the allocation process easier. If people are producing peer-reviewed papers using NERSC resources, make it QUICKER for them to get allocations when possible.

  More or Better Services:   10 comments

improved web and communications services:

Up to date help pages!

It is becoming significantly less important now, but NERSC could have done much better at easing the learning curve of using the systems. I could have accomplished a great deal more already if I had known exactly how each system worked. Make sure all the information pages are up to date, and include comprehensive information, not just random tidbits.

NERSC could improve user's manuals.

Keep doing what you are doing. I'm particularly interested in the development of the Science Gateways.

It would be good if the NIM website and the www.nersc.gov website did not require separate login to go between them. ...

... The search tool for the web site does not work very well.

They should allow the users (1) to upload their published papers on-line and (2) to have an annual user conference to communicate with each other and explain their work to the general public.

1. Make this survey shorter!
2. Send announcements (e.g. for maintenance) by email with an attached iCal or ics file so that they can easily be imported into a calendar program. Can you create an online calendar to which one can subscribe with common calendar programs?
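The attached-ics suggestion in item 2 maps directly onto the plain-text iCalendar format (RFC 5545). The Python sketch below generates a minimal single-event calendar; the event details and the organizer domain are made up for illustration:

    from datetime import datetime, timezone

    def maintenance_ics(summary, start, end):
        """Return a minimal iCalendar document announcing one outage window."""
        fmt = "%Y%m%dT%H%M%SZ"  # UTC timestamps in iCalendar 'Z' form
        lines = [
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "PRODID:-//example//maintenance-announcements//EN",
            "BEGIN:VEVENT",
            f"UID:{start.strftime(fmt)}@example.org",  # made-up UID scheme
            f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}",
            f"DTSTART:{start.strftime(fmt)}",
            f"DTEND:{end.strftime(fmt)}",
            f"SUMMARY:{summary}",
            "END:VEVENT",
            "END:VCALENDAR",
        ]
        return "\r\n".join(lines) + "\r\n"  # RFC 5545 requires CRLF line endings

    print(maintenance_ics(
        "Franklin scheduled maintenance",  # illustrative event details
        datetime(2009, 7, 14, 15, 0, tzinfo=timezone.utc),
        datetime(2009, 7, 14, 23, 0, tzinfo=timezone.utc),
    ))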

more consulting help:

More willing to provide consulting help that requires more than 5 minutes of a consultant's attention. We are all pretty good with computers, so the problems that plague us may take several or many hours to resolve and your help is much appreciated. [Bassi / Franklin user]

We can always use manpower to improve the performance and scaling of our codes.

  PDSF suggestions:   9 comments

Better support for ATLAS jobs.
Increased I/O performance to central disks.
Increased network I/O performance.
Proper integration in the OSG/LCG Grid.

Standardize the PDSF OS

The main problem I have is not being able to run any batch jobs when usage is heavy. It seems that when both STAR and ATLAS are running jobs, it is impossible for my group's jobs to be run. This means that our jobs may sit in the queue for days at a time without any progress. It is very frustrating for our work to be brought to a complete standstill when other groups are using the system, especially since our needs are very small in comparison.

Shorter procurement cycles at PDSF.

Make sure the hard disks where the data are saved are safe.

Interactive sessions on PDSF are usually extremely slow. I'm not sure this is due to network connectivity, since it has also been seen when I've connected from the LBNL site. The speed of the connection is even slower than when I've connected to other places, like RCF at BNL. Improving the slowness of the interactive sessions would be very helpful for doing data analysis at PDSF.

extend hard disk storage capability

larger disk space

Faster and smarter NERSC: My first hope is to make PDSF more efficient. Because I am an Asian user, I hope I can run more jobs during our night, so that when I get up in the morning I can deal with the results. Could our server align the jobs by world time zone? On the other hand, I feel the data transfer rate is not fast enough when I transfer big files from the US to China, so my other hope is that it will be faster some day. Anyway, I wish NERSC keeps going dynamically.

  Network suggestions:   4 comments

... increased file transfer speed for the case of very large files between NERSC and ANL would also be a nice feature, though I usually extract from large files what I need and transfer only that.

Help users overcome the impact of high-latency network connections for terminal settings. Home connections, hotel connections, etc. all become close to unusable because of latency.

... Improved bandwidth between NERSC and other computational facilities (esp. DOE facilities).

Increase download speed. [North Carolina State University user]

  No suggestions:   2 comments

Continue doing what they have been doing.

Not much.


 

If there is anything important to you that is not covered in this survey, please tell us about it.   23 respondents

  Areas not covered by the survey:   6 comments

pdsf website

More survey questions about allocation time would be important to users like us.

There was no survey question on satisfaction with individual software packages.

Runtimes on Franklin vary a lot for the same job (this is after the upgrade).

I think it would be wise to ask the users about what they would like to see in the next procurements, from the next gen viz machine to replace davinci to the big iron.

Changes in service over the last year, five years. What did you do well before that you don't do well now? What are you doing well now that was a problem before? New acquisitions and the transition from old to new can be addressed in detail. This is a big computer science type issue that physical scientists need help with.

  Additional feedback - Franklin:   6 comments

I've been using NERSC for 12 years, and this is the first time the whole scratch file system was lost!!! And you lost it on both Franklin and Jacquard! What are you doing there? It will take me a lot of time to recover all the lost data and code upgrades.

Franklin has vastly improved over the last year, I hope the stability gains continue.

Franklin is the only one I am using now. It is not always stable. I don't know why.

I do not know where to vent my frustration with the poorly performing Franklin login nodes. The login nodes are relatively slow compared to basic workstations as well as being heavily used, which of course makes them even slower. It is very difficult to compile C++ code and do other basic tasks... Of course, some of this login node slowness, but not all, is likely due to Lustre. And I didn't see where I could report that Lustre has been difficult to work with. Losing all of my data on /scratch was particularly painful. The amount of space (and especially inodes) given to users is simply too small. I realize users can request more space (and I do), but I don't feel that the work I'm doing is particularly special with regard to disk space. It just seems unbalanced to have such a powerful machine and such a small amount of space to work with. I wish there were an easier way to give/take files to users on the machine. Creating a /project directory is too much overhead for simple give/takes.

Congratulations for Franklin and the general maintenance of NERSC systems!

There was a period when Franklin was quite unstable. I am satisfied with Franklin except for this problem.

  Additional feedback - allocations:   4 comments

It is very important to renew allocation time (get more resources).

It would be much better to accept applications for large allocations quarterly. With the annual application currently used, one has to guess what funding will be in place in order to use the allocation and plan ahead. Then, if the funding does not match expectations, one is left with a lot of leftover cycles.

About the allocations process - I am a junior faculty member at a University, and I would like to comment about the allocation reductions. I realize that it is important to have a program whereby unused or underused hours should be reallocated. However, as a junior faculty member who is establishing a research group, a good portion of my computational resources will be used over the summer months, at least until my students get established in research. As such, I am finding that I am becoming susceptible to the first and second quarter allocation reductions. The current system negatively impacts junior faculty members disproportionately. I am not sure what to suggest to make it better, though perhaps some lenience could be given to junior faculty members as their research groups are established. Thanks!

Could I get more CPU time from other PIs who have a lot of surplus during the first half of the year? This CPU time could be restricted to Franklin so that such a transfer would not affect others.

  Additional feedback - other:   7 comments

The ticket system is designed to support individual users, but fails badly when there is a group-wide issue. One should be able to make it possible for others to add comments on one's ticket, but currently there is no way to even make a ticket visible to other users.

I am running NCAR climate models, and I guess there are other people who do that too. I wish there were a web page (I think there used to be one, but I cannot find it any more) so that we can get some help from it.

Thanks for such a fantastic resource (people and systems)!!! Mike Barad

NERSC is the BEST!

PDSF is pricing itself out of the market.

Many of the services are used by others in my group, I am a low level user so my answers may not be the most informative for some categories.

We do almost all of our post-processing using NCL which does not work well on davinci at all right now. This one fact renders NERSC practically useless to me.
