
2010/2011 User Survey Results

Response Summary

A special thanks to the 411 users who responded to the 2011 survey, which was conducted from June 6-30, 2011. This represents a 13.1 percent response rate from the 3,130 users who had been active in the 12 months prior.  Your responses are important to us because they provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve.

The survey strives to be representative of all NERSC users. The hours used by the respondents represent about 71 percent of all MPP hours used on Hopper, Franklin, or Carver at the time the survey closed. MPP respondents were classified according to their usage (see the sketch following the list):

  • 60 respondents had used over 1.5 million hours, generating a 77% response rate from this community of "large MPP users".
  • 131 respondents had used between 100,000 and 1.5 million hours, generating a 38% response rate from the "medium MPP users".
  • 149 respondents had used fewer than 100,000 hours, generating a 14% response rate from the "small MPP users".
  • 70 respondents were not MPP users - they were either Principal Investigators or project managers supervising the work of NERSC users, or they were users of other NERSC resources, such as HPSS, PDSF, Euclid, or Dirac.
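
The usage thresholds above amount to a simple bucketing rule. The following Python sketch is illustrative only; the handling of usage exactly at the 100,000-hour boundary is an assumption, since the survey text does not specify it.

    def classify_respondent(mpp_hours):
        """Bucket a respondent by MPP hours used on Hopper, Franklin, or Carver."""
        if mpp_hours > 1_500_000:
            return "large MPP user"
        elif mpp_hours >= 100_000:       # treatment of exactly 100,000 hours is assumed
            return "medium MPP user"
        elif mpp_hours > 0:
            return "small MPP user"
        else:
            return "non-MPP respondent"  # e.g., a PI, project manager, or HPSS/PDSF/Euclid/Dirac user

    print(classify_respondent(2_000_000))  # -> "large MPP user"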

On this survey users scored satisfaction on a seven-point scale, where “1” means “very dissatisfied” and “7” means “very satisfied.” The average satisfaction scores from this year's survey ranged from a high of 6.79 to a low of 5.16; the average score was 6.29.

Satisfaction Score | Meaning               | Number of Times Selected
7                  | Very Satisfied        | 9,159
6                  | Mostly Satisfied      | 5,333
5                  | Somewhat Satisfied    | 1,280
4                  | Neutral               | 941
3                  | Somewhat Dissatisfied | 210
2                  | Mostly Dissatisfied   | 62
1                  | Very Dissatisfied     | 42
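
As a cross-check, the overall average of 6.29 quoted above can be reproduced from this distribution by treating every selection as one data point on the 1-7 scale. A minimal Python sketch, using only the counts from the table:

    # Counts of each rating selected across all survey questions (from the table above).
    counts = {7: 9159, 6: 5333, 5: 1280, 4: 941, 3: 210, 2: 62, 1: 42}

    total_selections = sum(counts.values())                          # 17,027 ratings in all
    average = sum(score * n for score, n in counts.items()) / total_selections
    print(f"{average:.2f}")                                          # prints 6.29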

For questions that spanned previous surveys, the change in scoring was tested for significance (using the t test at the 90% confidence level). Significant increases in satisfaction are shown in blue; significant decreases in satisfaction are shown in red.
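
The survey report does not spell out the test beyond "t test at the 90% confidence level," but the comparison can be sketched as follows, assuming the raw 1-7 ratings from the two survey years are available as independent samples (the use of Welch's unequal-variance form is an assumption here):

    from scipy import stats

    def significant_change(ratings_prev, ratings_curr, alpha=0.10):
        """Return (change in mean score, p-value, significant at the 90% level?)."""
        change = (sum(ratings_curr) / len(ratings_curr)
                  - sum(ratings_prev) / len(ratings_prev))
        # Two-sample t-test; equal_var=False selects Welch's variant (an assumption).
        t_stat, p_value = stats.ttest_ind(ratings_curr, ratings_prev, equal_var=False)
        return change, p_value, p_value < alpha

    # Example with made-up ratings, not actual survey data:
    print(significant_change([5, 6, 6, 7, 4, 6, 5, 6, 7, 5],
                             [6, 7, 7, 7, 6, 7, 6, 7, 7, 6]))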

Areas with Highest User Satisfaction

Areas with the highest user satisfaction are those with average scores of more than 6.5. NERSC resources and services with average scores in this range were:

  • Global homes, project and scratch
  • HPSS mass storage system
  • Account support and technical consulting
  • Services and Security
  • NERSC overall
  • Carver
  • NERSC's internal network

The top 6 of the 18 questions that scored over 6.5 are shown below.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Columns 1-7 give the number of respondents who selected each rating.

Item                               | 1 | 2 | 3 | 4 |  5 |  6 |   7 | Num Resp | Avg Score | Std Dev | Change from 2010
GLOBAL HOMES: Reliability          |   |   |   | 2 |  3 | 35 | 188 |      228 |      6.79 |    0.49 |             0.15
HPSS: Reliability (data integrity) |   |   |   | 4 |  3 | 29 | 127 |      163 |      6.71 |    0.63 |             0.02
HPSS: Uptime (Availability)        |   |   |   | 3 |  5 | 31 | 127 |      166 |      6.70 |    0.62 |             0.05
GLOBAL HOMES: Uptime               |   |   | 1 | 3 |  6 | 46 | 173 |      229 |      6.69 |    0.63 |             0.09
PROJECT: Reliability               |   |   |   | 3 |  1 | 27 |  88 |      119 |      6.68 |    0.62 |             0.05
SERVICES: Account support          |   | 1 |   | 9 | 13 | 56 | 262 |      341 |      6.67 |    0.72 |             0.06

Areas with Lowest User Satisfaction

Areas with the lowest user satisfaction are those with average scores of less than 5.5.

Columns 1-7 give the number of respondents who selected each rating.

Item                      | 1 | 2 |  3 |  4 |  5 |  6 |  7 | Num Resp | Avg Score | Std Dev | Change from 2010
FRANKLIN: Batch wait time | 5 | 5 | 16 | 24 | 66 | 88 | 43 |      247 |      5.34 |    1.35 |             0.46
CARVER: Batch wait time   |   | 8 | 10 | 18 | 17 | 42 | 19 |      114 |      5.16 |    1.47 |            -0.65

Significant Increases in Satisfaction

Sixteen questions scored significantly higher in 2011 than in 2010. NERSC has never before seen so many increases in satisfaction!

Most of the significant improvements from 2010 were related to the Hopper transition from a small Cray XT5 to a large Cray XE6 (No. 5 on the November 2010 TOP500 list) and NERSC training initiatives.

The two lowest scores on the 2010 survey — Hopper and Franklin batch wait times — improved significantly in 2011, thanks to the new Hopper system.

Columns 1-7 give the number of respondents who selected each rating.

Item                                                 | 1 | 2 |  3 |  4 |  5 |   6 |   7 | Num Resp | Avg Score | Std Dev | Change from 2010
PDSF SW: STAR                                        |   |   |    |  1 |  1 |  13 |   8 |       23 |      6.22 |    0.74 |             0.69
HOPPER: Batch wait time                              | 2 | 3 |  7 | 13 | 44 | 114 |  77 |      260 |      5.86 |    1.13 |             0.67
TRAINING: NERSC classes                              |   | 1 |    | 18 | 10 |  29 |  40 |       98 |      5.90 |    1.19 |             0.51
HOPPER: Ability to run interactively                 | 1 |   |  2 | 23 | 19 |  52 |  97 |      194 |      6.11 |    1.14 |             0.49
FRANKLIN: Batch wait time                            | 5 | 5 | 16 | 24 | 66 |  88 |  43 |      247 |      5.34 |    1.35 |             0.46
SERVICES: Ability to perform data analysis           |   | 1 |    |  8 | 11 |  52 |  46 |      118 |      6.13 |    0.94 |             0.40
SERVICES: Data analysis and visualization assistance |   |   |    | 12 |  7 |  35 |  37 |       91 |      6.07 |    1.01 |             0.40
HOPPER: Overall                                      | 1 | 2 |    |  3 | 12 |  92 | 156 |      266 |      6.47 |    0.82 |             0.38
HOPPER: Batch queue structure                        | 2 | 2 |  6 | 16 | 25 | 106 | 102 |      259 |      6.03 |    1.13 |             0.37
NERSC SW: Data analysis software                     |   |   |    | 37 | 15 |  40 |  52 |      144 |      5.74 |    1.20 |             0.27
OVERALL: Available Computing Hardware                |   |   |  3 |  5 | 24 | 134 | 236 |      402 |      6.48 |    0.73 |             0.25
HOPPER: Uptime (Availability)                        |   | 1 |  2 |  4 | 15 |  83 | 157 |      262 |      6.47 |    0.79 |             0.24
FRANKLIN: Batch queue structure                      | 3 |   |  5 | 20 | 34 | 104 |  78 |      244 |      5.89 |    1.13 |             0.18
GLOBAL HOMES: Reliability                            |   |   |    |  2 |  3 |  35 | 188 |      228 |      6.79 |    0.49 |             0.15
GLOBAL HOMES: Overall                                |   |   |  1 |  2 | 12 |  58 | 164 |      237 |      6.61 |    0.66 |             0.14
OVERALL: Satisfaction with NERSC                     |   |   |  3 |  5 | 14 | 134 | 251 |      407 |      6.54 |    0.69 |             0.14

Significant Decreases in Satisfaction

The largest decrease in satisfaction came from batch wait times on the Carver cluster.  NERSC plans to address this by increasing the size of the Carver system with hardware from the Magellan project, which will conclude in late 2011.

Columns 1-7 give the number of respondents who selected each rating.

Item                        | 1 | 2 |  3 |  4 |  5 |  6 |  7 | Num Resp | Avg Score | Std Dev | Change from 2010
CARVER: Batch wait time     |   | 8 | 10 | 18 | 17 | 42 | 19 |      114 |      5.16 |    1.47 |            -0.65
PDSF: Uptime (availability) |   | 2 |  1 |    |  2 | 14 | 19 |       38 |      6.16 |    1.28 |            -0.55

Satisfaction Patterns for Large, Medium and Small MPP Respondents

The MPP respondents were classified as "large" (usage over 1.5 million hours), "medium" (usage between 100,000 and 1.5 million hours), or "small" (usage under 100,000 hours). Satisfaction differences between these three groups are shown in the table below.

The top increases in satisfaction for the large and medium MPP users were for Hopper, analytics, and training. For the small MPP users, the top three areas were the PDSF physics cluster, Franklin, and Hopper.

For each MPP group, the entries give the number of respondents, that group's average score, and the change from 2010; "n/a" marks a missing value.

Item | All Users: Avg Score | Large MPP (Resp / Avg / Change) | Medium MPP (Resp / Avg / Change) | Small MPP (Resp / Avg / Change)
GLOBAL HOMES: Reliability | 6.79 | 46 / 6.80 / 0.16 | 83 / 6.83 / 0.19 | 75 / 6.77 / 0.13
GLOBAL HOMES: Uptime | 6.69 | 47 / 6.72 / 0.12 | 83 / 6.69 / 0.08 | 75 / 6.75 / 0.14
SERVICES: Account support | 6.67 | 54 / 6.74 / 0.14 | 114 / 6.73 / 0.12 | 118 / 6.66 / 0.06
GLOBAL HOMES: Overall | 6.61 | 49 / 6.55 / 0.08 | 86 / 6.64 / 0.16 | 76 / 6.67 / 0.17
OVERALL: Satisfaction with NERSC | 6.54 | 60 / 6.80 / 0.40 | 130 / 6.53 / 0.13 | 148 / 6.47 / 0.08
OVERALL: Available Computing Hardware | 6.48 | 60 / 6.62 / 0.39 | 129 / 6.50 / 0.27 | 148 / 6.48 / 0.25
HOPPER: Uptime (Availability) | 6.47 | 55 / 6.69 / 0.46 | 107 / 6.38 / 0.15 | 86 / 6.43 / 0.20
HOPPER: Overall | 6.47 | 55 / 6.62 / 0.53 | 109 / 6.41 / 0.33 | 88 / 6.44 / 0.36
WEB SERVICES: Accuracy of information | 6.46 | 52 / 6.27 / -0.10 | 90 / 6.58 / 0.21 | 108 / 6.48 / 0.11
CONSULT: Special requests (e.g. disk quota increases, etc.) | 6.44 | 40 / 6.63 / 0.35 | 64 / 6.53 / 0.25 | 58 / 6.31 / 0.03
WEB: System Status Info | 6.44 | 52 / 6.13 / -0.38 | 93 / 6.57 / 0.06 | 109 / 6.50 / -0.01
NERSC SW: Software environment | 6.35 | 52 / 6.44 / 0.16 | 115 / 6.49 / 0.20 | 113 / 6.26 / -0.03
PDSF SW: Software environment | 6.35 | 1 / 6.00 / n/a | 0 / n/a / n/a | 9 / 6.67 / 0.44
WEB SERVICES: www.nersc.gov overall | 6.35 | 56 / 6.34 / -0.01 | 98 / 6.50 / 0.15 | 114 / 6.20 / -0.15
WEB SERVICES: Timeliness of information | 6.34 | 51 / 6.16 / -0.13 | 87 / 6.49 / 0.21 | 107 / 6.33 / 0.04
NERSC SW: Applications software | 6.33 | 45 / 6.44 / 0.23 | 105 / 6.40 / 0.18 | 102 / 6.25 / 0.04
PDSF SW: Programming libraries | 6.25 | 1 / 6.00 / n/a | 0 / n/a / n/a | 9 / 6.56 / 0.52
TRAINING: Web tutorials | 6.24 | 27 / 6.22 / 0.17 | 57 / 6.44 / 0.38 | 49 / 6.12 / 0.07
TRAINING: New User's Guide | 6.23 | 31 / 6.19 / 0.03 | 62 / 6.44 / 0.27 | 74 / 6.11 / -0.06
PDSF SW: STAR | 6.22 | 1 / 6.00 / n/a | 0 / n/a / n/a | 5 / 6.20 / 0.68
OVERALL: Mass storage facilities | 6.18 | 56 / 6.41 / 0.24 | 108 / 6.16 / -0.01 | 126 / 6.10 / -0.08
SERVICES: Ability to perform data analysis | 6.13 | 16 / 6.19 / 0.46 | 40 / 6.05 / 0.32 | 38 / 6.03 / 0.30
HOPPER: Ability to run interactively | 6.11 | 41 / 6.44 / 0.82 | 74 / 5.97 / 0.35 | 65 / 6.06 / 0.44
WEB SERVICES: Ease of finding information | 6.07 | 52 / 6.08 / -0.02 | 95 / 6.29 / 0.20 | 112 / 5.93 / -0.17
SERVICES: Data analysis and visualization assistance | 6.07 | 12 / 6.17 / 0.50 | 33 / 6.21 / 0.55 | 30 / 5.87 / 0.20
HOPPER: Batch queue structure | 6.03 | 56 / 6.13 / 0.46 | 106 / 6.04 / 0.37 | 84 / 6.05 / 0.38
HOPPER: Disk configuration and I/O performance | 5.99 | 51 / 6.31 / 0.44 | 96 / 5.95 / 0.07 | 79 / 5.80 / -0.08
TRAINING: NERSC classes | 5.90 | 18 / 5.94 / 0.56 | 30 / 5.90 / 0.52 | 36 / 5.86 / 0.48
FRANKLIN: Batch queue structure | 5.89 | 47 / 5.79 / 0.08 | 95 / 5.77 / 0.06 | 85 / 6.04 / 0.32
HOPPER: Batch wait time | 5.86 | 56 / 6.02 / 0.83 | 106 / 5.93 / 0.74 | 84 / 5.77 / 0.58
NERSC SW: Visualization software | 5.70 | 19 / 6.05 / 0.59 | 59 / 5.64 / 0.18 | 54 / 5.57 / 0.11
FRANKLIN: Batch wait time | 5.34 | 48 / 5.29 / 0.42 | 95 / 5.18 / 0.31 | 86 / 5.47 / 0.59
CARVER: Batch wait time | 5.16 | 18 / 4.89 / -0.92 | 46 / 5.30 / -0.51 | 41 / 5.02 / -0.79

Survey Results Lead to Changes at NERSC

Every year we institute changes based on the previous year's survey. In 2010 and early 2011 NERSC took a number of actions in response to suggestions from the 2009/2010 user survey.

On the 2009/2010 survey NERSC training workshops received the third lowest score, with an average satisfaction rating of 5.38 out of 7. In response, NERSC renewed its training efforts in 2010. In addition to its traditional training during the annual NERSC Users Group (NUG) Meeting, NERSC conducted a two-day workshop for Cray XE6 users at its facility in Oakland, joining with members of the Cielo team from Los Alamos National Laboratory and staff from Cray, Inc. Both the NUG training and the XE6 training were broadcast concurrently over the web. In addition, NERSC held a number of web-based training events (webinars) through 2010–2011. In all, NERSC put on eight events for its users from July 1, 2010 to June 30, 2011, with an aggregate attendance of about 375.

NERSC’s users responded positively to the training classes as indicated by the satisfaction score increase of 0.51 points on the 2010/2011 User Survey. Additional surveys were conducted after each class, with 97.8% of respondents indicating that the training was “useful to me.”

Data analysis and visualization was another area that received lower satisfaction ratings on the 2009/2010 survey. In 2010 NERSC hired a new consultant to enhance NERSC’s visualization and data analysis software and services. A significant accomplishment was the robust implementation of an NX server that enabled remote use of X Windows-based graphical software. NERSC aggressively publicized this new service to its users and held training sessions. NERSC also reorganized the analytics materials on the new web site. As a result of these efforts, user satisfaction as measured by the 2010/2011 user survey increased significantly for three data analysis and visualization ratings.

In 2010 NERSC also replaced its data analysis/visualization machine, DaVinci, with a new Sun Sunfire platform, Euclid. User satisfaction with the new system was evident in the survey results: Euclid scored 6.10 out of 7, an increase of 0.27 points over DaVinci’s 2009/2010 rating.