
2006 User Survey Results


Many thanks to the 256 users who responded to this year's User Survey. This represents a response rate of about 13 percent of the active NERSC users. The respondents represent all six DOE Science Offices and a variety of home institutions: see Respondent Demographics.

The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey results are listed below.

You can see the 2006 User Survey text, in which users rated us on a 7-point satisfaction scale. Some areas were also rated on a 3-point importance scale or a 3-point usefulness scale.

Satisfaction Score   Meaning                  Number of Times Selected
7                    Very Satisfied           4,985
6                    Mostly Satisfied         3,748
5                    Somewhat Satisfied         832
4                    Neutral                    584
3                    Somewhat Dissatisfied      251
2                    Mostly Dissatisfied         75
1                    Very Dissatisfied           51

Importance Score   Meaning
3                  Very Important
2                  Somewhat Important
1                  Not Important

Usefulness Score   Meaning
3                  Very Useful
2                  Somewhat Useful
1                  Not at All Useful

The average satisfaction scores from this year's survey ranged from a high of 6.7 (very satisfied) to a low of 4.9 (somewhat satisfied). Across 111 questions, users chose the Very Satisfied rating 4,985 times, and the Very Dissatisfied rating only 51 times. The scores for all questions averaged 6.1, and the average score for overall satisfaction with NERSC was 6.3. See All Satisfaction Ratings.
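
As a quick check, the overall figure can be reproduced from the "Number of Times Selected" column in the scale table above. The short Python sketch below pools all 10,526 individual ratings and yields about 6.16, consistent with the reported 6.1 (which presumably averages the 111 per-question scores rather than the raw ratings):

    # Pool the "Number of Times Selected" counts from the scale table above
    # into one weighted average. Counts are taken verbatim from this report.
    counts = {7: 4985, 6: 3748, 5: 832, 4: 584, 3: 251, 2: 75, 1: 51}

    total = sum(counts.values())                          # 10,526 ratings in all
    mean = sum(score * n for score, n in counts.items()) / total
    print(f"{total} ratings, pooled average {mean:.2f}")  # -> 10526 ratings, pooled average 6.16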

For questions that were also asked on the 2003 through 2005 surveys, the change in rating was tested for significance using the t test at the 90% confidence level (a sketch of this test follows the legend below). Significant increases in satisfaction are shown in blue; significant decreases in satisfaction are shown in red.

Significance of Change (relative to 2005)
  blue     = significant increase
  red      = significant decrease
  no color = not significant
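
A minimal sketch of such a test, assuming it is run as a two-sample (Welch's) t test on per-question summary statistics. Only the two means below come from this report (3.95 for Seaborg batch wait time in 2005, 4.94 in 2006); the 2005 standard deviation and response count are illustrative placeholders, not published figures:

    # Hypothetical significance check for the year-over-year change in one
    # question's mean rating. std1 and nobs1 (2005 spread and sample size)
    # are assumed values; the means and the 2006 std/n are from this report.
    from scipy.stats import ttest_ind_from_stats

    t, p = ttest_ind_from_stats(
        mean1=3.95, std1=1.6, nobs1=150,   # 2005 Seaborg batch wait time (std/n assumed)
        mean2=4.94, std2=1.57, nobs2=159,  # 2006 values from the tables below
        equal_var=False,                   # Welch's variant; variances may differ by year
    )
    print(f"t = {t:.2f}, p = {p:.4g}, significant at 90%: {p < 0.10}")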

Areas with the highest user satisfaction include the HPSS mass storage system, account and consulting services, DaVinci C/C++ compilers, Jacquard uptime, network performance within the NERSC center, and Bassi Fortran compilers.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item    [counts for ratings 1-7]    Total Responses    Average Score    Std. Dev.    Change from 2005
HPSS: Reliability (data integrity)       2   22 69 93 6.70 0.59 -0.03
Account support services 1   1 4 2 47 147 202 6.64 0.76 -0.09
HPSS: Uptime (Availability)       1 2 29 62 94 6.62 0.59 -0.06
DaVinci SW: C/C++ compilers         1 3 9 13 6.62 0.65  
Jacquard: Uptime (Availability)       2 2 26 55 85 6.58 0.66 0.73
CONSULT: Timely initial response to consulting questions   1 3 2 6 50 136 198 6.57 0.81 -0.08
Network performance within NERSC (e.g. Seaborg to HPSS)     2 1 3 38 72 116 6.53 0.75 -0.08
OVERALL: Consulting and Support Services     4 8 7 58 159 236 6.53 0.85 -0.20
Bassi SW: Fortran compilers 1 1     3 18 50 73 6.52 1.02  
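
Each row's Average Score and Std. Dev. follow directly from its seven rating counts. Here is a small sketch using the "Account support services" row above, reading the row's blank cells as zero counts (an assumption, but one that reproduces the published 6.64 and 0.76):

    # Recompute one row's summary columns from its rating counts.
    # Blank cells in the row are read as zeros; this is an assumption.
    counts = {1: 1, 2: 0, 3: 1, 4: 4, 5: 2, 6: 47, 7: 147}

    n = sum(counts.values())                                 # 202 responses
    mean = sum(s * c for s, c in counts.items()) / n         # average score
    var = sum(c * (s - mean) ** 2 for s, c in counts.items()) / n
    print(f"n={n}, mean={mean:.2f}, std={var**0.5:.2f}")     # n=202, mean=6.64, std=0.76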

Areas with the lowest user satisfaction include Seaborg batch wait times; PDSF disk I/O, interactive use, and performance tools; Bassi and Seaborg visualization software; and the overall data analysis and visualization facilities.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item    [counts for ratings 1-7]    Total Responses    Average Score    Std. Dev.    Change from 2005
PDSF SW: Performance and debugging tools 1   3 3 5 10 9 31 5.48 1.52 -0.52
Seaborg SW: Visualization software     1 12 5 15 9 42 5.45 1.19 -0.08
PDSF: Ability to run interactively 1 1 1 4 11 17 6 41 5.39 1.30 -0.40
OVERALL: Data analysis and visualization facilities   2 4 32 20 47 23 128 5.37 1.22 -0.28
Bassi SW: Visualization software 1 1   4 2 9 5 22 5.36 1.62  
PDSF: Disk configuration and I/O performance 1   7 5 6 13 7 39 5.10 1.54 -0.04
Seaborg: Batch wait time 6 5 27 11 35 56 19 159 4.94 1.57 0.99

The largest increases in satisfaction over last year's survey are for the Jacquard Linux cluster; Seaborg batch wait times and queue structure; NERSC's available computing hardware; and the NERSC Information Management (NIM) system.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item    [counts for ratings 1-7]    Total Responses    Average Score    Std. Dev.    Change from 2005
Seaborg: Batch wait time 6 5 27 11 35 56 19 159 4.94 1.57 0.99
Jacquard: Uptime (Availability)       2 2 26 55 85 6.58 0.66 0.73
Seaborg: Batch queue structure 1 4 5 13 21 61 48 153 5.77 1.27 0.72
Jacquard: Batch wait time 1   3 5 10 40 23 82 5.87 1.13 0.71
Jacquard: overall   2   2 10 28 46 88 6.27 1.01 0.49
Jacquard: Batch queue structure   1 3 6 7 34 28 79 5.95 1.14 0.49
OVERALL: Available Computing Hardware     3 5 29 108 92 237 6.19 0.82 0.30
NIM     3 2 19 76 102 202 6.35 0.81 0.19

The largest decreases in satisfaction over last year's survey are shown below.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item    [counts for ratings 1-7]    Total Responses    Average Score    Std. Dev.    Change from 2005
PDSF SW: Programming libraries     1 3 7 9 11 31 5.84 1.13 -0.62
PDSF SW: General tools and utilities   1 2 4 4 14 9 34 5.62 1.33 -0.58
PDSF SW: Software environment   2   1 6 14 13 36 5.92 1.25 -0.52
Seaborg: Uptime (Availability)   1 4 3 20 52 79 159 6.23 0.99 -0.33
NERSC security 2 1 7 9 10 72 134 235 6.30 1.11 -0.31
Seaborg SW: Performance and debugging tools   3 6 7 13 38 28 95 5.69 1.31 -0.31
OVERALL: Available Software     6 24 22 85 82 219 5.97 1.08 -0.22
CONSULT: overall 1 1 2 3 9 59 124 199 6.47 0.90 -0.21
OVERALL: Consulting and Support Services     4 8 7 58 159 236 6.53 0.85 -0.20
OVERALL: Network connectivity     8 10 19 69 124 230 6.27 1.02 -0.18
CONSULT: Quality of technical advice 1   2 3 8 66 113 193 6.46 0.84 -0.16

Survey Results Lead to Changes at NERSC

Every year we institute changes based on the previous year's survey. In 2006 NERSC took a number of actions in response to suggestions from the 2005 user survey.

  1. 2005 user survey: On the 2005 survey 24 users asked us to improve queue turnaround times. Seaborg wait time had the lowest satisfaction rating on the survey, with an average score of 3.95 (out of 7).

    NERSC response: In 2006, NERSC and DOE adjusted the duty cycle of NERSC systems to better balance throughput (reduced queue wait times) and overall utilization, and also agreed not to pre-allocate systems that are not yet in production. This approach has paid off: on the 2006 survey only 5 users commented on poor turnaround times, and the average satisfaction score for Seaborg wait times increased by almost one point.

  2. 2005 user survey: On the 2005 survey three Jacquard ratings were among the lowest seven ratings.

    NERSC response: In 2006 NERSC staff worked hard to improve Jacquard's computing infrastructure:

    • We implemented the Maui scheduler in order to manage the queues more effectively.
    • The system was greatly stabilized by reducing the system memory clock speed from 400 MHz to 333 MHz (more nodes were added to Jacquard to compensate for the reduced clock speed).
    • We worked with Linux Networx and its third party vendors to improve MVAPICH.
    • We worked with Mellanox to debug and fix several problems with the Infiniband drivers and firmware on the Infiniband switches that were preventing successful runs of large-concurrency jobs.

    On the 2006 survey four Jacquard ratings were significantly higher: those for uptime, wait time, and queue structure, as well as overall satisfaction with Jacquard.

  3. 2005 user survey: On the 2005 survey four users mentioned that moving data between machines was an inhibitor to doing visualization.

    NERSC response: In early 2006 the NERSC Global Filesystem was deployed to address this issue. It is a large, shared filesystem that can be accessed from all the computational systems at NERSC.

    Moving files between machines did not come up as an issue on the 2006 survey, and users were mostly satisfied with NGF reliability and performance.

  4. 2005 user survey: On the 2005 survey 17 users requested more hardware resources.

    NERSC response: In addition to deploying the Bassi POWER5 system in early 2006, NERSC has announced plans to deploy a 19,344 processor Cray XT4 system in 2007. User satisfaction with available computing hardware at NERSC increased by 0.3 points on the 2006 survey, and only ten users requested additional computing resources in the Comments about NERSC section.