
2005 User Survey Results

Response Summary

Many thanks to the 201 users who responded to this year's User Survey. The respondents represent all six DOE Science Offices and a variety of home institutions: see Respondent Demographics.

The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey results are listed below.

You can see the 2005 User Survey text, in which users rated us on a 7-point satisfaction scale. Some areas were also rated on a 3-point importance scale or a 3-point usefulness scale.

Satisfaction Score   Meaning
7                    Very Satisfied
6                    Mostly Satisfied
5                    Somewhat Satisfied
4                    Neutral
3                    Somewhat Dissatisfied
2                    Mostly Dissatisfied
1                    Very Dissatisfied

Importance Score     Meaning
3                    Very Important
2                    Somewhat Important
1                    Not Important

Usefulness Score     Meaning
3                    Very Useful
2                    Somewhat Useful
1                    Not at All Useful

The average satisfaction scores from this year's survey ranged from a high of 6.73 (very satisfied) to a low of 3.95 (neutral). See All Satisfaction Ratings.

For questions that spanned the 2004 and 2005 surveys the change in rating was tested for significance (using the t test at the 90% confidence level). Significant increases in satisfaction are shown in blue; significant decreases in satisfaction are shown in red.

Significance of Change
blue       significant increase
red        significant decrease
no color   not significant
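
Such a test can be reproduced from the summary statistics the tables below report (average score, standard deviation, and number of responses). A minimal sketch, assuming a pooled-variance two-sample t test; the 2004 figures in the example are hypothetical placeholders, since only the 2005 statistics appear on this page:

    # Two-sample t test on the change in mean rating between surveys,
    # computed from summary statistics via SciPy. Pooled variance
    # (equal_var=True) is an assumption; NERSC's exact procedure is
    # not documented here.
    from scipy.stats import ttest_ind_from_stats

    def change_is_significant(mean04, std04, n04, mean05, std05, n05,
                              alpha=0.10):
        """True when the 2004 -> 2005 change is significant at 90% confidence."""
        t_stat, p_value = ttest_ind_from_stats(mean05, std05, n05,
                                               mean04, std04, n04,
                                               equal_var=True)
        return p_value < alpha

    # Hypothetical 2004 figures paired with a real 2005 row (6.73, 0.61, 150):
    print(change_is_significant(6.67, 0.70, 140, 6.73, 0.61, 150))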

Areas with the highest user satisfaction include the HPSS mass storage system, HPC consulting, and account support services:

(In the tables below, columns 1-7 give the number of users who chose each rating, from 1 = very dissatisfied to 7 = very satisfied; "-" means no responses at that rating. "Change" is the change in average score from the 2004 survey; "-" there means the question was not asked in 2004.)

Item                                             1    2    3    4    5    6    7  Total   Avg.  Std.Dev.    Change
Account support services                         -    -    1    1    4   25  119    150   6.73      0.61     +0.06
HPSS: Reliability (data integrity)               -    -    -    1    1   19   68     89   6.73      0.54     -0.01
OVERALL: Consulting and Support Services         -    -    1    1    2   38  137    179   6.73      0.57     +0.06
CONSULT: overall                                 -    -    1    1    4   36  118    160   6.68      0.62     -0.01
HPSS: Uptime (Availability)                      -    -    -    2    1   21   65     89   6.67      0.62     +0.01
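
The Average Score and Std. Dev. columns follow directly from the per-rating counts. A short sketch of that arithmetic; whether NERSC used the sample (n-1) or population (n) standard deviation is an assumption, as both round to the published values for the rows checked:

    # Weighted mean and standard deviation from a rating histogram.
    from math import sqrt

    def summarize(counts):
        """counts[i] = number of users who gave rating i+1, for i = 0..6."""
        n = sum(counts)
        mean = sum((i + 1) * c for i, c in enumerate(counts)) / n
        # Sample standard deviation (n-1 denominator): an assumption.
        var = sum(c * ((i + 1) - mean) ** 2
                  for i, c in enumerate(counts)) / (n - 1)
        return n, mean, sqrt(var)

    # The "Account support services" row above: no ratings of 1 or 2.
    print(summarize([0, 0, 1, 1, 4, 25, 119]))  # -> (150, 6.73..., 0.61...)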

Areas with the lowest user satisfaction include batch wait times on both Seaborg and Jacquard, Seaborg's queue structure, PDSF disk stability, and Jacquard performance and debugging tools:

Item                                             1    2    3    4    5    6    7  Total   Avg.  Std.Dev.    Change
Jacquard SW: Performance and debugging tools     1    -    4    4    6   15    7     37   5.35      1.44         -
Jacquard: Batch wait time                        2    1   10    8   12   24   13     70   5.16      1.54         -
PDSF: Disk configuration and I/O performance     -    1    2    8   10    8    6     35   5.14      1.29     -0.45
Seaborg: Batch queue structure                   6    3   14   17   17   53   16    126   5.06      1.58     +0.39
Seaborg: Batch wait time                        17   15   28   13   33   27    5    138   3.95      1.76     +0.10

The largest increases in satisfaction over last year's survey are shown below:

Item                                             1    2    3    4    5    6    7  Total   Avg.  Std.Dev.    Change
NERSC CVS server                                 -    -    -    2    5   15   17     39   6.21      0.86     +0.87
Seaborg: Batch queue structure                   6    3   14   17   17   53   16    126   5.06      1.58     +0.39
PDSF SW: C/C++ compilers                         -    -    -    -    1    9   18     28   6.61      0.57     +0.37
Seaborg: Uptime (Availability)                   -    -    -    3    2   48   85    138   6.56      0.64     +0.30
OVERALL: Available Computing Hardware            -    3    3    4   37   88   46    181   5.89      0.98     +0.24
OVERALL: Network connectivity                    1    -    2    4    5   62  104    178   6.45      0.86     +0.18

Only three areas were rated significantly lower this year: overall PDSF satisfaction, PDSF uptime, and the amount of time taken to resolve consulting issues. The introduction of three major systems in the past year, combined with a reduction in consulting staff, explains the latter.

Item                                             1    2    3    4    5    6    7  Total   Avg.  Std.Dev.    Change
PDSF: Overall satisfaction                       -    -    -    3    4   22   10     39   6.00      0.83     -0.52
PDSF: Uptime (availability)                      -    -    1    5    3   16   12     37   5.89      1.10     -0.51
CONSULT: Amount of time to resolve your issue    -    2    1    3    7   54   86    153   6.41      0.89     -0.19

Survey Results Lead to Changes at NERSC

Every year we institute changes based on the previous year's survey. In 2005 NERSC took a number of actions in response to suggestions from the 2004 user survey.

  1. 2004 user comments: On the 2004 survey 37 users asked us to change the job scheduling policies on Seaborg, with 25 of them requesting more support for midrange jobs.

    NERSC response: In early 2005 NERSC implemented two changes to the queueing policies on Seaborg:

    1. we reduced the scheduling distance between midrange and large jobs
    2. we gave all premium jobs a higher scheduling priority than regular-priority large-node jobs (previously, a premium midrange job had a lower scheduling priority than a regular-priority large-node job), as sketched below.

    User satisfaction with Seaborg's batch queue structure increased by 0.39 points on the 2005 survey.
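
    The effect of the second change is just a reordering of the scheduler's priority key. A toy illustration; the actual scheduler configuration is not shown on this page, and the class names and tuple-based key here are hypothetical:

        # After the change, any premium job outranks any regular job;
        # before, a regular large-node job outranked a premium midrange one.
        CLASS_RANK = {"premium": 2, "regular": 1}

        def priority(job):
            # Charge class dominates; node count breaks ties within a class.
            return (CLASS_RANK[job["charge_class"]], job["nodes"])

        jobs = [
            {"name": "regular-large",    "charge_class": "regular", "nodes": 256},
            {"name": "premium-midrange", "charge_class": "premium", "nodes": 16},
        ]
        print(max(jobs, key=priority)["name"])  # -> premium-midrange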

  2. 2004 user comments: On the 2004 survey 25 users requested additional computing resources. In addition, another set of 25 users requested more support for midrange jobs.

    NERSC response: In August 2005 NERSC deployed Jacquard, a Linux cluster with 640 2.2 GHz Opteron processors available for computation and a theoretical peak performance of 2.8 teraflops. This was followed in January 2006 by Bassi, an IBM POWER5 system with 888 processors available for computation and a theoretical peak performance of 6.7 teraflops. (The peak figures are unpacked in the sketch below.)

    User satisfaction with NERSC's available computing hardware increased by 0.24 points on the 2005 survey.
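
    The quoted peaks are consistent with processors x clock rate x floating-point operations per cycle. In this sketch, the 2 flops/cycle (Opteron) and 4 flops/cycle (POWER5, two fused multiply-add units) figures, and Bassi's 1.9 GHz clock, are assumptions not stated in the text above:

        # Theoretical peak in teraflops: procs * GHz * flops/cycle / 1000.
        def peak_tflops(procs, ghz, flops_per_cycle):
            return procs * ghz * flops_per_cycle / 1000.0

        print(peak_tflops(640, 2.2, 2))  # Jacquard: ~2.8 Tflop/s
        print(peak_tflops(888, 1.9, 4))  # Bassi:    ~6.7 Tflop/s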

  3. 2004 user comment: "Faster network connectivity to the outside world. I realize that this may well be out of your hands, but it is a minor impediment to our daily usage."

    NERSC response: During 2005 NERSC upgraded its network infrastructure to 10 gigabits per second (see the sketch below for what that rate means for data transfers). User satisfaction with network connectivity increased by 0.18 points on the 2005 survey.
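
    As a rough sense of scale for the new rate, here is the ideal transfer time for a dataset at 10 Gb/s, ignoring protocol overhead; an illustration, not a measured figure:

        # Ideal wall-clock time to move `terabytes` of data at the given
        # line rate; decimal units, no protocol overhead.
        def transfer_minutes(terabytes, gigabits_per_sec):
            bits = terabytes * 8 * 1000**4
            return bits / (gigabits_per_sec * 1e9) / 60

        print(transfer_minutes(1, 10))  # ~13 minutes for 1 TB at 10 Gb/s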

  4. 2004 user comment: "I want imagemagick on seaborg. Then I could make movies there, and that would complete my viz needs."

    NERSC response: NERSC installed ImageMagick on Seaborg; it is also available on Jacquard and DaVinci.
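
    A minimal example of the movie-making workflow the comment asks for, stitching rendered frames into an animated GIF with ImageMagick's convert tool; the frame filenames here are hypothetical:

        # Collect frames in order and hand them to ImageMagick's convert.
        import glob
        import subprocess

        frames = sorted(glob.glob("frame_*.png"))  # hypothetical frame files
        subprocess.run(["convert", "-delay", "10", "-loop", "0",
                        *frames, "movie.gif"], check=True)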

Users are invited to provide overall comments about NERSC:

82 users answered the question "What does NERSC do well?" 47 respondents stated that NERSC gives them access to powerful computing resources without which they could not do their science; 32 mentioned excellent support services and NERSC's responsive staff; 30 pointed to very reliable and well managed hardware; and 11 said everything. Some representative comments are:

powerful is the reason to use NERSC

65 users responded to "What should NERSC do differently?" The areas of greatest concern are the inter-related ones of queue turnaround times (24 comments), job scheduling and resource allocation policies (22 comments), and the need for more or different computational resources (17 comments). Users also voiced concerns about data management, software, group accounts, staffing and allocations. Some of the comments from this section are:

The most important improvement would be to reduce the amount of time that jobs wait in the queue; however, I understand that this can only be done by reducing the resource allocations.

A queued job sometimes takes too long to start. But I think that, given the amount of users, probably there would be no efficient queue management anyway.

Over-allocation is a mistake. Long waits in queues have been a disaster for getting science done in the last few years. INCITE had a negative effect on Fusion getting its science work done.

It's much better to have idle processors than idle scientists/physicists. What matters for getting science done is turnaround time. ...

... Interactive computing on Seaborg remains an issue that needs continued attention. Although it has greatly improved in the past year, I would appreciate yet more reliable availability.

Expand capabilities for biologists; add more computing facilities that don't emphasize the largest/fastest interconnect, to reduce queue times for people who want to run lots of very loosely coupled jobs. More aggressively adapt to changes in the computing environment.

NERSC needs to expand the IBM-SP5 to 10000 processors to replace the IBM-SP3
Continue to test new machines, including the Cray products

NERSC needs to push to get more compute resources so that scientists can get adequate hours on the machine

51 users answered the question "How does NERSC compare to other centers you have used?" Twenty-six users stated that NERSC was an excellent center or was better than other centers they have used. Reasons given for preferring NERSC include its consulting services and responsiveness, its hardware and software management, and the stability of its systems.

Twelve users said that NERSC was comparable to other centers or gave a mixed review, and seven said that NERSC was not as good as another center they had used. The most common source of dissatisfaction was the oversubscription of NERSC's computational resources and the resulting long wait times. Among PDSF users, the most common complaint was disk instability.

 

Here are the survey results:

  1. Respondent Demographics
  2. Overall Satisfaction and Importance
  3. All Satisfaction, Importance and Usefulness Ratings
  4. Hardware Resources
  5. Software
  6. Visualization and Data Analysis
  7. HPC Consulting
  8. Services and Communications
  9. Web Interfaces
  10. Training
  11. Comments about NERSC