2001 User Survey Results

Response Summary

NERSC extends its thanks to the 237 users who participated in this year's survey, up from 134 respondents last year. The respondents represent all five DOE Science Offices and a variety of home institutions: see User Information.

Your responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. Every year we institute changes based on the survey; some of the changes resulting from the FY 2000 survey are:

  • We increased the SP home inode and disk quotas as well as the SP scratch space. SP disk configuration satisfaction was higher this year, and only one user requested more inodes on this year's survey.
  • Last year one of the two top SP issues was that the "SP is hard to use". Based on comments received in last year's survey, we wrote more SP web documents and made changes to the user environment. This year only 12% of the comments (compared with 25% last year) said that the SP is hard to use.
  • We added resources to the T3E pe512 queue and created a new long64 queue; satisfaction with T3E turnaround time improved this year.
  • Last year we moved PVP interactive services from the J90 to the SV1 architecture and provided more disk resources. Overall PVP satisfaction was rated higher in this year's survey.

Users rated us on a 7-point satisfaction scale, with 7 corresponding to Very Satisfied and 1 to Very Dissatisfied. Based on responses to the Overall Satisfaction with NERSC questions, we are doing as well as or better than last year. Two areas showed significant improvement:

  • available computing hardware
  • allocations process

The areas of most importance to users are:

  • available computing hardware
  • overall running of the center
  • network access

See Overall Satisfaction and Importance.

The average satisfaction scores from the questions about specific NERSC resources ranged from a high of 6.6 to a low of 4.5. Areas with high user satisfaction include

  • HPSS reliability, performance and uptime
  • Consulting responsiveness, quality of technical advice, and follow-up
  • Cray programming environment
  • PVP uptime
  • Account support

Areas with lower user satisfaction include

  • Visualization services
  • Batch wait times on all platforms
  • SP interactive services
  • Training services
  • SP performance and debugging tools

The largest increases in user satisfaction came from the PVP cluster: four PVP ratings increased by 0.3 to 0.8 points. This was true last year as well, when the increase in satisfaction over 1999 was even greater. Other areas showing a significant increase in satisfaction are

  • T3E and SP batch wait times
  • SP disk configuration
  • SP Fortran compilers
  • HPSS
  • allocations process

Several scores were significantly lower this year than last:

  • training scores
  • SP uptime
  • SP interactive resources
  • PVP Fortran compilers

See All Satisfaction Questions and Changes from Previous Years.

When asked what NERSC does well, 35 respondents pointed to our stable and well-managed production environment, and 31 focused on NERSC's excellent support services. Other areas singled out include well-done documentation, good software and tools, and the mass storage environment. When asked what NERSC should do differently, the most common responses were to provide more hardware resources and to enhance our software offerings. Of the 49 users who compared NERSC to other centers, 57% said NERSC is the best or better than other centers. Several sample responses below give the flavor of these comments; for more details see Comments about NERSC.

  • "NERSC makes it possible for our group to do simulations on a scale that would otherwise be unaffordable."
  • "The availability of the hardware is highly predictable and appears to be managed in an outstanding way."
  • "Provides computing resources in a manner that makes it easy for the user. NERSC is well run and makes the effort of putting the users first, in stark contrast to many other computer centers."
  • "Consulting by telephone and e-mail. Listens to users, and tries to setup systems to satisfy users and not some managerial idea of how we should compute"
  • "The web page, hpcf.nersc.gov, is well structured and complete. Also, information about scheduled down times is reliable and useful."

Some of the suggestions for improvements:

  • "Don't become oversubscribed. I'm worried that SciDAC will push for oversubscription, please don't go there."
  • "Get more hardware. DOE is falling way behind NSF."
  • "Install zsh, please."
  • "more debugging and optimization support for MPP platforms like seaborg"
  • "I want something 10 times faster than Killeen but not MPP".
  • "More access to capability machines that let long jobs of 32-64 pes go for 8 hours or more. Although many applications can use a lot of processors, science studies often ramp up and down in size as one walks through parameter spaces. Having a complement of smaller parallel machines to match the big one is very useful. These smaller machines do not need to scale much past 64 pes."
  • "Better indexing of the sprawling website. Finding, e.g., compiler options or queue limits takes some knowledge."