
2002 User Survey Results

Response Summary

Many thanks to the 300 users who responded to this year's User Survey -- this represents the highest response level in the five years we have conducted the survey. The respondents represent all five DOE Science Offices and a variety of home institutions: see User Information.

You can see the FY 2002 User Survey text, in which users rated us on a 7-point satisfaction scale. Some areas were also rated on a 3-point importance scale.

Satisfaction Score   Meaning
7 Very Satisfied
6 Mostly Satisfied
5 Somewhat Satisfied
4 Neutral
3 Somewhat Dissatisfied
2 Mostly Dissatisfied
1 Very Dissatisfied
 
Importance Score   Meaning
3 Very Important
2 Somewhat Important
1 Not Important
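
The satisfaction scores reported below are averages of these 1-7 responses for each question; this year's averages ranged from a high of 6.6 to a low of 4.8. As a minimal illustration of the arithmetic (assuming a simple mean, and using made-up ratings rather than actual survey data), a short Python sketch:

    # Hypothetical 1-7 ratings for a single survey question (not real data).
    responses = [7, 6, 6, 5, 7, 4, 6]

    # The reported score is taken here to be the simple mean on the 7-point scale.
    average = sum(responses) / len(responses)
    print(round(average, 1))   # prints 5.9 for this made-up sample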

The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey results are listed below.

Every year we institute changes based on the survey; this past year's efforts include:

  1. With the NERSC User Group we established a queue committee whose task was to investigate queue issues and recommend improvements. This year's rating for SP: queue structure went up by 0.7 points. Based on the committee's recommendations, NERSC did the following:
    • Improved debug and interactive turnaround during prime time by setting aside 5% of the SP compute pool for interactive and debug jobs from 5:00 AM to 6:00 PM Pacific Time Monday to Friday. This year's rating for SP: ability to run interactively went up by 0.8 points.
    • Implemented priority aging for regular class jobs: jobs that have been in the regular class for more than 36 hours are not preempted by new premium jobs.
    • Provided a new regular_long class with a connect time limit of 24 hours for jobs using 32 nodes or fewer. Such jobs are not drained for system outages, so self-checkpointing is essential for regular_long jobs.
    The NUG queue committee recommended against implementing serial queues on Seaborg. (A sketch of the new scheduling rules appears after this list.)


  2. NERSC provided more performance analysis tools on the SP along with documentation and training on how to use them. See Programming Tools. This year's rating for SP: performance and debugging tools went up by 0.8 points.


  3. NERSC installed new visualization tools on the Vis Server, Escher, as well as on Seaborg, and streamlined visualization documentation. See Visualization Packages. This year's rating for Visualization Services went up by 0.3 points.


  4. NERSC wrote a number of scripts to improve SP management procedures. This year's rating for SP: uptime went up by 1.0 points, the largest increase in satisfaction in the whole survey.


  5. NERSC started to conduct monthly training sessions over the Internet using Access Grid Node technology. This technology is not yet completely mature, and there have been a few rough spots along the way. Satisfaction with training remains at the same level as last year, and we will work to improve our training program in the upcoming year.
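
The scheduling changes described in item 1 above amount to two simple rules: new premium jobs may not preempt regular jobs that have already aged more than 36 hours in the queue, and a 5% interactive/debug set-aside is active from 5:00 AM to 6:00 PM Pacific Time on weekdays. The Python sketch below models those two rules with hypothetical job records; it is an illustration only, not the actual batch system configuration used on Seaborg.

    from datetime import datetime, time, timedelta

    AGING_PROTECTION = timedelta(hours=36)            # regular jobs older than this are not preempted
    PRIME_START, PRIME_END = time(5, 0), time(18, 0)  # 5:00 AM - 6:00 PM Pacific, Monday to Friday

    def premium_may_preempt(regular_job_queued_at, now):
        """A new premium job may preempt a regular job only if that job has
        been in the regular class for 36 hours or less."""
        return (now - regular_job_queued_at) <= AGING_PROTECTION

    def interactive_set_aside_active(now):
        """True during prime time, when 5% of the SP compute pool is reserved
        for interactive and debug jobs."""
        return now.weekday() < 5 and PRIME_START <= now.time() <= PRIME_END

    # Example: a regular job queued 40 hours ago is protected from preemption,
    # and the interactive/debug set-aside is active on a weekday morning.
    now = datetime(2002, 6, 12, 10, 0)                          # a Wednesday, 10:00 AM
    print(premium_may_preempt(now - timedelta(hours=40), now))  # False -- protected by aging
    print(interactive_set_aside_active(now))                    # True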

The average satisfaction scores from this year's survey ranged from a high of 6.6 to a low of 4.8. Areas with the highest user satisfaction were:

  1. SP: uptime
  2. Consulting: timely response
  3. HPSS: reliability
  4. PDSF: uptime

Areas with the lowest user satisfaction were:

  1. PVP: batch wait time
  2. Visualization services
  3. Training

The largest increases in satisfaction came from the SP: 9 of the 18 ratings that were significantly higher this year than last year were SP ratings. Other areas showing significant improvements were the T3E (queue structure, tools and utilities, uptime), visualization services, hardware and software configuration, and the New Users Guide.

Only two areas were rated significantly lower this year: PVP performance and debugging tools, and the allocations process.

92 users answered the question "What does NERSC do well?" 71 respondents pointed out that NERSC is a well-run center with good hardware. 42 singled out user support and NERSC's staff, 16 cited NERSC's documentation, and 13 mentioned job scheduling and batch throughput. Some representative comments are:

Among the supercomputing facilities I tried until now, NERSC excels in most aspects. I am most satisfied with the overall stability of the system. This must come from the outstanding competence of the technicians.

I really appreciate the job from consult. They always did their best to help me to resolve my technique problems, especially at starting to use seaborg.

The available hardware and software is very good. It meets my needs well. There is an abundance of documentation I have benefited from. Account support has also been very good. I also appreciate the seeming concern about security.

66 users responded to the question "What should NERSC do differently?" The following issues were raised and will be addressed in the upcoming year:

  • SP scheduling:
    • Could more resources be devoted to the regular_long class (more nodes, a longer run time, better throughput)?
    • Could longer run time limits be implemented across the board?
    • Could more services be devoted to interactive jobs?
    • Could there be a serial queue?
  • SP software:
    • Could the Unix environment be more user-friendly (e.g. more editors and shells in the default path)?
    • Could there be more data analysis software, including matlab?
  • Computing resources:
    • NERSC needs more computational power overall
    • Could a PVP resource be provided?
    • Could mid-range computing or cluster resources be provided?
  • Documentation:
    • Provide better searching, navigation, organization of the information.
    • Enhance SP documentation.
  • Training:
    • Provide more training on performance analysis, optimization and debugging.
    • Provide more information in the New Users Guide.

  Here are the survey results:

  1. User Information
  2. Overall Satisfaction and Importance
  3. All Satisfaction Questions and Changes from Previous Years
  4. Visualization and Grid Computing
  5. Web, NIM, and Communications
  6. Hardware Resources
  7. Software Resources
  8. Training
  9. User Services
  10. Comments about NERSC