
2009/2010 User Survey Results

Response Summary

Many thanks to the 395 users who responded to this year's User Survey. The response rate is comparable to that of the last two years, and both are significantly higher than in earlier years:

  • 77.8 percent of the 126 users who had used more than 250,000 XT4-based hours when the survey opened responded
  • 30.9 percent of the 479 users who had used between 10,000 and 250,000 XT4-based hours responded
  • The overall response rate for the 3,533 authorized users during the survey period was 11.2 percent.
  • The MPP hours used by the survey respondents represent 66.8 percent of total NERSC MPP usage as of the end of the survey period.
  • The PDSF hours used by the PDSF survey respondents represent 20.0 percent of total NERSC PDSF usage as of the end of the survey period.

The respondents represent all six DOE Science Offices and a variety of home institutions: see Respondent Demographics.

The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey results are listed below.

On the 2009/2010 User Survey, users rated us on a 7-point satisfaction scale. Some areas were also rated on a 3-point importance scale or a 3-point usefulness scale.

Satisfaction Score | Meaning | Number of Times Selected
7 | Very Satisfied | 8,053
6 | Mostly Satisfied | 6,219
5 | Somewhat Satisfied | 1,488
4 | Neutral | 1,032
3 | Somewhat Dissatisfied | 366
2 | Mostly Dissatisfied | 100
1 | Very Dissatisfied | 88

Importance Score | Meaning
3 | Very Important
2 | Somewhat Important
1 | Not Important

Usefulness Score | Meaning
3 | Very Useful
2 | Somewhat Useful
1 | Not at All Useful

The average satisfaction scores from this year's survey ranged from a high of 6.71 (very satisfied) to a low of 4.87 (somewhat satisfied). Across 94 questions, users chose the Very Satisfied rating 7,901 times, and the Very Dissatisfied rating 75 times. The scores for all questions averaged 6.16, and the average score for overall satisfaction with NERSC was 6.40. See All Satisfaction Ratings.
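
Each score in this report is the response-count-weighted mean of the 1-7 ratings, and the standard deviations are sample standard deviations. As a quick illustration, the Python sketch below (the variable names are ours, not part of the survey) reproduces the published Franklin batch wait time row:

    # Recompute a survey row's summary statistics from its rating counts.
    # These counts are the published FRANKLIN "Batch wait time" row; the
    # script reproduces its average score (4.87) and std. dev. (1.43).
    import math

    counts = {1: 7, 2: 10, 3: 43, 4: 43, 5: 81, 6: 90, 7: 29}

    n = sum(counts.values())
    mean = sum(score * c for score, c in counts.items()) / n
    var = sum(c * (score - mean) ** 2 for score, c in counts.items()) / (n - 1)

    print(f"n={n}  avg={mean:.2f}  std={math.sqrt(var):.2f}")
    # prints: n=303  avg=4.87  std=1.43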

For questions that spanned previous surveys, the change in rating was tested for significance, using the t test at the 90% confidence level. Significant increases in satisfaction are shown in blue; significant decreases in satisfaction are shown in red. (A sketch of the test follows the legend below.)

Significance of Change
significant increase (change from 2009)
significant decrease (change from 2009)
not significant
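
For readers who want to reproduce the test, here is a minimal sketch. The survey reports only "t test at the 90% confidence level"; the two-sided, unequal-variance (Welch) form and the placeholder rating lists below are our assumptions, not a statement of exactly how NERSC ran it:

    # Hedged sketch: flag a year-over-year change as significant when a
    # two-sample t test rejects equal means at the 90% confidence level.
    # The rating lists are placeholders, not actual survey responses.
    from scipy import stats

    ratings_2009 = [5, 4, 6, 3, 5, 4, 5, 6, 4, 5]
    ratings_2010 = [6, 6, 5, 7, 6, 5, 6, 7, 6, 5]

    t, p = stats.ttest_ind(ratings_2010, ratings_2009, equal_var=False)
    if p < 0.10:  # two-sided test at the 90% confidence level
        print("significant", "increase" if t > 0 else "decrease")
    else:
        print("not significant")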

Highlights of the 2010 user survey responses include:

  • 2008/2009 user survey: On the 2008/2009 survey Franklin uptime received the second lowest average score (4.91).

    NERSC response: In the first half of 2009 Franklin underwent an intensive stabilization period, and tiger teams were formed in close collaboration with Cray to address system instability. These efforts continued in the second half of 2009 and throughout 2010, when NERSC engaged in a project to understand system-initiated causes of hung jobs and to implement corrective actions to reduce their number. These investigations revealed bugs in the SeaStar interconnect as well as in the Lustre file system. These bugs were reported to Cray and were fixed in March 2010, when Franklin was upgraded to Cray Linux Environment 2.2. As a result, Franklin's Mean Time Between Failures improved from a low of about 3 days in 2008 to 9 days in 2010 (a short sketch of the MTBF arithmetic appears at the end of this item).

    On the 2010 survey Franklin uptime received an average score of 5.99, a statistically significant increase over the previous year by 1.08 points. Two other Franklin scores (overall satisfaction and Disk configuration and I/O performance) were significantly improved as well.

    Another indication of increased satisfaction with Franklin is that on the 2009 survey 40 users requested improvements in Franklin uptime or performance, whereas only 10 made such requests on the 2010 survey.
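
    An MTBF figure like the one above is simply in-service time divided by the number of system-wide failures in the reporting window. A toy sketch of the arithmetic (the failure count is a placeholder, not Franklin's actual log):

        # MTBF = days in the window / number of system-wide failures.
        # The failure count is hypothetical, chosen only to show the math.
        from datetime import date

        window_days = (date(2011, 1, 1) - date(2010, 1, 1)).days  # 365
        failures = 40                                             # placeholder

        print(f"MTBF = {window_days / failures:.1f} days")        # 9.1 days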

  • 2008/2009 user survey: On the 2008/2009 survey ten users requested improvements for the NERSC web site.

    NERSC response: User services staff removed older documentation and made sure that the remaining documentation was up-to-date.

    On the 2010 survey the score for "ease of finding information on the NERSC web site" was significantly improved. Also, for the medium-scale MPP users the scores for the web site overall and for the accuracy of information on the web showed significant improvement.

    1. Respondent Demographics
    2. Overall Satisfaction and Importance
    3. All Satisfaction and Importance Ratings
    4. HPC Resources
    5. Software
    6. Services
    7. Comments about NERSC
    • 73 respondents mentioned ease of use, good consulting, staff support and communications;
    • 65 users mentioned computational systems or HPC resources for science;
    • 22 highlighted good software support;
    • 19 were generally happy;
    • 14 mentioned good documentation and web services;
    • 10 pointed to good queue management or job turnaround;
    • 8 were pleased with data services (HPSS, large disk space, data management);
    • 4 complimented good networking, access and security.
  • The complete survey results are listed below.

    User Satisfaction with NERSC

    Areas with Highest User Satisfaction

    Areas with the highest user satisfaction are those with average scores of more than 6.5.

    7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

    Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
    (Rating counts run from lowest to highest rating; ratings no one chose are omitted. "n/a" = not asked previously.)
    PDSF: Uptime (availability) | 9 22 | 31 | 6.71 | 0.46 | 0.35
    HPSS: Reliability (data integrity) | 2 2 2 33 124 | 163 | 6.69 | 0.68 | 0.01
    HPSS: Uptime (Availability) | 2 5 41 115 | 163 | 6.65 | 0.60 | 0.02
    CONSULT: Overall | 1 1 5 9 59 212 | 287 | 6.64 | 0.74 | 0.12
    GLOBALHOMES: Reliability | 7 5 47 160 | 219 | 6.64 | 0.68 | n/a
    PROJECT: Reliability | 3 4 24 79 | 110 | 6.63 | 0.69 | 0.08
    SERVICES: Account support | 1 1 2 6 12 73 243 | 338 | 6.60 | 0.80 | -0.06
    GLOBALHOMES: Uptime | 1 4 7 58 151 | 221 | 6.60 | 0.68 | n/a
    CONSULT: Response time | 1 2 6 12 61 205 | 287 | 6.59 | 0.80 | -0.01
    PROJECT: Uptime | 1 4 3 24 78 | 110 | 6.58 | 0.79 | 0.03
    PROJECT: Overall | 3 5 32 79 | 119 | 6.57 | 0.70 | 0.26
    CONSULT: Quality of technical advice | 1 7 14 67 190 | 279 | 6.57 | 0.74 | 0.09
    OVERALL: Services | 1 6 15 114 246 | 382 | 6.57 | 0.67 | n/a
    OVERALL: Security | 2 11 11 69 202 | 295 | 6.55 | 0.79 | 0.16
    NETWORK: Network performance within NERSC (e.g. Hopper to HPSS) | 3 10 51 114 | 178 | 6.55 | 0.68 | 0.04
    WEB: System Status Info | 1 5 12 104 181 | 303 | 6.51 | 0.68 | n/a

     

    Areas with Lowest User Satisfaction

    Areas with the lowest user satisfaction are those with average scores of less than 5.5.

    7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

    Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
    (Rating counts run from lowest to highest rating; ratings no one chose are omitted. "n/a" = not asked previously.)
    NERSC SW: Data analysis software | 4 1 4 36 13 43 44 | 145 | 5.47 | 1.47 | -0.37
    NERSC SW: Visualization software | 4 3 6 31 16 39 49 | 148 | 5.47 | 1.54 | -0.45
    NERSC SW: ACTS Collection | 3 31 9 22 29 | 94 | 5.39 | 1.48 | -0.54
    TRAINING: Workshops | 1 1 19 8 13 18 | 60 | 5.38 | 1.43 | -0.21
    HOPPER: Batch wait time | 1 5 16 18 35 45 26 | 146 | 5.19 | 1.41 | n/a
    FRANKLIN: Batch wait time | 7 10 43 43 81 90 29 | 303 | 4.87 | 1.43 | -0.68

     

    Significant Increases in Satisfaction

    The three survey results with the most significant improvement from 2009 were all related to the Franklin system. NERSC and Cray have worked hard to improve Franklin's stability in the past two years, and the improved scores demonstrate that these efforts directly resulted in improvements recognized by the users.

    7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

    Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
    (Rating counts run from lowest to highest rating; ratings no one chose are omitted.)
    FRANKLIN: Uptime (Availability) | 4 13 12 42 119 118 | 308 | 5.99 | 1.13 | 1.08
    FRANKLIN: Overall | 2 6 10 35 142 117 | 312 | 6.12 | 0.94 | 0.37
    FRANKLIN: Disk configuration and I/O performance | 1 1 3 36 25 112 103 | 281 | 5.96 | 1.10 | 0.35
    PDSF: Uptime (availability) | 9 22 | 31 | 6.71 | 0.46 | 0.35
    PROJECT: Overall | 3 5 32 79 | 119 | 6.57 | 0.70 | 0.26
    SERVICES: Allocations process | 1 3 11 23 108 134 | 280 | 6.27 | 0.91 | 0.24
    OVERALL: Available Computing Hardware | 1 7 11 33 167 169 | 388 | 6.23 | 0.89 | 0.23
    OVERALL: Satisfaction with NERSC | 1 1 6 25 157 200 | 390 | 6.40 | 0.75 | 0.17
    OVERALL: Security | 2 11 11 69 202 | 295 | 6.55 | 0.79 | 0.16
    WEB: Ease of finding information | 1 1 7 11 30 142 112 | 304 | 6.10 | 0.97 | 0.15
    CONSULT: Overall | 1 1 5 9 59 212 | 287 | 6.64 | 0.74 | 0.12

     

    Significant Decreases in Satisfaction

    The largest decrease in satisfaction over last year's survey was for Franklin batch wait time: as Franklin became more stable it also became more popular and batch wait times increased.

    7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

    Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
    (Rating counts run from lowest to highest rating; ratings no one chose are omitted.)
    FRANKLIN: Batch wait time | 7 10 43 43 81 90 29 | 303 | 4.87 | 1.43 | -0.68
    PDSF SW: STAR | 3 2 3 7 6 | 21 | 5.52 | 1.40 | -0.67
    NERSC SW: ACTS Collection | 3 31 9 22 29 | 94 | 5.39 | 1.48 | -0.54
    DaVinci: Disk configuration and I/O performance | 1 7 3 12 16 | 39 | 5.87 | 1.28 | -0.47
    NERSC SW: Visualization software | 4 3 6 31 16 39 49 | 148 | 5.47 | 1.54 | -0.45
    NERSC SW: Data analysis software | 4 1 4 36 13 43 44 | 145 | 5.47 | 1.47 | -0.37
    FRANKLIN: Batch queue structure | 3 2 9 37 44 121 82 | 298 | 5.71 | 1.21 | -0.19

     

    Satisfaction Patterns for Different MPP Respondents

    The MPP respondents were classified as "large" (usage over 250,000 hours), "medium" (usage between 10,000 and 250,000 hours) and "small" (usage under 10,000 hours). Satisfaction differences between these three groups are shown in the table below.

    The smaller MPP users were especially happy with data storage resources, and the larger MPP users with consulting and web services. It is interesting to note that the larger MPP users were the least satisfied with Franklin's batch queue structure, even though large jobs are favored on Franklin.
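
    In code form, the grouping rule amounts to the following (the thresholds come from the paragraph above; the function name is ours):

        # Classify an MPP user by hours used, per the thresholds above.
        def mpp_group(hours: float) -> str:
            if hours > 250_000:
                return "large"
            if hours >= 10_000:
                return "medium"
            return "small"

        assert mpp_group(300_000) == "large"
        assert mpp_group(50_000) == "medium"
        assert mpp_group(500) == "small"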

    Item | Large MPP users (Num Resp, Avg Score, Change from 2009) | Medium MPP users (same three columns) | Small MPP users (same three columns)
    HPSS: Overall 65 6.38 -0.05 58 6.28 -0.16 24 6.88 0.44
    PROJECT: Overall 42 6.40 0.10 37 6.62 0.31 20 6.80 0.49
    PROJECT: File and Directory Operations 36 6.31 0.10 34 6.47 0.26 17 6.76 0.56
    CONSULT: Overall 91 6.71 0.19 116 6.65 0.12 53 6.66 0.13
    CONSULT: Quality of technical advice 89 6.63 0.14 113 6.53 0.05 50 6.60 0.12
    Security 83 6.60 0.21 122 6.57 0.17 49 6.47 0.07
    WEB: Accuracy of information 91 6.49 0.17 120 6.42 0.09 55 6.27 -0.05
    OVERALL: Satisfaction with NERSC 105 6.44 0.21 148 6.30 0.08 77 6.45 0.23
    WEB: www.nersc.gov overall 95 6.39 0.11 130 6.44 0.16 58 6.34 0.07
    SERVICES: Allocations process 88 6.19 0.16 108 6.28 0.25 52 6.37 0.33
    TRAINING: New User's Guide 53 6.25 0.10 83 6.35 0.21 35 5.97 -0.17
    HPSS: User interface (hsi, pftp, ftp) 63 6.06 0.04 54 5.63 -0.39 23 6.35 0.33
    OVERALL: Available Computing Hardware 105 6.34 0.34 148 6.10 0.10 76 6.17 0.17
    OVERALL: Available Software 87 6.29 0.08 126 5.98 -0.23 64 6.22 0.01
    WEB: Ease of finding information 90 6.17 0.22 122 6.23 0.28 55 5.96 0.01
    FRANKLIN: Overall 103 6.16 0.41 132 6.05 0.31 61 6.11 0.40
    FRANKLIN: Uptime (Availability) 102 6.12 1.21 130 5.96 1.05 59 5.80 0.89
    DaVinci: Overall 13 5.31 -0.91 11 6.18 -0.03 9 6.11 -0.10
    SERVICES: Data analysis and visualization consulting 33 5.58 -0.26 31 5.06 -0.71 17 6.18 0.34
    FRANKLIN: Disk configuration and I/O performance 98 5.93 0.33 118 6.06 0.46 52 5.90 0.30
    FRANKLIN: Ability to run interactively 76 5.78 0.02 94 6.02 0.26 49 5.96 0.20
    DaVinci: Disk configuration and I/O performance 12 5.58 -0.76 11 6.00 -0.34 9 6.11 -0.23
    FRANKLIN: Batch queue structure 103 5.56 -0.34 127 5.76 -0.15 54 5.89 -0.01
    NERSC SW: Data analysis software 43 5.42 -0.42 44 4.98 -0.86 31 5.84 -0.00
    NERSC SW: Visualization software 47 5.53 -0.38 49 5.06 -0.85 30 5.77 -0.15
    NERSC SW: ACTS Collection 27 5.48 -0.45 37 5.19 -0.75 20 5.65 -0.29
    FRANKLIN: Batch wait time 103 4.59 -0.96 130 4.78 -0.76 56 5.32 -0.23

     

    Changes in Satisfaction for Active MPP Respondents

    The table below includes only those users who have run batch jobs on the MPP systems. It does not include interactive-only users or project managers who do not compute. This group of users showed an increase in satisfaction for the NERSC web site.

    Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
    (Rating counts run from lowest to highest rating; ratings no one chose are omitted.)
    CONSULT: Overall | 1 4 9 51 195 | 260 | 6.67 | 0.66 | 0.15
    PROJECT: Overall | 3 4 26 66 | 99 | 6.57 | 0.72 | 0.26
    Security | 2 9 10 57 176 | 254 | 6.56 | 0.80 | 0.16
    WEB: www.nersc.gov overall | 2 2 13 129 137 | 283 | 6.40 | 0.68 | 0.12
    OVERALL: Satisfaction with NERSC | 1 1 6 23 130 169 | 330 | 6.38 | 0.78 | 0.15
    WEB: Timeliness of information | 1 7 20 113 123 | 264 | 6.33 | 0.76 | 0.12
    SERVICES: Allocations process | 1 2 9 22 97 117 | 248 | 6.27 | 0.90 | 0.23
    OVERALL: Available Computing Hardware | 1 7 9 29 147 136 | 329 | 6.19 | 0.90 | 0.19
    WEB: Ease of finding information | 5 10 27 122 103 | 267 | 6.15 | 0.89 | 0.20
    FRANKLIN: Overall | 1 6 10 35 135 109 | 296 | 6.11 | 0.92 | 0.36
    FRANKLIN: Uptime (Availability) | 3 13 11 41 114 109 | 291 | 5.98 | 1.11 | 1.07
    FRANKLIN: Disk configuration and I/O performance | 1 3 34 22 109 99 | 268 | 5.98 | 1.08 | 0.38
    FRANKLIN: Batch queue structure | 3 2 8 36 43 112 80 | 284 | 5.71 | 1.22 | -0.19
    SERVICES: Ability to perform data analysis | 1 2 3 17 17 34 32 | 106 | 5.61 | 1.33 | -0.32
    NERSC SW: Visualization software | 3 3 6 28 14 32 40 | 126 | 5.40 | 1.54 | -0.51
    NERSC SW: ACTS Collection | 2 29 9 18 26 | 84 | 5.39 | 1.43 | -0.54
    NERSC SW: Data analysis software | 3 1 3 34 11 34 32 | 118 | 5.36 | 1.46 | -0.47
    FRANKLIN: Batch wait time | 7 10 43 42 79 82 26 | 289 | 4.82 | 1.43 | -0.73

     

    Changes in Satisfaction for PDSF Respondents

    The PDSF users are clearly more satisfied with data analysis resources than the MPP users.

    Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
    (Rating counts run from lowest to highest rating; ratings no one chose are omitted.)
    PROJECT: I/O Bandwidth | 1 4 | 5 | 6.80 | 0.45 | 0.56
    NETWORK: Remote network performance to/from NERSC (e.g. Hopper to your home institution) | 3 4 | 7 | 6.57 | 0.53 | 0.42
    SERVICES: Ability to perform data analysis | 4 5 | 9 | 6.56 | 0.53 | 0.62
    SERVICES: Allocations process | 1 4 6 | 11 | 6.45 | 0.69 | 0.42
    SERVICES: Data analysis and visualization assistance | 2 1 6 | 9 | 6.44 | 0.88 | 0.61
    OVERALL: Available Computing Hardware | 2 9 12 | 23 | 6.43 | 0.66 | 0.43
    NERSC SW: Performance and debugging tools | 8 2 | 10 | 6.20 | 0.42 | 0.33
    WEB: www.nersc.gov overall | 1 1 1 6 2 | 11 | 5.45 | 1.57 | -0.82
    PDSF SW: STAR | 2 1 3 4 3 | 13 | 5.38 | 1.39 | -0.80
    TRAINING: New User's Guide | 1 1 2 4 1 | 9 | 5.11 | 1.76 | -1.03
    WEB: Ease of finding information | 1 1 3 6 | 11 | 5.09 | 1.38 | -0.86

     

    Survey Results Lead to Changes at NERSC

    Every year we institute changes based on the previous year's survey. In 2009 and early 2010, NERSC took a number of actions in response to suggestions from the 2008/2009 user survey.

     

    Users Provide Overall Comments about NERSC

    132 users answered the question "What does NERSC do well?"

    Some representative comments are:

    User support is fantastic - timely and knowledgeable, including follow-up service. New machines are installed often, and they are state-of-the-art. The queues are crowded but fairly managed.
    Everything that is important for me. This is a model for how a computer user facility should operate.
    User support is very good. Diversity of computational resources.
    Website is first class, especially the clear instructions for compiling and running jobs.
    very good account management tools (nim), good software support
    HPSS is fast.
    I really like the new machine Carver. It is efficient.
    The account allocation process is very fast and efficient.
    NERSC has always been user centered - I have consistently been impressed by this.
    NERSC has proven extremely effective for running high resolution models of the earth's climate that require a large number of processors. Without NERSC I never would have been able to run these simulations at such a high resolution to predict future climate. Many thanks.
    We run data intensive jobs, and the network to batch nodes is great! The access to processors is also great.

    105 users responded to "What can NERSC do to make you more productive?"

    The top areas of concern were long queue turnaround times, the need for more computing resources, queue policies, and software support. Some of the comments from this section are:

    There are a lot of users (it is good to be useful and popular), and the price of that success is long queues that lead to slower turn-around. A long-standing problem with no easy answer.
    Add more processors to carver. The long queue time makes progress slow. Carver clobbers hopper and franklin with the performance increase of my code. Also recompiling the code is much faster on carver. Yet because i have to wait longer to start and restart the simulations, it doesn't get me results faster overall.
    Turn around time is always an issue! More resources would be great!
    Add a machine with longer wall-clock limit and less core/node (to save allocation). Not all users have to chase ultra massive parallelization for their research.
    Allow longer jobs (such as one month or half year) on Carver and Hoppers. Let science run to the course.
    I often have a spectrum of job sizes to run (e.g., scaling studies, debugging and production runs) and the queuing structure/algorithms seem to be preferential to large runs. It would improve my productivity if this was more balanced or if there were nearly identical systems which had complementary queuing policies.
    Provide an interface which can help the user determine which of the NERSC machines is more appropriate at a given time for running a job based on the number of processors and runtime that are requested.
    During the working day, i would always encourage the availability of more development nodes over production ones.
    Flexibility of creating special queues for short term intensive use without extended waiting time. Hopefully it will not be too expensive, either.
    Better ability to manage group permissions and priorities for jobs, files, etc. The functionality of the idea of project accounts is still relevant.
    It would help developers if a more robust suite of profiling tools were available. For example there are some very good profiling tools for franklin, but they are not robust enough to analyze a very large program.
    Allow subproject management and helpful issue tracker (wikis, as well) ala github.com or bitbucket.org
    I could possibly use some more web-based tutorials on various topics: MPI programming, data analysis with NERSC tools, a tutorial on getting Visit (visualization tool) to work on my Linux machine.
    I also need a better way to perform remote visualization on Euclid with Visit.
    Increase home and scratch area quota in general. Lot of time is wasted in managing the scratch space and archiving and storing the data.
    Improve the performance and scaling of file I/O, preferably via HDF5.
    HPSS should allow users to view the file contents. Add a "less" "more" there. At present, I have to transfer files back to franklin and view to see whether those are the files that I need.
    Make hsi/htar software available on a wider variety of Linux distributions.
    Keep the supercomputers up more. Make them more stable. Reduce the variability in the wallclock times for identical jobs.
    If in future one can run jobs with more Memory than available at present , researchers in general would benefit tremendously.
    Allow more than 3 password tries before locking someone out
    If scheduled maintenance was at the weekend that would make my work more productive.

    15 users responded to "If there is anything important to you that is not covered in this survey, please tell us about it."

    Respondent Demographics

    Respondents by DOE Office and User Role:

    Office Respondents Percent
    ASCR 34 8.6%
    BER 68 17.2%
    BES 147 37.2%
    FES 45 11.4%
    HEP 48 12.2%
    NP 51 12.9%
    guests 2 0.5%
    User Role Number Percent
    Principal Investigators 63 15.9%
    PI Proxies 77 19.5%
    Users 255 64.6%

     

    Respondents by Organization Type and Organizations with the Most Respondents:

    Organization Type Number Percent
    Universities 259 65.6%
    DOE Labs 104 26.3%
    Other Govt Labs 18 4.6%
    Industry 12 3.0%
    Private labs 2 0.5%
    Organization Number Percent
    Berkeley Lab 49 12.4%
    UC Berkeley 27 6.8%
    U.Wisconsin - Madison 12 3.0%
    Oak Ridge 11 2.8%
    PNNL 10 2.5%
    U. Washington 10 2.5%
    Vanderbilt University 9 2.3%
    UCLA 8 2.0%
    U. Texas at Austin 8 2.0%
    Cal Tech 7 1.8%
    NCAR 7 1.8%
    Princeton University 7 1.8%
    Tech-X Corp 7 1.8%
    Texas A&M 7 1.8%
    U. Maryland 7 1.8%
    Argonne 6 1.5%
    Auburn University 5 1.3%
    Northwestern University 5 1.3%
    PPPL 5 1.3%

     

    How long have you used NERSC?

    Time   Number   Percent
    less than 1 year 102 26.2%
    1 - 3 years 140 35.9%
    more than 3 years 148 37.9%

     

    What desktop systems do you use to connect to NERSC?

    System   Responses
    Unix Total 273
    PC Total 232
    Mac Total 170
    Linux 256
    OS X 168
    Windows XP 118
    Windows 7 55
    Windows Vista 53
    Sun Solaris 6
    FreeBSD 5
    Windows 2000 4
    IBM AIX 3
    SGI IRIX 2
    Other PC 2
    Other Unix 1
    MacOS 1
    Other Mac 1

     

    Web Browser Used to Take Survey:

    Browser   Number   Percent
    Firefox 3 1,073 56.1
    Safari 279 14.6
    Google Chrome 194 10.2
    MSIE 8 157 8.2
    MSIE 7 90 4.7
    Firefox 2 49 2.6
    Mozilla 30 1.6
    MSIE 6 25 1.3
    Opera 15 0.8

     

    Operating System Used to Take Survey:

    OS   Number   Percent
    Mac OS X 679 35.5
    Linux 503 26.3
    Windows XP 412 21.5
    Windows 7 168 8.8
    Windows Vista 135 7.1
    FreeBSD 5 0.3
    Windows Server 2003 5 0.3
    SunOS 5 0.3

     


    Overall Satisfaction

    Legend:

    Satisfaction   Average Score
    Very Satisfied 6.50 - 7.00
    Mostly Satisfied - High 6.00 - 6.49
    Importance   Average Score
    Very Important 2.50 - 3.00
    Somewhat Important 1.50 - 2.49
    Significance of Change
    significant increase
    not significant
     

    Overall Satisfaction with NERSC

    7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

    Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009 | Change from 2007
    (Rating counts run from lowest to highest rating; ratings no one chose are omitted. "n/a" = not asked in that year.)
    OVERALL: Services | 1 6 15 114 246 | 382 | 6.57 | 0.67 | n/a | n/a
    OVERALL: Satisfaction with NERSC | 1 1 6 25 157 200 | 390 | 6.40 | 0.75 | 0.17 | 0.10
    OVERALL: Computing resources | 1 7 11 33 167 169 | 388 | 6.23 | 0.89 | 0.23 | 0.11
    OVERALL: Data storage resources | 1 2 4 35 26 88 178 | 334 | 6.17 | 1.13 | 0.11 | 0.11
    OVERALL: HPC Software | 1 5 35 21 115 142 | 319 | 6.10 | 1.07 | -0.11 | -0.12

    How important to you is each of these?

    3=Very, 2=Somewhat, 1=Not important

    Item | Rating counts (1, 2, 3; zero columns omitted) | Num Resp | Average Score | Std. Dev.
    OVERALL: Computing resources | 2 30 342 | 374 | 2.91 | 0.31
    OVERALL: Satisfaction with NERSC | 68 306 | 374 | 2.82 | 0.39
    OVERALL: Services | 7 107 258 | 372 | 2.67 | 0.51
    OVERALL: Data storage resources | 45 111 189 | 345 | 2.42 | 0.71
    OVERALL: HPC Software | 46 103 183 | 332 | 2.41 | 0.72


All Satisfaction and Importance Ratings

Legend:

Satisfaction   Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied - High 6.00 - 6.49
Mostly Satisfied - Low 5.50 - 5.99
Somewhat Satisfied 4.50 - 5.49
Importance   Average Score
Very Important 2.50 - 3.00
Somewhat Important 1.50 - 2.49
Significance of Change
significant increase
significant decrease
not significant

 

All Satisfaction Topics - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009 | Change from 2007
(Rating counts run from lowest to highest rating; ratings no one chose are omitted, so short rows do not align to fixed columns. "n/a" = not asked in that year.)
PDSF: Uptime (availability) | 9 22 | 31 | 6.71 | 0.46 | 0.35 | 0.58
HPSS: Reliability (data integrity) | 2 2 2 33 124 | 163 | 6.69 | 0.68 | 0.01 | 0.02
HPSS: Uptime (Availability) | 2 5 41 115 | 163 | 6.65 | 0.60 | 0.02 | 0.11
CONSULT: Overall | 1 1 5 9 59 212 | 287 | 6.64 | 0.74 | 0.12 | 0.08
GLOBALHOMES: Reliability | 7 5 47 160 | 219 | 6.64 | 0.68 | n/a | n/a
PROJECT: Reliability | 3 4 24 79 | 110 | 6.63 | 0.69 | 0.08 | -0.05
SERVICES: Account support and Passwords | 1 1 2 6 12 73 243 | 338 | 6.60 | 0.80 | -0.06 | -0.10
GLOBALHOMES: Uptime | 1 4 7 58 151 | 221 | 6.60 | 0.68 | n/a | n/a
CONSULT: Response time | 1 2 6 12 61 205 | 287 | 6.59 | 0.80 | -0.01 | 0.04
PROJECT: Uptime | 1 4 3 24 78 | 110 | 6.58 | 0.79 | 0.03 | -0.08
PROJECT: Overall | 3 5 32 79 | 119 | 6.57 | 0.70 | 0.26 | 0.14
CONSULT: Quality of technical advice | 1 7 14 67 190 | 279 | 6.57 | 0.74 | 0.09 | 0.08
OVERALL: Services | 1 6 15 114 246 | 382 | 6.57 | 0.67 | n/a | n/a
Security | 2 11 11 69 202 | 295 | 6.55 | 0.79 | 0.16 | 0.20
NETWORK: Network performance within NERSC (e.g. Hopper to HPSS) | 3 10 51 114 | 178 | 6.55 | 0.68 | 0.04 | -0.04
WEB: Status Info | 1 5 12 104 181 | 303 | 6.51 | 0.68 | n/a | n/a
GRID: Access and Authentication | 1 3 26 39 | 69 | 6.49 | 0.66 | 0.06 | 0.16
GLOBALHOMES: Overall | 1 1 3 6 7 65 146 | 229 | 6.48 | 0.92 | n/a | n/a
HPSS: Overall satisfaction | 2 1 1 1 7 55 104 | 171 | 6.46 | 0.95 | 0.02 | -0.00
GRID: Job Submission | 3 4 22 38 | 67 | 6.42 | 0.80 | -0.07 | 0.12
PROJECT: File and Directory Operations | 1 1 7 4 23 68 | 104 | 6.41 | 1.02 | 0.21 | 0.18
PDSF: Overall satisfaction | 2 15 15 | 32 | 6.41 | 0.61 | 0.12 | 0.31
OVERALL: Satisfaction with NERSC | 1 1 6 25 157 200 | 390 | 6.40 | 0.75 | 0.17 | 0.10
CONSULT: Time to solution | 1 1 3 10 23 71 172 | 281 | 6.40 | 0.96 | 0.01 | 0.04
CARVER: Uptime (Availability) | 1 8 5 26 68 | 108 | 6.39 | 1.03 | n/a | n/a
GRID: File Transfer | 1 3 28 35 | 67 | 6.39 | 0.83 | 0.10 | 0.27
WEB: NIM web interface | 1 1 4 6 25 105 180 | 322 | 6.38 | 0.90 | -0.00 | 0.10
PROJECT: I/O Bandwidth | 1 8 8 23 67 | 107 | 6.37 | 0.98 | 0.14 | 0.31
WEB: Accuracy of information | 1 1 10 16 117 155 | 300 | 6.37 | 0.83 | 0.05 | -0.03
GLOBALHOMES: I/O Bandwidth | 2 13 13 61 122 | 211 | 6.36 | 0.92 | n/a | n/a
PDSF: Ability to run interactively | 2 2 8 16 | 28 | 6.36 | 0.91 | 0.21 | 0.81
CARVER: Overall | 6 6 41 57 | 110 | 6.35 | 0.82 | n/a | n/a
WEB: www.nersc.gov overall | 1 4 3 15 147 150 | 320 | 6.35 | 0.77 | 0.07 | -0.03
GLOBALHOMES: File and Directory Operations | 2 3 11 10 59 118 | 203 | 6.33 | 1.06 | n/a | n/a
CONSULT: On-line help desk | 1 1 1 12 11 34 99 | 159 | 6.33 | 1.10 | -0.03 | 0.04
HPSS: Data transfer rates | 1 3 4 15 51 90 | 164 | 6.32 | 0.98 | 0.07 | -0.07
GRID: Job Monitoring | 1 1 4 1 23 39 | 69 | 6.30 | 1.15 | -0.26 | 0.22
WEB: Timeliness of information | 1 2 9 22 129 136 | 299 | 6.28 | 0.85 | 0.08 | 0.00
NERSC SW: Programming environment | 1 3 14 24 121 156 | 319 | 6.28 | 0.91 | -0.08 | n/a
CONSULT: Special requests (e.g. disk quota increases, etc.) | 2 22 11 33 116 | 184 | 6.28 | 1.17 | -0.04 | -0.01
NERSC SW: Programming libraries | 1 4 16 24 105 154 | 304 | 6.27 | 0.95 | -0.06 | n/a
HPSS: Data access time | 1 1 3 5 9 61 80 | 160 | 6.27 | 1.02 | -0.09 | -0.04
SERVICES: Allocations process | 1 3 11 23 108 134 | 280 | 6.27 | 0.91 | 0.24 | 0.10
PDSF: Disk configuration and I/O performance | 1 1 3 10 16 | 31 | 6.26 | 1.00 | 0.32 | 0.72
PDSF SW: Performance and debugging tools | 2 1 10 11 | 24 | 6.25 | 0.90 | 0.29 | 0.25
HOPPER: Uptime (Availability) | 2 7 12 60 66 | 147 | 6.23 | 0.89 | n/a | n/a
OVERALL: Computing resources | 1 7 11 33 167 169 | 388 | 6.23 | 0.89 | 0.23 | 0.11
PDSF: Batch queue structure | 2 5 8 16 | 31 | 6.23 | 0.96 | 0.01 | 0.34
PDSF SW: Programming environment | 2 3 12 14 | 31 | 6.23 | 0.88 | -0.20 | -0.02
NERSC SW: Applications software | 1 1 5 15 23 112 142 | 299 | 6.22 | 1.00 | 0.12 | n/a
PDSF SW: General tools and utilities | 1 2 1 11 14 | 29 | 6.21 | 1.05 | 0.07 | 0.21
OVERALL: Data storage resources | 1 2 4 35 26 88 178 | 334 | 6.17 | 1.13 | 0.11 | 0.11
TRAINING: New User's Guide | 2 2 8 15 91 80 | 198 | 6.17 | 0.98 | 0.02 | -0.05
DaVinci: Uptime (Availability) | 1 7 10 22 | 40 | 6.13 | 1.22 | -0.31 | n/a
FRANKLIN: Overall | 2 6 10 35 142 117 | 312 | 6.12 | 0.94 | 0.37 | 0.41
NETWORK: Remote network performance to/from NERSC (e.g. Hopper to your home institution) | 11 14 23 66 108 | 222 | 6.11 | 1.13 | -0.04 | 0.05
OVERALL: HPC Software | 1 5 35 21 115 142 | 319 | 6.10 | 1.07 | -0.11 | -0.12
WEB: Ease of finding information | 1 1 7 11 30 142 112 | 304 | 6.10 | 0.97 | 0.15 | 0.05
NERSC SW: General tools and utilities | 1 1 6 24 21 114 121 | 288 | 6.09 | 1.07 | -0.05 | n/a
HOPPER: Overall | 1 3 10 15 60 62 | 151 | 6.09 | 1.06 | n/a | n/a
CARVER: Disk configuration and I/O performance | 2 2 11 5 26 49 | 95 | 6.06 | 1.33 | n/a | n/a
TRAINING: Web tutorials | 2 2 9 22 60 65 | 160 | 6.06 | 1.09 | 0.02 | -0.08
PDSF SW: Programming libraries | 3 4 8 14 | 29 | 6.03 | 1.27 | -0.30 | -0.17
FRANKLIN: Uptime (Availability) | 4 13 12 42 119 118 | 308 | 5.99 | 1.13 | 1.08 | 0.95
PDSF SW: Applications software | 1 1 4 14 8 | 28 | 5.96 | 0.96 | -0.27 | 0.01
FRANKLIN: Disk configuration and I/O performance | 1 1 3 36 25 112 103 | 281 | 5.96 | 1.10 | 0.35 | 0.81
FRANKLIN: Ability to run interactively | 1 3 38 19 76 94 | 231 | 5.94 | 1.17 | 0.18 | 0.36
HPSS: User interface (hsi, pftp, ftp) | 2 6 14 25 46 70 | 163 | 5.93 | 1.25 | -0.09 | -0.03
PDSF SW: CHOS | 1 1 6 8 11 | 27 | 5.93 | 1.33 | -0.19 | n/a
CARVER: Ability to run interactively | 1 1 1 11 9 19 37 | 79 | 5.92 | 1.34 | n/a | n/a
DaVinci: Ability to run interactively | 1 1 7 1 9 19 | 38 | 5.92 | 1.40 | -0.39 | n/a
CARVER: Batch queue structure | 1 1 3 13 10 29 45 | 102 | 5.91 | 1.31 | n/a | n/a
NERSC SW: Performance and debugging tools | 1 5 34 23 95 88 | 246 | 5.91 | 1.13 | 0.04 | n/a
HOPPER: Disk configuration and I/O performance | 3 1 1 19 12 45 55 | 136 | 5.88 | 1.34 | n/a | n/a
DaVinci: Disk configuration and I/O performance | 1 7 3 12 16 | 39 | 5.87 | 1.28 | -0.47 | n/a
DaVinci: Overall | 3 5 5 11 17 | 41 | 5.83 | 1.30 | -0.38 | n/a
CARVER: Batch wait time | 3 8 9 12 22 47 | 101 | 5.81 | 1.45 | n/a | n/a
WEB: Searching | 1 6 22 34 64 57 | 184 | 5.77 | 1.14 | 0.09 | 0.05
SERVICES: Ability to perform data analysis | 1 2 4 18 17 43 43 | 128 | 5.73 | 1.30 | -0.21 | n/a
FRANKLIN: Batch queue structure | 3 2 9 37 44 121 82 | 298 | 5.71 | 1.21 | -0.19 | -0.32
HOPPER: Batch queue structure | 2 5 19 22 56 38 | 142 | 5.67 | 1.24 | n/a | n/a
SERVICES: Data analysis and visualization assistance | 3 3 18 14 19 41 | 98 | 5.66 | 1.50 | -0.17 | n/a
HOPPER: Ability to run interactively | 2 1 2 28 2 33 38 | 106 | 5.62 | 1.45 | n/a | n/a
PDSF SW: STAR | 3 2 3 7 6 | 21 | 5.52 | 1.40 | -0.67 | n/a
NERSC SW: Data analysis software | 4 1 4 36 13 43 44 | 145 | 5.47 | 1.47 | -0.37 | n/a
NERSC SW: Visualization software | 4 3 6 31 16 39 49 | 148 | 5.47 | 1.54 | -0.45 | n/a
NERSC SW: ACTS Collection | 3 31 9 22 29 | 94 | 5.39 | 1.48 | -0.54 | n/a
TRAINING: Workshops | 1 1 19 8 13 18 | 60 | 5.38 | 1.43 | -0.21 | -0.01
HOPPER: Batch wait time | 1 5 16 18 35 45 26 | 146 | 5.19 | 1.41 | n/a | n/a
FRANKLIN: Batch wait time | 7 10 43 43 81 90 29 | 303 | 4.87 | 1.43 | -0.68 | -0.98

 

All Satisfaction Topics - by Number of Responses

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009 | Change from 2007
(Rating counts run from lowest to highest rating; ratings no one chose are omitted. "n/a" = not asked in that year.)
OVERALL: Satisfaction with NERSC | 1 1 6 25 157 200 | 390 | 6.40 | 0.75 | 0.17 | 0.10
OVERALL: Computing resources | 1 7 11 33 167 169 | 388 | 6.23 | 0.89 | 0.23 | 0.11
OVERALL: Services | 1 6 15 114 246 | 382 | 6.57 | 0.67 | n/a | n/a
SERVICES: Account support and Passwords | 1 1 2 6 12 73 243 | 338 | 6.60 | 0.80 | -0.06 | -0.10
OVERALL: Data storage resources | 1 2 4 35 26 88 178 | 334 | 6.17 | 1.13 | 0.11 | 0.11
WEB: NIM web interface | 1 1 4 6 25 105 180 | 322 | 6.38 | 0.90 | -0.00 | 0.10
WEB: www.nersc.gov overall | 1 4 3 15 147 150 | 320 | 6.35 | 0.77 | 0.07 | -0.03
NERSC SW: Programming environment | 1 3 14 24 121 156 | 319 | 6.28 | 0.91 | -0.08 | n/a
OVERALL: HPC Software | 1 5 35 21 115 142 | 319 | 6.10 | 1.07 | -0.11 | -0.12
FRANKLIN: Overall | 2 6 10 35 142 117 | 312 | 6.12 | 0.94 | 0.37 | 0.41
FRANKLIN: Uptime (Availability) | 4 13 12 42 119 118 | 308 | 5.99 | 1.13 | 1.08 | 0.95
NERSC SW: Programming libraries | 1 4 16 24 105 154 | 304 | 6.27 | 0.95 | -0.06 | n/a
WEB: Ease of finding information | 1 1 7 11 30 142 112 | 304 | 6.10 | 0.97 | 0.15 | 0.05
WEB: Status Info | 1 5 12 104 181 | 303 | 6.51 | 0.68 | n/a | n/a
FRANKLIN: Batch wait time | 7 10 43 43 81 90 29 | 303 | 4.87 | 1.43 | -0.68 | -0.98
WEB: Accuracy of information | 1 1 10 16 117 155 | 300 | 6.37 | 0.83 | 0.05 | -0.03
WEB: Timeliness of information | 1 2 9 22 129 136 | 299 | 6.28 | 0.85 | 0.08 | 0.00
NERSC SW: Applications software | 1 1 5 15 23 112 142 | 299 | 6.22 | 1.00 | 0.12 | n/a
FRANKLIN: Batch queue structure | 3 2 9 37 44 121 82 | 298 | 5.71 | 1.21 | -0.19 | -0.32
Security | 2 11 11 69 202 | 295 | 6.55 | 0.79 | 0.16 | 0.20
NERSC SW: General tools and utilities | 1 1 6 24 21 114 121 | 288 | 6.09 | 1.07 | -0.05 | n/a
CONSULT: Response time | 1 2 6 12 61 205 | 287 | 6.59 | 0.80 | -0.01 | 0.04
CONSULT: Overall | 1 1 5 9 59 212 | 287 | 6.64 | 0.74 | 0.12 | 0.08
CONSULT: Time to solution | 1 1 3 10 23 71 172 | 281 | 6.40 | 0.96 | 0.01 | 0.04
FRANKLIN: Disk configuration and I/O performance | 1 1 3 36 25 112 103 | 281 | 5.96 | 1.10 | 0.35 | 0.81
SERVICES: Allocations process | 1 3 11 23 108 134 | 280 | 6.27 | 0.91 | 0.24 | 0.10
CONSULT: Quality of technical advice | 1 7 14 67 190 | 279 | 6.57 | 0.74 | 0.09 | 0.08
NERSC SW: Performance and debugging tools | 1 5 34 23 95 88 | 246 | 5.91 | 1.13 | 0.04 | n/a
FRANKLIN: Ability to run interactively | 1 3 38 19 76 94 | 231 | 5.94 | 1.17 | 0.18 | 0.36
GLOBALHOMES: Overall | 1 1 3 6 7 65 146 | 229 | 6.48 | 0.92 | n/a | n/a
NETWORK: Remote network performance to/from NERSC (e.g. Hopper to your home institution) | 11 14 23 66 108 | 222 | 6.11 | 1.13 | -0.04 | 0.05
GLOBALHOMES: Uptime | 1 4 7 58 151 | 221 | 6.60 | 0.68 | n/a | n/a
GLOBALHOMES: Reliability | 7 5 47 160 | 219 | 6.64 | 0.68 | n/a | n/a
GLOBALHOMES: I/O Bandwidth | 2 13 13 61 122 | 211 | 6.36 | 0.92 | n/a | n/a
GLOBALHOMES: File and Directory Operations | 2 3 11 10 59 118 | 203 | 6.33 | 1.06 | n/a | n/a
TRAINING: New User's Guide | 2 2 8 15 91 80 | 198 | 6.17 | 0.98 | 0.02 | -0.05
CONSULT: Special requests (e.g. disk quota increases, etc.) | 2 22 11 33 116 | 184 | 6.28 | 1.17 | -0.04 | -0.01
WEB: Searching | 1 6 22 34 64 57 | 184 | 5.77 | 1.14 | 0.09 | 0.05
NETWORK: Network performance within NERSC (e.g. Hopper to HPSS) | 3 10 51 114 | 178 | 6.55 | 0.68 | 0.04 | -0.04
HPSS: Overall satisfaction | 2 1 1 1 7 55 104 | 171 | 6.46 | 0.95 | 0.02 | -0.00
HPSS: Data transfer rates | 1 3 4 15 51 90 | 164 | 6.32 | 0.98 | 0.07 | -0.07
HPSS: Reliability (data integrity) | 2 2 2 33 124 | 163 | 6.69 | 0.68 | 0.01 | 0.02
HPSS: Uptime (Availability) | 2 5 41 115 | 163 | 6.65 | 0.60 | 0.02 | 0.11
HPSS: User interface (hsi, pftp, ftp) | 2 6 14 25 46 70 | 163 | 5.93 | 1.25 | -0.09 | -0.03
HPSS: Data access time | 1 1 3 5 9 61 80 | 160 | 6.27 | 1.02 | -0.09 | -0.04
TRAINING: Web tutorials | 2 2 9 22 60 65 | 160 | 6.06 | 1.09 | 0.02 | -0.08
CONSULT: On-line help desk | 1 1 1 12 11 34 99 | 159 | 6.33 | 1.10 | -0.03 | 0.04
HOPPER: Overall | 1 3 10 15 60 62 | 151 | 6.09 | 1.06 | n/a | n/a
NERSC SW: Visualization software | 4 3 6 31 16 39 49 | 148 | 5.47 | 1.54 | -0.45 | n/a
HOPPER: Uptime (Availability) | 2 7 12 60 66 | 147 | 6.23 | 0.89 | n/a | n/a
HOPPER: Batch wait time | 1 5 16 18 35 45 26 | 146 | 5.19 | 1.41 | n/a | n/a
NERSC SW: Data analysis software | 4 1 4 36 13 43 44 | 145 | 5.47 | 1.47 | -0.37 | n/a
HOPPER: Batch queue structure | 2 5 19 22 56 38 | 142 | 5.67 | 1.24 | n/a | n/a
HOPPER: Disk configuration and I/O performance | 3 1 1 19 12 45 55 | 136 | 5.88 | 1.34 | n/a | n/a
SERVICES: Ability to perform data analysis | 1 2 4 18 17 43 43 | 128 | 5.73 | 1.30 | -0.21 | n/a
PROJECT: Overall | 3 5 32 79 | 119 | 6.57 | 0.70 | 0.26 | 0.14
PROJECT: Uptime | 1 4 3 24 78 | 110 | 6.58 | 0.79 | 0.03 | -0.08
PROJECT: Reliability | 3 4 24 79 | 110 | 6.63 | 0.69 | 0.08 | -0.05
CARVER: Overall | 6 6 41 57 | 110 | 6.35 | 0.82 | n/a | n/a
CARVER: Uptime (Availability) | 1 8 5 26 68 | 108 | 6.39 | 1.03 | n/a | n/a
PROJECT: I/O Bandwidth | 1 8 8 23 67 | 107 | 6.37 | 0.98 | 0.14 | 0.31
HOPPER: Ability to run interactively | 2 1 2 28 2 33 38 | 106 | 5.62 | 1.45 | n/a | n/a
PROJECT: File and Directory Operations | 1 1 7 4 23 68 | 104 | 6.41 | 1.02 | 0.21 | 0.18
CARVER: Batch queue structure | 1 1 3 13 10 29 45 | 102 | 5.91 | 1.31 | n/a | n/a
CARVER: Batch wait time | 3 8 9 12 22 47 | 101 | 5.81 | 1.45 | n/a | n/a
SERVICES: Data analysis and visualization assistance | 3 3 18 14 19 41 | 98 | 5.66 | 1.50 | -0.17 | n/a
CARVER: Disk configuration and I/O performance | 2 2 11 5 26 49 | 95 | 6.06 | 1.33 | n/a | n/a
NERSC SW: ACTS Collection | 3 31 9 22 29 | 94 | 5.39 | 1.48 | -0.54 | n/a
CARVER: Ability to run interactively | 1 1 1 11 9 19 37 | 79 | 5.92 | 1.34 | n/a | n/a
GRID: Access and Authentication | 1 3 26 39 | 69 | 6.49 | 0.66 | 0.06 | 0.16
GRID: Job Monitoring | 1 1 4 1 23 39 | 69 | 6.30 | 1.15 | -0.26 | 0.22
GRID: Job Submission | 3 4 22 38 | 67 | 6.42 | 0.80 | -0.07 | 0.12
GRID: File Transfer | 1 3 28 35 | 67 | 6.39 | 0.83 | 0.10 | 0.27
TRAINING: Workshops | 1 1 19 8 13 18 | 60 | 5.38 | 1.43 | -0.21 | -0.01
DaVinci: Overall | 3 5 5 11 17 | 41 | 5.83 | 1.30 | -0.38 | n/a
DaVinci: Uptime (Availability) | 1 7 10 22 | 40 | 6.13 | 1.22 | -0.31 | n/a
DaVinci: Disk configuration and I/O performance | 1 7 3 12 16 | 39 | 5.87 | 1.28 | -0.47 | n/a
DaVinci: Ability to run interactively | 1 1 7 1 9 19 | 38 | 5.92 | 1.40 | -0.39 | n/a
PDSF: Overall satisfaction | 2 15 15 | 32 | 6.41 | 0.61 | 0.12 | 0.31
PDSF: Uptime (availability) | 9 22 | 31 | 6.71 | 0.46 | 0.35 | 0.58
PDSF: Disk configuration and I/O performance | 1 1 3 10 16 | 31 | 6.26 | 1.00 | 0.32 | 0.72
PDSF: Batch queue structure | 2 5 8 16 | 31 | 6.23 | 0.96 | 0.01 | 0.34
PDSF SW: Programming environment | 2 3 12 14 | 31 | 6.23 | 0.88 | -0.20 | -0.02
PDSF SW: General tools and utilities | 1 2 1 11 14 | 29 | 6.21 | 1.05 | 0.07 | 0.21
PDSF SW: Programming libraries | 3 4 8 14 | 29 | 6.03 | 1.27 | -0.30 | -0.17
PDSF: Ability to run interactively | 2 2 8 16 | 28 | 6.36 | 0.91 | 0.21 | 0.81
PDSF SW: Applications software | 1 1 4 14 8 | 28 | 5.96 | 0.96 | -0.27 | 0.01
PDSF SW: CHOS | 1 1 6 8 11 | 27 | 5.93 | 1.33 | -0.19 | n/a
PDSF SW: Performance and debugging tools | 2 1 10 11 | 24 | 6.25 | 0.90 | 0.29 | 0.25
PDSF SW: STAR | 3 2 3 7 6 | 21 | 5.52 | 1.40 | -0.67 | n/a

 

All Importance Topics

Importance Ratings: 3=Very important, 2=Somewhat important, 1=Not important

Item | Rating counts (1, 2, 3; zero columns omitted) | Num Resp | Average Rating | Std. Dev.
OVERALL: Computing resources | 2 30 342 | 374 | 2.91 | 0.31
OVERALL: Satisfaction with NERSC | 68 306 | 374 | 2.82 | 0.39
OVERALL: Services | 7 107 258 | 372 | 2.67 | 0.51
OVERALL: Data storage resources | 45 111 189 | 345 | 2.42 | 0.71
OVERALL: HPC Software | 46 103 183 | 332 | 2.41 | 0.72
SERVICES: Ability to perform data analysis | 33 40 73 | 146 | 2.27 | 0.81
SERVICES: Data analysis and visualization assistance | 37 49 50 | 136 | 2.10 | 0.80

HPC Resources

Legend:

Satisfaction Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied - High 6.00 - 6.49
Mostly Satisfied - Low 5.50 - 5.99
Somewhat Satisfied 4.50 - 5.49
Significance of Change
significant increase
significant decrease
not significant

 

Hardware Satisfaction - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
(Rating counts run from lowest to highest rating; ratings no one chose are omitted. "n/a" = not asked previously.)
PDSF: Uptime (availability) | 9 22 | 31 | 6.71 | 0.46 | 0.35
HPSS: Reliability (data integrity) | 2 2 2 33 124 | 163 | 6.69 | 0.68 | 0.01
HPSS: Uptime (Availability) | 2 5 41 115 | 163 | 6.65 | 0.60 | 0.02
GLOBALHOMES: Reliability | 7 5 47 160 | 219 | 6.64 | 0.68 | n/a
PROJECT: Reliability | 3 4 24 79 | 110 | 6.63 | 0.69 | 0.08
GLOBALHOMES: Uptime | 1 4 7 58 151 | 221 | 6.60 | 0.68 | n/a
PROJECT: Uptime | 1 4 3 24 78 | 110 | 6.58 | 0.79 | 0.03
PROJECT: Overall | 3 5 32 79 | 119 | 6.57 | 0.70 | 0.26
NETWORK: Network performance within NERSC (e.g. Hopper to HPSS) | 3 10 51 114 | 178 | 6.55 | 0.68 | 0.04
GRID: Access and Authentication | 1 3 26 39 | 69 | 6.49 | 0.66 | 0.06
GLOBALHOMES: Overall | 1 1 3 6 7 65 146 | 229 | 6.48 | 0.92 | n/a
HPSS: Overall satisfaction | 2 1 1 1 7 55 104 | 171 | 6.46 | 0.95 | 0.02
GRID: Job Submission | 3 4 22 38 | 67 | 6.42 | 0.80 | -0.07
PROJECT: File and Directory Operations | 1 1 7 4 23 68 | 104 | 6.41 | 1.02 | 0.21
PDSF: Overall satisfaction | 2 15 15 | 32 | 6.41 | 0.61 | 0.12
CARVER: Uptime (Availability) | 1 8 5 26 68 | 108 | 6.39 | 1.03 | n/a
GRID: File Transfer | 1 3 28 35 | 67 | 6.39 | 0.83 | 0.10
PROJECT: I/O Bandwidth | 1 8 8 23 67 | 107 | 6.37 | 0.98 | 0.14
GLOBALHOMES: I/O Bandwidth | 2 13 13 61 122 | 211 | 6.36 | 0.92 | n/a
PDSF: Ability to run interactively | 2 2 8 16 | 28 | 6.36 | 0.91 | 0.21
CARVER: Overall | 6 6 41 57 | 110 | 6.35 | 0.82 | n/a
GLOBALHOMES: File and Directory Operations | 2 3 11 10 59 118 | 203 | 6.33 | 1.06 | n/a
HPSS: Data transfer rates | 1 3 4 15 51 90 | 164 | 6.32 | 0.98 | 0.07
GRID: Job Monitoring | 1 1 4 1 23 39 | 69 | 6.30 | 1.15 | -0.26
HPSS: Data access time | 1 1 3 5 9 61 80 | 160 | 6.27 | 1.02 | -0.09
PDSF: Disk configuration and I/O performance | 1 1 3 10 16 | 31 | 6.26 | 1.00 | 0.32
PDSF SW: Performance and debugging tools | 2 1 10 11 | 24 | 6.25 | 0.90 | 0.29
HOPPER: Uptime (Availability) | 2 7 12 60 66 | 147 | 6.23 | 0.89 | n/a
PDSF SW: Programming environment | 2 3 12 14 | 31 | 6.23 | 0.88 | -0.20
PDSF: Batch queue structure | 2 5 8 16 | 31 | 6.23 | 0.96 | 0.01
PDSF SW: General tools and utilities | 1 2 1 11 14 | 29 | 6.21 | 1.05 | 0.07
DaVinci: Uptime (Availability) | 1 7 10 22 | 40 | 6.13 | 1.22 | -0.31
FRANKLIN: Overall | 2 6 10 35 142 117 | 312 | 6.12 | 0.94 | 0.37
NETWORK: Remote network performance to/from NERSC (e.g. Hopper to your home institution) | 11 14 23 66 108 | 222 | 6.11 | 1.13 | -0.04
HOPPER: Overall | 1 3 10 15 60 62 | 151 | 6.09 | 1.06 | n/a
CARVER: Disk configuration and I/O performance | 2 2 11 5 26 49 | 95 | 6.06 | 1.33 | n/a
PDSF SW: Programming libraries | 3 4 8 14 | 29 | 6.03 | 1.27 | -0.30
FRANKLIN: Uptime (Availability) | 4 13 12 42 119 118 | 308 | 5.99 | 1.13 | 1.08
PDSF SW: Applications software | 1 1 4 14 8 | 28 | 5.96 | 0.96 | -0.27
FRANKLIN: Disk configuration and I/O performance | 1 1 3 36 25 112 103 | 281 | 5.96 | 1.10 | 0.35
FRANKLIN: Ability to run interactively | 1 3 38 19 76 94 | 231 | 5.94 | 1.17 | 0.18
HPSS: User interface (hsi, pftp, ftp) | 2 6 14 25 46 70 | 163 | 5.93 | 1.25 | -0.09
PDSF SW: CHOS | 1 1 6 8 11 | 27 | 5.93 | 1.33 | -0.19
CARVER: Ability to run interactively | 1 1 1 11 9 19 37 | 79 | 5.92 | 1.34 | n/a
DaVinci: Ability to run interactively | 1 1 7 1 9 19 | 38 | 5.92 | 1.40 | -0.39
CARVER: Batch queue structure | 1 1 3 13 10 29 45 | 102 | 5.91 | 1.31 | n/a
HOPPER: Disk configuration and I/O performance | 3 1 1 19 12 45 55 | 136 | 5.88 | 1.34 | n/a
DaVinci: Disk configuration and I/O performance | 1 7 3 12 16 | 39 | 5.87 | 1.28 | -0.47
DaVinci: Overall | 3 5 5 11 17 | 41 | 5.83 | 1.30 | -0.38
CARVER: Batch wait time | 3 8 9 12 22 47 | 101 | 5.81 | 1.45 | n/a
FRANKLIN: Batch queue structure | 3 2 9 37 44 121 82 | 298 | 5.71 | 1.21 | -0.19
HOPPER: Batch queue structure | 2 5 19 22 56 38 | 142 | 5.67 | 1.24 | n/a
HOPPER: Ability to run interactively | 2 1 2 28 2 33 38 | 106 | 5.62 | 1.45 | n/a
PDSF SW: STAR | 3 2 3 7 6 | 21 | 5.52 | 1.40 | -0.67
HOPPER: Batch wait time | 1 5 16 18 35 45 26 | 146 | 5.19 | 1.41 | n/a
FRANKLIN: Batch wait time | 7 10 43 43 81 90 29 | 303 | 4.87 | 1.43 | -0.68

 

Hardware Satisfaction - by Platform

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
(Rating counts run from lowest to highest rating; ratings no one chose are omitted. "n/a" = not asked previously.)
Carver - IBM iDataPlex
CARVER: Uptime (Availability) | 1 8 5 26 68 | 108 | 6.39 | 1.03 | n/a
CARVER: Overall | 6 6 41 57 | 110 | 6.35 | 0.82 | n/a
CARVER: Disk configuration and I/O performance | 2 2 11 5 26 49 | 95 | 6.06 | 1.33 | n/a
CARVER: Ability to run interactively | 1 1 1 11 9 19 37 | 79 | 5.92 | 1.34 | n/a
CARVER: Batch queue structure | 1 1 3 13 10 29 45 | 102 | 5.91 | 1.31 | n/a
CARVER: Batch wait time | 3 8 9 12 22 47 | 101 | 5.81 | 1.45 | n/a
Franklin - Cray XT4
FRANKLIN: Overall | 2 6 10 35 142 117 | 312 | 6.12 | 0.94 | 0.37
FRANKLIN: Uptime (Availability) | 4 13 12 42 119 118 | 308 | 5.99 | 1.13 | 1.08
FRANKLIN: Disk configuration and I/O performance | 1 1 3 36 25 112 103 | 281 | 5.96 | 1.10 | 0.35
FRANKLIN: Ability to run interactively | 1 3 38 19 76 94 | 231 | 5.94 | 1.17 | 0.18
FRANKLIN: Batch queue structure | 3 2 9 37 44 121 82 | 298 | 5.71 | 1.21 | -0.19
FRANKLIN: Batch wait time | 7 10 43 43 81 90 29 | 303 | 4.87 | 1.43 | -0.68
Hopper Phase 1 - Cray XT5
HOPPER: Uptime (Availability) | 2 7 12 60 66 | 147 | 6.23 | 0.89 | n/a
HOPPER: Overall | 1 3 10 15 60 62 | 151 | 6.09 | 1.06 | n/a
HOPPER: Disk configuration and I/O performance | 3 1 1 19 12 45 55 | 136 | 5.88 | 1.34 | n/a
HOPPER: Batch queue structure | 2 5 19 22 56 38 | 142 | 5.67 | 1.24 | n/a
HOPPER: Ability to run interactively | 2 1 2 28 2 33 38 | 106 | 5.62 | 1.45 | n/a
HOPPER: Batch wait time | 1 5 16 18 35 45 26 | 146 | 5.19 | 1.41 | n/a
DaVinci - SGI Altix
DaVinci: Uptime (Availability) | 1 7 10 22 | 40 | 6.13 | 1.22 | -0.31
DaVinci: Ability to run interactively | 1 1 7 1 9 19 | 38 | 5.92 | 1.40 | -0.39
DaVinci: Disk configuration and I/O performance | 1 7 3 12 16 | 39 | 5.87 | 1.28 | -0.47
DaVinci: Overall | 3 5 5 11 17 | 41 | 5.83 | 1.30 | -0.38
PDSF - Physics Linux Cluster
PDSF: Uptime (availability) | 9 22 | 31 | 6.71 | 0.46 | 0.35
PDSF: Overall satisfaction | 2 15 15 | 32 | 6.41 | 0.61 | 0.12
PDSF: Ability to run interactively | 2 2 8 16 | 28 | 6.36 | 0.91 | 0.21
PDSF: Disk configuration and I/O performance | 1 1 3 10 16 | 31 | 6.26 | 1.00 | 0.32
PDSF SW: Performance and debugging tools | 2 1 10 11 | 24 | 6.25 | 0.90 | 0.29
PDSF: Batch queue structure | 2 5 8 16 | 31 | 6.23 | 0.96 | 0.01
PDSF SW: General tools and utilities | 1 2 1 11 14 | 29 | 6.21 | 1.05 | 0.07
PDSF SW: Programming environment | 2 3 12 14 | 31 | 6.23 | 0.88 | -0.20
PDSF SW: Programming libraries | 3 4 8 14 | 29 | 6.03 | 1.27 | -0.30
PDSF SW: Applications software | 1 1 4 14 8 | 28 | 5.96 | 0.96 | -0.27
PDSF SW: CHOS | 1 1 6 8 11 | 27 | 5.93 | 1.33 | -0.19
PDSF SW: STAR | 3 2 3 7 6 | 21 | 5.52 | 1.40 | -0.67
NERSC Global Filesystem - Global Homes
GLOBALHOMES: Reliability | 7 5 47 160 | 219 | 6.64 | 0.68 | n/a
GLOBALHOMES: Uptime | 1 4 7 58 151 | 221 | 6.60 | 0.68 | n/a
GLOBALHOMES: Overall | 1 1 3 6 7 65 146 | 229 | 6.48 | 0.92 | n/a
GLOBALHOMES: I/O Bandwidth | 2 13 13 61 122 | 211 | 6.36 | 0.92 | n/a
GLOBALHOMES: File and Directory Operations | 2 3 11 10 59 118 | 203 | 6.33 | 1.06 | n/a
NERSC Global Filesystem - Project
PROJECT: Reliability | 3 4 24 79 | 110 | 6.63 | 0.69 | 0.08
PROJECT: Uptime | 1 4 3 24 78 | 110 | 6.58 | 0.79 | 0.03
PROJECT: Overall | 3 5 32 79 | 119 | 6.57 | 0.70 | 0.26
PROJECT: File and Directory Operations | 1 1 7 4 23 68 | 104 | 6.41 | 1.02 | 0.21
PROJECT: I/O Bandwidth | 1 8 8 23 67 | 107 | 6.37 | 0.98 | 0.14
HPSS - Mass Storage System
HPSS: Reliability (data integrity) | 2 2 2 33 124 | 163 | 6.69 | 0.68 | 0.01
HPSS: Uptime (Availability) | 2 5 41 115 | 163 | 6.65 | 0.60 | 0.02
HPSS: Overall satisfaction | 2 1 1 1 7 55 104 | 171 | 6.46 | 0.95 | 0.02
HPSS: Data transfer rates | 1 3 4 15 51 90 | 164 | 6.32 | 0.98 | 0.07
HPSS: Data access time | 1 1 3 5 9 61 80 | 160 | 6.27 | 1.02 | -0.09
HPSS: User interface (hsi, pftp, ftp) | 2 6 14 25 46 70 | 163 | 5.93 | 1.25 | -0.09
NERSC Network
NETWORK: Network performance within NERSC (e.g. Hopper to HPSS) | 3 10 51 114 | 178 | 6.55 | 0.68 | 0.04
NETWORK: Remote network performance to/from NERSC (e.g. Hopper to your home institution) | 11 14 23 66 108 | 222 | 6.11 | 1.13 | -0.04
Grid Services
GRID: Access and Authentication | 1 3 26 39 | 69 | 6.49 | 0.66 | 0.06
GRID: Job Submission | 3 4 22 38 | 67 | 6.42 | 0.80 | -0.07
GRID: File Transfer | 1 3 28 35 | 67 | 6.39 | 0.83 | 0.10
GRID: Job Monitoring | 1 1 4 1 23 39 | 69 | 6.30 | 1.15 | -0.26


NERSC Software 


Legend:

Satisfaction Average Score
Mostly Satisfied - High 6.00 - 6.49
Mostly Satisfied - Low 5.50 - 5.99
Somewhat Satisfied 4.50 - 5.49
Significance of Change
significant decrease
not significant

 

Software Satisfaction - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

 

Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
(Rating counts run from lowest to highest rating; ratings no one chose are omitted.)
NERSC SW: Programming environment | 1 3 14 24 121 156 | 319 | 6.28 | 0.91 | -0.08
NERSC SW: Programming libraries | 1 4 16 24 105 154 | 304 | 6.27 | 0.95 | -0.06
NERSC SW: Applications software | 1 1 5 15 23 112 142 | 299 | 6.22 | 1.00 | 0.12
NERSC SW: General tools and utilities | 1 1 6 24 21 114 121 | 288 | 6.09 | 1.07 | -0.05
NERSC SW: Performance and debugging tools | 1 5 34 23 95 88 | 246 | 5.91 | 1.13 | 0.04
NERSC SW: Data analysis software | 4 1 4 36 13 43 44 | 145 | 5.47 | 1.47 | -0.37
NERSC SW: Visualization software | 4 3 6 31 16 39 49 | 148 | 5.47 | 1.54 | -0.45
NERSC SW: ACTS Collection | 3 31 9 22 29 | 94 | 5.39 | 1.48 | -0.54


Services

Legend:

Satisfaction   Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied - High 6.00 - 6.49
Mostly Satisfied - Low 5.50 - 5.99
Somewhat Satisfied 4.50 - 5.49
Importance   Average Score
Very Important 2.50 - 3.00
Somewhat Important 1.50 - 2.49
Significance of Change
significant increase
not significant
Usefulness   Average Score
Very Useful 2.50 - 3.00
Somewhat Useful 1.50 - 2.49

 

Satisfaction with NERSC Services - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

 

Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
(Rating counts run from lowest to highest rating; ratings no one chose are omitted. "n/a" = not asked previously.)
CONSULT: Overall | 1 1 5 9 59 212 | 287 | 6.64 | 0.74 | 0.12
SERVICES: Account support and Passwords | 1 1 2 6 12 73 243 | 338 | 6.60 | 0.80 | -0.06
CONSULT: Response time | 1 2 6 12 61 205 | 287 | 6.59 | 0.80 | -0.01
CONSULT: Quality of technical advice | 1 7 14 67 190 | 279 | 6.57 | 0.74 | 0.09
Security | 2 11 11 69 202 | 295 | 6.55 | 0.79 | 0.16
WEB: Status Info | 1 5 12 104 181 | 303 | 6.51 | 0.68 | n/a
CONSULT: Time to solution | 1 1 3 10 23 71 172 | 281 | 6.40 | 0.96 | 0.01
WEB: NIM web interface | 1 1 4 6 25 105 180 | 322 | 6.38 | 0.90 | -0.00
WEB: Accuracy of information | 1 1 10 16 117 155 | 300 | 6.37 | 0.83 | 0.05
WEB: www.nersc.gov overall | 1 4 3 15 147 150 | 320 | 6.35 | 0.77 | 0.07
CONSULT: On-line help desk | 1 1 1 12 11 34 99 | 159 | 6.33 | 1.10 | -0.03
WEB: Timeliness of information | 1 2 9 22 129 136 | 299 | 6.28 | 0.85 | 0.08
CONSULT: Special requests (e.g. disk quota increases, etc.) | 2 22 11 33 116 | 184 | 6.28 | 1.17 | -0.04
SERVICES: Allocations process | 1 3 11 23 108 134 | 280 | 6.27 | 0.91 | 0.24
TRAINING: New User's Guide | 2 2 8 15 91 80 | 198 | 6.17 | 0.98 | 0.02
WEB: Ease of finding information | 1 1 7 11 30 142 112 | 304 | 6.10 | 0.97 | 0.15
TRAINING: Web tutorials | 2 2 9 22 60 65 | 160 | 6.06 | 1.09 | 0.02
WEB: Searching | 1 6 22 34 64 57 | 184 | 5.77 | 1.14 | 0.09
SERVICES: Ability to perform data analysis | 1 2 4 18 17 43 43 | 128 | 5.73 | 1.30 | -0.21
SERVICES: Data analysis and visualization assistance | 3 3 18 14 19 41 | 98 | 5.66 | 1.50 | -0.17
TRAINING: Workshops | 1 1 19 8 13 18 | 60 | 5.38 | 1.43 | -0.21

 

Satisfaction with NERSC Services - by Type of Service

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

 

Item | Rating counts | Num Resp | Average Score | Std. Dev. | Change from 2009
(Rating counts run from lowest to highest rating; ratings no one chose are omitted. "n/a" = not asked previously.)
Accounts and Allocations
Account support and Passwords | 1 1 2 6 12 73 243 | 338 | 6.60 | 0.80 | -0.06
Allocations process | 1 3 11 23 108 134 | 280 | 6.27 | 0.91 | 0.24
Analytics and Visualization
SERVICES: Ability to perform data analysis | 1 2 4 18 17 43 43 | 128 | 5.73 | 1.30 | -0.21
SERVICES: Data analysis and visualization assistance | 3 3 18 14 19 41 | 98 | 5.66 | 1.50 | -0.17
HPC Consulting
CONSULT: Overall | 1 1 5 9 59 212 | 287 | 6.64 | 0.74 | 0.12
CONSULT: Response time | 1 2 6 12 61 205 | 287 | 6.59 | 0.80 | -0.01
CONSULT: Quality of technical advice | 1 7 14 67 190 | 279 | 6.57 | 0.74 | 0.09
CONSULT: Time to solution | 1 1 3 10 23 71 172 | 281 | 6.40 | 0.96 | 0.01
CONSULT: Special requests (e.g. disk quota increases, etc.) | 2 22 11 33 116 | 184 | 6.28 | 1.17 | -0.04
Security
Security | 2 11 11 69 202 | 295 | 6.55 | 0.79 | 0.16
Training
TRAINING: New User's Guide | 2 2 8 15 91 80 | 198 | 6.17 | 0.98 | 0.02
TRAINING: Web tutorials | 2 2 9 22 60 65 | 160 | 6.06 | 1.09 | 0.02
TRAINING: Workshops | 1 1 19 8 13 18 | 60 | 5.38 | 1.43 | -0.21
Web Interfaces
WEB: Status Info | 1 5 12 104 181 | 303 | 6.51 | 0.68 | n/a
WEB: NIM web interface | 1 1 4 6 25 105 180 | 322 | 6.38 | 0.90 | -0.00
WEB: Accuracy of information | 1 1 10 16 117 155 | 300 | 6.37 | 0.83 | 0.05
WEB: www.nersc.gov overall | 1 4 3 15 147 150 | 320 | 6.35 | 0.77 | 0.07
WEB: On-line help desk | 1 1 1 12 11 34 99 | 159 | 6.33 | 1.10 | -0.03
WEB: Timeliness of information | 1 2 9 22 129 136 | 299 | 6.28 | 0.85 | 0.08
WEB: Ease of finding information | 1 1 7 11 30 142 112 | 304 | 6.10 | 0.97 | 0.15
WEB: Searching | 1 6 22 34 64 57 | 184 | 5.77 | 1.14 | 0.09

 

How Useful are these NERSC Services to You?

3=Very useful, 2=Somewhat useful, 1=Not useful

 

Item | Rating counts (1, 2, 3) | Num Resp | Average Score | Std. Dev.
TRAINING: New User's Guide | 10 35 134 | 179 | 2.69 | 0.57
WEB: Status Info | 12 73 221 | 306 | 2.68 | 0.54
SERVICES: E-mail lists | 5 102 209 | 316 | 2.65 | 0.51
TRAINING: Web tutorials | 14 43 112 | 169 | 2.58 | 0.64
MOTD (Message of the Day) | 18 98 187 | 303 | 2.56 | 0.61
TRAINING: Workshops | 25 42 29 | 96 | 2.04 | 0.75

 

Are you well informed of changes?

Do you feel you are adequately informed about NERSC changes?

 

Answer   Responses   Percent
Yes 300 99.3%
No 2 0.7%

 

How Important are Analytics Services to You?

3=Very important, 2=Somewhat important, 1=Not important

 

Item | Rating counts (1, 2, 3) | Num Resp | Average Score | Std. Dev.
SERVICES: Ability to perform data analysis | 33 40 73 | 146 | 2.27 | 0.81
SERVICES: Data analysis and visualization assistance | 37 49 50 | 136 | 2.10 | 0.80

 

Where do you perform data analysis and visualization of data produced at NERSC?

 

Location   Responses   Percent
All at NERSC 17 5.1%
Most at NERSC 40 12.0%
Half at NERSC, half elsewhere 67 20.2%
Most elsewhere 94 28.3%
All elsewhere 95 28.6%
I don't need data analysis or visualization 19 5.7%

 

If your data analysis and visualization needs are not being met, please explain why. (21 respondents)

Need additional software:

NCAR Graphics is not correctly installed on davinci. (We use a version in Mary Haley's home directory instead.)
NCO is not available on davinci's replacements.

We are using mostly GrADS software to visualize data, so we do it mostly on our own machines. However, for quick preliminary results, visualization could be very helpful to detect any errors in our model simulations.

I use Davinci to use vcdat for my visualization needs. That particular software needs to be upgraded to more recent versions. The present version keeps on crashing with segmentation fault very frequently. That hinders my work progress.

We are using meteorological analysis tools like GrADS. So we have to take data to local machine to perform it.

I would further like to point out that, if you install GrADS on machines, we can readily check the output before we ftp the outputs to our systems.

Most of my pre- and post-processing code is in IDL. Having access to IDL on Franklin would significantly reduce the amount of data which needs to be transferred remotely.

I can't run scripts using ptraj (from Amber) on the Nersc machines, and the queue wait times are long, so I'm not sure it would be good for running serial data analysis scripts.

an installation of 'ecce', courtesy of pacific northwest national laboratory, would be immensely useful to quickly look at nwchem output, though i don't know if it is possible to obtain a license for that software from pnnl.

Performance issues:

I mostly find that the visualization tools are too complicated and the response time is too low. As for data analysis, I mostly use Matlab, but Matlab does not benefit much from parallel architectures and, for serial application, my local machines are faster than NERSC's.

Connecting from Europe prevents many visualization abilities...

Having trouble loading GUI over ssh (e.g. for xmgrace)

As my datasets become too large to transfer rapidly or to fit into the memory of my desktop machine, I increasingly rely on remote visualization from NERSC. Up until recently, I did this by running Visit on Davinci, accessed through the NoMachine desktop application. To the best of my knowledge, the server application (NX) needed for this type of access is not running on Euclid, Davinci's replacement; I have asked NERSC personnel about this and received no response.

I originally started out analyzing data produced by Franklin on Davinci using matlab. However Matlab was very slow loading large data files so I now scp the files to a hard drive and examine the data locally. Despite being annoying at first, this has actually proven to be a very good way for me to work as the transfer speed from Franklin to my remote machine is extremely fast.

Mathematica "Notebook Evaluation" process on Euclid seems much slower than in my local desktop. An actively running Mathematica doing "Evaluation" of cells on Euclid seems only use about 1% of CPU. Don't understand why or how to adjust that.

ViSiT comments:

I use ViSiT to visualize my data. Since my run generates too huge data, sometimes I can't visualize because of quota limit. Or if we want to make a movie out of all the data, my quota is not enough. I know I can ask for quota increase, but is that possible to cancel the disk quota restriction on visualization? I would really appreciate if you consider this.

The tool I mostly use is VisIt, which can be run remotely with a gui, which is nice. However, Franklin lacks the ability to allow using VisIt in cli mode, which is useful for loading python scripts onto visit and perform some set of tasks, like opening all datafiles and outputting one contour plot per datafile in order to create a movie, for example.

PDSF comments:

I installed my data analysis program ROOT myself because I am not a STAR group user. There is still a problem: each time I log in, I need to run "source thisroot.csh" to set the ROOT environment variables manually, and this procedure can't be written into a bash login script.
Why can't these data analysis and visualization packages be open to all users so they can choose for themselves?
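
[For reference, a minimal sketch assuming ROOT's standard setup scripts: ROOT ships a bash-compatible thisroot.sh alongside thisroot.csh, so a bash startup file can source it; the install path is a placeholder:]

    # In ~/.bashrc or ~/.bash_profile: set the ROOT environment
    # variables automatically at every login
    source /path/to/root/bin/thisroot.sh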

The gsl library on PDSF machines would be nice.

Other comments:

It would be great if there were shared scratch space between Franklin and DaVinci. As it is, I do all my analysis on Franklin.

The code we use can easily be compiled on NERSC machines; we just haven't gotten around to it yet.

Thank you. It is really great.

 

What additional services or information would you like to have on the NERSC web site?   26 respondents

Navigation and functionality suggestions:

Links that don't change font size or weight when you hover over them. This is very distracting.

Better organization. Eliminating conflicting information.

NERSC website has lots of content, I guess you can arrange it better.

A drop-down menu on the left column.
It is sometimes difficult to find the status link (it used to be a tab at the top), or to go to a machine's web page from the status page.

The one thing that has been a minor annoyance is that there is no link to NIM on the NERSC home page.

The most valuable things I've found on the site are pages that I only discovered from taking NERSC surveys. This points out to me that the site navigation needs improvement, since I didn't discover them by browsing or searching the site itself.
Also, it is not uncommon to find out-of-date documentation on the site. This is really unfortunate, as it undermines the credibility of the rest of the documentation.

More technical documentation:

Debugging software is not always transparent to the user. Maybe provide better help with the debuggers.

More information about the new systems: Carver, Euclid.

More details about the latency of each cluster's network.

Suggestions for new web services:

It would be useful if the MOTD were available as an RSS feed.

I don't know if I can choose environment variables on the web site to customize my needs on the supercomputer.
For example, I am not a STAR group user, but I want to use the data analysis software ROOT; it would be nice if I could choose this option from the NERSC web site.

For the love of god, make Mercurial and Git servers available for hosting projects.

Suggestions for NIM:

Use plain language rather than abbreviations or acronyms for the account usage. What's a "repo"? What resources are MPP and STR?

better integration with NIM

Better communication of status:

I work mostly on Franklin. I find that downtime for Franklin (unscheduled maintenance) happens quite frequently. This can be very frustrating when you are in the middle of editing/compiling a program. The message often says that there is no estimated time when the machine will come up again. That makes it a little difficult to plan work, especially when this happens towards the end of the week when we are often planning to submit a big job to run over the weekend.

I would like to see some more technical information on the reasons for past unexpected outages.

Other comments:

I am grateful to everyone who contributes to the functionality, resourcefulness, friendliness, and productivity at NERSC. Thank you so much to all of you.

I am not sure at the moment

I find the overall support and service from NERSC very satisfying. The technical support and user information are adequate and useful. Though satisfied with the service, I would like to suggest improvements to the existing structure: either an increase in wall time or a decrease in the queue time for most of the machines.

Overall very good.

Best in the DOE! Great facility.

I'd like to be able to use gedit on Franklin. I think it's installed, but there's something wrong with my x-term. I think the problem is on my end, as "xclock" does not work, either. (But it does work for non-NERSC machines.)

As you can tell from all of my "I don't use this" answers, I'm just a lowly grad student doing runs on behalf of MILC.

Allowed length of job is short (24-48 hours or so). I hope that users can make requests for jobs that can't be done in two pieces. Occasionally, one large phonon spectrum calculation can take something like 200 hours on 64-256 processors.

When I have a problem it takes too long for someone to answer my question. For example, if I am stumped and have to wait 3 hours for an e-mail answer, I lose 3 hours of work. It would be much faster for me to directly contact the expert. That avoids e-mail tag.

Hi, pgplot is a simple plot package, useful for a first look at code results. There has been talk of installing it as a public library for a few years, but it hasn't happened yet (to my knowledge). It would be very useful to several heavy users of Franklin. Thanks.

 

Comments


What does NERSC do well?

Their responses have been grouped for display as follows:

73: ease of use, good consulting, good staff support and communications
65: computational resources or HPC resources for science
22: good software support
19: overall satisfaction with NERSC
14: good documentation and web services
10: queue management or job turnaround
 8: data services (HPSS, large disk space, data management)
 4: good networking, access and security


What can NERSC do to make you more productive?

27: Improve turnaround time / get more computational resources
26: Implement different queue policies
15: Provide more software / better software support
11: Provide more consulting / training / visualization
11: Things are currently good / not sure
10: Additional / different data storage services
10: Improve stability/reliability
 6: Provide more memory
 3: Better network performance
 3: Other comments

If there is anything important to you that is not covered in this survey, please tell us about it

4: Software comments
4: Storage comments
3: Job and Queue comments
2: Performance and Reliability comments
3: Other comments

 


What does NERSC do well?   132 respondents

  NERSC's hardware and services are good / it is overall a good center

Everything. It is a pleasure to work with NERSC.

User support is very good. Diversity of computational resources.

User support is fantastic - timely and knowledgeable, including follow-up service. New machines are installed often, and they are state-of-the-art. The queues are crowded but fairly managed.

Everything that is important for me. This is a model for how a computer user facility should operate.

Provides state-of-the-art parallel computing resources to DOE scientists, with an emphasis on usability and scientific productivity.

NERSC handles computer problems so I do not have to and I can focus on chemistry.

Everything. Best in the DOE. I have used them all. NERSC is #1.

Everything, I'm extremely satisfied

Provides high quality computing resources with an excellent technical support team.

NERSC has good machines and good service.

Communication and reliability.

NERSC is one of the largest and most reliable computing resources for our research group. The hardware performance and technical support staff are exceptional.

Very well

Provides a robust computational facility and overall I am very satisfied with the facility.

NERSC is the exemplar supercomputing center in my opinion! In basically every category, NERSC gets it right. The computational performance of the systems is extremely high, the consulting help is immediate and helpful, and as a user with a special project I have had a great and very helpful relationship with developers at NERSC to help me build the project.

consulting is great, having lots of systems is great and they are reliable

It has been only one month since I got my account, but so far I am satisfied.

Website is first class, especially the clear instructions for compiling and running jobs.
The blend of systems available is very nice: I can run big jobs on Franklin and then run various post-processing operations on Euclid etc.

The NERSC user experience is great. I think NERSC is performing well, esp. for a government organization. The phone support is of high quality and very reassuring to have. The computers are well run.

Almost in every respect (in comparison with others).

I think NERSC is doing extremely well. Nearly every staff member, except one person, is very quick and responsive. One thing that I love about NERSC is that they think the way a researcher does, not as a system administrator. I think every national lab should learn from NERSC. NERSC is a role model and a leader in real life, not just on the web site.

Consulting. Having accessible computers with reasonable security. Fair sharing of time for a wide-range of job sizes, especially the "medium" scale jobs.

From my own experience, NERSC provides the desired computational resources for research involving large scale scientific computation. The computational systems are very well-maintained and stable, which allows hassle-free scientific investigations. My feeling is that NERSC systems are just like a reliable long-standing desktop with the magic power of computation. NERSC also provides a huge collection of software very useful for data analysis and visualization, which I will explore more in the near future. The supporting services (both account and technical support) are very friendly, prompt and professional. The website is well-organized and informative. I like the News Center very much.

Access to world-class supercomputing facilities at NERSC is a rare privilege for me as a scientist from Canada. I think serving the world-wide community of research scientists from diverse fields is what NERSC does best. This in itself is a monumental problem, as thousands of researchers have been using NERSC facilities over the last decade. Congratulations to all at NERSC for the best work done.
I would like to express my sincerest thanks to all at NERSC and in particular Dr. Ted Barnes, Francesca Verdier and David Turner for their unfailing help and advice, which were sine qua non for progress in our theoretical and computational research in the area of the physics and chemistry of systems of superheavy elements.

NERSC is excellent in terms of serving as a simulation facility. It is somewhat less useful as an analysis / processing facility, partially because of the "exploratory" nature of most of my data analysis. In general, I am very satisfied with performance of NERSC clusters and assistance provided by NERSC staff.

Everything we need. Ms. Verdier is amazing.

Give information about how to use the systems. Also data storage and file sharing through the /project directory system has been extremely helpful. Consulting is always very friendly and helpful as well.

*Good updates on the status of NERSC's machines in general.
*NERSC offers good remote access to the machines.
*In general all the NERSC machines are good for performing different kinds of simulations.
*Good support for the users.

Good performance, overall good maintenance of quantum chemistry software (although initially, some performance problems with 'molpro' occurred on Carver and Hopper), and a superior consulting team that is very helpful and fast.

Very good support, very good documentation for using systems and software, and reasonable queue waiting times.

software, consulting, queue times

I'm very impressed with the number of new systems that NERSC is bringing online this year. At the moment Franklin appears over-committed; however, I suspect this will change when Hopper is upgraded to the XE6 configuration. I'm also hopeful that we can get NAMD running efficiently on Dirac, as it has shown impressive GPU benchmarks in the past. Also the support is very good; I usually get answers to my questions within a couple of hours.

* keep systems stable
* very good account management tools (nim), good software support

I shall repeat myself from previous years. NERSC provides a robust, large scale computing platform with a fair queuing system. Recently, it has facilitated code development to the point where you can 'make the leap' to the larger platforms such as franklin or jaguarpf. With the introduction of Carver it has addressed the problems of the small memory per process that the XT machines had.

I am impressed with the service both in terms of software maintenance and in assistance with problems. The resources are current and powerful. I am surprised that users are not making more use of the carver/magellan machines.

Just about everything. The machines are easy to use, they are fast, they are always accessible, etc. The people I've contacted by either phone or email have always been very helpful.

Keep the machines up and running. Service and consulting. Create a computational environment that is easy to use.

Overall very good.

I find that NERSC works very well for the calculations that we are doing. I find this to be an excellent facility and operation in general. We have been able to accomplish some major work (with ease) on Franklin.

Provides stable and efficient access to HPC computing resources with a good selection of software packages and tools for most scientific computing activities.

NERSC is simply the best managed SC center in the world

Stability, state-of-the-art computers, experience in running a supercomputer center, friendly consultants.
And the nicest thing is that we got lots of free time this year during the testing periods
(Carver, Hopper... nice! :))

HPSS is fast.
Online service is good

Good and varied machines are available. Not too many downtimes. Plenty of software/libraries.

It provides excellent computing resources and support. One thing that is useful, and that I have not experienced when using other computing resources, is when individuals at NERSC take the time to inform you that a particular machine has a low load and that if you submitted your jobs there they would run quickly.

NERSC makes abundant high-performance computing resources available, providing excellent throughput for applications in which many modestly parallel (a few hundred cores each) jobs must run concurrently. HPSS works fast and flawlessly, Franklin uptime is (increasingly) good, and the large variety of available software libraries and tools is welcome. NERSC is also unmatched in the responsiveness and knowledgeability of its consulting staff.

practically everything

I am satisfied with everything about NERSC except for the long batch queue wait times.

NERSC provides stable, well documented computing environments along with a group of well trained and responsive people to help with problems, answer questions, and give guidance. They also continue to upgrade and improve these environments.

I have been and remain very pleased with all aspects of NERSC. The computers are great and the staff are always very helpful. Thanks!!!

NERSC is doing great in many aspects, which makes it a user-friendly and efficient platform for scientific computation.

1. The computing resources are quite good.
2. The consultants are so nice.
3. The information is accurate.

There is a great range of machines, with very good software availability and support, and short queue times. The global home area has greatly simplified using all machines, and the /project directory makes maintaining software for use within our group very easy. The allocations process is very fair, and our needs are consistently met.

  Provides good machines and cycles

It provides a state-of-the-art parallel computing environment that is crucial for modern computational studies.

It provides a good production level computational environment that is relatively easy to use and relatively stable. I find it easier to do productive work at NERSC than at the other DOE computational resources.

Very good machines and good accessibility

Keeps machines running well. Franklin has better performance than other XT4 systems I have used.

HPSS is an outstanding resource.

I really like the new machine Carver. It is efficient.

Provides excellent machines to run our calculations on!

High throughput and reliable.

Providing computing resources

NERSC maintains a computer environment that is advanced and reliable.

NERSC systems are consistently stable and reliable, so are great for most development and simulations.

The computational resources are very good.

Provides computational ability and storage space.

HPSS storage and transfer to and from

The waiting time for each job is short, I love it.

Availability of computational resources is impressive

Very high performance

Provides extensive computational resources with easy access, and frequent maintenance/upgrade of all such resources.

Providing computing resources.

Extremely well with upgrades and transitions to new, improved, faster and better HPC systems.

Maintaining such large systems.

The computer is VERY reliable.
Downtimes are minimal.
The scratch area is not swept too soon.

NERSC is quite powerful for parallel calculation, which makes it possible to run large jobs. The debug setup is almost perfect, which allows one to debug quite quickly.

Providing and maintaining HPC facilities.

We run data intensive jobs, and the network to batch nodes is great! The access to processors is also great.

Uptime

NERSC provides about 50% of the CPU time that I need. The Cray XT4/5 machines are very good platforms, and the code I am using (Quantum ESPRESSO) performs well.
The I/O speed for writing restart files is amazing.

Provides great stable resources, fast queue times on large jobs.

Provide a variety of computing platforms which support a wide range of high performance computing requirements and job sizes.

NERSC has proven extremely effective for running high resolution models of the earth's climate that require a large number of processors. Without NERSC I never would have been able to run these simulations at such high resolution to predict future climate. Many thanks.

  Good support services and staff

Any time I am having trouble with logging in or resetting my password, I can always call and get immediate, helpful, and courteous assistance from the person that answers the phone.

The account allocation process is very fast and efficient.

Tech support is phenomenal.

NERSC user services has been the best of any of the centers on which I compute. I cannot say anything but positive things about the help desk, response times to my requests, allocation help, software help, etc. On those topics, NERSC stands out well above the rest.

NERSC has always been user centered - I have consistently been impressed by this.

consulting

Easy access to the documentation/website and consulting services.

I am very happy with the support and service by Francesca Verdier. Every time I call or write email, she always responds promptly and accurately.

New user accounts!

Keep users informed of the status of the machines.

My interactions with consulting and other nersc support staff are always useful and productive.

Keep users updated about status. Supply information for new users. Have someone available over the phone for account problems.

NERSC supports users better than other computing facilities

Consulting support is great.

In my opinion it is technical support, namely : Woo-Sun Yang and Helen He. Without their help I would not be able to install the models we use for our research.

Overall support from NERSC is encouraging. The technical support and user information are excellent and helpful. We have enjoyed support for model installation and debugging.

very good user support.

People from support staff at accounts and helpdesk are very helpful and quick to respond.

User Services is excellent.

User services continue to perform very, very well!!

NERSC responds to my needs and questions well. Also, it is pretty straightforward to figure out how to submit and run jobs using the website.

Quick response on technical support requests or bug reports.

Nersc is generally able to meet all my computing needs, but what has always seemed really outstanding to me at NERSC is the consulting help service. It is highly accessible, responsive, and has always resolved my issues -- and the people seem very friendly. Also, the staff that manages accounts seems very easy to work with, and has always helped me to maintain sufficient account status.

Support users

I really like the NERSC Consulting team. They have been very helpful, and responded to my questions and solved my problem in a timely way.

Communicating changes, rapid response to questions

I am really pleased with the NERSC support team. Typically, I can get a response within 20 minutes.

I don't really know that much about NERSC's structure, as I am just a graduate student. My experience with NERSC is limited to running noninteractive lattice QCD jobs on franklin.
NERSC's help desk has been very helpful when I had problems getting my account set up (my password was mailed to the wrong place).

Support team

- very helpful, efficient support staff
- keeps us informed of developments at NERSC
- great details on website for programming environments, software, etc

The staff is very helpful on account issues. Very responsive.

In general NERSC has been more responsive to the needs of the scientific community than other supercomputer centres.

Provide reliable service

The information on the web site is very easy to find and well organised. Response times to problems and downtimes are very short.

Compared to other HPC facilities I have used, NERSC provides superior support/consulting for the user.

Responsiveness, by telephone, to user questions and needs, password problems and so on, is excellent. It is a pleasure to work with NERSC consultants.

I have been very pleased with the helpdesk response in both time and service.

Good user support and website

NERSC keeps me informed on system changes and responds quickly and helpfully to questions and requests.

  Good software / easy to use environment

NERSC provides the systems and environments necessary for tool development/testing and makes it fairly easy to provide the environments required by production software for performance testing and analysis.

Customer support; software/OS updates

Consultant
Software

NERSC does a very good job at providing a collection of application software, developer tools (compilers, api's, debugging), web pages, monitoring services and tutorials for their large and diverse user base.
Also, in recent years NERSC has done a much better job in setting up accounts for new users.

Precompiled codes are a lifesaver. One machine in the system always has what you need. I get very good performance from everything I use and it all scales wonderfully over the number of processors I use (up to 2k).

  Other comments

In old times (i.e. the early 1980's), online consulting help was readily available and was very useful for productivity. I hope I could get that kind of help now.

Though I understand the need for enforcing security with periodic password changes, it is annoying, especially since at the moment on my project I am the only one using the system and so only I know my password.

Making resources hard to get access to

They have a good [comment not finished]

Queue waiting time is too long on hopper.

 


What can NERSC do to make you more productive?   105 respondents

  Improve turnaround time / get more computational resources:   27 comments

The waiting time for very large parallel jobs is prohibitive; I run mainly scalability studies and I require a large number of processors for a short amount of time (max 10-15 mins); I sometimes have to wait a week when asking for 16k cores.

Shorter queue times

Shorter queue waits or longer runtimes would be helpful. I run a lot of time-stepping code.

Make the queues shorter.

There are a lot of users (it is good to be useful and popular), and the price of that success is long queues that lead to slower turn-around. A long-standing problem with no easy answer.

Improve batch turnaround time. The amount of time currently needed for a job to start running is long. Specifically, for jobs that involve dependencies, it would help if jobs in the hold state would accumulate priority, so that when they're released to the queue they don't have to start from scratch (in terms of waiting time).
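
[For reference, a minimal sketch of the dependency chains this comment refers to, using the qsub syntax of the Torque/Moab batch systems then in use; the script names are placeholders:]

    # Submit part 2 so it starts only after part 1 exits successfully;
    # under the policy described above, part 2 accrues no queue
    # priority while it sits in the held state
    JOB1=$(qsub run_part1.pbs)
    qsub -W depend=afterok:$JOB1 run_part2.pbs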

Improve turnaround times on Franklin (somehow)

Queueing time for long jobs, e.g. 24 hrs, can be quite long, even (or especially) with relatively few CPUs (e.g. 100). I understand that this encourages the development of parallelization; however, some problems are notoriously difficult to parallelize, and one often must run for a long time with few CPUs. The ability to run shorter calculations between restarts is a partial solution; the problem is that the queueing times for each calculation may add up to days.

Add more processors to Carver. The long queue time makes progress slow. Carver clobbers Hopper and Franklin with the performance increase of my code. Also, recompiling the code is much faster on Carver. Yet because I have to wait longer to start and restart the simulations, it doesn't get me results faster overall.

Shorten the queue waiting time and speed up the processors. ...

Shorten the computational queue time.

Shorter queue time ...

I think NERSC should have another Franklin. Current waiting time is too long, at least for my jobs. ...

Computational resources are limited, thus resulting in long queues on the larger machines I need to access for my simulations. However, I understand fixing this problem is difficult and bound by economic constraints.

Turn around time is always an issue! More resources would be great!

I feel like queue times are extremely long, especially on franklin. I use the "showstart" command and it gives me one time, then 3 days later my job will run. I do not understand why my jobs continually get pushed back from the original estimates.

Some of the machines have a long waiting time for regular batch jobs. It would be more productive if this waiting time could somehow be reduced.

Make the wait time for queued jobs shorter.

Queue times are a bit long especially for low priority jobs.

throughput is bad -- need to wait >24h, sometimes days for running 24h.
Limit users?

less wait time ...

... and faster queue turnaround time: I know everyone wants this, but it's really the only issue I have had. ...

faster and more CPUs

More cpu cores.

Reduce job queue times. On Franklin one often waits ~1 week to use 16k procs. This is too long to wait. ...

Batch turn around time on Franklin excessive. (Maybe I should be checking out other NERSC machines?)

The only thing that would substantially improve my productivity would be shorter batch queue wait times.

  Implement different queue policies:   26 comments

Longer wall time limits:

NERSC response: In 2010 NERSC implemented queues with longer wall-clock limits (a submission sketch follows the list):

  • On Carver the reg_long queue has a wall limit of 168 hours (7 days) and reg_xlong's is 504 hours (21 days).
  • On Hopper Phase 1 the reg_long queue has a wall limit of 72 hours (3 days).

    The max walltime for an interactive job on the GPU cluster is 30 minutes. I find it too short; I needed to resubmit jobs constantly while debugging my code. It would be nice to make it longer.

    Have some increased time limits for job runs.

    ... Adding more queues to Franklin and Hopper with larger wall clock time (maybe with a smaller number of nodes) could be very helpful.

    I would prefer an increased wall clock time, especially on the Franklin machine. The Franklin queue is more crowded, and with a short wall time it takes longer to finish the scheduled model-year runs. ...

    Some codes, due to scaling limitations, perform better on a smaller number of processors. Therefore, I would be glad to have the ability to run such codes for longer than 24 hours.

    Add a machine with a longer wall-clock limit and fewer cores per node (to save allocation). Not all users have to chase ultra-massive parallelization for their research.

    ... Fourth, allow longer jobs (such as one month or half a year) on Carver and Hopper. Let science run its course.

    Allowed length of job is short (24-48 hours or so). I hope that users can make requests for jobs that can't be done in two pieces. Occasionally, one large phonon spectrum calculation can take something like 200 hours on 64-256 processors.

    Wall clock time limits need to be relaxed. Queuing systems need to be revised in terms of their priorities. In many cases 72 hr wall clock time is relatively short to obtain meaningful results. There should be a category that requires only a modest number of cores (say less than 256) but long wall clock time (up to 100 hrs).

    Please make the low queue on Carver at least 24 hrs.

    NERSC response: Thank you for your feedback regarding NERSC queues. We've increased the queue lengths on Carver; there are now queues which run for longer than a week. Also, the Carver system does not favor jobs based on size: small jobs have the same priority as larger jobs.

    Better treatment of smaller core size jobs:

    Stop giving preferential treatment to users who can effectively use large numbers of cores for their jobs. This could be done by giving jobs using small numbers of cores the same or higher priority as those using large numbers and increasing the number of 'small' jobs that can be run concurrently.

    I would be very happy if the queue policy at NERSC were more favorable to jobs requesting fewer than 200 CPUs and to jobs with 1-2 day time requests. The current queue really favors extremely short jobs, at most 6 hours.

    I often have a spectrum of job sizes to run (e.g., scaling studies, debugging and production runs) and the queuing structure/algorithms seem to be preferential to large runs. It would improve my productivity if this was more balanced or if there were nearly identical systems which had complementary queuing policies.

    Change the priorities for batch jobs so that it is possible to run jobs on 16-64 nodes for 48-72 hours. Currently such jobs have extremely long wait times, which, combined with a low cap on the total number of jobs in the queue, limits the possibility of running such jobs. These job sizes are typical for density functional theory calculations, one of the main workhorses in chemistry and materials science, and it makes no sense that NERSC disfavors them.

    Provide better understanding of how jobs are scheduled / more flexibility in scheduling options:

    ... Second minor complaint:
    Other people who submit after me, asking for the same number of CPUs and length of time into the same queue, somehow have their jobs miraculously move ahead of mine due to some low-level hidden priority system. Extremely frustrating. I recognize that they think their science is of higher priority than mine and have somehow convinced NERSC that this is the case, but it gives the appearance that my science is of lower priority and valued less. It is my opinion that this practice should be terminated.

    NERSC response: NERSC recognizes that the tools available from our batch software provider do not allow for a good understanding of why jobs "move around" in the queues. As users release jobs put on user hold, as NERSC releases jobs put on system hold, and even as running jobs finish, the priority order in which jobs are listed changes. NERSC is working with Moab to obtain a better "eligible to run" time stamp, which would help clarify this. Only rarely are these changes in queue placement due to users who have been "favored".

    Tell me how to best submit the kind of jobs I do. They do a good job with the heads up: this machine is empty submit now.

    Provide an interface which can help the user determine which of the NERSC machines is more appropriate at a given time for running a job based on the number of processors and runtime that are requested.
    Alternatively, or in addition, provide the user with the ability to submit a job to a selection of NERSC machines, eventually asking for different # of processors and runtime for each machine based on their respective specifications, and let the "meta-scheduler" run the job on the first available machine.
    In addition, it would be nice to have more flexibility in the scheduling options. For example, being able to give a range of # of processors (and a range of runtimes) would be helpful (e.g., the user sometimes does not care whether the job will be performed using 1024 processors in 1 hour or 256 processors in 4 hours).
    These would help overall productivity and reduce the "need" for some users to submit multiple job requests (sometimes to multiple machines) and kill all but one when the first started.

    Higher run limits:

    Allow me to run more jobs at once, but then that wouldn't be fair to others.

    ... able to run more small jobs simultaneously

    More resources for interactive and debug jobs:

    Increasing the number of processors available to the debug queue on Carver would help me shorten the debug cycle with more than 512 processors. Currently it is 256.

    During the working day, I would always encourage the availability of more development nodes over production ones.
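
    [For reference, a hedged sketch of an interactive debug request as written on NERSC's PBS/Moab systems of that era; the core count and time are illustrative, and the resource keyword varied by machine (mppwidth on the Cray systems, nodes/ppn on Carver):]

        # Ask for an interactive session in the debug queue
        qsub -I -q debug -l walltime=00:30:00,mppwidth=256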

    Other comments:

    Would be great to get an email sent out when your jobs are done running.
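
    [For reference, PBS batch scripts could already request this via mail directives; a minimal sketch, with a placeholder address:]

        # Send mail when the job ends (e) or aborts (a)
        #PBS -m ae
        #PBS -M user@example.gov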

    Flexibility of creating special queues for short term intensive use without extended waiting time. Hopefully it will not be too expensive, either.

    Better ability to manage group permissions and priorities for jobs, files, etc. The functionality of the idea of project accounts is still relevant.

    The wait time on a "low" priority setting can get somewhat ridiculous. I realize that the wait has to be sufficiently long, or else EVERYONE would use it. However, it seems that something could be done to ensure that X amount of progress is made in Y days for low priority jobs. Sometimes, given traffic these jobs may not progress in the queue at all over weeks and weeks. So .... some tweaking on that could be nice.

    Improve the queue structure so that jobs are checked for the ability to run before sitting in the queue for long periods of time. I have had a job sit for a day only to find out I had a path wrong and a file could not be found, and then had to wait another day to run. If a job has waited its turn but then is unable to execute, the space should be reserved for a replacement job from the same user if submitted in a timely fashion, or be given to another job from that user which is still waiting to run.

      Provide more software / better software support:   15 comments

    Continuous support for the NAG math library.

    Find a way to allow shared libraries.

    The group I am in actively uses VASP. The Paul Kent version of VASP 4.6 at NERSC is extremely parallel allowing for more processors than atoms to be run and still scale well. Optimizing VASP 5.2 equally well would make me more productive. Although I am not sure NERSC has control over this.

    Debugging with totalview is difficult as the X window forwarding is slow for remote users. However, no easy solution comes to mind. (Other than using gdb, which isn't as nice)

    It would help developers if a more robust suite of profiling tools were available. For example there are some very good profiling tools for franklin, but they are not robust enough to analyze a very large program.

    Could you install gvim (vim -g) on Franklin?
    Is there any way jobs in the queue could have an estimated wait time attached to them, based on the number of jobs in front of them?

    The showstart tool for predicting the time a job will spend in the queue before it starts running is very inaccurate (so inaccurate it is useless).

    Provide support for code implementation on its platforms. It is really painful to go through the process of implementing a new version of a code when the computing environment and libraries are changed by NERSC. There is little care at NERSC about incompatibilities produced by changes in hardware, operating system, and libraries. NERSC should provide tools or support to analyze errors coming either from NERSC environment changes or from new versions of the codes. A simple tool to check differences among versions of codes, or to correlate errors with OS/library changes, should be available. It seems that this has to be done on a trial/error basis.

    ... Meteorological data analysis tools like GrADS need to be installed and distributed. This helps for a quick analysis of the desired model output. Now we are taking data to local machine for analysis. I hope the installation of software will save both computational time and man power.

    Syncing the STAR library to RCF at BNL more frequently would be useful for data analysis as well as simulation.

    environment explanation for STAR, though I am not sure it is your job

    I use PDSF to analyze STAR data. There are many important STAR datasets that are not available on PDSF. That significantly impacts my use of PDSF. Not sure this is exactly a NERSC issue...

    Sometimes we don't get batch nodes and have to talk to Eric Hjort to figure out why we're stuck. Sometimes this has been because the IDL licenses or resource names have changed and we just don't know when stuff like that will happen.
    We are most productive when the software/configuration is stable, but of course upgrading things is a necessary evil so we totally understand.

    Allow subproject management and a helpful issue tracker (wikis as well), à la github.com or bitbucket.org.

    Make it easier to add users and collaborate on projects.
    Hosting projects in a distributed repository system like Git or Mercurial in addition to SVN would greatly help code development.

      Provide more consulting / training / visualization:   11 comments

    Provide more introductory training or links to training media (e.g. video, etc) on high performance computing in general.

    ... I could possibly use some more web-based tutorials on various topics:
    MPI programming, data analysis with NERSC tools, a tutorial on getting Visit (visualization tool) to work on my Linux machine.

    A discussion of the queue structure, suggestions for getting the best turnaround, strategies for optimizing code for NERSC machines would be great.
    Add (current web content is v. nice, I will consume all you can produce) answers to questions like the following:
    what exactly is ftn doing??
    do you know of any 'gossip' regarding which compiler flags are effective on the various NERSC platforms? what are you guys using and why?
    how do I determine the 'correct' node count for a given calculation?
    on which platform should I run code x?
    as computer guys, what advice do you have for practicing (computational) scientists?
    do you feel like the NERSC computers are being used in the manner in which they were intended to be used?

    Improve my ability to learn how to best use NERSC resources: mostly in the form of easy to find and up to date documentation on how to use the systems and tools effectively.

    There seems to be a strong push for concurrency in the jobs on large core machines especially hopper and often time to solution is ignored or not given enough weight when considering whether to install a particular application. Given scarcity of resources, this policy seems to force researchers to resort to less efficient codes for a particular purpose.
    Hence, if NERSC can provide some benchmark calculations and perhaps rate the software packages in particular categories, e.g. computational chemistry, solid state chemistry, etc., it would be a tremendous help when deciding which software to use on a particular platform. ...

    I've had trouble with large jobs (4000+ cores) on Franklin, where I get various forms of errors due to the values of default MPI parameters (unex_buffer_size and many others). I find that I have to play around with these values to get the jobs to run -- this can be very frustrating given the semi-long queue times. This frustration has caused me to both run jobs at smaller core-counts (for longer time) and to run my largest jobs on the BG/P at ANL instead (although I would prefer to keep all my data at NERSC). While the helpdesk has been helpful in explaining which MPI parameters I should change etc, I still have not found any setting that removes all these issues. Alternatively, if I could learn how to change the code such that these problems don't occur I would do that -- but I don't know how. If the limitations on message/buffer sizes can't be removed, then maybe add a section on the website explaining what types of MPI calls might cause problems?
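
    [For reference, a hedged sketch of the kind of tuning this comment describes, using the Cray MPT environment variables of that era; the values shown are illustrative, not recommendations:]

        # In the batch script, before launching the application:
        # enlarge the buffer for unexpected messages and adjust the
        # eager-protocol message-size threshold
        export MPICH_UNEX_BUFFER_SIZE=251658240   # bytes (~240 MB)
        export MPICH_MAX_SHORT_MSG_SIZE=8192      # bytes
        aprun -n 4096 ./my_app                    # placeholder executable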

    Consulting should be easily available, see my comment above. [In old times (i.e. early 1980's), on line consulting help was readily available, and was very useful for the productivity. I hope I could get that kind of help now.]

    ... Faster response times from the help-desk on simple technical questions.

    Besides more time, the only thing I can think about is visualization, and we (as a collaboration) have not made a real effort to integrate visualization into our program, but this is something that must happen.

    ... I also need a better way to perform remote visualization on Euclid with Visit.

    I am really happy with the level of support and availability of computing resources. There are little things that could be adjusted, mostly related to the character of our project. For example, one of our students had trouble executing VisIt on Franklin using the batch system; I do not think the process is documented and explained. Ultimately, Hank Childs' assistance proved instrumental in resolving the problem. Also, our data sets are quite big, and having twice as much scratch space to start with would make production easier. Again, this is quite specific to our application profile. ...

      Things are currently good / not sure:   11 comments

    the machines are really good

    Maintain the same level of overall high quality as so far.

    The level is already so satisfactory.

    i am not sure about this

    ... When I have time to be productive, NERSC systems and consultants are there to help. Unless you can give me a couple more days in the week, I can't think of anything NERSC could do.

    Keep it up.

    Obtaining the complete Hopper will be a great improvement on an already fantastic national resource.

    By doing as now keeping abreast of new visualization and analysis technologies and transferring these capabilities to users.

    Can't think of anything.

    at this point, the limit of my productivity is me, not NERSC

    Nothing that I can think of.

      Additional / different data storage services:   10 comments

    ... Increase home and scratch area quota in general. Lot of time is wasted in managing the scratch space and archiving and storing the data.

    Treat batch jobs and visualization queues differently with respect to quota limits. Copying files to HPSS is quick enough, but it's pretty slow to transfer files back from HPSS to Franklin.

    ... Second, the files in the scratch directory should not be deleted without permission from the users. If the files exceed the limit, NERSC should notify the users who exceed the limit and block submission of new jobs until the user's usage is back under quota. One could send two warnings beforehand. At present, everybody suffers, since just a few users exceed their limits.

    Improve the performance and scaling of file I/O, preferably via HDF5. Make the long term file storage HPSS interface more like a normal Unix file system. Provide larger allocations of scratch and project space.

    ... Third, HPSS should allow users to view file contents. Add a "less"/"more" capability there. At present, I have to transfer files back to Franklin and view them to see whether they are the files that I need. ...

    Make hsi/htar software available on a wider variety of Linux distributions.

    If there were a quicker way to access stored data that would be nice.

    More scratch space on franklin ... The scratch space thing was an issue for me recently; it's sort of sporadic getting gauge configuration ensembles copied from NCSA's mass storage system, so it's helpful if I can move over a large number at once and leave them on your storage. I don't use most of the bells and whistles, so I have no comment about them.

    Give more disk space for data analysis.

    Access to data during downtimes is very useful. Also access to error/output files while a job is running is a useful way to help reduce wasted compute hours when a job isn't doing what it should. I believe NERSC is already working on increasing the number of machines with these features though.

      Improve stability/reliability:   10 comments

    Number one frustration:
    I wait in the franklin queue for four to six days. I finally get the cores I need to run (4096-8192), and the computation begins. Sometimes things hang together perfectly. However about one in three or four times, a single node will drop out mid computation and will bring the entire calculation to a screeching halt with everything lost since the last checkpoint file was written. I'm then stuck waiting in the queue for another four to six days only to have the same happen again.
    Reliability is absolutely essential. ...

    Keep the supercomputers up more. Make them more stable. Reduce the variability in the wallclock times for identical jobs.

    less down time.

    ... Make machines like Franklin go down less often.

    ... Minimize downtime.

    Less HPC machine downtime

    ... Finally, back in April/May, I believe, I/O on Carver appeared to suffer erratic changes in I/O rates for our code (FLASH + parallel HDF5). Perhaps this issue is resolved now.

    Keep higher uptime. ...

    My own productivity would benefit most from a reduction in machine downtimes and the "word too long" errors that occasionally kill batch jobs on startup. ...

    Many job crashes due to node failure and other unknown reasons have significant impact on my simulations on Franklin.

      Provide more memory:   6 comments

    The computational resources of NERSC are somehow still limited. My case involves VASP, which requires quite a lot of RAM, whereas the per-node RAM on Franklin is too small. Also, the Cray XT4 processors might be obsolete.

    1)More memory per node
    2)More processors per node

    If in the future one could run jobs with more memory than is available at present, researchers in general would benefit tremendously.

    There is a need for machines with large shared memory. Not all tasks perform well on distributed memory, or alternatively they are hard to parallelize. As it is now, I don't use NERSC HPC resources due to there being too little memory on the nodes of all machines. Something with 32-64 GB of addressable memory per node would fill a very real niche.

    Most of my jobs require large memory per core. The application software is both CPU and memory intensive. The memory is usually the limit on the problem size I can solve. I wish the memory per core could be larger.
    The Carver machine is a better machine, but it has too few cores compared to Franklin, and the queue time on Carver was usually very long.

    ... Memory is a constraint.

      Better network performance:   3 comments

    ... and more bandwidth when downloading files to my local machine

    Better bandwidth to small institutions

    Faster connection to external computers/networks

      Other comments

    Allow more than 3 password tries before locking someone out

    NERSC response: NERSC's password policies must adhere to Berkeley Lab's Operational Procedures for Computing and Communications, which specifies that "three failed attempts to provide a legitimate password for an access request will result in an access lockout".

    If scheduled maintenance were at the weekend, that would make my work more productive.

    My only complaint is with the allocation process. While I understand that not all allocations can (or should) be fully awarded, it would be helpful to get some explanation as to what the reasoning was behind the amount that was awarded. Even just a couple sentences would be great, this would let the users know what it is we are doing right and what we can improve upon for the next round. We do spend a lot of time on these applications, so feedback is always welcome.

 

If there is anything important to you that is not covered in this survey, please tell us about it.   15 respondents

  Software comments

I encountered some strange behaviour with some installation code. When logged in from a CentOS Linux distribution it did not work; however, using another Linux machine worked just fine. I am not sure what the problem could be, but some information about this issue posted on your webpage could be useful for new NERSC users.

If I haven't made it abundantly clear, start by making Mercurial or Git the default repository for hosting projects. Subversion is as outdated as AOL.

Sometimes, keeping software updated causes trouble compiling models.

It is important for my legacy code that NERSC continue to provide access to older versions of certain scientific computing libraries, such as PETSc 2.3.x and HDF5 1.6.x. I have been happy with the availability of these libraries on Franklin and Hopper thus far, and hope it will continue.

  Storage comments

I mostly run the DFT code VASP on Hopper and Carver. My disk quota is only 40 GB, which is definitely not enough for me. The volume of output files from one VASP job is typically around 1.2 GB. It means that I can only store about 30 complete jobs on NERSC and I have to delete the rest of the output files.
This is the only inconvenience that I have felt at NERSC.
It would be perfect if I had a larger disk quota, for example, 400 GB.

... We could use a lot more disk space so that we wouldn't have to use the HPSS as a file server.

It would be great if NERSC could share its expertise with GPFS with other LBNL groups such as SCS, so that LBNL clusters could also use it.

The common home directory is a bad choice, since Carver and Hopper have different architectures from Franklin. My codes and job scripts are built on Franklin; when I transfer them to Carver and Hopper, they are not compatible. And even worse, Carver and Hopper do not have the same capacity as Franklin to run jobs. After I change them on Carver and Hopper, I cannot run the codes. I think either one should be allowed to run longer jobs on Carver/Hopper, or more nodes should be added. I understand Carver/Hopper replaces Bassi, but I think Carver/Hopper should do a better job than Bassi.
I have more to add, but I have to check my jobs.

  Job and Queue comments

I'd like to write the log file and output file myself when submitting a PBS job, instead of using the system-generated XXXXXX.out and XXXXXX.err.
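
[For reference, PBS supports this through directives in the job script; a minimal sketch with placeholder names:]

    # Name the job and choose the stdout/stderr files explicitly,
    # instead of the default <jobname>.o<jobid> / <jobname>.e<jobid>
    #PBS -N mysim
    #PBS -o mysim.log
    #PBS -e mysim.err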

Improve walltime run limits. It is very inefficient to place 24-hour, or even 48-hour, time limits on quantum chemistry jobs. Simply computing start-up information such as wavefunctions or correlation requires a significant amount of time. If one wants to do much more than that, the job gets cut off at the time limit and one has to start over, thus re-doing the expensive parts of the calculation. Couldn't there be a long queue in place? Even if this were limited to a very small number of processors it would still be useful.
I currently only use NERSC as a backup because of the short walltime limits.

I am disappointed that the number of nodes required for large jobs and the large jobs discount has increased. This change has dramatically increased my waiting times.

  Performance and Reliability comments

We don't use the NERSC systems much because it is too difficult to do simple things such as run our models. The machines crash every few days, and for one or two days before a crash, things slow down dramatically.
DaVinci is sometimes so slow as to be nearly useless.

Execution often stops due to node failure when using more than 1024 nodes for >10 hours. I am not sure what can be done about this, but it would be good to improve the reliability of each node.

  Other comments

I would hope that in the future there would be more timely monitoring and notification of shutdowns affecting HPC assets such as Franklin (which goes down very frequently). I wish the main home page were modified to have a real-time status and MOTD display somewhere prominent.

I have trouble connecting with multiple SSH shells into Franklin.
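
[For reference, a hedged sketch of OpenSSH connection multiplexing, which lets several shells share one authenticated connection; this goes in ~/.ssh/config on the user's machine, and the host alias is a placeholder:]

    # The first connection becomes the master; later shells reuse it
    Host franklin
        HostName franklin.nersc.gov
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h-%p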

Thank you all for such a good job.

Survey Text

This survey is now closed.

Section 1: Overall Satisfaction with NERSC
For each item you use, please indicate both your satisfaction and its importance to you.
Please rate each item below on the 7-point satisfaction scale (Very Satisfied through Very Dissatisfied, with "I Do Not Use This" and "Not Answered" options) and on the 3-point importance scale (Very Important, Somewhat Important, Not Important):
  • Overall satisfaction with NERSC
  • NERSC services
  • NERSC computing resources
  • NERSC data storage resources
  • HPC software
How long have you used NERSC?
  • Less than 1 year
  • 1 year - 3 years
  • More than 3 years
What desktop systems do you use to access NERSC? Check all that apply.
UNIX Systems
Linux
Free BSD
Sun Solaris
IBM AIX
HP HPUX
SGI IRIX
Other
PC Systems
Windows 7
Windows Vista
Windows XP
Windows 2000
Other Windows
Mac Systems
MacOS X
MacOS 9 or earlier
Other Mac
Other

 

Section 2: HPC Resources

Please rate the NERSC systems or resources you have used.

  Cray XT4: Franklin
Please rate:How satisfied are you?
Overall satisfaction Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Uptime (Availability) Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Batch wait time Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Batch queue structure Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Ability to run interactively Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Disk configuration and I/O performance Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
 [Back to list of systems]
  Cray XT5: Hopper
Please rate:How satisfied are you?
Overall satisfaction Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Uptime (Availability) Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Batch wait time Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Batch queue structure Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Ability to run interactively Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
Disk configuration and I/O performance Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This
 [Back to list of systems]
  IBM iDataPlex Linux Cluster: Carver
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Batch wait time
Batch queue structure
Ability to run interactively
Disk configuration and I/O performance
  SGI Altix: DaVinci
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Ability to run interactively
Disk configuration and I/O performance
  Parallel Distributed Systems Facility: PDSF
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Batch system configuration
Ability to run interactively
Disk configuration and I/O performance
Programming environment
CHOS environment
STAR software environment
Applications software
Programming libraries
Performance and debugging tools
General tools and utilities
  High Performance Storage System: HPSS
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Reliability (data integrity)
Data transfer rates
Data access time
User interface (hsi, pftp, ftp)
  NERSC Global Homes File System
In 2009 NERSC implemented Global Homes, under which all NERSC computers share a common home directory; previously, each system had its own separate, local home file space. (A brief sketch of what this means in practice follows the rating items below.)
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Reliability (data integrity)
I/O bandwidth
File and directory (metadata) operations
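In practice, Global Homes means the same home path resolves to the same files on every NERSC system. A minimal Python sketch, run unchanged on any of the machines, illustrates the idea (the behavior described is from the survey text above; the specific output shown is illustrative, not a documented value):

    import os
    import socket

    # With Global Homes, this script sees the same home directory and the
    # same files whether it runs on Franklin, Hopper, Carver, or DaVinci.
    home = os.path.expanduser("~")
    print(f"{socket.gethostname()}: home directory is {home}")
    print(f"first few entries: {sorted(os.listdir(home))[:5]}")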
  NERSC /project File System
The NERSC "Project" file system is globally accessible from all NERSC computers. Space in /project is allocated upon request so that members of a research group can share data. (A brief sketch of the group-sharing model follows the rating items below.)
Please rate: How satisfied are you?
Overall satisfaction
Uptime (Availability)
Reliability (data integrity)
I/O bandwidth
File and directory (metadata) operations
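As background on the sharing model, access within a /project directory is ordinarily governed by standard Unix group permissions, so "sharing data among members of a research group" amounts to granting the project's group read (or write) access. A minimal, hypothetical Python sketch (the directory name, file name, and group are placeholders, not an actual allocation):

    import os
    import stat

    # Hypothetical file inside a project directory; real paths are assigned
    # per project when space is requested.
    path = "/project/projectdirs/myproj/results.dat"

    # Add group read permission so other members of the project's Unix group
    # can read the shared file.
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IRGRP)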
  NERSC Grid Resources
Please rate: How satisfied are you?
Access and authentication
File transfer
Job submission
Job monitoring
  NERSC Network
Please rate: How satisfied are you?
Network performance within NERSC (e.g. Hopper to HPSS)
Remote network performance to/from NERSC (e.g. Hopper to your home institution)


Section 3: Software

Please rate the software on NERSC systems. (PDSF software was rated earlier, in the PDSF section of "HPC Resources.") For each software category below, consider the availability, usability, and robustness of the software. If you have specific concerns about particular software, please note them in Section 5: Comments.

NERSC Software
Please rate: How satisfied are you?
Programming environment
Applications software
Programming libraries
Performance and debugging tools
General tools and utilities
Visualization software
Data analysis software
ACTS software collection (acts.nersc.gov)


Section 4: Services
  HPC Consulting
For each item you use, please indicate your satisfaction.
Please rate: How satisfied are you?
Consulting overall
Quality of technical advice
Response time
Time to solution
Special requests (e.g. disk quota increases)
On-Line Help Desk
  Accounts and Allocations
For each item you use, please indicate your satisfaction.
Please rate: How satisfied are you?
Account support and passwords
NIM web accounting interface
Allocations process
  Communications
The following questions relate to how you keep informed of NERSC changes and current issues.
Please rate: How useful is this to you?
(Usefulness options for each item: Very Useful, Somewhat Useful, Not at All Useful, and Not Answered.)
MOTD (Message of the Day)
E-mail announcements
Web site status page
Do you feel you are adequately informed about NERSC changes? Yes / No
  Training
For each item you use, please indicate both your satisfaction and how useful it is to you.
Please rate: How satisfied are you? How useful is this to you?
Web tutorials
New User's Guide
Workshops
  NERSC Web Site
Please rate: How satisfied are you?
NERSC web site overall
Ease of navigation
Timeliness of information
Accuracy of information
NERSC status information
Searching
What additional services or information would you like to have on the NERSC web site?

  Security
Please rate: How satisfied are you?
NERSC Security
  Data Analysis and Visualization
Please rate: How satisfied are you? How important is this to you?
Data analysis and visualization assistance
Ability to perform data analysis
Where do you analyze data produced by your NERSC jobs?
All at NERSC / Most at NERSC / Half at NERSC, half elsewhere / Most elsewhere / All elsewhere / I don't need data analysis or visualization
  Comments
If your data analysis and visualization needs are not being met, please explain why.

Section 5: Comments
Your comments about NERSC overall
What does NERSC do well?
What can NERSC do to make you more productive?
If there is anything important to you that is not covered in this survey, please tell us about it here.
