
2006 User Survey Results

Survey Results

Many thanks to the 256 users who responded to this year's User Survey. This represents a response rate of about 13 percent of the active NERSC users. The respondents represent all six DOE Science Offices and a variety of home institutions: see Respondent Demographics.

The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey results are listed below.

You can see the 2006 User Survey text, in which users rated us on a 7-point satisfaction scale. Some areas were also rated on a 3-point importance scale or a 3-point usefulness scale.

Satisfaction Score | Meaning | Number of Times Selected
7 Very Satisfied 4,985
6 Mostly Satisfied 3,748
5 Somewhat Satisfied 832
4 Neutral 584
3 Somewhat Dissatisfied 251
2 Mostly Dissatisfied 75
1 Very Dissatisfied 51

Importance Score | Meaning
3 Very Important
2 Somewhat Important
1 Not Important

Usefulness Score | Meaning
3 Very Useful
2 Somewhat Useful
1 Not at All Useful

The average satisfaction scores from this year's survey ranged from a high of 6.7 (very satisfied) to a low of 4.9 (somewhat satisfied). Across 111 questions, users chose the Very Satisfied rating 4,985 times, and the Very Dissatisfied rating only 51 times. The scores for all questions averaged 6.1, and the average score for overall satisfaction with NERSC was 6.3. See All Satisfaction Ratings.
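
As a quick arithmetic check, the per-question averages and standard deviations in the tables below can be reproduced directly from the 1-7 response counts. The following minimal Python sketch (not part of the original report) does this for the "HPSS: Reliability (data integrity)" row, whose counts are taken from the first table below.

```python
import math

# Response counts for "HPSS: Reliability (data integrity)":
# ratings 1..7 received 0, 0, 0, 2, 0, 22, 69 responses (93 total).
counts = {1: 0, 2: 0, 3: 0, 4: 2, 5: 0, 6: 22, 7: 69}

n = sum(counts.values())                                   # total responses
mean = sum(score * k for score, k in counts.items()) / n   # average score
# Sample standard deviation (N - 1 denominator) reproduces the published 0.59 here.
var = sum(k * (score - mean) ** 2 for score, k in counts.items()) / (n - 1)

print(f"responses={n}  average={mean:.2f}  std_dev={math.sqrt(var):.2f}")
# -> responses=93  average=6.70  std_dev=0.59, matching the published row.
```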

For questions that appeared on the 2003 through 2006 surveys, the change in rating was tested for significance (using a t-test at the 90% confidence level). Significant increases in satisfaction are shown in blue; significant decreases in satisfaction are shown in red.

Significance of Change
significant increase (change from 2005)
significant decrease (change from 2005)
not significant
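
The significance test described above can be sketched roughly as follows, assuming an unpaired two-sample t-test on the per-question ratings (the report does not state which t-test variant was used). The 2006 statistics for "Seaborg: Batch wait time" and the 2005 mean of 3.95 come from this report; the 2005 standard deviation and response count below are illustrative placeholders only.

```python
from scipy.stats import ttest_ind_from_stats

# 2006 "Seaborg: Batch wait time": mean 4.94, std 1.57, 159 responses (from the tables).
# The 2005 mean of 3.95 is quoted in the text; the 2005 std and count are assumed here.
t, p = ttest_ind_from_stats(mean1=4.94, std1=1.57, nobs1=159,
                            mean2=3.95, std2=1.60, nobs2=170,  # assumed 2005 spread/count
                            equal_var=False)                   # Welch's unequal-variance test

# A change is flagged as significant at the 90% confidence level if p < 0.10 (two-sided).
print(f"t = {t:.2f}, p = {p:.4g}, significant at 90%: {p < 0.10}")
```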

Areas with the highest user satisfaction include the HPSS mass storage system, account and consulting services, DaVinci C/C++ compilers, Jacquard uptime, network performance within the NERSC center, and Bassi Fortran compilers.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
HPSS: Reliability (data integrity)       2   22 69 93 6.70 0.59 -0.03
Account support services 1   1 4 2 47 147 202 6.64 0.76 -0.09
HPSS: Uptime (Availability)       1 2 29 62 94 6.62 0.59 -0.06
DaVinci SW: C/C++ compilers         1 3 9 13 6.62 0.65  
Jacquard: Uptime (Availability)       2 2 26 55 85 6.58 0.66 0.73
CONSULT: Timely initial response to consulting questions   1 3 2 6 50 136 198 6.57 0.81 -0.08
Network performance within NERSC (e.g. Seaborg to HPSS)     2 1 3 38 72 116 6.53 0.75 -0.08
OVERALL: Consulting and Support Services     4 8 7 58 159 236 6.53 0.85 -0.20
Bassi SW: Fortran compilers 1 1     3 18 50 73 6.52 1.02  

Areas with the lowest user satisfaction include Seaborg batch wait times; PDSF disk storage, interactive services and performance tools; Bassi and Seaborg visualization software; and data analysis and visualization facilities.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
PDSF SW: Performance and debugging tools 1   3 3 5 10 9 31 5.48 1.52 -0.52
Seaborg SW: Visualization software     1 12 5 15 9 42 5.45 1.19 -0.08
PDSF: Ability to run interactively 1 1 1 4 11 17 6 41 5.39 1.30 -0.40
OVERALL: Data analysis and visualization facilities   2 4 32 20 47 23 128 5.37 1.22 -0.28
Bassi SW: Visualization software 1 1   4 2 9 5 22 5.36 1.62  
PDSF: Disk configuration and I/O performance 1   7 5 6 13 7 39 5.10 1.54 -0.04
Seaborg: Batch wait time 6 5 27 11 35 56 19 159 4.94 1.57 0.99

The largest increases in satisfaction over last year's survey are for the Jacquard Linux cluster; Seaborg batch wait times and queue structure; NERSC's available computing hardware; and the NERSC Information Management (NIM) system.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
Seaborg: Batch wait time 6 5 27 11 35 56 19 159 4.94 1.57 0.99
Jacquard: Uptime (Availability)       2 2 26 55 85 6.58 0.66 0.73
Seaborg: Batch queue structure 1 4 5 13 21 61 48 153 5.77 1.27 0.72
Jacquard: Batch wait time 1   3 5 10 40 23 82 5.87 1.13 0.71
Jacquard: overall   2   2 10 28 46 88 6.27 1.01 0.49
Jacquard: Batch queue structure   1 3 6 7 34 28 79 5.95 1.14 0.49
OVERALL: Available Computing Hardware     3 5 29 108 92 237 6.19 0.82 0.30
NIM     3 2 19 76 102 202 6.35 0.81 0.19

The largest decreases in satisfaction over last year's survey are shown below.

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
PDSF SW: Programming libraries     1 3 7 9 11 31 5.84 1.13 -0.62
PDSF SW: General tools and utilities   1 2 4 4 14 9 34 5.62 1.33 -0.58
PDSF SW: Software environment   2   1 6 14 13 36 5.92 1.25 -0.52
Seaborg: Uptime (Availability)   1 4 3 20 52 79 159 6.23 0.99 -0.33
NERSC security 2 1 7 9 10 72 134 235 6.30 1.11 -0.31
Seaborg SW: Performance and debugging tools   3 6 7 13 38 28 95 5.69 1.31 -0.31
OVERALL: Available Software     6 24 22 85 82 219 5.97 1.08 -0.22
CONSULT: overall 1 1 2 3 9 59 124 199 6.47 0.90 -0.21
OVERALL: Consulting and Support Services     4 8 7 58 159 236 6.53 0.85 -0.20
OVERALL: Network connectivity     8 10 19 69 124 230 6.27 1.02 -0.18
CONSULT: Quality of technical advice 1   2 3 8 66 113 193 6.46 0.84 -0.16

Survey Results Lead to Changes at NERSC

Every year we institute changes based on the previous year's survey. In 2006 NERSC took a number of actions in response to suggestions from the 2005 user survey.

  1. 2005 user survey: On the 2005 survey 24 users asked us to improve queue turnaround times. Seaborg wait time had the lowest satisfaction rating on the survey, with an average score of 3.95 (out of 7).

    NERSC response: In 2006, NERSC and DOE adjusted the duty cycle of NERSC systems to better balance throughput (reduced queue wait times) and overall utilization, and also agreed not to pre-allocate systems that are not yet in production. This approach has paid off: on the 2006 survey only 5 users commented on poor turnaround times, and the average satisfaction score for Seaborg wait times increased by almost one point.

  2. 2005 user survey: On the 2005 survey three Jacquard ratings were among the lowest seven ratings.

    NERSC response: In 2006 NERSC staff worked hard to improve Jacquard's computing infrastructure:

    • We implemented the Maui scheduler in order to manage the queues more effectively.
    • The system was greatly stabilized by reducing the system memory clock speed from 400 MHz to 333 MHz (more nodes were added to Jacquard to compensate for the reduced clock speed).
    • We worked with Linux Networx and its third party vendors to improve MVAPICH.
    • We worked with Mellanox to debug and fix several problems with the Infiniband drivers and firmware on the Infiniband switches that were preventing successful runs of large-concurrency jobs.

    On the 2006 survey four Jacquard ratings were significantly higher: those for uptime, wait time, and queue structure, as well as overall satisfaction with Jacquard.

  3. 2005 user survey: On the 2005 survey four users mentioned that moving data between machines was an inhibitor to doing visualization.

    NERSC response: In early 2006 the NERSC Global Filesystem was deployed to address this issue. It is a large, shared filesystem that can be accessed from all the computational systems at NERSC.

    Moving files between machines did not come up as an issue on the 2006 survey, and users were mostly satisfied with NGF reliability and performance.

  4. 2005 user survey: On the 2005 survey 17 users requested more hardware resources.

    NERSC response: In addition to deploying the Bassi POWER5 system in early 2006, NERSC has announced plans to deploy a 19,344 processor Cray XT4 system in 2007. User satisfaction with available computing hardware at NERSC increased by 0.3 points on the 2006 survey, and only ten users requested additional computing resources in the Comments about NERSC section.

Users are invited to provide overall comments about NERSC:

113 users answered the question "What does NERSC do well?"   87 respondents stated that NERSC gives them access to powerful computing resources without which they could not do their science; 47 mentioned excellent support services and NERSC's responsive staff; 27 highlighted good software support or an easy-to-use user environment; 24 pointed to hardware stability and reliability. Some representative comments are:

The computers are stable and always up. The consultants are knowledgeable. The users are kept well informed about what's happening to the systems. The available software is complete. The NERSC people are friendly.

NERSC runs a reliable computing service with good documentation of resources. I especially like the way they have been able to strike a good balance between the sometimes conflicting goals of being at the "cutting edge" while maintaining a high degree of uptime and reliable access to their computers.

NERSC has a lot of computational power distributed in many different platforms (SP, Linux clusters, SMP machines) that can be tailored to all sorts of applications. I think that the DaVinci machine was a great addition to your resource pool, for quick and inexpensive OMP parallelization.

The preinstalled application packages are truly useful to me. Some of these applications are quite tricky to install by myself.

NERSC makes possible for me extensive numerical calculations that are a crucial part of my research program in environmental geophysics. I compute at NERSC to use fast machines with multiple processors that I can run simultaneously. It is a great resource.

72 users responded to the question "What should NERSC do differently?"

In previous years the greatest areas of concern were dominated by queue turnaround and job scheduling issues. In 2004, 45 users reported dissatisfaction with queue turnaround times. In 2005 this number dropped to 24, and this year only 5 users made such comments. NERSC has made many efforts to acquire new hardware, to implement equitable queueing policies across the NERSC machines, and to address queue turnaround times by allocating fewer of the total available cycles, and this has clearly paid off. The top three areas of concern this year are job scheduling, more compute cycles, and software issues.

Some of the comments from this section are:

The move now is to large numbers of CPUs with relatively low amounts of RAM per CPU. My code is moving the opposite direction. While I can run larger problems with very large numbers of CPUs, for full 3-D simulations, large amounts of RAM per CPU are required. Thus NERSC should acquire a machine with say 1024 CPUs, but 16 or 32 GB RAM/CPU.

More adequate and equitable resources allocation based on what the user accomplished in the previous year.

Increased storage resources would be very helpful. Global file systems have been started and should be continued and improved.

The CPU limit on interactive testing is often restrictive, and a faster turnaround time for a test job queue (minutes, not hours) would help a lot.

67 users answered the question "How does NERSC compare to other centers you have used?"   41 users stated that NERSC was an excellent center or was better than other centers they have used. Reasons given for preferring NERSC include its consulting services and responsiveness, its hardware and software management, and the stability of its systems.

Eleven users said that NERSC was comparable to other centers or gave a mixed review, and only four said that NERSC was not as good as another center they had used. Some users expressed dissatisfaction with user support, turnaround time, Seaborg's slow processors, the lack of production (group) accounts, HPSS software, visualization, and the allocations process.

 

Here are the survey results:

  1. Respondent Demographics
  2. Overall Satisfaction and Importance
  3. All Satisfaction, Importance and Usefulness Ratings
  4. Hardware Resources
  5. Software
  6. Visualization and Data Analysis
  7. HPC Consulting
  8. Services and Communications
  9. Web Interfaces
  10. Training
  11. Comments about NERSC

Respondent Demographics

Number of respondents to the survey: 256

 

Respondents by DOE Office and User Role:

Office | Respondents | Percent
ASCR 17 6.6%
BER 33 12.9%
BES 87 34.0%
FES 25 9.8%
HEP 38 14.8%
NP 53 20.7%
guests 3 1.2%
User Role | Number | Percent
Principal Investigators 54 21.1%
PI Proxies 38 14.8%
Project Managers 8 3.1%
Users 156 60.9%

 

Respondents by Organization:

Organization Type | Number | Percent
Universities 138 53.9%
DOE Labs 91 35.5%
Industry 20 7.8%
Other Govt Labs 16 6.3%
Organization | Number | Percent
Berkeley Lab 37 14.5%
Oak Ridge 12 4.7%
UC Berkeley 12 4.7%
U. Wisconsin Madison 12 4.7%
Livermore 7 2.7%
Harvard 6 2.3%
Northwestern 6 2.3%
PNNL 6 2.3%
U. Washington 6 2.3%
Argonne 5 2.0%
SLAC 5 2.0%
Tech-X Corp 5 2.0%
Ames Lab 4 1.6%
NREL 4 1.6%
PPPL 4 1.6%
Stanford 4 1.6%
U. Oklahoma 4 1.6%
Organization | Number | Percent
Georgia Tech 3 1.2%
NCAR 3 1.2%
Texas A&M 3 1.2%
UC Irvine 3 1.2%
Vanderbilt 3 1.2%
Auburn Univ 2 0.8%
Cal Tech 2 0.8%
Georgia State 2 0.8%
Huazhong Univ (China) 2 0.8%
J. Inst Nuclear Research (Russia) 2 0.8%
Jefferson Lab 2 0.8%
Kansas State 2 0.8%
LAL IN2P3 (France) 2 0.8%
MIT 2 0.8%
Shanghai Physics (China) 2 0.8%
U. Central Florida 2 0.8%
U. Chicago 2 0.8%
U. Georgia 2 0.8%
UC Davis 2 0.8%
Wayne State 2 0.8%
Other Universities 49 19.1%
Other Gov. Labs 7 2.7%
Other DOE Labs 5 2.0%
Other Industry 4 1.6%

 

Which NERSC resources do you use?

Resource | Responses | Percent | Num who answered questions on this topic | Percent
IBM SP (Seaborg) 169 66.0% 168 65.6%
NIM 160 62.5% 202 78.9%
NERSC web site (www.nersc.gov) 148 57.8% 211 82.4%
HPSS 110 43.2% 172 67.2%
Jacquard 109 42.6% 88 34.4%
Bassi 107 41.8% 99 38.7%
Consulting services 89 34.8% 199 77.7%
Account support services 85 33.2% 202 78.9%
PDSF 48 18.8% 43 16.8%
DaVinci 30 11.7% 30 11.7%
Off-hours 24x7 Computer and ESnet Operations support 26 10.2% 88 34.3%
Visualization services 9 3.5% 29 11.3%
NGF 8 3.1% 26 10.2%
NERSC CVS server 4 1.6% 24 9.4%
Grid services 3 1.2% 23 9.0%

 

How long have you used NERSC?

Time | Number | Percent
less than 6 months 31 12.1%
6 months - 3 years 116 45.3%
more than 3 years 109 42.6%

 

What desktop systems do you use to connect to NERSC?

System | Responses
Unix Total 221
PC Total 130
Mac Total 93
Linux 187
Windows XP 113
OS X 77
Sun Solaris 21
Windows 2000 16
MacOS 15
IBM AIX 7
SGI IRIX 4
HP HPUX 1
Other 3

 

Web Browser Used to Take Survey:

Browser | Number | Percent
Firefox 119 46.5%
MSIE 6 50 19.5%
Mozilla 42 16.4%
Safari 35 13.7%
Netscape 4 8 3.1%
Opera 2 0.8%

 

Operating System Used to Take Survey:

OS | Number | Percent
Windows XP 92 35.9%
Linux 80 31.3%
Mac OS X 70 27.3%
SunOS 5 2.0%
Windows 2000 5 2.0%
Windows NT 2 0.8%
FreeBSD 2 0.8%
MacOS 1 0.4%

 

Overall Satisfaction and Importance

 

Legend:

Satisfaction | Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied 5.50 - 6.49
Somewhat Satisfied 4.50 - 5.49
Importance | Average Score
Very Important 2.50 - 3.00
Somewhat Important 1.50 - 2.49
Significance of Change
significant increase
significant decrease
not significant

 

Overall Satisfaction with NERSC

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
OVERALL: Consulting and Support Services     4 8 7 58 159 236 6.53 0.85 -0.20
OVERALL: Satisfaction with NERSC 2   9 3 8 99 128 249 6.31 1.01 0.11
OVERALL: NERSC security 2 1 7 9 10 72 134 235 6.30 1.11 -0.31
OVERALL: Network connectivity     8 10 19 69 124 230 6.27 1.02 -0.18
OVERALL: Available computing hardware     3 5 29 108 92 237 6.19 0.82 0.30
OVERALL: Mass storage facilities     4 17 13 52 86 172 6.16 1.08 -0.16
OVERALL: Hardware management and configuration 2 2 3 17 14 86 89 213 6.07 1.15 0.09
OVERALL: Software management and configuration     8 19 17 65 89 198 6.05 1.13 -0.17
OVERALL: Available software     6 24 22 85 82 219 5.97 1.08 -0.22
OVERALL: Data analysis and visualization facilities   2 4 32 20 47 23 128 5.37 1.22 -0.28

 

How important to you is each of the following?

3=Very, 2=Somewhat, 1=Not important

Item | Num who rated this item 1 / 2 / 3 | Total Responses | Average Score | Std. Dev.
OVERALL: Satisfaction with NERSC   31 202 233 2.87 0.34
OVERALL: Available computing hardware 3 31 189 223 2.83 0.41
OVERALL: Consulting and Account Support services 4 52 167 223 2.73 0.48
OVERALL: Network connectivity 5 49 159 213 2.72 0.50
OVERALL: Hardware management and configuration 7 68 122 197 2.58 0.56
OVERALL: Software management and configuration 10 75 104 189 2.50 0.60
OVERALL: Available software 9 90 109 208 2.48 0.58
NERSC security 21 83 121 225 2.44 0.66
OVERALL: Mass storage facilities 31 71 80 182 2.27 0.74
OVERALL: Data analysis and visualization facilities 54 57 45 156 1.94 0.80

All Satisfaction, Importance and Usefulness Ratings

 

Legend

Satisfaction | Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied 5.50 - 6.49
Somewhat Satisfied 4.50 - 5.49
Importance | Average Score
Very Important 2.50 - 3.00
Somewhat Important 1.50 - 2.49
Significance of Change
significant increase
significant decrease
not significant
Usefulness | Average Score
Very Useful 2.50 - 3.00
Somewhat Useful 1.50 - 2.49

 

All Satisfaction Topics - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005 | Change from 2004 | Change from 2003
HPSS: Reliability (data integrity)       2   22 69 93 6.70 0.59 -0.03 -0.04 0.09
Account support services 1   1 4 2 47 147 202 6.64 0.76 -0.09 -0.04 0.25
HPSS: Uptime (Availability)       1 2 29 62 94 6.62 0.59 -0.06 -0.05 0.08
DaVinci SW: C/C++ compilers         1 3 9 13 6.62 0.65      
Jacquard: Uptime (Availability)       2 2 26 55 85 6.58 0.66 0.73    
CONSULT: Timely initial response to consulting questions   1 3 2 6 50 136 198 6.57 0.81 -0.08 -0.13 0.02
Network performance within NERSC (e.g. Seaborg to HPSS)     2 1 3 38 72 116 6.53 0.75 -0.08 -0.07 -0.01
OVERALL: Consulting and Support Services     4 8 7 58 159 236 6.53 0.85 -0.20 -0.14 0.16
Bassi SW: Fortran compilers 1 1     3 18 50 73 6.52 1.02      
CONSULT: Followup to initial consulting questions 1   2 3 9 55 117 187 6.49 0.86 -0.08 -0.17 -0.00
CONSULT: overall 1 1 2 3 9 59 124 199 6.47 0.90 -0.21 -0.22 0.13
CONSULT: Quality of technical advice 1   2 3 8 66 113 193 6.46 0.84 -0.16 -0.13 -0.08
Seaborg SW: Fortran compilers   1 1 7 2 35 80 126 6.45 0.93 -0.04 0.04 0.11
PDSF SW: C/C++ compilers         3 13 17 33 6.42 0.66 -0.18 0.19 -0.02
NGF: Reliability       3   6 17 26 6.42 0.99      
WEB: Accuracy of information     1 6 7 79 106 199 6.42 0.75 0.02 0.22 0.17
DaVinci SW: Software environment       1 2 7 14 24 6.42 0.83      
NGF: File and Directory Operations       1 1 8 12 22 6.41 0.80      
Seaborg SW: Software environment     1 6 5 51 77 140 6.41 0.81 0.02 0.07 0.17
Bassi: Uptime (Availability)     1 4 4 31 52 92 6.40 0.85      
Seaborg SW: C/C++ compilers     1 6 4 24 55 90 6.40 0.93 0.03 0.14 0.18
WEB: NERSC web site overall (www.nersc.gov)     2 4 8 92 105 211 6.39 0.74 0.10 0.07 0.39
HPSS: Overall satisfaction 1   2 1 5 37 58 104 6.38 0.96 -0.13 -0.18 -0.08
NGF: Overall     1 1 2 5 17 26 6.38 1.06      
NIM     3 2 19 76 102 202 6.35 0.81 0.19 0.10 0.27
NGF: Uptime   1   1 1 7 16 26 6.35 1.16      
TRAINING: New User's Guide       3 6 53 49 111 6.33 0.70 -0.04 0.07 0.07
Seaborg SW: Programming libraries   1   7 7 35 60 110 6.32 0.96 -0.09 0.05 0.05
GRID: Job Submission 1     1 1 2 14 19 6.32 1.53 -0.21    
OVERALL: Satisfaction with NERSC 2   9 3 8 99 128 249 6.31 1.01 0.11 0.21 -0.06
NERSC security 2 1 7 9 10 72 134 235 6.30 1.11 -0.31 -0.18  
Bassi SW: Software environment 2   2   4 29 44 81 6.30 1.17      
HPSS: Data transfer rates 1 1 2 2 5 33 52 96 6.29 1.10 -0.11 -0.11  
SERVICES: Allocations process   1 1 5 12 70 76 165 6.28 0.85 0.12 0.35 0.59
Jacquard SW: Software environment       4 6 29 34 73 6.27 0.84 0.15    
Jacquard: overall   2   2 10 28 46 88 6.27 1.01 0.49    
Bassi SW: C/C++ compilers   2   2 2 15 27 48 6.27 1.18      
Bassi SW: Programming libraries 1 1   4 3 15 36 60 6.27 1.25      
CONSULT: Amount of time to resolve your issue 2   6 6 11 68 103 196 6.27 1.08 -0.14 -0.33 -0.09
OVERALL: Network connectivity     8 10 19 69 124 230 6.27 1.02 -0.18 -0.11 0.04
Bassi: overall 2   3 5 2 30 57 99 6.26 1.23      
GRID: Access and Authentication   1   2   6 14 23 6.26 1.29 -0.16    
Jacquard SW: C/C++ compilers   2   1 4 19 28 54 6.26 1.10 0.11    
SERVICES: Response to special requests (e.g. disk quota increases, etc.)   1 4 3 4 36 50 98 6.24 1.08 -0.11 0.17 -0.11
Seaborg: Uptime (Availability)   1 4 3 20 52 79 159 6.23 0.99 -0.33 -0.03 -0.19
DaVinci SW: Fortran compilers 1       1 6 10 18 6.22 1.44      
On-line help desk 1   1 6 11 41 55 115 6.21 1.02 0.04 0.05 0.19
WEB: Timeliness of information     1 13 21 69 90 194 6.21 0.92 0.09 0.04 0.16
GRID: Job Monitoring 1     2   4 13 20 6.20 1.54 -0.30    
OVERALL: Available Computing Hardware     3 5 29 108 92 237 6.19 0.82 0.30 0.53 0.06
GRID: File Transfer     2 1 1 5 13 22 6.18 1.30 -0.10    
SERVICES: E-mail lists     2 8 4 33 42 89 6.18 1.03 0.10 0.06  
Seaborg SW: Applications software     1 8 6 35 41 91 6.18 0.97 0.01 0.04 0.18
NGF: I/O Bandwidth     1   3 9 10 23 6.17 0.98      
Jacquard SW: General tools and utilities       3 3 24 17 47 6.17 0.82 0.19    
Jacquard SW: Programming libraries   2   3 5 18 28 56 6.16 1.17 0.24    
OVERALL: Mass storage facilities     4 17 13 52 86 172 6.16 1.08 -0.16 -0.20 0.04
TRAINING: Web tutorials       4 10 44 31 89 6.15 0.79 -0.07 0.05 0.08
CONSULT: Software bug resolution 1 1 1 11 9 37 60 120 6.14 1.16 0.04 0.02 0.50
PDSF SW: Fortran compilers       1 3 6 7 17 6.12 0.93 -0.08 0.25 0.09
Jacquard SW: Visualization software       1 3 6 7 17 6.12 0.93 0.58    
Jacquard SW: Fortran compilers   1 5 2 4 18 34 64 6.11 1.30 0.38    
Seaborg: overall 1   4 7 17 75 64 168 6.10 1.00 0.18 0.33 -0.33
Seaborg SW: General tools and utilities     2 8 6 45 37 98 6.09 0.97 -0.00 0.18 0.11
Jacquard SW: Applications software 1     3 2 21 17 44 6.09 1.14 0.31    
Bassi: Disk configuration and I/O performance 1 1 1 5 2 36 30 76 6.08 1.16      
HPSS: Data access time 1 1 3 2 9 37 38 91 6.08 1.17 0.08 -0.17 -0.38
DaVinci: overall   2   1 3 9 15 30 6.07 1.36 0.42 0.59 0.84
OVERALL: Hardware management and configuration 2 2 3 17 14 86 89 213 6.07 1.15 0.09 0.18 -0.00
Jacquard: Disk configuration and I/O performance   1 1 8 3 25 30 68 6.06 1.16 0.18    
OVERALL: Software management and configuration     8 19 17 65 89 198 6.05 1.13 -0.17 -0.14 0.01
Bassi SW: General tools and utilities   1 1 5 3 20 22 52 6.04 1.17      
Off-hours 24x7 Computer and ESnet Operations support 1 1 3 11 4 21 47 88 6.03 1.37      
Bassi SW: Applications software 1     4 6 18 20 49 6.02 1.18      
WEB: Ease of finding information 1   4 9 33 90 72 209 6.02 0.99 0.09 0.13 0.22
DaVinci SW: Visualization software   1   2 1 6 9 19 6.00 1.37 0.57    
NERSC CVS server       4 2 8 10 24 6.00 1.10 -0.21 0.67  
PDSF: Batch queue structure     1 3 3 20 11 38 5.97 0.97 -0.03 -0.34 -0.03
OVERALL: Available Software     6 24 22 85 82 219 5.97 1.08 -0.22 -0.26 -0.08
Jacquard: Batch queue structure   1 3 6 7 34 28 79 5.95 1.14 0.49    
PDSF: Batch wait time     1 3 5 18 12 39 5.95 1.00 0.15 0.08 0.02
TRAINING: NERSC classes: in-person       4 2 3 9 18 5.94 1.26 -0.18 0.46 1.06
Seaborg: Disk configuration and I/O performance 1 1 4 13 13 55 49 136 5.92 1.19 -0.14 -0.02 -0.23
Bassi: Batch queue structure 1   2 9 7 38 29 86 5.92 1.16      
PDSF SW: Software environment   2   1 6 14 13 36 5.92 1.25 -0.52 -0.44 -0.41
Remote network performance to/from NERSC (e.g. Seaborg to your home institution) 1 5 10 4 19 64 63 166 5.89 1.33 -0.24 -0.23 -0.23
Jacquard: Batch wait time 1   3 5 10 40 23 82 5.87 1.13 0.71    
PDSF SW: Applications software     2 2 4 10 10 28 5.86 1.21 -0.28 0.07 -0.01
Bassi: Batch wait time     3 7 16 40 25 91 5.85 1.02      
PDSF SW: Programming libraries     1 3 7 9 11 31 5.84 1.13 -0.62 -0.30 -0.16
WEB: Searching     3 13 18 41 35 110 5.84 1.09 0.14 0.20 0.40
PDSF: Uptime (availability)     4 3 6 12 17 42 5.83 1.31 -0.06 -0.57 -0.52
HPSS: User interface (hsi, pftp, ftp) 1 1 4 7 14 35 33 95 5.83 1.26 -0.29 -0.31 -0.15
PDSF: Overall satisfaction   1 3 1 4 23 11 43 5.81 1.20 -0.19 -0.71 -0.60
Bassi SW: Performance and debugging tools 1 2 2 3 6 20 19 53 5.77 1.45      
Seaborg: Batch queue structure 1 4 5 13 21 61 48 153 5.77 1.27 0.72 1.11 0.08
Jacquard: Ability to run interactively   2 2 9 7 25 23 68 5.76 1.29 0.20    
Live classes on the web       5 1 9 6 21 5.76 1.14 0.04 0.61 1.09
Seaborg: Ability to run interactively   2 10 13 19 42 45 131 5.71 1.32 0.18 0.37 0.14
Seaborg SW: Performance and debugging tools   3 6 7 13 38 28 95 5.69 1.31 -0.31 -0.14 0.12
SERVICES: Visualization services     1 7 3 7 11 29 5.69 1.31 -0.14 0.28 0.88
PDSF SW: General tools and utilities   1 2 4 4 14 9 34 5.62 1.33 -0.58 -0.21 -0.31
Jacquard SW: Performance and debugging tools   4 1 2 6 20 11 44 5.59 1.45 0.24    
Bassi: Ability to run interactively 2 4 3 8 8 25 25 75 5.55 1.60      
PDSF SW: Performance and debugging tools 1   3 3 5 10 9 31 5.48 1.52 -0.52 -0.29 0.17
Seaborg SW: Visualization software     1 12 5 15 9 42 5.45 1.19 -0.08 0.05 0.37
PDSF: Ability to run interactively 1 1 1 4 11 17 6 41 5.39 1.30 -0.40 -0.29 -0.38
OVERALL: Data analysis and visualization facilities   2 4 32 20 47 23 128 5.37 1.22 -0.28 -0.04  
Bassi SW: Visualization software 1 1   4 2 9 5 22 5.36 1.62      
PDSF: Disk configuration and I/O performance 1   7 5 6 13 7 39 5.10 1.54 -0.04 -0.49 -0.59
Seaborg: Batch wait time 6 5 27 11 35 56 19 159 4.94 1.57 0.99 1.09 -0.30

 

All Satisfaction Topics - by Number of Responses

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005 | Change from 2004
OVERALL: Satisfaction with NERSC 2   9 3 8 99 128 249 6.31 1.01 0.11 0.21
OVERALL: Available Computing Hardware     3 5 29 108 92 237 6.19 0.82 0.30 0.53
OVERALL: Consulting and Support Services     4 8 7 58 159 236 6.53 0.85 -0.20 -0.14
NERSC security 2 1 7 9 10 72 134 235 6.30 1.11 -0.31 -0.18
OVERALL: Network connectivity     8 10 19 69 124 230 6.27 1.02 -0.18 -0.11
OVERALL: Available Software     6 24 22 85 82 219 5.97 1.08 -0.22 -0.26
OVERALL: Hardware management and configuration 2 2 3 17 14 86 89 213 6.07 1.15 0.09 0.18
WEB: NERSC web site overall (www.nersc.gov)     2 4 8 92 105 211 6.39 0.74 0.10 0.07
WEB: Ease of finding information 1   4 9 33 90 72 209 6.02 0.99 0.09 0.13
Account support services 1   1 4 2 47 147 202 6.64 0.76 -0.09 -0.04
NIM     3 2 19 76 102 202 6.35 0.81 0.19 0.10
CONSULT: overall 1 1 2 3 9 59 124 199 6.47 0.90 -0.21 -0.22
WEB: Accuracy of information     1 6 7 79 106 199 6.42 0.75 0.02 0.22
CONSULT: Timely initial response to consulting questions   1 3 2 6 50 136 198 6.57 0.81 -0.08 -0.13
OVERALL: Software management and configuration     8 19 17 65 89 198 6.05 1.13 -0.17 -0.14
CONSULT: Amount of time to resolve your issue 2   6 6 11 68 103 196 6.27 1.08 -0.14 -0.33
WEB: Timeliness of information     1 13 21 69 90 194 6.21 0.92 0.09 0.04
CONSULT: Quality of technical advice 1   2 3 8 66 113 193 6.46 0.84 -0.16 -0.13
OVERALL: Mass storage facilities     4 17 13 52 86 172 6.16 1.08 -0.16 -0.20
CONSULT: Followup to initial consulting questions 1   2 3 9 55 117 187 6.49 0.86 -0.08 -0.17
Seaborg: overall 1   4 7 17 75 64 168 6.10 1.00 0.18 0.33
Remote network performance to/from NERSC (e.g. Seaborg to your home institution) 1 5 10 4 19 64 63 166 5.89 1.33 -0.24 -0.23
SERVICES: Allocations process   1 1 5 12 70 76 165 6.28 0.85 0.12 0.35
Seaborg: Batch wait time 6 5 27 11 35 56 19 159 4.94 1.57 0.99 1.09
Seaborg: Uptime (Availability)   1 4 3 20 52 79 159 6.23 0.99 -0.33 -0.03
Seaborg: Batch queue structure 1 4 5 13 21 61 48 153 5.77 1.27 0.72 1.11
Seaborg SW: Software environment     1 6 5 51 77 140 6.41 0.81 0.02 0.07
Seaborg: Ability to run interactively   2 10 13 19 42 45 131 5.71 1.32 0.18 0.37
Seaborg: Disk configuration and I/O performance 1 1 4 13 13 55 49 136 5.92 1.19 -0.14 -0.02
OVERALL: Data analysis and visualization facilities   2 4 32 20 47 23 128 5.37 1.22 -0.28 -0.04
Seaborg SW: Fortran compilers   1 1 7 2 35 80 126 6.45 0.93 -0.04 0.04
CONSULT: Software bug resolution 1 1 1 11 9 37 60 120 6.14 1.16 0.04 0.02
Network performance within NERSC (e.g. Seaborg to HPSS)     2 1 3 38 72 116 6.53 0.75 -0.08 -0.07
On-line help desk 1   1 6 11 41 55 115 6.21 1.02 0.04 0.05
TRAINING: New User's Guide       3 6 53 49 111 6.33 0.70 -0.04 0.07
Seaborg SW: Programming libraries   1   7 7 35 60 110 6.32 0.96 -0.09 0.05
WEB: Searching     3 13 18 41 35 110 5.84 1.09 0.14 0.20
HPSS: Overall satisfaction 1   2 1 5 37 58 104 6.38 0.96 -0.13 -0.18
Bassi: overall 2   3 5 2 30 57 99 6.26 1.23    
Seaborg SW: General tools and utilities     2 8 6 45 37 98 6.09 0.97 -0.00 0.18
SERVICES: Response to special requests (e.g. disk quota increases, etc.)   1 4 3 4 36 50 98 6.24 1.08 -0.11 0.17
HPSS: Data transfer rates 1 1 2 2 5 33 52 96 6.29 1.10 -0.11 -0.11
HPSS: User interface (hsi, pftp, ftp) 1 1 4 7 14 35 33 95 5.83 1.26 -0.29 -0.31
Seaborg SW: Performance and debugging tools   3 6 7 13 38 28 95 5.69 1.31 -0.31 -0.14
HPSS: Uptime (Availability)       1 2 29 62 94 6.62 0.59 -0.06 -0.05
HPSS: Reliability (data integrity)       2   22 69 93 6.70 0.59 -0.03 -0.04
Bassi: Uptime (Availability)     1 4 4 31 52 92 6.40 0.85    
Bassi: Batch wait time     3 7 16 40 25 91 5.85 1.02    
HPSS: Data access time 1 1 3 2 9 37 38 91 6.08 1.17 0.08 -0.17
Seaborg SW: Applications software     1 8 6 35 41 91 6.18 0.97 0.01 0.04
Seaborg SW: C/C++ compilers     1 6 4 24 55 90 6.40 0.93 0.03 0.14
SERVICES: E-mail lists     2 8 4 33 42 89 6.18 1.03 0.10 0.06
TRAINING: Web tutorials       4 10 44 31 89 6.15 0.79 -0.07 0.05
Computer and Network Operations 1 1 3 11 4 21 47 88 6.03 1.37 -0.57 -0.47
Jacquard: overall   2   2 10 28 46 88 6.27 1.01 0.49  
Bassi: Batch queue structure 1   2 9 7 38 29 86 5.92 1.16    
Jacquard: Uptime (Availability)       2 2 26 55 85 6.58 0.66 0.73  
Jacquard: Batch wait time 1   3 5 10 40 23 82 5.87 1.13 0.71  
Bassi SW: Software environment 2   2   4 29 44 81 6.30 1.17    
Jacquard: Batch queue structure   1 3 6 7 34 28 79 5.95 1.14 0.49  
Bassi: Disk configuration and I/O performance 1 1 1 5 2 36 30 76 6.08 1.16    
Bassi: Ability to run interactively 2 4 3 8 8 25 25 75 5.55 1.60    
Bassi SW: Fortran compilers 1 1     3 18 50 73 6.52 1.02    
Jacquard SW: Software environment       4 6 29 34 73 6.27 0.84 0.15  
Jacquard: Ability to run interactively   2 2 9 7 25 23 68 5.76 1.29 0.20  
Jacquard: Disk configuration and I/O performance   1 1 8 3 25 30 68 6.06 1.16 0.18  
Jacquard SW: Fortran compilers   1 5 2 4 18 34 64 6.11 1.30 0.38  
Bassi SW: Programming libraries 1 1   4 3 15 36 60 6.27 1.25    
Jacquard SW: Programming libraries   2   3 5 18 28 56 6.16 1.17 0.24  
Jacquard SW: C/C++ compilers   2   1 4 19 28 54 6.26 1.10 0.11  
Bassi SW: Performance and debugging tools 1 2 2 3 6 20 19 53 5.77 1.45    
Bassi SW: General tools and utilities   1 1 5 3 20 22 52 6.04 1.17    
Bassi SW: Applications software 1     4 6 18 20 49 6.02 1.18    
Bassi SW: C/C++ compilers   2   2 2 15 27 48 6.27 1.18    
Jacquard SW: General tools and utilities       3 3 24 17 47 6.17 0.82 0.19  
Jacquard SW: Applications software 1     3 2 21 17 44 6.09 1.14 0.31  
Jacquard SW: Performance and debugging tools   4 1 2 6 20 11 44 5.59 1.45 0.24  
PDSF: Overall satisfaction   1 3 1 4 23 11 43 5.81 1.20 -0.19 -0.71
PDSF: Uptime (availability)     4 3 6 12 17 42 5.83 1.31 -0.06 -0.57
Seaborg SW: Visualization software     1 12 5 15 9 42 5.45 1.19 -0.08 0.05
PDSF: Ability to run interactively 1 1 1 4 11 17 6 41 5.39 1.30 -0.40 -0.29
PDSF: Batch wait time     1 3 5 18 12 39 5.95 1.00 0.15 0.08
PDSF: Disk configuration and I/O performance 1   7 5 6 13 7 39 5.10 1.54 -0.04 -0.49
PDSF: Batch queue structure     1 3 3 20 11 38 5.97 0.97 -0.03 -0.34
PDSF SW: Software environment   2   1 6 14 13 36 5.92 1.25 -0.52 -0.44
PDSF SW: General tools and utilities   1 2 4 4 14 9 34 5.62 1.33 -0.58 -0.21
PDSF SW: C/C++ compilers         3 13 17 33 6.42 0.66 -0.18 0.19
PDSF SW: Performance and debugging tools 1   3 3 5 10 9 31 5.48 1.52 -0.52 -0.29
PDSF SW: Programming libraries     1 3 7 9 11 31 5.84 1.13 -0.62 -0.30
DaVinci: overall   2   1 3 9 15 30 6.07 1.36 0.42 0.59
SERVICES: Visualization services     1 7 3 7 11 29 5.69 1.31 -0.14 0.28
PDSF SW: Applications software     2 2 4 10 10 28 5.86 1.21 -0.28 0.07
NGF: Overall     1 1 2 5 17 26 6.38 1.06    
NGF: Uptime   1   1 1 7 16 26 6.35 1.16    
NGF: Reliability       3   6 17 26 6.42 0.99    
DaVinci SW: Software environment       1 2 7 14 24 6.42 0.83    
NERSC CVS server       4 2 8 10 24 6.00 1.10 -0.21 0.67
GRID: Access and Authentication   1   2   6 14 23 6.26 1.29 -0.16  
NGF: I/O Bandwidth     1   3 9 10 23 6.17 0.98    
Bassi SW: Visualization software 1 1   4 2 9 5 22 5.36 1.62    
GRID: File Transfer     2 1 1 5 13 22 6.18 1.30 -0.10  
NGF: File and Directory Operations       1 1 8 12 22 6.41 0.80    
Live classes on the web       5 1 9 6 21 5.76 1.14 0.04 0.61
GRID: Job Monitoring 1     2   4 13 20 6.20 1.54 -0.30  
DaVinci SW: Visualization software   1   2 1 6 9 19 6.00 1.37 0.57  
GRID: Job Submission 1     1 1 2 14 19 6.32 1.53 -0.21  
DaVinci SW: Fortran compilers 1       1 6 10 18 6.22 1.44    
TRAINING: NERSC classes: in-person       4 2 3 9 18 5.94 1.26 -0.18 0.46
Jacquard SW: Visualization software       1 3 6 7 17 6.12 0.93 0.58  
PDSF SW: Fortran compilers       1 3 6 7 17 6.12 0.93 -0.08 0.25
DaVinci SW: C/C++ compilers         1 3 9 13 6.62 0.65    

 

All Importance Topics

Importance Ratings: 3=Very important, 2=Somewhat important, 1=Not important
Satisfaction Ratings: 7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated importance 1 / 2 / 3 | Total Responses for Importance | Average Importance Score | Std. Dev. | Total Responses for Satisfaction | Average Satisfaction Score | Std. Dev. | Change from 2005 | Change from 2004 | Change from 2003
OVERALL: Satisfaction with NERSC   31 202 233 2.87 0.34 249 6.31 1.01 0.11 0.21 -0.06
OVERALL: Available Computing Hardware 3 31 189 223 2.83 0.41 237 6.19 0.82 0.30 0.53 0.06
Account support services 2 40 144 186 2.76 0.45 202 6.64 0.76 -0.09 -0.04 0.25
SERVICES: Allocations process 3 31 119 153 2.76 0.47 165 6.28 0.85 0.12 0.35 0.59
OVERALL: Consulting and Support Services 4 52 167 223 2.73 0.48 236 6.53 0.85 -0.20 -0.14 0.16
OVERALL: Network connectivity 5 49 159 213 2.72 0.50 230 6.27 1.02 -0.18 -0.11 0.04
SERVICES: Response to special requests (e.g. disk quota increases, etc.) 4 26 59 89 2.62 0.57 98 6.24 1.08 -0.11 0.17 -0.11
OVERALL: Hardware management and configuration 7 68 122 197 2.58 0.56 213 6.07 1.15 0.09 0.18 -0.00
OVERALL: Software management and configuration 10 75 104 189 2.50 0.60 198 6.05 1.13 -0.17 -0.14 0.01
OVERALL: Available Software 9 90 109 208 2.48 0.58 219 5.97 1.08 -0.22 -0.26 -0.08
NERSC security 21 83 121 225 2.44 0.66 235 6.30 1.11 -0.31 -0.18  
OVERALL: Mass storage facilities 31 71 80 182 2.27 0.74 172 6.16 1.08 -0.16 -0.20 0.04
Off-hours 24x7 Computer and ESnet Operations support 14 38 38 90 2.27 0.72 88 6.03 1.37      
SERVICES: E-mail lists 19 40 24 83 2.06 0.72 89 6.18 1.03 0.10 0.06  
OVERALL: Data analysis and visualization facilities 54 57 45 156 1.94 0.80 128 5.37 1.22 -0.28 -0.04  
SERVICES: Visualization services 26 15 13 54 1.76 0.82 29 5.69 1.31 -0.14 0.28 0.88
NERSC CVS server 23 11 6 40 1.57 0.75 24 6.00 1.10 -0.21 0.67  

 

All Usefulness Topics

3=Very useful, 2=Somewhat useful, 1=Not useful

Item | Num who rated this item 1 / 2 / 3 | Total Responses | Average Score | Std. Dev.
SERVICES: E-mail lists 1 38 156 195 2.79 0.42
TRAINING: New User's Guide 1 25 69 95 2.72 0.48
TRAINING: Web tutorials 5 27 59 91 2.59 0.60
MOTD (Message of the Day) 18 71 82 171 2.37 0.67
SERVICES: Announcements web archive 15 87 68 170 2.31 0.63
Live classes on the web 7 13 12 32 2.16 0.77
Phone calls from NERSC 34 43 50 127 2.13 0.81
TRAINING: NERSC classes: in-person 11 11 12 34 2.03 0.83

Hardware Resources

 

Legend:

Satisfaction | Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied 5.50 - 6.49
Somewhat Satisfied 4.50 - 5.49
Significance of Change
significant increase
significant decrease
not significant

 

Hardware Satisfaction - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
HPSS: Reliability (data integrity)       2   22 69 93 6.70 0.59 -0.03
HPSS: Uptime (Availability)       1 2 29 62 94 6.62 0.59 -0.06
Jacquard: Uptime (Availability)       2 2 26 55 85 6.58 0.66 0.73
Network performance within NERSC (e.g. Seaborg to HPSS)     2 1 3 38 72 116 6.53 0.75 -0.08
NGF: Reliability       3   6 17 26 6.42 0.99  
NGF: File and Directory Operations       1 1 8 12 22 6.41 0.80  
Bassi: Uptime (Availability)     1 4 4 31 52 92 6.40 0.85  
HPSS: Overall satisfaction 1   2 1 5 37 58 104 6.38 0.96 -0.13
NGF: Overall     1 1 2 5 17 26 6.38 1.06  
NGF: Uptime   1   1 1 7 16 26 6.35 1.16  
GRID: Job Submission 1     1 1 2 14 19 6.32 1.53 -0.21
HPSS: Data transfer rates 1 1 2 2 5 33 52 96 6.29 1.10 -0.11
NERSC CVS server       1   2 4 7 6.29 1.11 0.08
Jacquard: overall   2   2 10 28 46 88 6.27 1.01 0.49
Bassi: overall 2   3 5 2 30 57 99 6.26 1.23  
GRID: Access and Authentication   1   2   6 14 23 6.26 1.29 -0.16
Seaborg: Uptime (Availability)   1 4 3 20 52 79 159 6.23 0.99 -0.33
GRID: Job Monitoring 1     2   4 13 20 6.20 1.54 -0.30
GRID: File Transfer     2 1 1 5 13 22 6.18 1.30 -0.10
NGF: I/O Bandwidth     1   3 9 10 23 6.17 0.98  
Seaborg: overall 1   4 7 17 75 64 168 6.10 1.00 0.18
Bassi: Disk configuration and I/O performance 1 1 1 5 2 36 30 76 6.08 1.16  
HPSS: Data access time 1 1 3 2 9 37 38 91 6.08 1.17 0.08
DaVinci: overall   2   1 3 9 15 30 6.07 1.36 0.42
Jacquard: Disk configuration and I/O performance   1 1 8 3 25 30 68 6.06 1.16 0.18
PDSF: Batch queue structure     1 3 3 20 11 38 5.97 0.97 -0.03
Jacquard: Batch queue structure   1 3 6 7 34 28 79 5.95 1.14 0.49
PDSF: Batch wait time     1 3 5 18 12 39 5.95 1.00 0.15
Seaborg: Disk configuration and I/O performance 1 1 4 13 13 55 49 136 5.92 1.19 -0.14
Bassi: Batch queue structure 1   2 9 7 38 29 86 5.92 1.16  
Remote network performance to/from NERSC (e.g. Seaborg to your home institution) 1 5 10 4 19 64 63 166 5.89 1.33 -0.24
Jacquard: Batch wait time 1   3 5 10 40 23 82 5.87 1.13 0.71
Bassi: Batch wait time     3 7 16 40 25 91 5.85 1.02  
PDSF: Uptime (availability)     4 3 6 12 17 42 5.83 1.31 -0.06
HPSS: User interface (hsi, pftp, ftp) 1 1 4 7 14 35 33 95 5.83 1.26 -0.29
PDSF: Overall satisfaction   1 3 1 4 23 11 43 5.81 1.20 -0.19
Seaborg: Batch queue structure 1 4 5 13 21 61 48 153 5.77 1.27 0.72
Jacquard: Ability to run interactively   2 2 9 7 25 23 68 5.76 1.29 0.20
Seaborg: Ability to run interactively   2 10 13 19 42 45 131 5.71 1.32 0.18
Bassi: Ability to run interactively 2 4 3 8 8 25 25 75 5.55 1.60  
PDSF: Ability to run interactively 1 1 1 4 11 17 6 41 5.39 1.30 -0.40
PDSF: Disk configuration and I/O performance 1   7 5 6 13 7 39 5.10 1.54 -0.04
Seaborg: Batch wait time 6 5 27 11 35 56 19 159 4.94 1.57 0.99

 

Hardware Satisfaction - by Platform

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
IBM POWER 5 p575: Bassi
Bassi: Uptime (Availability)     1 4 4 31 52 92 6.40 0.85  
Bassi: overall 2   3 5 2 30 57 99 6.26 1.23  
Bassi: Disk configuration and I/O performance 1 1 1 5 2 36 30 76 6.08 1.16  
Bassi: Batch queue structure 1   2 9 7 38 29 86 5.92 1.16  
Bassi: Batch wait time     3 7 16 40 25 91 5.85 1.02  
Bassi: Ability to run interactively 2 4 3 8 8 25 25 75 5.55 1.60  
CVS Server
CVS server       1   2 4 7 6.29 1.11 0.08
SGI Altix: DaVinci
DaVinci: overall   2   1 3 9 15 30 6.07 1.36 0.42
Grid Services
GRID: Job Submission 1     1 1 2 14 19 6.32 1.53 -0.21
GRID: Access and Authentication   1   2   6 14 23 6.26 1.29 -0.16
GRID: Job Monitoring 1     2   4 13 20 6.20 1.54 -0.30
GRID: File Transfer     2 1 1 5 13 22 6.18 1.30 -0.10
Archival Mass Storage: HPSS
HPSS: Reliability (data integrity)       2   22 69 93 6.70 0.59 -0.03
HPSS: Uptime (Availability)       1 2 29 62 94 6.62 0.59 -0.06
HPSS: Overall satisfaction 1   2 1 5 37 58 104 6.38 0.96 -0.13
HPSS: Data transfer rates 1 1 2 2 5 33 52 96 6.29 1.10 -0.11
HPSS: Data access time 1 1 3 2 9 37 38 91 6.08 1.17 0.08
HPSS: User interface (hsi, pftp, ftp) 1 1 4 7 14 35 33 95 5.83 1.26 -0.29
Opteron/Infiniband Linux Cluster: Jacquard
Jacquard: Uptime (Availability)       2 2 26 55 85 6.58 0.66 0.73
Jacquard: overall   2   2 10 28 46 88 6.27 1.01 0.49
Jacquard: Disk configuration and I/O performance   1 1 8 3 25 30 68 6.06 1.16 0.18
Jacquard: Batch queue structure   1 3 6 7 34 28 79 5.95 1.14 0.49
Jacquard: Batch wait time 1   3 5 10 40 23 82 5.87 1.13 0.71
Jacquard: Ability to run interactively   2 2 9 7 25 23 68 5.76 1.29 0.20
NERSC Network
Network performance within NERSC (e.g. Seaborg to HPSS)     2 1 3 38 72 116 6.53 0.75 -0.08
Remote network performance to/from NERSC (e.g. Seaborg to your home institution) 1 5 10 4 19 64 63 166 5.89 1.33 -0.24
NERSC Global Filesystem
NGF: Reliability       3   6 17 26 6.42 0.99  
NGF: File and Directory Operations       1 1 8 12 22 6.41 0.80  
NGF: Overall     1 1 2 5 17 26 6.38 1.06  
NGF: Uptime   1   1 1 7 16 26 6.35 1.16  
NGF: I/O Bandwidth     1   3 9 10 23 6.17 0.98  
Linux Cluster: PDSF
PDSF: Batch queue structure     1 3 3 20 11 38 5.97 0.97 -0.03
PDSF: Batch wait time     1 3 5 18 12 39 5.95 1.00 0.15
PDSF: Uptime (availability)     4 3 6 12 17 42 5.83 1.31 -0.06
PDSF: Overall satisfaction   1 3 1 4 23 11 43 5.81 1.20 -0.19
PDSF: Ability to run interactively 1 1 1 4 11 17 6 41 5.39 1.30 -0.40
PDSF: Disk configuration and I/O performance 1   7 5 6 13 7 39 5.10 1.54 -0.04
IBM POWER 3: Seaborg
Seaborg: Uptime (Availability)   1 4 3 20 52 79 159 6.23 0.99 -0.33
Seaborg: overall 1   4 7 17 75 64 168 6.10 1.00 0.18
Seaborg: Disk configuration and I/O performance 1 1 4 13 13 55 49 136 5.92 1.19 -0.14
Seaborg: Batch queue structure 1 4 5 13 21 61 48 153 5.77 1.27 0.72
Seaborg: Ability to run interactively   2 10 13 19 42 45 131 5.71 1.32 0.18
Seaborg: Batch wait time 6 5 27 11 35 56 19 159 4.94 1.57 0.99

 

Hardware Comments:   37 responses

 

Overall Hardware Comments:   12 responses

Need more resources

Hopefully franklin will fix the long queue wait times.

Please get more computers.

Please assign more space of hardware to its users.

Queue comments

The queue structure could have some improvement, sometimes jobs requiring many nodes make the queue slow but I am sure that you are looking into this.

I run the CCSM model. The model runs a relatively small number of processors for a very long time. For example, we use 248 processors on bassi. On Seaborg, we could potentially get one model year/wallclock day. Since we usually run 130 year simulations, if we had 248 processors continuously, it would take 4.5 months to run the model. We didn't get even close to that. Our last seaborg run took 15 months real time, which is intolerably slow.

Bassi runs faster. On bassi, we get roughly 10 model years/wallclock day, a nice number. So it's cheaper for us to run on bassi, and better. bassi is down more frequently, and I get more machine related errors when running on bassi.

On both machines your queue structure does not give us the priority that we need to get the throughput that we have been allocated. For now it's working because bassi isn't heavily loaded. But as others leave seaborg behind and move onto bassi, the number of slots we get in the queue will go down, and we'll find ourselves unable to finish model runs in a timely fashion again.

Submission of batch jobs is not well documented.

The most unsatisfactory part for me is the confusing policy of queueing the submitted jobs. In an ideal world, it should be first come, first serve with some reasonable constraints. However, I frequently find my jobs waiting for days and weeks without knowing why. Other jobs of similar types or even those with low priority sometimes can jump ahead and run instantaneously. This makes rational planning of the project and account management almost impossible. I assume most of us are not trained as computer scientists with special skills who can find loop holes or know how to take advantages of the system. We only need our projects to proceed as planed.

Good overall

Bassi is great. The good network connectivity within NERSC and to the outside world and the reliability of HPSS make NERSC my preferred platform for post-processing very large scale runs.

Hardware resources at NERSC are the best I have used anywhere. NERSC and in particular Dr. Simon Horst should be congratulated for setting up and running certainly one of the best supercomputing facilities in the world.

Other comments

NERSC could have a more clear and fair computing time reimbursement/refund policy. For example (Reference Number 061107-000061 for online consulting), on 11/07/2006, I had a batch job on bassi interrupted by a node failure. The loadleveler automatically restarted the batch job from beginning, overwritting all the output files before the node failure. Later I requested refund of the 1896 MPP hours wasted in that incident due to the bassi node failure. But my request was denied, which I think is unfair.

I have not done extensive comparison on I/O and network performance. Hopefully, next year I'll be able to provide more useful information here.

every nersc head node should be running grid ftp

every nersc queuing node should be running GT4 GRAM

 

Comments by Bassi Users:   7 responses

Charge factor

the charge factor of 6 for bassi is absolutely ridiculous compared to jacquard. it performs only half as good as jacquard.

We have consistently found (and NERSC consultants have confirmed) a speedup factor of 2 for Bassi relative to Seaborg on our production code. Because the charge factor is 6, and because we see a speedup of 3 on Jacquard, Bassi is currently not an attractive platform for us, except for extremely large and time-sensitive jobs.

Queue comments

I really like Bassi; however, the availability of Bassi for multiple small jobs is difficult, since only 3 jobs from a user can run at a time; this is difficult to deal with when I have many of these jobs, even when the queues are rather small.

I don't understand why bassi has restriction on using large number of nodes (i.e., > 48 nodes requires special arrangement.)

Disk storage comments

Scratch space is small. My quota is 256GB. Simulation we are currently running are on a 2048^3 and we solve for 3 real variables per grid point giving a total of 96 GB per restart dataset. After 6 hours of running (maximum walltime on bassi), we continue from a restart dataset. But sometimes, we need to do checkpointing (i.e. generate the restart files) half way through the simulation. This amount of being able to hold 3 datasets (initial conditions, half-way checkpoint and at the end) which is not possible. Moreover, for simulations of more scientific interest we solve for 5 variables per grid point. The restart dataset in this case is 160 GB. This means that we cannot run, checkpoint and continue. This quota also prevents fast postprocessing of the data when several realizations of the fields (many datasets) are needed to get reliable statistical results.

I could not unzip library source code on Bassi because it limited the number of subdirectories I could create. That machine is useless to me unless I can get Boost installed. ...

Login problems

Logon behavior to Bassi can be inconsistent with good passwords sometimes being rejected, and then accepted at the next attempt. Molpro does not generally work well on multiple nodes. This is not too much of a problem on Bassi as there are 8 processors per node, but better scaling, with respect to number of nodes, is possible for this code.

 

Comments by DaVinci Users:   2 responses

I use multiple processors on DaVinci for computations with MATLAB. The multiple processors and rather fast computation are extremely useful for my research projects on climate and ice sheet dynamics. Via DaVinci NERSC has been a huge help to my research program.

... The machine I have been able to effectively use is Davinci because it has Intel compilers. NERSC support has not been helpful at all in getting my software to run on various machines.

 

Comments by Jacquard Users:   2 responses

On jacquard, it might be nice to make it easier for users who want to submit a large number of single-processor jobs as opposed to a few massively parallel jobs. This is possible but in the current configuration, the user has to manually write code to submit a batch job, ssh to all the assigned nodes, and start the jobs manually. Perhaps that is intentional, but the need does arise, for instance when it is possible to divide a task such that it can be run as 1000 separate jobs which do not need to communicate.

Jacquard is much harder to use than the IBM-SP's...

 

Comments on Network Performance:   3 responses

we produce output files faster than we can transfer them to our home institution, even using compression techniques. this is usually not an issue, but it has been recently.

Network performance to HPSS seems a bit slower than to resources such as Jacquard. Not sure of how much of a hit this actually is. Just an impression.

It is quite possible that I am unaware of a better alternative, but using BBFTP to transfer files to/from Bassi from/to NSF centers I see data rates of only 30-40 MB/sec. This isn't really adequate for the volume of data that we need to move. For example, I can regularly achieve 10x this rate between major elements of the NSF Teragrid. And that isn't enough either!

 

Comments about Storage:   3 responses

... The HPSS hardware seems great, but the ability to access it is terrible [see comments in software section].

The largest restriction for us is usually disk and storage; we have been able to work with consulting to make special arrangements for our needs, which have been very helpful.

The low inodes quota is a real pain.

 

Comments by PDSF Users:   6 responses

Diskservers at PDSF are faring reasonably well, but occasional crashes/outages occur. The move to GFPS has made disk more reliable, but still occasional crashes occur. These sometimes mean that PDSF is unavailable for certain tasks for up to several days (depending on the severity of the crash). This should be an area of continued focus and attention.

My biggest problem with using PDSF has always been that regularly the nodes just freeze, even on something as simple as an ls command. Typically I think this is because some user is hammering a disk I am accessing. This effects the overall useability of the nodes and can be very frustrating. My officemates all use pdsf, and we regularly inform each other about the performance of PDSF to decide whether it is worth trying to connect to the system at all or if it would be better to wait until later.

Interactive use on PDSF is often too slow.

I think that the NGF and the general choice for GPFS is a great improvement over the previous NFS-based systems. I am worried that in recent months we have seen the performance of the PDSF home FS drop significantly.

The switch to GPFS from NFS on PDSF seems to be overall a good thing, but there are now occasional long delays or unavailability of the home disks that I don't like and don't understand...

PDSF has got way too few SSH gateway systems, plus they seem to be selected by round-robin DNS aliasing and thus it is entirely possible to end up on a host with load already approaching 10 while there are still machines available doing absolutely nothing; what I normally do nowadays is look at the Ganglia interface of PDSF and manually log in to the machine with smallest load. There is a definite need for proper load balancing here! Also, it may make sense to separate interactive machines into strict gateways (oriented on minimal latency of connections, with very limited number-crunching privileges) and interactive-job boxes (the opposite).

 

Comments by Seaborg Users:   3 responses

Seaborg is a little slow, but that is to be expected. The charge factors on the newer, faster machines are dauntingly high.

Our project relies primarily on our ability to submit parallel jobs to the batch queue on Seaborg. To that end, the current setup is more than adequate.

There have been persistent problems with passwords (to seaborg) being reset or deactivated. In one case my password was deactivated but I was not informed (via email or otherwise. This may have been the result of a security breach at our home institute. Several hours are lost trying to regain access to NERSC.

Software

 

Legend:

Satisfaction | Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied 5.50 - 6.49
Somewhat Satisfied 4.50 - 5.49
Significance of Change
significant decrease
not significant

 

Software Satisfaction - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item 1 / 2 / 3 / 4 / 5 / 6 / 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
DaVinci SW: C/C++ compilers         1 3 9 13 6.62 0.65  
Bassi SW: Fortran compilers 1 1     3 18 50 73 6.52 1.02  
Seaborg SW: Fortran compilers   1 1 7 2 35 80 126 6.45 0.93 -0.04
PDSF SW: C/C++ compilers         3 13 17 33 6.42 0.66 -0.18
DaVinci SW: Software environment       1 2 7 14 24 6.42 0.83  
Seaborg SW: Software environment     1 6 5 51 77 140 6.41 0.81 0.02
Seaborg SW: C/C++ compilers     1 6 4 24 55 90 6.40 0.93 0.03
Seaborg SW: Programming libraries   1   7 7 35 60 110 6.32 0.96 -0.09
Bassi SW: Software environment 2   2   4 29 44 81 6.30 1.17  
Jacquard SW: Software environment       4 6 29 34 73 6.27 0.84 0.15
Bassi SW: C/C++ compilers   2   2 2 15 27 48 6.27 1.18  
Bassi SW: Programming libraries 1 1   4 3 15 36 60 6.27 1.25  
Jacquard SW: C/C++ compilers   2   1 4 19 28 54 6.26 1.10 0.11
DaVinci SW: Fortran compilers 1       1 6 10 18 6.22 1.44  
Seaborg SW: Applications software     1 8 6 35 41 91 6.18 0.97 0.01
Jacquard SW: General tools and utilities       3 3 24 17 47 6.17 0.82 0.19
Jacquard SW: Programming libraries   2   3 5 18 28 56 6.16 1.17 0.24
PDSF SW: Fortran compilers       1 3 6 7 17 6.12 0.93 -0.08
Jacquard SW: Visualization software       1 3 6 7 17 6.12 0.93 0.58
Jacquard SW: Fortran compilers   1 5 2 4 18 34 64 6.11 1.30 0.38
Seaborg SW: General tools and utilities     2 8 6 45 37 98 6.09 0.97 -0.00
Jacquard SW: Applications software 1     3 2 21 17 44 6.09 1.14 0.31
Bassi SW: General tools and utilities   1 1 5 3 20 22 52 6.04 1.17  
Bassi SW: Applications software 1     4 6 18 20 49 6.02 1.18  
DaVinci SW: Visualization software   1   2 1 6 9 19 6.00 1.37 0.57
PDSF SW: Software environment   2   1 6 14 13 36 5.92 1.25 -0.52
PDSF SW: Applications software     2 2 4 10 10 28 5.86 1.21 -0.28
PDSF SW: Programming libraries     1 3 7 9 11 31 5.84 1.13 -0.62
Bassi SW: Performance and debugging tools 1 2 2 3 6 20 19 53 5.77 1.45  
Seaborg SW: Performance and debugging tools   3 6 7 13 38 28 95 5.69 1.31 -0.31
PDSF SW: General tools and utilities   1 2 4 4 14 9 34 5.62 1.33 -0.58
Jacquard SW: Performance and debugging tools   4 1 2 6 20 11 44 5.59 1.45 0.24
PDSF SW: Performance and debugging tools 1   3 3 5 10 9 31 5.48 1.52 -0.52
Seaborg SW: Visualization software     1 12 5 15 9 42 5.45 1.19 -0.08
Bassi SW: Visualization software 1 1   4 2 9 5 22 5.36 1.62  

 

Software Satisfaction - by Platform

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item as: 1 2 3 4 5 6 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
Bassi SW: Fortran compilers 1 1     3 18 50 73 6.52 1.02  
Bassi SW: Software environment 2   2   4 29 44 81 6.30 1.17  
Bassi SW: C/C++ compilers   2   2 2 15 27 48 6.27 1.18  
Bassi SW: Programming libraries 1 1   4 3 15 36 60 6.27 1.25  
Bassi SW: General tools and utilities   1 1 5 3 20 22 52 6.04 1.17  
Bassi SW: Applications software 1     4 6 18 20 49 6.02 1.18  
Bassi SW: Performance and debugging tools 1 2 2 3 6 20 19 53 5.77 1.45  
Bassi SW: Visualization software 1 1   4 2 9 5 22 5.36 1.62  
 
DaVinci SW: C/C++ compilers         1 3 9 13 6.62 0.65  
DaVinci SW: Software environment       1 2 7 14 24 6.42 0.83  
DaVinci SW: Fortran compilers 1       1 6 10 18 6.22 1.44  
DaVinci SW: Visualization software   1   2 1 6 9 19 6.00 1.37 0.57
 
Jacquard SW: Software environment       4 6 29 34 73 6.27 0.84 0.15
Jacquard SW: C/C++ compilers   2   1 4 19 28 54 6.26 1.10 0.11
Jacquard SW: General tools and utilities       3 3 24 17 47 6.17 0.82 0.19
Jacquard SW: Programming libraries   2   3 5 18 28 56 6.16 1.17 0.24
Jacquard SW: Visualization software       1 3 6 7 17 6.12 0.93 0.58
Jacquard SW: Fortran compilers   1 5 2 4 18 34 64 6.11 1.30 0.38
Jacquard SW: Applications software 1     3 2 21 17 44 6.09 1.14 0.31
Jacquard SW: Performance and debugging tools   4 1 2 6 20 11 44 5.59 1.45 0.24
 
PDSF SW: C/C++ compilers         3 13 17 33 6.42 0.66 -0.18
PDSF SW: Fortran compilers       1 3 6 7 17 6.12 0.93 -0.08
PDSF SW: Software environment   2   1 6 14 13 36 5.92 1.25 -0.52
PDSF SW: Applications software     2 2 4 10 10 28 5.86 1.21 -0.28
PDSF SW: Programming libraries     1 3 7 9 11 31 5.84 1.13 -0.62
PDSF SW: General tools and utilities   1 2 4 4 14 9 34 5.62 1.33 -0.58
PDSF SW: Performance and debugging tools 1   3 3 5 10 9 31 5.48 1.52 -0.52
 
Seaborg SW: Fortran compilers   1 1 7 2 35 80 126 6.45 0.93 -0.04
Seaborg SW: Software environment     1 6 5 51 77 140 6.41 0.81 0.02
Seaborg SW: C/C++ compilers     1 6 4 24 55 90 6.40 0.93 0.03
Seaborg SW: Programming libraries   1   7 7 35 60 110 6.32 0.96 -0.09
Seaborg SW: Applications software     1 8 6 35 41 91 6.18 0.97 0.01
Seaborg SW: General tools and utilities     2 8 6 45 37 98 6.09 0.97 -0.00
Seaborg SW: Performance and debugging tools   3 6 7 13 38 28 95 5.69 1.31 -0.31
Seaborg SW: Visualization software     1 12 5 15 9 42 5.45 1.19 -0.08

 

Comments about Software:   27 responses

 

General (Cross Platform) Software Comments:   9 responses

It would be great to have more up-to-date versions of quantum chemistry packages running at NERSC.

It would be great to add more support for highly parallel molecular dynamics code, most notably NAMD by Klaus Schulten's group.

I think the main difficulty that I run into is not having an up-to-date version of Python available on all the machines. I would like to see versions: 2.3.6, 2.4.4, 2.5. These are the latest stable versions of Python for each of the major releases.
The other thing that I would *really* like is to have more modern MPI implementations - specifically ones that support the MPI-2 spec.

The GNU autotools for building software (autoconf, automake, etc) are frequently out of date which necessitates installing your own version to build some piece of software. Given that these are so commonly used, they should be kept up to date.

Compilers are the bane of our existence. NERSC is no worse than any other site and probably slightly better. The failure of compilers to be ansi compliant is not something I expect to be fixed any time in the near future. Indeed the reverse seems more likely. Perhaps NERSC with DOE behind it could be a leader. Certainly NSF is unlikely to. Otherwise how could they seriously propose Petascale computing.

Debugging with Totalview still seems more painful than necessary --- in parallel mode it is not fun at all (although I have not used it much in parallel mode in FY2006)

The software on all the computers is excellent!

Software resources at NERSC are excellent especially for research in mathematical and physical sciences. NERSC makes special efforts from time to time to upgrade the software and users are advised about the upgrades, etc. in sufficient detail. NERSC deserves thanks from the users for their efforts to provide most recent upgrades.

 

Comments by Bassi Users:   3 responses

CHARMM performance on bassi is worse than that on jacquard, though bassi charges more than jacquard. I don't know whether it is bassi's problem or CHARMM's problem.

I wish there were a way to run some jobs for more than 24 hours on a small number of processors.

I would like it to be available on Bassi.

 

Comments by DaVinci Users:   4 responses

On Davinci, some Fortran library is needed.

I don't use DaVinci. The /project directory has been flakey. I don't want to move data around. I would like to see Trilinos installed.

The only software I use a lot on NERSC machines is MATLAB. Being able to run multiple MATLABs simultaneously on DaVinci, and fairly quickly, has been a huge help to my research program. If there is any way to run MATLAB code (regular code, not parallelized in any way) faster I would like to know about it. Overall I am very satisfied with the resource as it has allowed me to do computations that I would not otherwise have been able to do.

... So far I have been enjoying the various options of visualization softwares (mostly AVS and IDL) available in DaVinci. However, one of the major simulation code I have been recently using, the NIMROD code, has been designed to have its data output format work mostly with the visualization package Tecplot. Tecplot is a commonly used commercial visualization package that is well known for its easy accessibility and short learning curve. Unfortunately it is not available on DaVinci. I requested the consideration of installation of Tecplot on DaVinci about a year ago, based on not only the need from my own project, but also from the more important fact that the installation of Tecplot will benefit a large pool of NERSC users who are also users of the NIMROD code, which is one of the two major fusion MHD codes supported by DOE Office of Fusion Science. Yet my request is still under "evaluation" after nearly a year. I would like to take the opportunity of this annual survey to reinstate my request and concern about this request.

 

Comments by HPSS Users:   1 response

The HPSS interface options are shockingly bad. e.g. Kamland had to resort to writing hsi wrappers to achieve reasonable performance. The SNfactory wrote a perl module to have a standard interface within perl scripts rather than spawning and parsing hsi and htar calls one by one. Even the interactive interface of hsi doesn't have basic command line editing. htar failures sometimes don't return error codes and leave 0 sized files in the destination locations. NERSC should provide C, C++, perl, and python libraries for HPSS access in addition to htar, hsi, etc. The HPSS hardware seems great, but the ability to access it is terrible.
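
The wrapper approach this user mentions (scripting hsi rather than driving it by hand, and guarding against silent failures) can be sketched in a few lines. This is a minimal illustration, not a NERSC-provided interface: the hsi flags and the HPSS path are assumptions, and a real wrapper would also need retries, htar handling, and logging.

    # Hedged sketch of a user-side hsi wrapper: run one hsi command, check the
    # exit status, then verify that the retrieved file is not the zero-length
    # stub a silent failure can leave behind. Paths below are hypothetical.
    import os
    import subprocess

    def hsi_get(hpss_path, local_path):
        """Fetch one file from HPSS via hsi and verify that something arrived."""
        cmd = ["hsi", "-q", f"get {local_path} : {hpss_path}"]
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"hsi failed ({result.returncode}): {result.stderr.strip()}")
        if not os.path.exists(local_path) or os.path.getsize(local_path) == 0:
            raise RuntimeError(f"hsi reported success but {local_path} is missing or empty")
        return local_path

    if __name__ == "__main__":
        hsi_get("/home/projects/run42/output.tar", "output.tar")  # hypothetical paths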

 

Comments by Jacquard Users:   6 responses

Need latest version of NWChem to be installed in Jacquard.

Porting some code to the pathscale compiler has been problematic on jacquard.

The PathScale compilers on Jacquard, particularly the Fortran one, have been quite awkward/awful for us to use. I keep on running into a variety of problems compiling and running our codes on Jacquard with PathScale, which are absent in other machines such as Bassi and Seaborg. Similar experience also applies to the mvapich library on Jacquard. ...

Even if it's heretical, please put Intel compilers on Jacquard. Some software does not support PathScale-specific options and will not compile. I have found Intel compilers, even on AMD machines, to be the most reliable and high-performing compilers for all of my programs.

Interactive debugging of large parallel jobs remotely is difficult even with Totalview, due to network lags and the opacity of the PETSc library. It is impossible on machines such as Jacquard on which there can be long delays in the launch of "interactive" jobs.

I haven't used the system much since the cmb module was upgraded, but my initial impression previously was that support for many quite standard libraries was not immediately apparent. (fftw3, gsl, boost, ATLAS, CBLAS, LAPACK). It is particularly surprising that the AMD math library interface is not compatible with the standard CBLAS interface, but that's obviously not your fault. I think the net result, though, is there are many many copies of these libraries floating around, which users have individually compiled themselves so their existing code would work. On the whole, though, the development support is still excellent.

 

Comments by PDSF Users:   2 responses

We recently switched operating systems on PDSF and are now using Scientific Linux 3.02. Unfortunately, the default installation is fairly bare bones, lacking any kind of graphics software to look at PDF, PNG, GIF etc.

The "RedHat 8 for STAR" CHOS environment on PDSF lacks many small yet highly useful tools, like Midnight Commander for instance. As a result it is necessary to switch to the default profile, which slow things down and makes it impossible to use STAR-specific components together with such tools.

 

Comments by Seaborg Users:   4 responses

I understand the reason why NERSC has to remove imsl from seaborg. But, I am not happy about this action.

The OS of Seaborg seems a bit clunky. For example, users can't use the up arrow to get the most recent commands, and it doesn't do automatic completion.

As earlier, the scope of our project is served well by the current setup at NERSC.

On seaborg, compiling C++ with optimization is very slow.

Visualization and Data Analysis

Where do you perform data analysis and visualization of data produced at NERSC?

Location | Responses | Percent
All at NERSC 10 3.9%
Most at NERSC 35 13.8%
Half at NERSC, half elsewhere 40 15.7%
Most elsewhere 90 35.4%
All elsewhere 71 28.0%
I don't need data analysis or visualization 8 3.1%

Are your data analysis and visualization needs being met? In what ways do you make use of NERSC data analysis and visualization resources? In what ways should NERSC add to or improve these resources?

 

Requests for additional services / problems:   21 responses

Requests for additional software

Improve the support of add on python libraries.

would be nice to have R (open-source S-PLUS)

... The only thing I can think of that I would like to see added is the mapping toolbox for matlab.

I would like to see the Climate Data Management System (CDAT) working on Seaborg (with the GUI)

3D visualization softwares such as AVS, Visit, etc., are hard to learn and to use for the typical researcher. That's why gnuplot is still the preferred tool for analysis and vis for many. NERSC's resources are extremely good for analysis and vis but the thing missing is a closer working relationship between members of the vis group and the researchers. Tailored analysis and visualization tools for specific applications would be great but researchers usually don't know what they want or what they are missing... Maybe the vis group should take the initiative of building a few of those vis tools for chosen applications and publicize them.

I know NERSC works at improving viz (which for me means both data analysis and visualization) but the codes we currently run at NERSC don't need any high end viz. Someday we may be doing long MD (version QMC) calculations. Then I will want NERSC to even have or install our real-time multi-resolution analysis software that allows us to detect stable structures and transitions, currently over 2-to-the-12th time scales.

... Sometimes I used XmakeMol compiled by myself at Home directory, which is light and good for atomistic structural analysis. Can you offer higher visualization/analysis softwares likewise?

So far I have been enjoying the various options of visualization softwares (mostly AVS and IDL) available in DaVinci. However, one of the major simulation code I have been recently using, the NIMROD code, has been designed to have its data output format work mostly with the visualization package Tecplot. Tecplot is a commonly used commercial visualization package that is well known for its easy accessibility and short learning curve. Unfortunately it is not available on DaVinci. I requested the consideration of installation of Tecplot on DaVinci about a year ago, based on not only the need from my own project, but also from the more important fact that the installation of Tecplot will benefit a large pool of NERSC users who are also users of the NIMROD code, which is one of the two major fusion MHD codes supported by DOE Office of Fusion Science. Yet my request is still under "evaluation" after nearly a year. I would like to take the opportunity of this annual survey to reinstate my request and concern about this request.

We use our own software for data analysis and do not rely on external, commercial software, our needs are satisfied by using the CERN ROOT package.
However, we currently lack any basic graphics visualization tools on PDSF. By this I mean a tool to look at PDF, GIF, PNG etc. We often create graphs in batch mode and these can only be viewed by copying them back to the desktop machine. We would like to see some basic graphics package installed on SL302 on PDSF.

Requests for more resources

We do most of our visualization in house with IDL, on serial machines. We have begun working with the visualization group for advanced visualization. The best addition would be stabilization and increase in capacity of shared file systems to make interoperation between code running machines and analysis easier. Added capacity in scratch and other file systems would also be very helpful; we often need to store and analyze large data sets, which often requires special arrangement.

I write my own analysis codes which must be run on the large machines (mainly Bassi) since DaVinci is not large enough. I move results from these to local resources where I visualize them and process further. I am happy with the situation. It would be nice to have a larger post-processing machine though, since post-processing development is quite iterative for me and this doesn't fit with the long queue times on the production machines.

I think the current way that queues are structured has a very significant and adverse effect on the ability of users to do vis/analysis on NERSC resources. The difficulty is that for large data sets, massive computational power is needed for analysis and vis. Currently the only way of getting that power is by using the production batch queues on the big machines. The problem with this is that it almost entirely eliminates the possibility of doing actual interactive viz and data analysis. In one recent set of runs we were creating multiple 60 GB data dumps and needed to run complicated algorithms to analyze the data and then we wanted to do viz. The problem is that we either have to run using:
1. Davinci
2. Interactive queues on the big machines.
I realize that it is an extremely difficult problem to schedule jobs that i) require many nodes and ii) need to be executed on demand. But, this is a huge limitation currently when it comes to data viz and analysis.

It is important that a visualization server is available for dedicated data analysis and visualization, as well as software that can leverage the server.

Requests for consulting help

At present, most of our visualization is done at DoD - but intend to switch to doing more at DaVinci. We will then request considerable help from the Visualization Group at NERSC

My group relied on help from NERSC visualization consultants in the past. But it seems too hard for us as regular users to do all of it ourselves.

I am mostly glad with the data analysis and visualization support on nersc. 3D visualization might be a direction to pursue.

consulting help

Other

NWChem is not working completely well. Certain modules like PMF do not work (at least on Jacquard). The task shell command used in NWChem does not work on Jacquard either.

I use matlab on davinci almost daily. Occasionally, I can't start it because of license shortage.

No.

It is very cumbersome to use PDSF when you use modern tools. For instance, I edit files and want to use code management tools found on Mac OS X. However, I do not have enough disk space on AFS. Also, I cannot run batch jobs on PDSF using AFS.
What I want is to mount my PDSF files on my local computer. NERSC does not allow it. As a result, I use my own desktop most of the time. It is simply too hard to use NERSC.

 

Yes, data analysis needs are being met / positive comments:   14 responses

My viz needs at NERSC currently have to do with the LLNL VisIt tool. This past year we (LLNL researchers using VisIt to analyze data from NIMROD runs) came to the NERSC viz group requesting help interfacing NIMROD & VisIt and received *excellent* support.

I use visualization software and had collaborations with the visualization group who have always been very helpful.

My needs are mostly satisfied. I use mostly IDL on daVinci or other platforms. NGF made things easier in that respect for me.

My needs are currently met well by the data analysis capabilities of DaVinci.

I'm happy with DaVinci.

DaVinci for large-scale data analysis

My analysis and visualization needs are being met. I use DaVinci a lot with very large data sets. Most often I use Matlab, Grads, and NCL. ...

I use matlab and mathematica, and I am satisfied with the current level of resources.

Yes. Serial queues on Jacquard or Bassi with my own software.

I use xmgr and gnuplot routinely. But that's about it.

I have not worked with the visualization group yet. My approach so far has been to use IDL and python/gnuplot to run where the data is. I have not explored the use of DaVinci and if that will require moving large dump files (which will likely be less efficient than postprocessing where the data is).

Satisfied.

They are met.

Seems OK

 

Do vis locally:   14 responses

I do the data analysis on my own PC. ...

I use my desktop for visualization and data analysis.

I do all post-processing and visualization off-site.

We export our produced data to other non-NERSC machines for final analysis and visualization where we have better X connections, better control of software configuration, better uptime, etc. I have not explored non-PDSF options at NERSC for these things; PDSF is simply not stable enough or designed for this kind of work. For the most part our final analysis and visualization needs are fairly modest and are well served by a mini cluster under our own control rather than having to submit proposals, share a cluster with other users, etc. to use NERSC resources for these needs.

Most of my visualization is done in Matlab, requiring moving large blocks of data to my local computer. This can sometimes be time consuming.

I usually do elsewhere, so not important to me.

I do not use the data analysis and visualization resources on NERSC. All of that is handled on local machines.

I don't use data analysis and visualization resources on machines at NERSC. I use local machines instead.

I do all visualization and data analysis elsewhere, because I have everything set up and I do not need a lot of resources.

It is easier for me to process data on a local machine because for data analysis I don't have to wait in a queue. For visualization and manipulating X windows, it is much better to be local.

I do analysis and visualization at our facility. I'm not sure that's the best solution for us, but it's the way we do it now.

We analyze our data locally. Data analysis is inexpensive for our projects. We don't use data analysis and visualization software at NERSC.

I have checked out that the matlab graphics works on Jacquard. However I have used the software for real work at OSC - it is closer and the same time zone if I have to do phone consultation.

Most of my data analysis and visualization take place off site.

 

Network speed is an inhibitor:   9 responses

I use simple visualization tools such as gnuplot to do quick checks of data. More complex visualization is performed elsewhere. Typically you do not want to attempt to perform complex visualizations on a remote resource at NERSC because of slow internet connectivity. You would not be able to interactively work with the visualization.

Data transfer from NERSC to NREL, Colorado is so slow that I cannot use any visualization software in production level. ...

To improve the speed of network connectivity so that remote visualization will be more convenient.

I'm satisfied with most of the service and hardware and software. But I'm using my account in China mostly. Sometimes when I connect to PDSF through ssh, the transfers are so slow that I almost can't work. Can it be improved?

I try to use Mathematica and Maple on DaVinci, but forwarding X-services is quite slow and tedious. Perhaps it's my network connection as well, but using X-windows remotely is too slow for me.

Overall, our network connection is too slow to even use Xwindows easily, so I usually just use a dumb terminal window.

Sometime I just want to do simple visualization using tools such as matlab. But the connection is very slow from my pc.

I've noticed that network response for IDLDE (the graphical UI with IDL) is very slow. It's typically been quicker to just copy everything to my local machine and work here. This isn't any great inconvenience for me, since I have most of what I need here.

I use python, pytables to access HDF5 data, then gnuplotpy. I do mostly batch generation of 2D plots, as network connectivity is too poor to do more. Also, the idea of moving data around nersc to get it on the right machine is clumsy.
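
A batch workflow like the one this user describes (PyTables reading HDF5, plots written straight to files with no interactive X connection) can be condensed to a few lines. The sketch below is only illustrative: the file and dataset names are hypothetical, and matplotlib's file-only Agg backend stands in for the gnuplot-py step the user mentions.

    # Hedged sketch: read a 2D field from an HDF5 dump with PyTables and write
    # a plot to disk, with no X forwarding involved. Names are hypothetical.
    import matplotlib
    matplotlib.use("Agg")                 # pure file output, no display needed
    import matplotlib.pyplot as plt
    import tables

    with tables.open_file("dump_0042.h5", mode="r") as h5:   # hypothetical file
        density = h5.root.fields.density.read()              # hypothetical node

    plt.imshow(density, origin="lower")
    plt.colorbar(label="density")
    plt.savefig("density_0042.png", dpi=150)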

 

Need more information / training:   8 responses

Make some tutorial webpages on the use of visualization software.

We would like to use these services more. Providing more information of the form 'Getting Started with Analyzing your Data on DaVinci, serial queues on Seaborg' would be helpful.

I know that the information is available but I don't have time to spend to learn new software. Then, my position is that there is not enough information available to easily access the software. It might have a tutorial but I am not aware; then having a tutorial with some examples how to use would help to start using such software and machines.

I don't know how to use those softwares, so I have to download to my local computer and use some window softwares.

More frequent on-site user training. The problem is that we do not have the resources to come to NERSC for such training. We love to use NERSC facility for visualization and data analysis.

Not familiar with the NERSC data analysis and visualization resources available to me. It would be helpful to better understand what resources are available to me.

It would be great for the visualization resources to be more visible - eg I don't really know what is available for users. Maybe you should publicize this more?

My basic problem is to get up to speed with what is available. I am reluctant to learn new things when I want to get something achieved. This is my problem and not NERSC. Of the three software problems I have had, the staff has been extremely competent and helpful on two of these. The current problem is still on-going and is something I need to better understand.
I guess to improve services, it would be difficult to identify what would be required. I have gone through the manuals but find it always easier when you talk to a human being. For visualization capabilities, I am unaware if a manual or sample case exists. This would be helpful. I have stored the IBM manual on my desktop to help debug problems and understand system usage. Does a similar capability exist for visualization?

 

Don't use (yet):   8 responses

I have not started to use the NERSC data analysis and visualization resources. But, these are very important and we will begin to realize and utilize these resources as much as possible.

I really should do more with visualization. It is becoming increasingly important.

I have not had time this year to really explore use of DaVinci --- in FY07 I hope to really get to use it.

I am a new user and I have several students using the facilities. We are getting up to speed on the systems and that is taking somewhat longer than we thought. This problem is one that is local.

I don't use the available visualization tools.

I don't use this

I do not use data analysis and visualization on NERSC machines

I do not use those tools.

 

Yes, I can do data analysis, but poor performance:   3 responses

Not sure what 'data analysis' means in this context. 80% of what I do is called 'data analysis' and all is done on PDSF. And mostly fine, except slowness/outages. I use ROOT at PDSF for 'visualization' (making plots).

yes, it is good. But, this year I experienced inefficiency of PDSF more often than last year, i.e., sometimes PDSF is terribly slow.

I mainly use Matlab on Jacquard for data analysis. DaVinci's performance on running Matlab is very poor. I also tried to use Visit for data visualization but somehow the performance in speed is below my expectation.

 

Other:   2 responses

... I only use the IPM module and I'm not really satisfied with it since the results are displayed on the net one day later

I usually check the websites before submitting large numbers of jobs. Given that I don't submit jobs on a regular basis, this has been very helpful.

HPC Consulting

Legend:

Satisfaction | Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied 5.50 - 6.49
Significance of Change
significant decrease
not significant

Satisfaction with HPC Consulting

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item as: 1 2 3 4 5 6 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
Timely initial response to consulting questions   1 3 2 6 50 136 198 6.57 0.81 -0.08
Followup to initial consulting questions 1   2 3 9 55 117 187 6.49 0.86 -0.08
CONSULT: overall 1 1 2 3 9 59 124 199 6.47 0.90 -0.21
Quality of technical advice 1   2 3 8 66 113 193 6.46 0.84 -0.16
Amount of time to resolve your issue 2   6 6 11 68 103 196 6.27 1.08 -0.14
Software bug resolution 1 1 1 11 9 37 60 120 6.14 1.16 0.04

 

Comments about Consulting:   36 responses

 

Good service:   23 responses

They have always been very quick to respond to problems by email or phone. Excellent work!

Consulting staff is very responsive and proactive in answering questions and resolving issues. Nice work!

Thanks for your prompt, good service!

NERSC consultants are your most important resource.

I think the NERSC consultants are most likely the best that DOE has.

On the rare occasion where technical support was necessary, the technicians I spoke with were courteous and knowledgeable.

Consultants have been extremely helpful.

Excellent job

Excellent consulting service. Many thanks (especially to Dave Turner)!

They provide an excellent service. From the technical questions to simple requests for increased disk allocation.

I've been very satisfied with them. They work well with users (well, me at least) to track down issues and find solutions. I've never felt like they were talking down to me, but they also explain issues well when I'm not acquainted with particulars.

Best consulting staff of all Supercomputing centers...

NERSC provides an exemplary service overall. The uniformity of the programming environments on Seaborg, Bassi and Jacquard is excellent!

Very good and responsive folks.

Great job!

All the times I have used NERSC consulting this year I have been very happy --- my kudos to the staff

I like the fast responses - I am impressed

I am very impressed with the speed at which responses are answered, as well as the efficiency of technical support in solving problems.

The issue of new temporary passwords is prompt.

Want to thank all the consultants for a job well done.

I've always been impressed with the consulting services at NERSC.

The few times I have needed assistance, it has been QUICK AND COMPETENT. I feel very fortunate to have such good people working behind the scenes, even if I don't often need them.

NERSC consulting staff is very user-friendly and most consultants go out of their way to solve user problems. This group has been performing as a most reliable and very helpful team and I have found that but for their help and advice I would not have accomplished in my research as much as I did. I believe NERSC consultants are the sine qua non for the NERSC supercomputing facility, as the users have to talk to the consultants as they know best how to talk to the computing machines. My sincerest thanks to you all in this group for a magnificent performance.

 

Mixed evaluation:   7 responses

Our old pdsf consultant, Iwona was terrific. However, when she was not available there was really no one to adequately fill in for her. I have not had very much contact with the new pdsf consultant, so I have no comments, but I've heard he's good. I wish we had more staff that were qualified to fill in when the default pdsf consultant is out.

Quality of technical response and advice varies considerably among different consultants. A more individually based rating system for each consultant might be more helpful.

In general NERSC consulting is the best in the world. Recently, requests have been turned down rather brusquely and some questions/problems have never been followed up on.

The efficiency and expertise of the NERSC consulting staff is unmatched by any other high performance computing facility I've used, and is one of NERSC's best resources. Our one area of disappointment has been in the lag involved in addressing our difficulties in scaling our code to Bassi; I hope this will happen soon.

One ongoing problem with our use of the PDSF cluster is the lack of support on nights and weekends. Often our processing pipeline, which requires < 24-hour turn-around on a large amount of data each night, is effectively brought to a halt because of e.g. a bad node draining jobs from the queue. The existing PDSF staff is quite knowledgeable and helpful, but overworked; we have been trying to work around known problems in generic ways, but there's no substitute for having someone on call if an unforeseen problem arises.

Debugging software is not easy. On basic problems, it works like a charm; however, on difficult issues, some additional thought is warranted. For example, somehow TABs were inserted in my software. It is difficult to identify the source of this as in: computer operations, my incorrectly using the editor, use of debugging tools, or causes yet to be determined.
It is funny but just when you get things going, it is like a moving train that stops very suddenly...

I am sorry not to be more positive. And I don't keep a log of which of the platforms we use at various sites drive us the most crazy. Remember compilers are our problem. Otherwise we have no problems. Generally we try to properly compile code before moving to NERSC. So our need for NERSC help is limited.

 

Unhappy:   6 responses

Often when I send in a problem, I get a short reply that doesn't address my issue. Often it takes two or three exchanges of email to get my issue resolved, and since each exchange takes roughly a day, I'm often waiting 2-3 days for a fix. I run your machines very predictably. When I have a problem, as often as not, it's the machine that's misbehaving. Consultants nearly always assume that it's my error.

Support for systems especially in emergency situations outside of normal business hours is still somewhat lacking.

PDSF is under supported. Eric and company are very good at responding to problems in a timely manner during regular business hours, but I have found the NERSC help desk to be nearly useless for off-hours problems, even when they are quite major (a licensing server being down, a bad node during jobs, ...) Overall I am very satisfied with the PDSF-specific consultants when they are available, and very dissatisfied with the help desk's ability to help (or even page the POC) with PDSF related off-hours problems.

I asked to have software supported and it took two months just to hear that it would not be supported. I am still waiting over two days for a request that should have been responded to in 4 hours. Basically, I cannot use NERSC because the support is so useless. This is in very stark contrast to all other experiences with DOE computing facilities. The support staff at PNNL is 1000 times more effective than anything I've seen at NERSC.

example, discovered substantial performance bug in mpi implementation on jacquard but did not receive a follow-up of the exact problem and the fix from either nersc or the vendor

I have requested info about wien2k software and it has been over 10 days without a response.........I have emailed back and hopefully someone will reply. I initially called NERSC help line and understand there may not be a staff person at the moment with expertise on the software wien2k available in Seaborg.

Services and Communications

 

Legend:

Satisfaction | Average Score
Very Satisfied 6.50 - 7.00
Mostly Satisfied 5.50 - 6.49
Importance | Average Score
Very Important 2.50 - 3.00
Somewhat Important 1.50 - 2.49
Significance of Change
significant decrease
not significant
Usefulness | Average Score
Very Useful 2.50 - 3.00
Somewhat Useful 1.50 - 2.49

 

Satisfaction with NERSC Services

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item as: 1 2 3 4 5 6 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
Account support services 1   1 4 2 47 147 202 6.64 0.76 -0.09
Allocations process   1 1 5 12 70 76 165 6.28 0.85 0.12
Response to special requests (e.g. disk quota increases, etc.)   1 4 3 4 36 50 98 6.24 1.08 -0.11
E-mail lists     2 8 4 33 42 89 6.18 1.03 0.10
Off-hours 24x7 Computer and ESnet Operations support 1 1 3 11 4 21 47 88 6.03 1.37  
NERSC CVS server       3 2 6 6 17 5.88 1.11 -0.32
Visualization services     1 7 3 7 11 29 5.69 1.31 -0.14

 

How Important are NERSC Services to You?

3=Very important, 2=Somewhat important, 1=Not important

Item | Num who rated this item as: 1 2 3 | Total Responses | Average Score | Std. Dev.
Account support services 2 40 144 186 2.76 0.45
Allocations process 3 31 119 153 2.76 0.47
Response to special requests (e.g. disk quota increases, etc.) 4 26 59 89 2.62 0.57
Off-hours 24x7 Computer and ESnet Operations support 14 38 38 90 2.27 0.72
E-mail lists 19 40 24 83 2.06 0.72
Visualization services 26 15 13 54 1.76 0.82
NERSC CVS server 23 11 6 40 1.57 0.75

 

How useful are these methods for keeping you informed?

3=Very useful, 2=Somewhat useful, 1=Not useful

Item | Num who rated this item as: 1 2 3 | Total Responses | Average Score | Std. Dev.
E-mail lists 1 38 156 195 2.79 0.42
MOTD (Message of the Day) 18 71 82 171 2.37 0.67
Announcements web archive 15 87 68 170 2.31 0.63
Phone calls from NERSC 34 43 50 127 2.13 0.81

 

Are you well informed of changes?

Do you feel you are adequately informed about NERSC changes?

Answer | Responses | Percent
Yes 215 96.8%
No 7 3.2%

Are you aware of major changes at least one month in advance?

Answer | Responses | Percent
Yes 202 91.4%
No 19 8.6%

Are you aware of software changes at least seven days in advance?

Answer | Responses | Percent
Yes 199 92.1%
No 17 7.9%

Are you aware of planned outages 24 hours in advance?

Answer | Responses | Percent
Yes 214 98.2%
No 4 1.8%

 

Comments about Services and Communications:   25 responses

 

MOTD / Communication of down times:   10 responses

Frequently I lose contact with SEABORG, or SEABORG goes down, yet the MOTD says nothing.

the MOTD on PDSF is nearly useless because so much scrolls by that the important messages are sometimes off the top of the screen by the time I get my prompt.

sometimes when the pdsf cluster goes down, I'm unsure whether to report it or not. I usually assume someone else has because so many people use it. I find that the pdsf webpage, which is supposed to report its current status, is usually very lacking in staying updated on an hour-to-hour basis for these types of crises

The MOTD on PDSF is too cluttered, it's hard to extract any useful information even if it hasn't scrolled out yet.

The notices are sufficient to inform me.

The email updates work very well.

I would like better email communication of unplanned outages and downtimes.

There are perhaps too many e-mail messages - I tend to lose track of more important ones (like major outages) amongst the many I receive from NERSC.

I don't understand the comment about software changes. Is this with respect to computer libraries or routines or is this with regard to my program?

 

Satisfied:   5 responses

By and large, I am very impressed with the quality and professionalism of the NERSC staff and organization. In my 35+ years of running on academic and government computer systems, NERSC is the best experience I have had.

I would like to express great gratitude for Dr. Andrew Rose's help on PDSF service.

All seems to be fine.

NERSC runs a first-class operation.

I am most satisfied with the NERSC supercomputing facility. I enjoy using this facility even from Canada and I am most grateful to DOE and my PI for providing me access to this state-of the art facility. Thanks .

 

Off hours 24x7 support:   4 responses

Off-hours 24x7 Computer and ESnet Operations support currently does not provide account support (e.g. password reset). It would be more helpful if it provided account support off-hours as well in the future.

Off-hours support for PDSF is limited. The PDSF staff are always helpful and responsive, but if they are not available (off-hours or on travel), critical issues sometimes get delayed. For non-critical issues this is fine, but critical issues, such as GPFS usability (stale file handles) and filesystem slowness, should be addressed by off-hours support staff.

I would like to see better off-hours support for PDSF. The system is currently being run with full support during business hours only; I would like to see this support expanded to off-hours.
The PDSF system has grown up to be a significant system in the NERSC infrastructure and deserves more attention during the non-business hours. One of the problems we occasionally encounter is that one of the batch nodes is "bad" (HW or SW malfunction) and that the node "eats" jobs. Jobs will start on the node, but immediately abort due to the problem on the compute node, then the following job starts, draining the queues down without actually completing the jobs. NERSC operators do not appear to be willing to fix these nodes by taking them out of the batch queue system or other approaches and a node like that can be malfunctioning for a whole weekend. There are some work-arounds for this problem, but they require fairly advanced knowledge of the batch system. I would like to see NERSC support PDSF in the off-hours.

Generally, PDSF personnel seem to work very hard to respond to problems, even off-hours. Is off-hour support official, or just something they do to be nice? Really, PDSF should be officially supported 24/7 by NERSC.

 

Allocations issues:   3 responses

The Allocations process should make more use of external review, and be more open to researchers who are not currently funded by the DOE.

The automatic reduction in allocated hours is very inconvenient, because it does not let us schedule the runs as to suit the needs of the project.

Allocations process in terms of ERCP is good but the overall DOE philosophy of strongly favoring the big-big projects is somewhat short-sighted.
Many of us work on projects which are small now but may grow to be very major players --- i.e., from little acorns, mighty oaks grow and so on. Squeezing us out when small may lead to massive difficulties later.
One can argue it is best to develop MPP codes while a "small" player rather than when a major player (where bugs/mistakes could waste millions of CPU hours).

 

Security issues:   2 responses

I was not informed about the recent lock-out for more than a week. There was no announcement from NERSC. I believe it is NERSC's responsibility to inform the users in advance of such a long recess. Have you ever thought that you wasted about 3% of users' annual research time? It is very precious, if you have not noticed that yet.

As mentioned earlier, I was not informed when my password was deactivated.

Correction of the security issue that occurred last month was handled very professionally and efficiently.

Other

Overall, I am satisfied with NERSC services. But I have never yet heard a response to requests (placed indirectly through my PI) for an increase in the inode quota on my home directories, which would make my work much easier.

Web Interfaces

Legend:

Satisfaction | Average Score
Mostly Satisfied 5.50 - 6.49
Significance of Change
significant increase
not significant

Satisfaction with Web Interfaces

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item as: 1 2 3 4 5 6 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
WEB: Accuracy of information     1 6 7 79 106 199 6.42 0.75 0.02
WEB: NERSC web site overall (www.nersc.gov)     2 4 8 92 105 211 6.39 0.74 0.10
NIM     3 2 19 76 102 202 6.35 0.81 0.19
On-line help desk 1   1 6 11 41 55 115 6.21 1.02 0.04
WEB: Timeliness of information     1 13 21 69 90 194 6.21 0.92 0.09
WEB: Ease of finding information 1   4 9 33 90 72 209 6.02 0.99 0.09
WEB: Searching     3 13 18 41 35 110 5.84 1.09 0.14

 

Comments about web interfaces:   14 responses

Suggestions for improvement /difficulties using

Please update the web pages with queue information more frequently.

http://www.nersc.gov/nusers/resources/PDSF/stats/ (found by going to www.nersc.gov -> PDSF -> Batch Stats) should show the SGE batch statistics.

The web site does not document the process for changing passwords clearly.

the site certificate seems to have issues with some browsers

I occasionally come across outdated info. No other significant issues.

Comments about NIM

NIM should allow users to submit a file (pdf, word document, etc.) for allocations requests rather than fill in an online text box that doesn't allow for formatting, editing, figures, special characters, etc.

The NIM interface looks strange in Firefox because of the frames. Perhaps it's time for an upgrade.

I find the NIM interface hard to use; but I haven't spent too much time learning it, which might be the problem. ...

In general the website is OK to good. However, I find the NIM web user interface very poor and non-intuitive.

NIM.nersc has worked very well for me the few times I've needed it.

Comments about searching

Searching for relevant information, e.g. ways to optimize code on different architectures, suitability of numerical algorithms and libraries for solving specific problems, could and should be improved.

... Searching in general hasn't been too useful, but I don't know if that's the search function or missing content.

Sometimes very hard to find out special flags of xlC compiler.

Good website

The web site is really very useful, both for beginners & for advanced people. I am really very impressed.

Good online help desk

Messages left on the online help desk were answered usually within a few hours.


Training

 

Legend

Satisfaction | Average Score
Mostly Satisfied 5.50 - 6.49
Importance | Average Score
Very Important 2.50 - 3.00
Somewhat Important 1.50 - 2.49
Significance of Change
not significant

 

Satisfaction with Training

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

Item | Num who rated this item as: 1 2 3 4 5 6 7 | Total Responses | Average Score | Std. Dev. | Change from 2005
New User's Guide       3 6 53 49 111 6.33 0.70 -0.04
Web tutorials       4 10 44 31 89 6.15 0.79 -0.07
NERSC classes: in-person       4 2 3 9 18 5.94 1.26 -0.18
Live classes on the web       5 1 9 6 21 5.76 1.14 0.04

 

How Important are these Training Methods?

3=Very important, 2=Somewhat important, 1=Not important

Method | Num who rated this item as: 1 2 3 | Total Responses | Average Score | Std. Dev.
New User's Guide 1 25 69 95 2.72 0.48
Web tutorials 5 27 59 91 2.59 0.60
Live classes on the web 7 13 12 32 2.16 0.77
NERSC classes: in-person 11 11 12 34 2.03 0.83

 

What Training Methods should NERSC Offer?

Method | Responses
Web documentation 124
Web tutorials on specific topics 114
Live web broadcasts with teleconference audio 26
Live classes at LBNL 20
In-person classes at your site 20
Live classes on the web 13

 

Comments about Training:   10 responses

Suggestions

A web tutorial on parallel debugging would be v. useful.

I hope NERSC can provide some traveling support for my students to attend the training on-site.

More training on 3D visualization software and how to include user-developed modules for AVS Express and/or Visit. An easy step-by-step web tutorial would be great.

It would be good to have all past classes available as MPEG downloads if feasible (maybe you do already...?)

Satisfied

I mostly use the PDSF-FAQ, Ganglia pages for PDSF (to see if a server is down if I cannot reach it) and support request form. These all are fine.

I attended the ACTS workshop in 2005 and found it to be very useful and well organized.

Don't need / Other

In general, I think my capabilities were pretty good before I started using NERSC facilities. So in general, I haven't needed too much in the way of training facilities. That's why I don't find the training services particularly useful. I'm sure they are useful for people who need them.

I don't use training.

Being in Ohio, my group takes courses offered by OSC. I have offered to send them to NERSC training. None have felt it necessary. I don't know if any have used online tutorials. OSC does a good job for our general education. Once they have that they can find it on the web.

The last question is tough to answer. This is a resource issue. For example I may not telecommunicate because I do not have the inherent capability...

Comments about NERSC

What does NERSC do well?

 

In their comments:

Their responses have been grouped for display as follows:

 

What should NERSC do differently?

 

In previous years the greatest areas of concern were dominated by queue turnaround and job scheduling issues. In 2004, 45 users reported dissatisfaction with queue turnaround times. In 2005 this number dropped to 24, and this year only 5 users made such comments. NERSC has made many efforts to acquire new hardware, to implement equitable queueing policies across the NERSC machines, and to address queue turnaround times by adjusting the duty cycle of NERSC systems, and this has clearly paid off. The top three areas of concern this year are job scheduling, more compute cycles, and software issues.

10: Change job scheduling / resource allocation policies
10: Provide more / new hardware; more computing resources
10: Software issues
 9: No suggestions / Satisfied
 7: Allocations issues
 7: Fix / improve hardware
 6: Data Management / HPSS Issues
 5: Improve queue turnaround times
 4: Provide different resources / resources for smaller jobs
 4: Security issues
 3: Account and Accounting issues
 2: Improve consulting services
 2: Visualization improvements
 2: Web Improvements

How does NERSC compare to other centers you have used?

 

41: NERSC is the best / overall NERSC is better / positive response
11: NERSC is the same as / mixed response
 4: NERSC is less good / negative response
11: No comparison made

 

What does NERSC do well?   113 responses

  Provides access to multiple HPC resources / well managed center

Network connectivity is good. HPSS is reliable. Bassi and Seaborg are reliable. This makes post-processing large runs less of a headache than other places.

Fast computers with the software we need.

The computers are stable and always up. The consultants are knowledgeable. The users are kept well informed about what's happening to the systems. The available software is complete. The NERSC people are friendly.

Keeping the most advanced hardware available in a stable environment with easy access.

Availability of resources are good. Performance of computers are good. Documentations are good.

The facilities are good, queue times are shorter than at other facilities, and the administration is responsive and prompt at allocating time.

Discount charging program for large jobs is great.

Good computing infrastructure and excellent support.

NERSC runs a reliable computing service with good documentation of resources. I especially like the way they have been able to strike a good balance between the sometimes conflicting goals of being at the "cutting edge" while maintaining a high degree of uptime and reliable access to their computers.

Variety of hardware. Long term support for hardware (even if newer generation hardware is already available).

NERSC is doing a very good job. It is very important to me, since I need to analyze a large amount of data. NERSC is fast and stable.

NERSC has an excellent hardware and software resources, which are very important. I am most pleased with our request and acquisition of allocation hours, and the outstanding Help support (timeliness and accuracy).

Provides reliable hpc resources - hardware and software. Long term time allocations and sensible time allocation application process both providing a good match to ambitious long term scientific programs. Straightforward and transparent account policies and procedures.

The best part about computing at NERSC is the support and the reliability of the computers. I could use our local computers (LLNL) but the support is not nearly as good nor are the machines as stable.

One of the great benefits for us of using NERSC is the fact that the HPSS and PDSF systems are available. I think that the combination of the two is very powerful for experimental particle physics. We do not use the other resources offered by NERSC because they are not suitable for the type of analysis we do. However, being able to read a large data set from HPSS and process it on PDSF in a finite amount of time is very valuable. I also think that in general, the switch to GPFS as the filesystem of choice for NERSC has been an excellent decision.
I am also impressed by the ease with which one can request (small) resources for a start up project. I recently requested some computing resources for a new project we are planning for and was up and running in a few days. This helps us tremendously in trying to reach our scientific goals. Having worked with a number of computer centers, I have to say that NERSC does this very well.
I also think that NERSC is very sensible with the current overall computer security approach (see also below).
Furthermore, I am glad to hear that NERSC has decided to setup an open source software group. I hope that this group will work on some of the open source software that is in use at NERSC and build up detailed expertise using that software. One of the projects that I hope can be looked at is the Sun Grid Engine (SGE) - the batch queue software in use at PDSF. Perhaps this software can also be used on some of the other computer clusters.

The PDSF specific support staff are very good; they need more help.
HPSS can hold a lot of data.
Access to NERSC computing via ssh and scp is crucial for its overall usability. Please do not go to a keycard/kerberos/gridtoken etc. authentication. This would break much of the automation ability which is vital for large collaborative projects.

The computing resources are very good.
CPU-time allocation process is quick (also for additional time).

NERSC is a very well managed center. The precision and uniformity of the user environment and support is outstanding. I am fairly new to NERSC (INCITE award) but it compares very favorably indeed with NSF centers.
Our research is totally dependent on very large scale computation. I hope we will be able to work with NERSC in the future.

Excellent hardware and software and good communications.

Computing at NERSC is reliable. Documentation is complete and any information needed can be found online.

We compute at NERSC because it has computing resources that far exceed those of our home site. NERSC's support staff has provided very timely responses to our inquiries, and has resolved the few issues we've encountered very quickly. NERSC's support staff has constantly monitored our quotas and usage, and has adjusted allocations for our project in proportion to our usage. The response time for these adjustments is very fast! NERSC's support staff has definitely added to the efficiency and productivity of our project.

NERSC is very well managed and operated.

Generally, NERSC does capacity computing very well, servicing a large community of users; it also has (or soon will have) excellent capability platforms for many jobs, both small and large.

There is no other supercomputing facility in the world where I can carry out my theoretical and computational research in the Physics and Chemistry of Superheavy elements. I have been using the facility for ~ 10 years and I am most satisfied with the hardware, software, consultants,etc., and my first choice would be to use the NERSC facility.

NERSC is very important for my research. Its computer power, support facilities, and the reliability are far better than those provided by other super computer centers.

Good facilities with good support. I've had good turnaround on jacquard (less good on seaborg). But since our code is better suited to jacquard, this is not a problem.

NERSC has plenty of computing power, very good software configuration, and great support and consulting staff.

Excellent management and support of integral high performance computing resources.

NERSC offers unique high performance computing capabilities that enable new avenues in scientific computing, such as our "reaction path annealing" algorithm to explore conformational changes in macromolecules in atomic detail.
NERSC is continuing to improve their computing capabilities and support to users.

Staff has been very helpful. Proposal process is efficient. These resources are a tremendous help for our research. In fact, we could not do everything that we're currently doing without these resources.

Large parallel machines, turn around time, consultant support.

I am familiar with NERSC, and I think you guys provide a good, universal service with emphasis on HPC.

NERSC provides excellent, world-class HPC resources in almost all aspects, from hardware to software. What distinguishes it most from other supercomputing centers is, in my opinion, its superior user support, in both consulting and services, although there is still room for improvement. That has made our scientific work more productive, and that's why NERSC is important to me.

NERSC is the most reliable computational center on which I have ever run large parallel calculations. The systems are stable and the support people are competent and in most cases come back with a solution, or they do show that they take user problems seriously. Very professional team. As I'm working on developing parallel scientific applications I always need to test and produce data on reliable machines.

Queue management has been greatly improved recently, and things seem to move well. Networking is very good. Consulting is very helpful in resolving issues. The machines run well.

I am extremely pleased with NERSC. The resources have always been available when I needed them, they keep me well informed of changes, the machines have been reliable and have performed well, and they have been very quick to solve my problems when I had them (usually expired passwords, which is my fault).

Excellent overall picture. People are trying really hard to satisfy users' requests.

I am most pleased with the services provided by NERSC staff. NERSC is important to me because it provides computing power that we do not have at our home institution.

I have been using NERSC (or MFECC) for 26 years. It always has been and remains the best run supercomputer center in the world. The staff responds to requests and is very helpful in general.

Aside from the sheer number of CPU hours available, NERSC's strengths are its knowledgeable and responsive staff, and its comprehensive list of well-maintained and up-to-date software libraries and compilers. I also appreciate the timely updates about outages, the low numbers of such outages, and the queueing policies that make it possible to run many instances of codes that require 100s of processors for 100s of hours as well as those that use 1000s for 10s.

NERSC provides significant resources and support for those with a minimum of hassle. It is an excellent example of a "user facility," with a sense that it really serves the users, not the people that manage it.

Excellent high-performance computing access, very professionally managed. High reliability.

Good variety of computer architectures and helpful consultants.

I remain quite satisfied with queue times and ease of use.

NERSC is a window to the whole world for me. It is part of my academic life; I cannot do without it. I am very grateful to everyone at NERSC for their continuing good services.

very satisfied with consulting, machine accessibility ....

Consulting Service.
Large scale computations (cannot be done locally)

I trust the expertise on technical issues and the reliability of the availability of the resources (hardware, software, people).

I think NERSC is very well supported, with a very logical layout, and nearly all the tools I would need. This has allowed me to learn the system, and get useful work done in a relatively short time.

I think the NERSC machines are generally well supported and that the organization is solid. Applications are generally well handled, and the organization gives an impression of running a "tight ship".

I think the NERSC consulting service and its software services are the best in the US.

1. (a) For the robust, stable computing environment.
1. (b) I compute at NERSC because certain problems require the memory of 1000 processors.
2. I'm pleased with the fair batch queuing system and the prompt replies to inquiries.

 

NERSC offers state-of-the-art computing platforms and the necessary software for conducting scientific research. I am very satisfied with the support of NERSC in carrying out my research projects.

NERSC provides excellent computational facilities and excellent support.
Since the late 80's NERSC has provided all computational resources for my research activity.

good

  Provides access to powerful, stable computing resources; parallel processing; enables science (focus on hardware and cycles)

NERSC has a lot of computational power distributed in many different platforms (SP, Linux clusters, SMP machines) that can be tailored to all sorts of applications. I think that the DaVinci machine was a great addition to your resource pool, for quick and inexpensive OMP parallelization.

NERSC systems are very stable, and are thus an excellent place for developing code.

Lots of available computing time; easy to get nodes.

There's a lot of computing power at PDSF and the system works. I like things that work.

Running climate models requires large resources; a single Linux box, or even a couple, just does not have the compute power. Bassi has much better turnaround than Seaborg, and if your application can only use fewer processors effectively, it is much better for the work/CPU-time ratio.

Interactive runs on Seaborg - this is the ONLY useful means of debugging my MPP code available on NERSC or NCCS supercomputers.

There are machines that fit my calculations, and there is the possibility to raise (time and space) quotas to perform these very large, very long jobs. I could not do these runs on any other resource that I have access to.

NERSC provides a stable computing environment for work that I could not do anywhere else. Much of my design and analysis work in the area of accelerator modeling would not have been possible without NERSC computing power.

NERSC makes possible the extensive numerical calculations that are a crucial part of my research program in environmental geophysics. I compute at NERSC to use fast machines with multiple processors that I can run simultaneously. It is a great resource.

I use NERSC because I have access to a lot of processors on seaborg.

NERSC provides me with tremendous computing power and availability. I have been a little disappointed with problems on Jacquard, related most likely to the MPI implementation. NERSC consulting was, however, able to help me with that, but it was still a considerable decrease in the usability of the machine.

Computer resources are much better than at other centers (see below).
Just one small comment from my short experience of using NERSC: interactive jobs for testing my own ideas before running longer/larger jobs are a little inconvenient.

Seems to be a reliable, well-maintained system. We take advantage of the parallel processing resources at NERSC.

It is one of the few places where I can do the computations I need to do.

Bassi is really fast; DaVinci's unlimited quota is my favorite.

I use NERSC because Seaborg has significantly more RAM/node than other clusters I have access to.

capability of massive parallelization

Highly efficient clusters.

We enjoy the sizable computing resources in multi-way SMP nodes, in particular the 8-CPU and larger nodes. The large number of nodes permits us to do large simulation batches. We find it possible to do a considerable amount of code performance enhancement on this hardware, thanks to acceptable queue times on the debug and interactive queues and passable performance monitoring tools.

NERSC has the only resources available to complete my computation in a timely manner.

NERSC is important to me because I don't have enough computer resources in my group to perform the computation I need to do for my projects.

NERSC is important to me for the computing power, and that is the main reason why I compute there.

NERSC is important to me because it allows me to run relatively big parallel jobs that I cannot run anywhere else.

NERSC is very important for me to accomplish important research projects.

maintenance (uptime and stable operation of computing nodes)

It is a reliable computing center, e.g., Seaborg is regularly up and by today's standards it is still a powerful parallel computing tool (we will be using Bassi more in the future though).

I do electronic structure using quantum Monte Carlo, so having a robust large computer is of extreme importance to me.

Provides access to large machines.

NERSC is extremely useful for my computing needs. I can effectively run my production jobs at NERSC.

Machine is easy to access.

pdsf

Very helpful and important for my research

Once on the system, I like the ability to run a job any time of day as well as how fast my program runs on the machine...

I work here (at Berkeley Lab) and computing with NERSC is my job.

Most pleased with: short job waiting time, ample CPU resources. Important to me: ample CPU resources.

Different architectures available with a uniform "layout" - the huge number of available nodes allows one to test one's own codes against different amounts of data.

The waiting time on Seaborg has been getting worse and worse. Adding new machines such as Bassi and Jacquard was the right move. Currently I extensively use Jacquard, which shows very reliable performance. However, I feel that the home quota (5 GB) on Jacquard is rather small, even though I have the option to use NFS.

The computing resources at our university (University of Utah) are limited. We need more computing resources to finish our projects in time.

Availability of many processors (>64).
Large memory jobs possible (with 64-bit compilation).
"Minimal" down time.
Jobs "eventually" get done.

I compute at NERSC since one of my programs needs a lot of memory and nodes and runs long. NERSC is for me the next step up from the Ohio Supercomputer Center, which does not have the same machine capability. So I use OSC when developing code or working with smaller codes, and when I need more, I come to NERSC, and I am happy with that.

  Good support services and staff

Consultant support at NERSC is very good - I rate it more highly than other supercomputer centers I have experienced.

Your consultants are great; without them, NERSC wouldn't be very useful to us.

User services is great!

Consulting services are very good.

Consulting is very good at NERSC, and the consultants are kind and responsive to my needs.

fairly good consultant support

As an experimentalist who requests NERSC time in order to collaborate with theorists, I am not a hands-on user (and therefore left most of this survey blank). Because my knowledge of the NERSC computer system is negligible compared to that of the average user, I expected it to be difficult or confusing to request time and manage an account. Yet the NERSC staff has always been very helpful and has made the process as easy and simple as possible. Thank you!

NERSC provides very good user support. I am very satisfied with the way that NERSC people handle users' questions and requests; they are very professional. Also, the NIM website is probably one of the most organized online management systems I have experienced.

  Good software / easy to use environment

The preinstalled application packages are truly useful to me. Some of these applications are quite tricky to install by myself.

I really think the NERSC team is doing a great job of keeping the Fortran compilers and the math libraries working properly. I have used other clusters and have had tons of headaches, while on Seaborg and Bassi my experience compiling codes has been really smooth.

NERSC allows for large quantum chemistry jobs to be run quickly with MOLPRO.

Implements experiment software

The software environment always works as expected. The time from uploading my code and data to having a working production environment is very competitive.

NERSC provides the easiest MPP access for many of us in the DOE sphere. For those of us who do science and do not program 12+ hours a day, the NERSC "interface" is relatively easy to use once you become familiar with it.

  Used to be Good / disappointed

PDSF used to be wonderful - always up, easy to use, lots of user support, etc.

Frankly, NERSC has been an utter disappointment. I thought I would be able to run big jobs quickly and get a lot of science done but instead I've spent all my time trying to figure out how to compile stuff. I only compute at NERSC because it was easy to get the time and the queues are short.

  Survey too long

This survey is much too long. Please try to streamline it next time.

Your email promised this would take only a few minutes. I have run out of town and must leave. Sorry. Should have put these questions first if they are important.

 

What should NERSC do differently? (How can NERSC improve?)   72 responses

  Change job scheduling / resource allocation policies:   10 responses

A little more favorable policies for smaller jobs.

You need a much better queue structure! Not every job runs effectively on a vast number of processors, and those of us with long running jobs that need relatively few processors should be granted a way to use the time that we're allocated.

NERSC response: The best machines to run long running jobs on smaller processor configurations are Jacquard, which has a 48 hour wall limit for 1-32 processor jobs (see Jacquard Batch Queues) and Bassi, which has a 24 hour wall limit for 1-120 processor jobs (see Bassi Batch Queues).

Seaborg is configured to favor jobs using large numbers of processors, and is therefore not the best platform for long running small processor count jobs.
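
As a rough illustration of that guidance (an editorial sketch, not a NERSC tool), the quoted limits can be expressed in a few lines of Python; the numbers come from the response above, while the helper function and its name are hypothetical:

    # Editorial sketch only: encode the wall-limit guidance quoted above and report
    # which machine(s) a long-running, small-processor-count job fits. The limits are
    # the ones stated in the response (Jacquard: 1-32 processors, 48-hour wall limit;
    # Bassi: 1-120 processors, 24-hour wall limit); the helper itself is hypothetical.
    LIMITS = {
        "Jacquard": {"max_procs": 32, "max_wall_hours": 48},
        "Bassi":    {"max_procs": 120, "max_wall_hours": 24},
    }

    def suggest_machines(procs, wall_hours):
        """Return the machines whose quoted limits accommodate the requested job."""
        return [name for name, lim in LIMITS.items()
                if procs <= lim["max_procs"] and wall_hours <= lim["max_wall_hours"]]

    # Example: a 16-processor, 36-hour job fits Jacquard's quoted limits but not Bassi's.
    print(suggest_machines(16, 36))   # ['Jacquard']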

The CPU limit on interactive testing is often restrictive, and a faster turnaround time for a test job queue (minutes, not hours) would help a lot.

Have more processors for interactive runs

The only thing I can think of is the time allocated for debug class jobs. It should be longer.

Longer batch times on Bassi would be very helpful!

I think the best way would be to have a few nodes where the wall time limit is longer than 48 hours, especially for the 64 GB nodes on Seaborg: if a calculation needs that amount of memory, it is also likely to need a longer wall time!

Shorten the queue, or make the wait time more consistent; sometimes a job starts within 1 minute of submitting, sometimes it takes 24-48 hours. It's very difficult to plan jobs and choose which computer to use if the queue is so unpredictable. It would be really helpful if the computer could give me an estimate, after I've submitted something, of how long it will take to get through the queue. Even a crude estimate would help... 1 hour or 48 hours?

It would be good to have a queue for long jobs at PDSF.

The queue structure, which allows very few running jobs per user, heavily favors people running many-node jobs over people running embarrassingly parallel problems. From a queueing perspective, it should not make a difference whether I do my science through one 128-CPU job or 128 one-CPU jobs. There is nothing to say that "better science" is done with large jobs; in fact, embarrassingly parallel jobs likely have less overhead and better efficiency in the queue. This could be accomplished by changing the queue structure so that it is not the number of JOBS that is limited, but the number of NODES, i.e., one user can have at most x nodes at a time, regardless of how he chooses to use those nodes.
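
For what it is worth, the node-based limit this commenter describes can be sketched in a few lines (an editorial illustration of the suggested policy, not NERSC's actual scheduler; all names and the cap value are hypothetical):

    # Editorial sketch of the suggested policy: cap each user's total nodes in use,
    # not their number of running jobs, so one 128-node job and 128 one-node jobs
    # are treated identically. Names and the cap value are hypothetical.
    from collections import defaultdict

    MAX_NODES_PER_USER = 128          # the "at most x nodes at a time" cap
    nodes_in_use = defaultdict(int)   # user -> nodes held by running jobs

    def try_start_job(user, job_nodes):
        """Start a job only if the user's total node usage stays within the cap."""
        if nodes_in_use[user] + job_nodes > MAX_NODES_PER_USER:
            return False
        nodes_in_use[user] += job_nodes
        return True

    # Under this rule a user may run one 128-node job or 128 single-node jobs, but not both.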

  Provide more / new hardware; more computing resources:   10 responses

Better/faster supercomputers. Lower point-to-point latency message passing.

More and faster machines! ...

Upgrading to faster machines will be a nice improvement.

To improve the speed

continue to expand resources

It is always necessary to keep increasing the number and speed of processors as much as possible, ...

The move now is to large numbers of CPUs with relatively low amounts of RAM per CPU. My code is moving the opposite direction. While I can run larger problems with very large numbers of CPUs, for full 3-D simulations, large amounts of RAM per CPU are required. Thus NERSC should acquire a machine with say 1024 CPUs, but 16 or 32 GB RAM/CPU. This would be as much RAM as is available on Franklin!

Get more modern hardware. Seaborg, for instance, is quite old.

Get Franklin online ASAP

Seaborg is getting old and slow, it would be nice to have a new computer with a similar size as Seaborg.

  Software issues:   10 responses

It would be desirable to get rid of the PathScale compilers and the MVAPICH package on Jacquard and replace them with better options. They are bug-prone, and there is no obvious reason why they were chosen in the first place. ...

More variety of compilers on machines like Jacquard. It lacks a Fortran 2003 compiler, for example.

It would be nice to have more quantum chemistry programs available on NERSC like ACESII or QCHEM 3.0.

I would like to see more quantum chemistry support, mostly in the form of keeping the existing software up to date.

Support for software in the field of molecular dynamics/biophysical chemistry could be somewhat improved, but the existing offerings already provide a basis to work with.

... I sporadically have problems with the F90 compilers, and the documentation on the compiler options is mostly incomprehensible.

... also keep the math libraries updated, but I think that is done properly.

better grid support

Improved reliability - software changes break my codes or change the results too often ...

Do you have a large amount of memory available to MATLAB on DaVinci?

  No suggestions / Satisfied:   9 responses

I am already quite satisfied with NERSC.

Nothing, keep up the good work.

It is very very fine as it is.

mainly keep doing what it is doing: bringing on new systems while keeping the old ones available long enough to make the transition easy.

no suggestions - very happy

Difficult to improve an already excellent organization

I really don't know. I'm very satisfied.

Hard to say...

??

  Allocations issues:   7 responses

More adequate and equitable resource allocation based on what the user accomplished in the previous year.

The applications process should allow for submission of a file rather than online text boxes. ...

... NERSC could also implement a clearer and fairer computing time reimbursement/refund policy. For example (Reference Number 061107-000061 for online consulting), on 11/07/2006 I had a batch job on Bassi interrupted by a node failure. LoadLeveler automatically restarted the batch job from the beginning, overwriting all the output files written before the node failure. Later I requested a refund of the 1896 MPP hours wasted in that incident due to the Bassi node failure, but my request was denied, which I think is unfair.

One anxiety I've had is that my project typically only requires the submission of large jobs for short periods of time, followed by periods of testing new codes and working on other projects. But I still feel obligated to find ways to use my allocation, since it will be taken away otherwise, and failing to use a requested allocation puts me at risk for not having it renewed the next year.
I understand it might require additional administrative work, but it might be nice to have single-project applications where a specific block of additional allocation hours could be requested for a particular task.

I would prefer a less formal, more flexible allocation process. I find my need for computational resources can vary significantly through the year and is not always easy to predict in advance. Being required to commit to a certain level of usage a year in advance (with the implication that if it's not used, it will be difficult to get back to that level in subsequent years) seems likely to lead to a certain level of wasteful computing when averaged over all users. This type of an allocation system has been in place at NERSC for many years and I don't claim to have a detailed suggestion as to how to change it. However, it seems timely to consider going to some type of non-allocation based approach of access to resources as is used at other computing centers.

The Allocations process should make more use of external review and be more open to researchers not currently funded by the DOE.

Make it easier to get allocated resources for people who need to do their work on NERSC. For instance, I do not know one year in advance whether I will get support for doing a certain computation. The startup account just does not have enough resources for the computation jobs I need to do. NERSC would mean nothing to me if I could not use it. Also, please be fair to people who use Monte Carlo algorithms. It may be trivial to parallelize a Monte Carlo code, but that does not necessarily mean it can tolerate the high latency or low bandwidth of a home-brewed cluster when scaling to hundreds of processors. And, in some computational problems, Monte Carlo is the way to obtain the most accurate solution.

  Fix / improve hardware:   7 responses

Seaborg needs improvement; it keeps crashing.

Fix the login problems on Bassi!

... PDSF reliability is poor (bad nodes draining jobs, periodic slowdowns, etc.) ...

I am worried that the PDSF cluster is not scaling up very well. As more experiments start running code on PDSF and potentially more machines get installed in the cluster, I think that more operational support will be necessary. I have also the feeling that in the recent year, the system admins of PDSF have become more cavalier in their approach to the whole cluster. ...

The biggest problem I have is with using the PDSF interactive nodes. They often become unresponsive or slow. I commented on this earlier in the survey.

... PDSF interactive responsiveness is poor even on a good day. It can take several seconds to start a vi session, source a script that sets env variables, etc. Login delays of 10s of seconds are common.
It is striking to me that the primary things that I am ranking poorly this year are the same things I complained about last year -- the HPSS software interface is still terrible, production accounts still don't exist, and PDSF is still understaffed/under-supported. The conversion of NFS diskvaults to GPFS based systems is the only thing I can think of that has actually improved at NERSC for me over the past year (and that was a huge improvement, to be fair).

Fix whatever is wrong with interactive use of PDSF. I cannot believe that the backup of user home directories can be having that big an effect on interactive use (what I was told when I filed a support request). I have asked if my problems are related to sl302 vs. rh8 and I am always told that I should not go backward to rh8, that everyone will be transitioned to sl302 soon. It seems like it has been ~1 year since I was first told that. I cannot even use emacs effectively in sl302, and I was advised to use rh8 instead! Conflicting advice, poor performance... if it doesn't improve soon, I'll stop using it.

  Data Management / HPSS Issues:   6 responses

A mass storage system with two servers and hsi/ftp access is inconvenient.
Migrating the HOME file system to disk in the background is easier to handle and allows faster access.

... HPSS should have better interface options. ...

I'd like to see an expanded set of hsi commands, like md5, chksum, gzip, bzip2 and pkzip. I'd also like to be able to use htar to make a compressed archive with either gzip or bzip2, like the tar -j or tar -z options in GNU tar.
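
Until such options exist in htar itself, one user-side workaround is to compress an archive locally before sending it to HPSS. As a small illustration (file names below are placeholders, not anything from the survey), Python's standard tarfile module produces the same gzip- and bzip2-compressed archives that GNU tar's -z and -j options create:

    # Client-side sketch only: build a gzip- or bzip2-compressed tar archive locally
    # (equivalent to GNU tar -z / -j) before transferring it to HPSS with hsi or ftp.
    # It does not add anything to hsi/htar themselves; paths below are placeholders.
    import tarfile

    def make_archive(output, paths, compression="gz"):
        """Write 'output' as a tar archive compressed with 'gz' or 'bz2'."""
        with tarfile.open(output, mode="w:" + compression) as tar:
            for p in paths:
                tar.add(p)

    # make_archive("run042.tar.gz", ["run042/"], compression="gz")
    # make_archive("run042.tar.bz2", ["run042/"], compression="bz2")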

... Reliability of global filesystems would reduce wasted jobs.

Increased storage resources would be very helpful. Global file systems have been started and should be continued and improved.

NERSC needs to improve the disk space available to users. Some users like me need a large disk space where huge, daily generated files are stored. But HPSS seems to be quite a headache for storing files, since one can't open and read files there, and transferring back and forth between it and other machines seems to be quite a slow process.

  Improve queue turnaround times:   5 responses

The waiting time for jobs to run is still on the high side.

One problem has been the long wait in the batch queue for large jobs. This was certainly a problem in 2005 (at some point I complained to Francesca Verdier about this). This year I have not been running as much, but it looks like the situation has improved.

... Improved batch queue throughput for medium/large jobs

The reduction of queue time definitely will improve my productivity and hence the advancement of science itself.

Jobs using a large number of nodes tend to have too long wait times. I know this is not a simple problem to solve, but perhaps there is room for improvement.

  Provide different resources / resources for smaller jobs:   4 responses

I am a little dismayed that NERSC is replacing Seaborg with the Cray XT. I should say that I am a strong supporter of Cray in general, but many centers are moving to "cluster-like" systems that have very weak capability on a per-node basis (for example, limited per-node memory) and the XT is at quite an extreme (but with an outstanding interconnect). Personally, I would have loved to see Seaborg replaced with a large Power6 with very large per-node memory. I expect the XT4 will have superior scaling at high processor count, but I doubt we will be able to use it at all due to its limited per-node memory. Pity.

I think the emphasis on really large machines is not a good thing. The reason is that *most* of the time the large queues are not being used. I think more smaller machines should be built that focus on the queue sizes that are most used.

In the big push to petascale computing, I would urge NERSC not to abandon the needs of smaller-scale codes (those most efficient on 1000 processors or less), which do not max out the capabilities of next-generation machines, but which do good science that cannot be effectively or economically replicated at smaller facilities or clusters.

Don't squeeze out the small (today) users completely in favor of the "big guys" (today); small projects occasionally grow into big ones.

  Security Issues:   4 responses

I don't understand why NERSC doesn't use one time passwords. It makes me a little nervous that the access is not controlled more like other large computer centers.

Do a better job on security. Value users' time and effort. ... In the event of an unplanned outage, give users time to make a backup plan. Can't just lock the whole system out without any notice.

The long Seaborg outage a month or so ago was pretty inconvenient for me; the timing was awful. ...

Notify users promptly by email if their passwords have been reset or deactivated.

  Account and Accounting issues:   3 responses

... The charge factors for newer machines should be reduced.

Having production accounts for collaborations would be quite helpful. ...

... I would like to see the possibility of having group production accounts. This is something that we have requested for the past couple of years. I fully realize that there are security and accounting implications, but there are ways of solving this issue in a way where it is fully trackable who submitted what when. I thought that there was a solution that was going to be implemented, but somehow this never happened.

  Improve consulting services:   2 responses

The consulting people should put more time into solving customers' questions.

Would you send the plumber to fix your roof? No. Why then does NERSC force, for example, a chemist, to do all the system administration and software support he's not very good at, instead of having tech support fix the problem quickly? I really can't fathom why no one at NERSC is willing to help me get my code running. I'm four months into my work at NERSC and have yet to run anything non-trivial.

  Visualization improvements:   2 responses

I hope NERSC can improve its visualization software and hardware.

... So far I have been enjoying the various visualization software options (mostly AVS and IDL) available on DaVinci. However, one of the major simulation codes I have recently been using, the NIMROD code, has been designed to have its data output format work mostly with the visualization package Tecplot. Tecplot is a commonly used commercial visualization package that is well known for its easy accessibility and short learning curve. Unfortunately it is not available on DaVinci. I requested consideration of installing Tecplot on DaVinci about a year ago, based not only on the need of my own project, but also on the more important fact that installing Tecplot would benefit a large pool of NERSC users who are also users of the NIMROD code, which is one of the two major fusion MHD codes supported by the DOE Office of Fusion Science. Yet my request is still under "evaluation" after nearly a year. I would like to take the opportunity of this annual survey to restate my request and my concern about it. ...

  Web improvements:   2 responses

Improve web documentation for software tutorials, etc.

Better information on the collection of software and promoting new tools for use in the scientific community. Help to simplify the use of computers.

  Other suggestions:   2 responses

It would be valuable to some of my applications to enable direct network access to and from the compute nodes, including to and from compute nodes at other DOE SC centers.

Enable connectivity with PCs. Allow a way to use commercial tools found on a PC with the CPU power of NERSC.

 

How does NERSC compare to other centers you have used?   67 responses

  NERSC is the best / overall NERSC is better / positive response:   41 responses

Your staff is more user friendly and that is crucial to success...

NERSC is more professional than other centers that I have used

NERSC has super people, not only very knowledgeable but also very friendly. Thank you so much to all of the people there.

We use ERDC and NAVO systems of DoD. NERSC has better websites, explanations .... for users.

NERSC is my favorite center.

I've used AHPCRC (the Army High Performance Computing Research Center) and the supercomputers at the Minnesota Supercomputer Center. NERSC is much better about communicating problems/changes/new information.

I am also a user of RCF at BNL. I prefer to work on NERSC (PDSF), which is faster than RCF.

NIC, Germany:
NERSC does very well in terms of allocation of CPU-time compared to NIC.
A mass storage system with migration is as simple to use as the NERSC HPSS file system.

cf COSMS, Cambridge, UK:
NERSC has much better facilities, and is considerably more stable.

It is better than ORNL-NLCF and PSC.

Compared to ABCC, NERSC has a faster job turnover rate, shorter job waiting times, and many more CPUs to use.

NERSC is the best one.

NERSC has greater amount of computational resources, easier access and constant availability through the year.

NERSC is better than NCCS at ORNL.

Compared to Jazz at Argonne National Lab, I find NERSC provides much better computing resources in terms of availability and performance.

NERSC is the best computing center I have used. The user support and system administration is top-notch. This is compared to a big cluster I attempted to use at LSU, the SNO grid computers, and locally administered mini-clusters.

I would rank NERSC as one of the top centers among all centers that I have used. The quality of service that NERSC provides is comparable to (or even better than) that of the Minnesota Supercomputing Institute or the National Center for Computational Sciences (ORNL). In my opinion, significant expansion of the current computing facility to accommodate grand challenges in science is probably the most important next step that NERSC should consider. The arrival of the new Franklin machine will definitely narrow the gap, and we are looking forward to testing and porting our codes to the Franklin platform as existing NERSC users.

In terms of reliability and user support it is very good compared with NCCS at ORNL.

Hands down the best. Much better than SDSC, OSCER (our local center), or HLRN

Easy to get large jobs run. PSC,ARSC

Very well.

Very Well

In addition to NERSC, I have made use of the NCCS at Oak Ridge as well as local clusters at my home institution. Compared to the NCCS, NERSC has far better reliability and software support, a less overtaxed and better-informed support-and-consulting staff, and much more informative web resources, with timelier information about system outages, upgrades, and similar issues. I also find NERSC's queueing policies more congenial, and its security measures less of an impediment to productivity.

NERSC is generally better than most other centers

Very good center, very reliable. I ran at NCEP; they had stability problems.

It is as good if not better than all facilities I utilize.

Good. ERDC MRSC

See response to the first question in this section. [NERSC provides excellent, world-class HPC resources in almost all aspects, from hardware to software. What distinguishes it most from other supercomputing centers is, in my opinion, its superior user support, in both consulting and services, although there is still room for improvement. That has made our scientific work more productive, and that's why NERSC is important to me.]

Excellent

NERSC is the best of all.

Compared to Juelich and DESY: NERSC's documentation is far more complete. I also find the queuing system on Seaborg (debug class etc.) much more efficient.

LLNL, SDSC, PSC, NCSA, TACC.
I think you do better than any of these in terms of user support.

NERSC is by far the best compared to ORNL NCCS, ARSC, SDSC, etc.

NERSC is indeed an excellent facility, compared to the Oak Ridge supercomputing center.

NERSC compares very favorably to other centers (eg ORNL, LANL).

NERSC is very good compared to ORNL, OSC, PSC, SDSC.

LCF at ORNL. It compares well in consulting, although ORNL is getting better with the years.

 

Very favorably, in particular as far as the development of a long-term research program is concerned, unlike some other supercomputing centers that often aim at quick benefits at low cost (short allocation periods, difficulties with extensions, etc.). E.g., MareNostrum at Barcelona SC.

NERSC provides by far the best support. Compared to LLNL.

I have used the RHIC Computing Facility at BNL and I believe that NERSC compares very favorably with that cluster. One of the reasons for this is that NERSC appears to have a more pragmatic approach towards the security burden. Most users realize that in these times we need to be careful with computer security, on the other hand, this should not overburden the user. I know that at RCF, some users can no longer do their work because of the security situation there. I think that NERSC has mostly solved this by careful network monitoring and isolation of machines.

I have used Jazz at ANL. The cluster there is smaller, ~300 CPUs. However, the math libraries (i.e. SCALAPACK, FFTW, etc.) and the Fortran compilers are not as well integrated as they are on Seaborg. As a physicist, I am more worried about the science than about software issues; the experience of porting codes to Seaborg has therefore been smoother.

  NERSC is the same as / mixed response:   11 responses

as good as the best compute centers I've used

NERSC is among the top best.

We have used LBL's SCS (scientific cluster support) service in the past. NERSC support's response time and quality of cluster maintenance (both hardware and software-wise) is definitely in a higher league. This is likely due to the fact that SCS has many different cluster setups with different hardware and customized software, so issues are more complex. The only thing I can think of is that SCS's clusters seem to have a more secure gateway, namely that you need a secure key provided by a handheld device to log onto the system.

In terms of production runs, NERSC is doing very well, especially with the Bassi machine, compared to SDSC. However, data visualization still needs improvement.

I also use NCCS at ORNL. NERSC hardware is more stable and performs better. On the other hand, NCCS staff assist users in more direct ways to improve the performance of applications on their machines.

NERSC machines are up more reliably than NCAR & ORNL
We have fewer hardware and software problems with NCAR machines.
We get better turnaround on ORNL and NCAR machines
ORNL has much more responsive consultants.

I have used computing clusters at Fermilab, SLAC, CERN, and BaBar experiment clusters around the world. Generally the computing power at NERSC is better on paper, but the ease of use (production accounts, HPSS software, usable uptime/stability, etc.) seems worse at NERSC. Other centers allow production accounts for processing of collaboration data.

The Seaborg processors are quite a bit slower (~factor of 3-4) than the current Intel Xeon processors we have on our local parallel clusters. The availability of a large number of processors on Seaborg is attractive.
Local clusters can have extensive down times (several days).

I have been using NCSA, PNNL, OSC and the Cornell facilities. NERSC compares well with all other centers. Its strength is reliability and access to large numbers of processors. Its weaknesses are the long wait times on Seaborg and the missing Fortran 2003 compiler.

The TJNAF computer farm (>100 Linux boxes) has a much faster turnaround for small (test) jobs, but there are no real consultants available.

I have been extremely pleased with NERSC, and it compares with the top centers I have used. I have used resources at LLNL, LANL, and Sandia National Laboratories.

  NERSC is less good / negative response:   4 responses

RCF used to be terrible and PDSF was the model site. They have switched places, in my opinion. I also use Livermore Computing, and that system is also better managed and operates more smoothly than PDSF these days.

Hands down the worst I've seen. Other centers give their users timely and effective tech support, even if it means actually spending some time on support requests. If I struggle for more than week installing a code at PNNL, they install it for me so I can get on with my project.

The NASA Advanced Supercomputing Division's Columbia machine, which has much more flexible queueing policies than NERSC and quicker turnaround. (It also has fewer users, which isn't NERSC's fault.)

The allocation process at other centers (such as NCSA) is simpler.

  No comparison made:   11 responses

NERSC is the main center I use

No other centers.

I have allocations also at San Diego and Pittsburgh. San Diego has a queuing policy that favors large jobs. Both of them operate strategic user programs to help the large users develop efficient codes.

OSC has more different architectures around to test things out. That is great -- though I would not recommend NERSC doing the same -- NERSC needs to concentrate on big machines and run them well.

UCSD
UCSD has a smaller number of processors available.

Minnesota Supercomputing Institute, Minneapolis, MN

RCF

Not really sure.

I only use NERSC facilities.

no knowledge

 

NCCS.
