
2009/2010 User Survey Results

HPC Resources

Legend:

Satisfaction              Average Score
Very Satisfied            6.50 - 7.00
Mostly Satisfied - High   6.00 - 6.49
Mostly Satisfied - Low    5.50 - 5.99
Somewhat Satisfied        4.50 - 5.49

Significance of Change:   significant increase / significant decrease / not significant
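
The survey pages do not say which test underlies the "significant increase / decrease" labels. One common way to test a year-over-year change in mean score, given only each year's mean, standard deviation, and response count, is Welch's two-sample t-test; the following is a sketch under that assumption, not NERSC's documented method. The 2010 figures and the -0.68 change come from the "FRANKLIN: Batch wait time" row below; the 2009 standard deviation and response count are invented for illustration.

    # Sketch only: not necessarily the test NERSC used for its significance labels.
    from scipy.stats import ttest_ind_from_stats

    # 2010 "FRANKLIN: Batch wait time": mean 4.87, std. dev. 1.43, 303 responses.
    # The reported change is -0.68, so the 2009 mean was 5.55; the 2009
    # std. dev. (1.40) and response count (280) are ASSUMED for illustration.
    t, p = ttest_ind_from_stats(mean1=4.87, std1=1.43, nobs1=303,   # 2010
                                mean2=5.55, std2=1.40, nobs2=280,   # 2009 (assumed)
                                equal_var=False)                    # Welch's variant
    print(f"t = {t:.2f}, p = {p:.2g}")  # a small p-value marks the change significant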

 

Hardware Satisfaction - by Score

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied
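
The tables do not spell out how the "Average Score" and "Std. Dev." columns are computed, but the published figures are consistent with a count-weighted mean and a sample (n-1) standard deviation over the 1-7 ratings. A minimal Python sketch, checked against the "PDSF: Uptime (availability)" row:

    import math

    def summarize(counts):
        # counts[k] = number of respondents who rated the item k (k = 1..7)
        n = sum(counts.values())
        mean = sum(k * c for k, c in counts.items()) / n
        var = sum(c * (k - mean) ** 2 for k, c in counts.items()) / (n - 1)
        return n, round(mean, 2), round(math.sqrt(var), 2)

    # "PDSF: Uptime (availability)": nine ratings of 6 and twenty-two of 7
    print(summarize({6: 9, 7: 22}))   # -> (31, 6.71, 0.46), matching the table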

(Columns 1-7: number of respondents who gave each rating; "-" = none. "n/a" in the last column: no 2009 result to compare against.)

Item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Total Responses | Average Score | Std. Dev. | Change from 2009
PDSF: Uptime (availability) | - | - | - | - | - | 9 | 22 | 31 | 6.71 | 0.46 | 0.35
HPSS: Reliability (data integrity) | - | - | 2 | 2 | 2 | 33 | 124 | 163 | 6.69 | 0.68 | 0.01
HPSS: Uptime (Availability) | - | - | - | 2 | 5 | 41 | 115 | 163 | 6.65 | 0.60 | 0.02
GLOBALHOMES: Reliability | - | - | - | 7 | 5 | 47 | 160 | 219 | 6.64 | 0.68 | n/a
PROJECT: Reliability | - | - | - | 3 | 4 | 24 | 79 | 110 | 6.63 | 0.69 | 0.08
GLOBALHOMES: Uptime | - | - | 1 | 4 | 7 | 58 | 151 | 221 | 6.60 | 0.68 | n/a
PROJECT: Uptime | - | - | 1 | 4 | 3 | 24 | 78 | 110 | 6.58 | 0.79 | 0.03
PROJECT: Overall | - | - | - | 3 | 5 | 32 | 79 | 119 | 6.57 | 0.70 | 0.26
NETWORK: Network performance within NERSC (e.g. Hopper to HPSS) | - | - | - | 3 | 10 | 51 | 114 | 178 | 6.55 | 0.68 | 0.04
GRID: Access and Authentication | - | - | - | 1 | 3 | 26 | 39 | 69 | 6.49 | 0.66 | 0.06
GLOBALHOMES: Overall | 1 | 1 | 3 | 6 | 7 | 65 | 146 | 229 | 6.48 | 0.92 | n/a
HPSS: Overall satisfaction | 2 | 1 | 1 | 1 | 7 | 55 | 104 | 171 | 6.46 | 0.95 | 0.02
GRID: Job Submission | - | - | - | 3 | 4 | 22 | 38 | 67 | 6.42 | 0.80 | -0.07
PROJECT: File and Directory Operations | - | 1 | 1 | 7 | 4 | 23 | 68 | 104 | 6.41 | 1.02 | 0.21
PDSF: Overall satisfaction | - | - | - | - | 2 | 15 | 15 | 32 | 6.41 | 0.61 | 0.12
CARVER: Uptime (Availability) | 1 | - | - | 8 | 5 | 26 | 68 | 108 | 6.39 | 1.03 | n/a
GRID: File Transfer | - | - | 1 | 3 | - | 28 | 35 | 67 | 6.39 | 0.83 | 0.10
PROJECT: I/O Bandwidth | - | - | 1 | 8 | 8 | 23 | 67 | 107 | 6.37 | 0.98 | 0.14
GLOBALHOMES: I/O Bandwidth | - | - | 2 | 13 | 13 | 61 | 122 | 211 | 6.36 | 0.92 | n/a
PDSF: Ability to run interactively | - | - | - | 2 | 2 | 8 | 16 | 28 | 6.36 | 0.91 | 0.21
CARVER: Overall | - | - | - | 6 | 6 | 41 | 57 | 110 | 6.35 | 0.82 | n/a
GLOBALHOMES: File and Directory Operations | 2 | - | 3 | 11 | 10 | 59 | 118 | 203 | 6.33 | 1.06 | n/a
HPSS: Data transfer rates | 1 | - | 3 | 4 | 15 | 51 | 90 | 164 | 6.32 | 0.98 | 0.07
GRID: Job Monitoring | 1 | 1 | - | 4 | 1 | 23 | 39 | 69 | 6.30 | 1.15 | -0.26
HPSS: Data access time | 1 | 1 | 3 | 5 | 9 | 61 | 80 | 160 | 6.27 | 1.02 | -0.09
PDSF: Disk configuration and I/O performance | - | - | 1 | 1 | 3 | 10 | 16 | 31 | 6.26 | 1.00 | 0.32
PDSF SW: Performance and debugging tools | - | - | - | 2 | 1 | 10 | 11 | 24 | 6.25 | 0.90 | 0.29
HOPPER: Uptime (Availability) | - | - | 2 | 7 | 12 | 60 | 66 | 147 | 6.23 | 0.89 | n/a
PDSF SW: Programming environment | - | - | - | 2 | 3 | 12 | 14 | 31 | 6.23 | 0.88 | -0.20
PDSF: Batch queue structure | - | - | - | 2 | 5 | 8 | 16 | 31 | 6.23 | 0.96 | 0.01
PDSF SW: General tools and utilities | - | - | 1 | 2 | 1 | 11 | 14 | 29 | 6.21 | 1.05 | 0.07
DaVinci: Uptime (Availability) | - | - | 1 | 7 | - | 10 | 22 | 40 | 6.13 | 1.22 | -0.31
FRANKLIN: Overall | - | 2 | 6 | 10 | 35 | 142 | 117 | 312 | 6.12 | 0.94 | 0.37
NETWORK: Remote network performance to/from NERSC (e.g. Hopper to your home institution) | - | - | 11 | 14 | 23 | 66 | 108 | 222 | 6.11 | 1.13 | -0.04
HOPPER: Overall | 1 | - | 3 | 10 | 15 | 60 | 62 | 151 | 6.09 | 1.06 | n/a
CARVER: Disk configuration and I/O performance | 2 | - | 2 | 11 | 5 | 26 | 49 | 95 | 6.06 | 1.33 | n/a
PDSF SW: Programming libraries | - | - | 3 | - | 4 | 8 | 14 | 29 | 6.03 | 1.27 | -0.30
FRANKLIN: Uptime (Availability) | - | 4 | 13 | 12 | 42 | 119 | 118 | 308 | 5.99 | 1.13 | 1.08
PDSF SW: Applications software | - | - | 1 | 1 | 4 | 14 | 8 | 28 | 5.96 | 0.96 | -0.27
FRANKLIN: Disk configuration and I/O performance | 1 | 1 | 3 | 36 | 25 | 112 | 103 | 281 | 5.96 | 1.10 | 0.35
FRANKLIN: Ability to run interactively | 1 | - | 3 | 38 | 19 | 76 | 94 | 231 | 5.94 | 1.17 | 0.18
HPSS: User interface (hsi, pftp, ftp) | 2 | - | 6 | 14 | 25 | 46 | 70 | 163 | 5.93 | 1.25 | -0.09
PDSF SW: CHOS | 1 | - | - | 1 | 6 | 8 | 11 | 27 | 5.93 | 1.33 | -0.19
CARVER: Ability to run interactively | 1 | 1 | 1 | 11 | 9 | 19 | 37 | 79 | 5.92 | 1.34 | n/a
DaVinci: Ability to run interactively | - | 1 | 1 | 7 | 1 | 9 | 19 | 38 | 5.92 | 1.40 | -0.39
CARVER: Batch queue structure | 1 | 1 | 3 | 13 | 10 | 29 | 45 | 102 | 5.91 | 1.31 | n/a
HOPPER: Disk configuration and I/O performance | 3 | 1 | 1 | 19 | 12 | 45 | 55 | 136 | 5.88 | 1.34 | n/a
DaVinci: Disk configuration and I/O performance | - | 1 | - | 7 | 3 | 12 | 16 | 39 | 5.87 | 1.28 | -0.47
DaVinci: Overall | - | - | 3 | 5 | 5 | 11 | 17 | 41 | 5.83 | 1.30 | -0.38
CARVER: Batch wait time | - | 3 | 8 | 9 | 12 | 22 | 47 | 101 | 5.81 | 1.45 | n/a
FRANKLIN: Batch queue structure | 3 | 2 | 9 | 37 | 44 | 121 | 82 | 298 | 5.71 | 1.21 | -0.19
HOPPER: Batch queue structure | 2 | - | 5 | 19 | 22 | 56 | 38 | 142 | 5.67 | 1.24 | n/a
HOPPER: Ability to run interactively | 2 | 1 | 2 | 28 | 2 | 33 | 38 | 106 | 5.62 | 1.45 | n/a
PDSF SW: STAR | - | - | 3 | 2 | 3 | 7 | 6 | 21 | 5.52 | 1.40 | -0.67
HOPPER: Batch wait time | 1 | 5 | 16 | 18 | 35 | 45 | 26 | 146 | 5.19 | 1.41 | n/a
FRANKLIN: Batch wait time | 7 | 10 | 43 | 43 | 81 | 90 | 29 | 303 | 4.87 | 1.43 | -0.68

 

Hardware Satisfaction - by Platform

7=Very satisfied, 6=Mostly satisfied, 5=Somewhat satisfied, 4=Neutral, 3=Somewhat dissatisfied, 2=Mostly dissatisfied, 1=Very dissatisfied

(Columns as in the table above.)

Item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Total Responses | Average Score | Std. Dev. | Change from 2009

Carver - IBM iDataPlex
CARVER: Uptime (Availability) | 1 | - | - | 8 | 5 | 26 | 68 | 108 | 6.39 | 1.03 | n/a
CARVER: Overall | - | - | - | 6 | 6 | 41 | 57 | 110 | 6.35 | 0.82 | n/a
CARVER: Disk configuration and I/O performance | 2 | - | 2 | 11 | 5 | 26 | 49 | 95 | 6.06 | 1.33 | n/a
CARVER: Ability to run interactively | 1 | 1 | 1 | 11 | 9 | 19 | 37 | 79 | 5.92 | 1.34 | n/a
CARVER: Batch queue structure | 1 | 1 | 3 | 13 | 10 | 29 | 45 | 102 | 5.91 | 1.31 | n/a
CARVER: Batch wait time | - | 3 | 8 | 9 | 12 | 22 | 47 | 101 | 5.81 | 1.45 | n/a

Franklin - Cray XT4
FRANKLIN: Overall | - | 2 | 6 | 10 | 35 | 142 | 117 | 312 | 6.12 | 0.94 | 0.37
FRANKLIN: Uptime (Availability) | - | 4 | 13 | 12 | 42 | 119 | 118 | 308 | 5.99 | 1.13 | 1.08
FRANKLIN: Disk configuration and I/O performance | 1 | 1 | 3 | 36 | 25 | 112 | 103 | 281 | 5.96 | 1.10 | 0.35
FRANKLIN: Ability to run interactively | 1 | - | 3 | 38 | 19 | 76 | 94 | 231 | 5.94 | 1.17 | 0.18
FRANKLIN: Batch queue structure | 3 | 2 | 9 | 37 | 44 | 121 | 82 | 298 | 5.71 | 1.21 | -0.19
FRANKLIN: Batch wait time | 7 | 10 | 43 | 43 | 81 | 90 | 29 | 303 | 4.87 | 1.43 | -0.68

Hopper Phase 1 - Cray XT5
HOPPER: Uptime (Availability) | - | - | 2 | 7 | 12 | 60 | 66 | 147 | 6.23 | 0.89 | n/a
HOPPER: Overall | 1 | - | 3 | 10 | 15 | 60 | 62 | 151 | 6.09 | 1.06 | n/a
HOPPER: Disk configuration and I/O performance | 3 | 1 | 1 | 19 | 12 | 45 | 55 | 136 | 5.88 | 1.34 | n/a
HOPPER: Batch queue structure | 2 | - | 5 | 19 | 22 | 56 | 38 | 142 | 5.67 | 1.24 | n/a
HOPPER: Ability to run interactively | 2 | 1 | 2 | 28 | 2 | 33 | 38 | 106 | 5.62 | 1.45 | n/a
HOPPER: Batch wait time | 1 | 5 | 16 | 18 | 35 | 45 | 26 | 146 | 5.19 | 1.41 | n/a

DaVinci - SGI Altix
DaVinci: Uptime (Availability) | - | - | 1 | 7 | - | 10 | 22 | 40 | 6.13 | 1.22 | -0.31
DaVinci: Ability to run interactively | - | 1 | 1 | 7 | 1 | 9 | 19 | 38 | 5.92 | 1.40 | -0.39
DaVinci: Disk configuration and I/O performance | - | 1 | - | 7 | 3 | 12 | 16 | 39 | 5.87 | 1.28 | -0.47
DaVinci: Overall | - | - | 3 | 5 | 5 | 11 | 17 | 41 | 5.83 | 1.30 | -0.38

PDSF - Physics Linux Cluster
PDSF: Uptime (availability) | - | - | - | - | - | 9 | 22 | 31 | 6.71 | 0.46 | 0.35
PDSF: Overall satisfaction | - | - | - | - | 2 | 15 | 15 | 32 | 6.41 | 0.61 | 0.12
PDSF: Ability to run interactively | - | - | - | 2 | 2 | 8 | 16 | 28 | 6.36 | 0.91 | 0.21
PDSF: Disk configuration and I/O performance | - | - | 1 | 1 | 3 | 10 | 16 | 31 | 6.26 | 1.00 | 0.32
PDSF SW: Performance and debugging tools | - | - | - | 2 | 1 | 10 | 11 | 24 | 6.25 | 0.90 | 0.29
PDSF SW: Programming environment | - | - | - | 2 | 3 | 12 | 14 | 31 | 6.23 | 0.88 | -0.20
PDSF: Batch queue structure | - | - | - | 2 | 5 | 8 | 16 | 31 | 6.23 | 0.96 | 0.01
PDSF SW: General tools and utilities | - | - | 1 | 2 | 1 | 11 | 14 | 29 | 6.21 | 1.05 | 0.07
PDSF SW: Programming libraries | - | - | 3 | - | 4 | 8 | 14 | 29 | 6.03 | 1.27 | -0.30
PDSF SW: Applications software | - | - | 1 | 1 | 4 | 14 | 8 | 28 | 5.96 | 0.96 | -0.27
PDSF SW: CHOS | 1 | - | - | 1 | 6 | 8 | 11 | 27 | 5.93 | 1.33 | -0.19
PDSF SW: STAR | - | - | 3 | 2 | 3 | 7 | 6 | 21 | 5.52 | 1.40 | -0.67

NERSC Global Filesystem - Global Homes
GLOBALHOMES: Reliability | - | - | - | 7 | 5 | 47 | 160 | 219 | 6.64 | 0.68 | n/a
GLOBALHOMES: Uptime | - | - | 1 | 4 | 7 | 58 | 151 | 221 | 6.60 | 0.68 | n/a
GLOBALHOMES: Overall | 1 | 1 | 3 | 6 | 7 | 65 | 146 | 229 | 6.48 | 0.92 | n/a
GLOBALHOMES: I/O Bandwidth | - | - | 2 | 13 | 13 | 61 | 122 | 211 | 6.36 | 0.92 | n/a
GLOBALHOMES: File and Directory Operations | 2 | - | 3 | 11 | 10 | 59 | 118 | 203 | 6.33 | 1.06 | n/a

NERSC Global Filesystem - Project
PROJECT: Reliability | - | - | - | 3 | 4 | 24 | 79 | 110 | 6.63 | 0.69 | 0.08
PROJECT: Uptime | - | - | 1 | 4 | 3 | 24 | 78 | 110 | 6.58 | 0.79 | 0.03
PROJECT: Overall | - | - | - | 3 | 5 | 32 | 79 | 119 | 6.57 | 0.70 | 0.26
PROJECT: File and Directory Operations | - | 1 | 1 | 7 | 4 | 23 | 68 | 104 | 6.41 | 1.02 | 0.21
PROJECT: I/O Bandwidth | - | - | 1 | 8 | 8 | 23 | 67 | 107 | 6.37 | 0.98 | 0.14

HPSS - Mass Storage System
HPSS: Reliability (data integrity) | - | - | 2 | 2 | 2 | 33 | 124 | 163 | 6.69 | 0.68 | 0.01
HPSS: Uptime (Availability) | - | - | - | 2 | 5 | 41 | 115 | 163 | 6.65 | 0.60 | 0.02
HPSS: Overall satisfaction | 2 | 1 | 1 | 1 | 7 | 55 | 104 | 171 | 6.46 | 0.95 | 0.02
HPSS: Data transfer rates | 1 | - | 3 | 4 | 15 | 51 | 90 | 164 | 6.32 | 0.98 | 0.07
HPSS: Data access time | 1 | 1 | 3 | 5 | 9 | 61 | 80 | 160 | 6.27 | 1.02 | -0.09
HPSS: User interface (hsi, pftp, ftp) | 2 | - | 6 | 14 | 25 | 46 | 70 | 163 | 5.93 | 1.25 | -0.09

NERSC Network
NETWORK: Network performance within NERSC (e.g. Hopper to HPSS) | - | - | - | 3 | 10 | 51 | 114 | 178 | 6.55 | 0.68 | 0.04
NETWORK: Remote network performance to/from NERSC (e.g. Hopper to your home institution) | - | - | 11 | 14 | 23 | 66 | 108 | 222 | 6.11 | 1.13 | -0.04

Grid Services
GRID: Access and Authentication | - | - | - | 1 | 3 | 26 | 39 | 69 | 6.49 | 0.66 | 0.06
GRID: Job Submission | - | - | - | 3 | 4 | 22 | 38 | 67 | 6.42 | 0.80 | -0.07
GRID: File Transfer | - | - | 1 | 3 | - | 28 | 35 | 67 | 6.39 | 0.83 | 0.10
GRID: Job Monitoring | 1 | 1 | - | 4 | 1 | 23 | 39 | 69 | 6.30 | 1.15 | -0.26