When you are finished with this page, click "Save & Go to Next Section" or your responses will be lost. Please do not answer a specific question or rate a specific item if you have no opinion on it.

Please rate the NERSC systems or resources you have used.

Cray XE6: Hopper

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Batch wait time | |
| Batch queue structure | |
| Ability to run interactively | |
| Disk configuration and I/O performance | |

Cray XT4: Franklin

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Batch wait time | |
| Batch queue structure | |
| Ability to run interactively | |
| Disk configuration and I/O performance | |

IBM iDataPlex Linux Cluster: Carver

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Batch wait time | |
| Batch queue structure | |
| Ability to run interactively | |
| Disk configuration and I/O performance | |

Sun Sunfire: Euclid

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Ability to run interactively | |
| Disk configuration and I/O performance | |

GPU Testbed: Dirac

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Ability to run interactively | |
| Disk configuration and I/O performance | |

Parallel Distributed Systems Facility: PDSF

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Batch system configuration | |
| Ability to run interactively | |
| Disk configuration and I/O performance | |
| Programming environment | |
| CHOS environment | |
| STAR software environment | |
| Applications software | |
| Programming libraries | |
| Performance and debugging tools | |
| General tools and utilities | |

High Performance Storage System: HPSS

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Reliability (data integrity) | |
| Data transfer rates | |
| Data access time | |
| User interface (hsi, pftp, ftp) | |

Global Homes File System

In 2009 NERSC implemented Global Homes, where all NERSC computers share a common home directory. Previously, each system had a separate, local home file space.
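
As a small, hypothetical illustration of what a shared home directory means in practice, the Python sketch below writes a per-user settings file under $HOME; because homes are global, the same path refers to the same file on every NERSC system. The file name `.myanalysis.rc` and its contents are assumptions for illustration, not NERSC conventions.

```python
import os
from pathlib import Path

# With Global Homes, $HOME resolves to the same directory on every NERSC system,
# so a settings file written on one machine is visible from all the others.
config = Path(os.environ["HOME"]) / ".myanalysis.rc"   # hypothetical file name
if not config.exists():
    config.write_text("threads = 8\n")                  # example contents only
print(f"Shared per-user settings file: {config}")
```
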

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Reliability (data integrity) | |
| I/O bandwidth | |
| File and directory (metadata) operations | |

NERSC /project File System

The NERSC "Project" file system is globally accessible from all NERSC computers. Space in /project is allocated upon request for the purpose of sharing data among members of a research group.
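
As a rough sketch of how a research group might use its /project space, the Python example below stages a result file under a hypothetical project directory and makes it group-readable. The directory name `/project/myproject` and the file layout are assumptions for illustration; the actual path is whatever NERSC assigns when space is requested.

```python
import os
from pathlib import Path

# Hypothetical directory inside the /project file system; the real path is
# assigned by NERSC when a research group requests space.
project_dir = Path("/project/myproject")

# Stage a result file where other members of the group can find it.
shared = project_dir / "shared_results" / "run_001.dat"
shared.parent.mkdir(parents=True, exist_ok=True)
shared.write_text("example output\n")

# Owner read/write, group read-only, no access for others, so collaborators
# in the same Unix group can read the data.
os.chmod(shared, 0o640)
print(f"Staged group-readable result at {shared}")
```
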

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Reliability (data integrity) | |
| I/O bandwidth | |
| File and directory (metadata) operations | |

Global Scratch File System

Carver, Magellan, Dirac, and Euclid use the Global Scratch file system as their only $SCRATCH file system. Global scratch is also accessible from Hopper as $GSCRATCH.
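
As a minimal sketch of how a job script might locate the global scratch area using only the variable names mentioned above, the Python example below prefers $GSCRATCH (so that on Hopper it picks the global file system) and falls back to $SCRATCH; the preference order and the working-directory name `survey_demo` are assumptions for illustration.

```python
import os
from pathlib import Path

# Prefer $GSCRATCH (the name global scratch goes by on Hopper) and fall back to
# $SCRATCH (the only scratch variable on Carver, Magellan, Dirac, and Euclid).
scratch_root = os.environ.get("GSCRATCH") or os.environ.get("SCRATCH")
if scratch_root is None:
    raise RuntimeError("Neither $GSCRATCH nor $SCRATCH is set on this system")

workdir = Path(scratch_root) / "survey_demo"   # hypothetical working directory
workdir.mkdir(parents=True, exist_ok=True)
print(f"Temporary job files will go under {workdir}")
```
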

| Please rate: | How satisfied are you? |
| --- | --- |
| Overall satisfaction | |
| Uptime (Availability) | |
| Reliability (data integrity) | |
| I/O bandwidth | |
| File and directory (metadata) operations | |

NERSC Grid Resources

| Please rate: | How satisfied are you? |
| --- | --- |
| Access and authentication | |
| File transfer | |
| Job submission | |
| Job monitoring | |

NERSC Network

| Please rate: | How satisfied are you? |
| --- | --- |
| Network performance within NERSC (e.g. Hopper to HPSS) | |
| Remote network performance to/from NERSC (e.g. Hopper to your home institution) | |
