
1999 User Survey Results

Respondent Summary

NERSC would like to thank all the users who participated in this year's survey. Your responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, point us to areas we can improve, and show how we compare to similar facilities.

This year 177 users responded to our survey, compared with 138 last year. In general user satisfaction with NERSC was rated higher than last year, by an average of 0.6 on a 7-point scale across 27 questions common to the two years. The biggest increases in satisfaction were with the allocations process, the HPSS system and the T3E. See FY 1998 to FY 1999 Changes.

On a scale from 7 (very satisfied) to 1 (very dissatisfied) the average scores ranged from a high of 6.6 for timely response to consulting questions to 4.0 for PVP batch wait time. The areas users are happiest with this year are consulting services, HPSS reliability and uptime, as well as PVP and T3E uptime. Areas of concern are batch wait times for both PVP and T3E systems, visualization services, the availability of training classes, and PVP resources in general. See the table that ranks all satisfaction questions.

The areas of most importance to users are the overall running of the center and its connectivity to the network, the available computing hardware and its management and configuration, consulting services, and the allocations process. Access to cycles is the common theme. See the Overall Satisfaction and Importance summary table.

In their verbal comments users focused on NERSC's excellent support staff and its well-run center with good access to cycles (although users wish we could provide even more), hardware and software support, and reliable service. When asked what NERSC should do differently, the most common response was "provide even more cycles". Of the 52 users who compared NERSC to other centers, half said NERSC is the best or better than other centers, 23% simply gave NERSC a favorable evaluation or said they only used NERSC, 19% said NERSC is the same as other centers or provided a mixed evaluation, and only 4 (8%) said that NERSC compares less favorably. Several sample responses below give the flavor of these comments; for more details see Comments about NERSC.

  • "I have found the consulting services to be quite responsive, friendly, and helpful. At times they went beyond the scope of my request which resulted in making my job easier."
  • "Provides reliable machines, which are well-maintained and have scheduling policies that allow for careful performance and scaling studies."
  • "Provides a stable, user-friendly, interactive environment for code development and testing on both MP machines and vector machines."
  • "It would be nice if there were fewer users, so turn-around time could be faster."
  • "NERSC provides a well-run supercomputer environment that is critically important to my research in nuclear physics."

NERSC made several changes this past year based on the responses to last year's survey.

  • We more frequently notified you of important changes and issues by email: This year 95% of users said they felt adequately informed, compared with 82% last year.
  • We changed the way we present announcements on the web. (We have no comparison rating between the two years.)
  • We restructured the queues on the Cray T3E: the satisfaction rating for T3E batch queue structure went up by one point (from 4.5 to 5.5).
  • We added additional debug queues on all the Crays: last year we received 2 complaints in this area; this year none.

Watch this section for changes we plan to implement next year based on this year's survey.

Below are the survey results. For the survey itself, click here.

  1. Overall Satisfaction and Importance
  2. User Information
  3. Visualization
  4. Consulting and Account Support
  5. Information Technology and Communication
  6. Hardware Resources
  7. Software Resources
  8. Training
  9. Comments about NERSC
  10. All Satisfaction Questions and FY 1998 to FY 1999 Changes

Overall Satisfaction

Legend

Satisfaction            Value        Importance            Value
Very Satisfied 7 Very Important 3
Mostly Satisfied 6 Somewhat Important 2
Somewhat Satisfied 5 Not Important 1
Neutral 4  
Somewhat Dissatisfied 3
Mostly Dissatisfied 2
Very Dissatisfied 1

 

Overall Satisfaction and Importance

Topic                                    Satisfaction                                      Importance
                                         No. of Responses   Avg.   '98/Change              No. of Responses   Avg.   '98/Change
Consulting services 154 6.58 5.87/+0.71 145 2.63 2.70/-0.07
Account support 136 6.39 5.67/+0.72 124 2.48 2.31/+0.17
Overall satisfaction 176 6.25 5.43/+0.82 162 2.87 2.77/+0.10
Network connectivity 143 6.17 5.70/+0.47 130 2.87 2.84/+0.03
Mass storage facilities 120 6.06 5.13/+0.93 115 2.47 2.06/+0.41
Available software 129 5.99   116 2.57  
Available computing hardware 138 5.96   127 2.82  
Software maintenance and configuration 114 5.89   100 2.54  
HPCF web site 134 5.87 5.54/+0.33 123 2.31 2.51/-0.20
Allocations process 118 5.87 4.60/+1.27 109 2.61 2.31/+0.30
Hardware management and configuration 121 5.71   107 2.69  
Software documentation 117 5.46   104 2.50  
Web-based training 69 5.19   75 1.81  
Training classes 59 4.85   69 1.52  
Visualization services 57 4.37   67 1.58  

 

User Information

Number of responses: 177

What NERSC resources do you use?

PVP Cluster            115 (65%)
Web Site               106 (60%)
Cray T3E               104 (59%)
Consulting Services     93 (53%)
HPSS                    81 (46%)
Account Support         57 (32%)
Operations Support      23 (13%)
Math Server              8 (4%)
Visualization Server     7 (4%)
Other                    5
  PDSF (2)
  Visualization lab
  Teleconference seminars
  AFS

How long have you used NERSC?

More than 3 years: 81 (48%), 6 months to 3 years: 68 (40%), less than 6 months: 21 (12%)

 

What desktop systems do you use to connect to NERSC?

(Percentages may sum to greater than 100% since each user could choose multiple platforms.)

SUMMARY
UNIX                   147 (85%)
PC                      66 (38%)
MAC                     46 (27%)
X Terminal               2 (1%)
VAX/VMS                  1 (1%)
SPECIFIC OPERATING SYSTEMS
Linux                   65 (38%)
Sun Solaris             65 (38%)
Apple Macintosh         46 (27%)
SGI IRIX                39 (22%)
Windows 98              31 (18%)
Windows 95              29 (17%)
IBM AIX                 25 (14%)
DEC OSF                 22 (13%)
HP HPUX                 21 (12%)
Windows NT              20 (12%)
Windows 3.0              2 (1%)
DOS                      2 (1%)
OS/2                     1 (1%)

 

How often do you connect to NERSC from:

Location      Often    Sometimes    Never
Office 144 32 0
Home 37 74 43
Elsewhere 2 50 58

 

Connection type:

Location      Ethernet    Cable Modem    DSL    ISDN    Modem
Office 158 1 0 2 1
Home 11 28 2 7 65
Elsewhere 31 1 1 0 11
Total         200         30             3      9       77

Visualization

 

Do you use visualization software to analyze or display your results?

No: 86, Yes: 84

 

Have you used NERSC visualization resources?

No: 147, Yes: 19

 

Describe the visualization software used to analyze or display your results - 94 responses

  • 28:    Download data and do visualization locally
  •   8:    Gave general description of what is visualized
  •   7:    Use home brew software
  • 51:    Others who listed software used
    • NCAR Graphics: 16
    • IDL: 14
    • AVS: 9
    • Matlab: 6
    • gnuplot: 6
    • Mathematica: 5
    • kaleidagraph: 5
    • xmgr: 4
    • noesys: 4
    • xmol: 3
    • vmd: 3
    • MSI Insight II: 3
    • 26 other software packages had 1 or 2 users

If you don't use visualization software, why not? - 25 responses

  •   9:    Don't know enough about it, don't know how to use it
  •   6:    Don't need visualization software
  •   3:    Haven't found the right software yet
  •   2:    Network access too slow
  •   2:    NERSC doesn't support the software I want
  •   2:    Learning curve is too steep, don't have the time
  •   1:    Not yet ready; might use in the future

Describe the NERSC visualization resources you have used - 19 responses

  •   7:    Use / have used NCAR Graphics
  •   4:    Use / have used AVS
  •   3:    Help from members of the Visualization group
  •   6:    Other visualization software

What additional visualization services or software could NERSC provide for you? - 36 responses

  • 18:    Don't need NERSC visualization services
  •   9:    Individual software requests
  •   7:    Don't know what I need, don't know what you offer
  •   1:    Help with AVS
  •   1:    Need more interactive computing

Describe the visualization software used to analyze or display your results - 94 responses

  • Download data and do visualization locally:   28 responses

    I have an SGI that satisfies my visualization needs (for now). In the future when I have larger data sets I may look into the visualization support that NERSC offers.

    In our work group, we've found that it is easier to do computations at NERSC and move the data back to the office and plot it there. It has been very frustrating in the past when software changes and you are forced to spend a lot of time reworking graphics.

    I use VMD to visualize my results. I have it installed on my SGI.

    I still use NCAR for some purposes. More often, I dump the data into NetCDF files and look at it with IDL using local (non-NERSC) computers.

    I use visualization at the ohio supercomputer center (OSC) since it was more convenient. the connection to nersc is sometimes on the slower side, so I did not think it worthwhile to start thinking about visualization at nersc

    Data explorer, run on our local AIX machine.

    For 3D visualization I use AVS and sometimes Mathematica on my local workstation. I also expect in the near future to use IBM Data Explorerer. I also use a variety of graphics software for more routine 2D viewing of my results. This would include Kaleidagraph, NOeSYS, NCSA Image on the Mac. On my workstation I use PLPLOT, NCSA Image and NCAR graphics.

    IDL on my desktop machine

    MATLAB at home institute.

    IDL and AVS
    Visualization is done at Princeton Plasma Physics Laboratory

    We have enough software in our home machines. However, I will try to use visualization software in NERSC.

    I am primarily doing performance analysis at NERSC. I do most of my production runs here at ORNL.

    I use visualization tools on my PC or workstation since it works faster than remote connection (actually, I never tried visualization tools at NERSC, but I believe that the above statement is true). I have visualization tools for molecular structures and plotting on my PC but I am not sure what is available at NERSC.

    PC spartan pro

    Usually I download data files and do graphics on PC

    I use Chem3D and Xmol locally after transferring files from NERSC to my PC.

    I use idl on my local workstation

    I use IDL, but not through NERSC.

    I download my data and use the visualization packages I have here on my PC.

    I am happy with my local visualization software and am not training in using NERSC's software.

    Data are analyzed in my local platform

    gnuplot (for 2-dim plots), but I use them on my local machine

    ftp the results to my PC and analyze them there

    Occasionally, we visualize our data using a PC-based program named Noesys. This is basically a modern version of the older NCSA "Datascope" software.

    We use our own visualization, TECPLOT

    Just simple plots, using xmgr or Mathematica running on my local machines.

    Have used SGI Explorer on our workstations, as well as AVS at NERSC to generate cross sectional images and isosurfaces of 3-D data arrays.

    I use IDL on a local HP workstation

     

  • Gave general description of what is visualized:   8 responses

    I display various kinds of slices of atmospheric data in several dimensions; usually: lat, lon, alt/level/pressure, species, concentration.

    Molecular graphics programs to visualise particular atomic configurations of systems under study.

    Our results generate enormous amounts of data and can only be analyzed by careful graphical analysis.

    3D visualization of atom locations
    Contour plots

    No. Generally our graphics involves only two dimensional (y vs. x) plots, so that no sophisticated visualization software is necessary.

    Complicate numerical and analytical results need visualization analysis and presentation

    Graphical plots of results for publication.

    Simple plotters do usually fine

     

  • Use home brew software:   7 responses

    Home made software

    Have used AVS and home-grown.

    Sometimes use private software

    NCAR for basic line plots and SGI graphics workstations with custom-written software for more extensive visualizations

    Programs: AVS, xmgr, and self-written programs

    I primarily use a package, AmrVis, written and maintained by our research group.

    PV Wave, and amrvis (viz software for hierarchical data sets)

     

  • Software used:

     

    • NCAR Graphics: 16 responses
    NCAR graphics. Simple 2D plots and contour plots, monochrome.

    NCAR graphics

    Many codes used by group use NCAR routines

    NCAR graphics package.

    NCAR Graphics (available on J90/SV1 cluster in NERSC).

    Just NCAR.

    NCAR Graphics

    Primarily use NCAR.

     

    • IDL: 14 responses
    Simple 2-D plots with NCAR graphics. Color contour plots and 2D plots and postprocessing in IDL.

    Mostly NCAR and IDL

    NCAR graphics. IDL in past but not in the recent year

     

    • GIST graphics: 2 responses
    • AVS: 9 responses
    Mostly NCAR and GIST graphics; post-processing using Yorick; occasionally we use AVS

    I primarily use the NCAR plotting library (on the PVP) and the gist plotting library (which is part of Yorick and has in interface in python) (on both PVP and MPP). I only occasionally make use of other packages such as AVS.

    IDL

    IDL

    We use IDL and other packages to plot our results.

     

    • mongo, supermongo: 2 responses
    idl; sm (supermongo)

     

    • Matlab: 6 responses
    idl, sometimes matlab

    AVS system to display streamlines and vector plots.

    Mongo

    Only Matlab.

    I use matlab to look at my data.

     

    • gnuplot: 6 responses
    • xmgr: 4 responses
    • Mathematica: 5 responses
    Besides line plots with gnuplot or xmgr, I make surface plots with Matlab or Mathematica. [...]

    I am using Gnuplot to display my results. It needs to be updated to latest version.

    just gnuplot

    gnuplot

    GNU plot

     

    • kaleidagraph: 5 responses
    • xmol: 3 responses
    gnuplot, kaleidagraph, xmol

    I use Mathematica for visualization, as well as some free UNIX software tools (xmgr, xmgrace, Plotmtv).

     

    • POVRAY: 1 response
    • MiniCAD: 1 response
    I use KaleidaGraph, POVRAY, Mathematica, and MiniCAD.

     

    • spyglass: 2 responses
    I use kaleidagraph for conventional plotting and spyglass for contour and 3-d plotting

     

    • NCSA Image: 2 responses
    • noesys: 4 responses
    kaleidagraph; image; noesys transform

    Noesys software

     

    • vmd: 3 responses
    My research group investigates the dynamics of atoms and molecules in complex systems. We frequently use molecular visualization tools (such as vmd and xmol) to analyze and display results.

     

    • rasmol: 2 responses
    VMD from UIUC, rasmol for protein visualization

    Rasmol

     

    • atomtv: 1 response
    I utilize atomtv to view atomic positions and trajectories from molecular dynamics simulations.

     

    • QUANTA: 2 responses
    Use QUANTA to look at molecular dynamics results from CHARMM calculations.

     

    • MSI Insight II: 3 responses
    quanta, insightII, molden

    visualization of molecules using INSIGHTII from MSI, Inc.

     

    • O (Alwyn Jones): 1 response
    Molecular Graphics
    Insight II (MSI)
    O (Alwyn Jones)

     

    • Data Explorer: 2 responses
    • other single answers:

    Yes, have tried this once or twice (MAVIS for viewing Gaussian results).

    I'm a new user and am not yet up to full speed. The code I will be using, called XOOPIC, has a GUI that can be used to postprocess image files generated during batch runs.

    i use the vampir tool for visualizing execution traces

    Totalview

    I use the PCMDI-developed tool VGS to view the results.

    RM Scene Graph, beta product of R3vis Corp

    Geomview for 3-D interface.

    I (seldom) use the emerging HENP standard tool ROOT

    Exodus

    Simple packages for making graphs of data, e.g. TOPDRAW.

    PLPLOT

    PC spartan pro

    Chem3D

    TECPLOT

    SGI Explorer


If you don't use visualization software, why not? - 25 responses

  • Don't know enough about it, don't know how to use it:   9 responses

    [...] I don't have a good idea of what is possible with some of the more advanced software on escher. I have tried experimenting from my desktop Sparc5, but am unable to run AVS.

    I need to learn more about this topic

    I don't know how to use this software.

    Don't know how

    I don't know enough about it, that's my fault.

    not familiar with it

    I do not know how to use the software at NERSC.

    complete ignorance on my part. PLease point me to a tutorial. I assume that it uses X-windows.

    do not know how to use

     

  • Don't need visualization software:   6 responses

    I'm not sure what I'd learn from visualizing my results. I've seen some visualizations of lattice QCD simulations, and they are somewhat useful to help explain to nonscientists what we are doing. But I think that some visualization techniques oversimplify things, and may actually be misleading in the course of the research. Then again, I don't have much experience with this.

    Inappropriate for our application

    Others in our group do that.

    Use NERSC as a number crunching resource, only.

    Type of calculations are not really suitable for visualization.

    My work has not yet demand such software.

     

  • Haven't found the right software yet:   3 responses

    My group's work is mainly compiler/systems development. On the applications side, we would like to visualize adaptive (AMR) meshes but have not yet found good tools for this. (We are pursuing some.)

    I am working on a compiler rather than a number-crunching application per se. I am not familiar with any visualization software for abstract syntax trees, though that's an interesting idea....

    Still trying to find appropriate software

     

  • Network access too slow:   2 responses

    Network inside UGA too slow.

    Data transmission too slow.

     

  • NERSC doesn't support the software I want:   2 responses

    We use AVS (Advanced Visual System) for our visualizations. NERSC does not support AVS. [note from NERSC: AVS is supported on escher, the visualization server.]

    Our codes use DISSPLA and DISSPLA is not available on the CRAYs.

     

  • Learning curve is too steep, don't have the time:   2 responses

    Often there seems to be a large initial investment needed to come up to speed to be able to use large visualization software packages.

    Have not had the time.

     

  • Not yet ready; might use in the future:   1 response

    Currently, we are generating data which we may in the future want to display graphically. Then we would use the visualization software.


Describe the NERSC visualization resources you have used - 19 responses

  • Use / have used NCAR Graphics:   7 responses

    NCAR graphics

    I have used NCAR graphics package before I got my own Fortner (originally Spyglass) package.

    So far I've only needed NCAR.

    NCAR on the J90's.

    I used to use the NCAR package, but didn't like it very much so I gave up about a year ago

    NCAR routines only

    [...] Presently only use NCAR at NERSC

     

  • Use / have used AVS:   4 responses

    Have used AVS at NERSC to generate cross sectional images and isosurfaces of 3-D data arrays.

    Yes, I have occasionally used AVS on Escher since it has a larger module library than my local workstation has. [...]

    I have tried AVS, but it didn't seem to offer significant advantages over my current software choices.

    AVS

     

  • Help from members of the Visualization group:   3 responses

    The visualization group prepared a demo for SC'98 using my data.

    help from Wes Bethel using Quicktime and other animation tools

    Viz lab & viz group staff support.

     

  • Other visualization software:   6 responses

    Use escher to compute various viz frames for large-scale combustion calcs. [...]

    Past used IDL on sas. Presently only use NCAR at NERSC

    Physics Analysis Workstation (PAW) of CERN library

    Gaussview

    PVWave a few years ago

    matlab


What additional visualization services or software could NERSC provide for you? - 36 responses

  • Don't need NERSC visualization services: 18 responses

    In general the network latency is too large to allow reasonable access to remote visualization facilities.

    I am used to Matlab. A visualization tool on the local machine performs better than over the net.

    None, really. It is much more efficient to run it on a local machine for what I do.

    None, I believe that off-line visualization (on my local LINUX workstation) works better for my research.

    My telnet connection is usually painfully slow, and the best bet is to generate a graphics file at NERSC and ship it home by mail or ftp.

    See above; I don't think that visualization software is really appropriate for the domain in which I am working. But I'm open to suggestions! :-)

    I have planned to, but other approaches were used first.

    none -- due to security reasons (I'm in LANL)

    I have had no need so far.

    We use TECPLOT [not at NERSC]

    no need for outside visualization

    Nope. Not much free time and no burning desire to do this.

    see above. [does visualization locally]

    see above [does visualization locally]

    (see above) [uses local PC software]

    See above. [doesn't use]

    Not applicable.

    See above. [doesn't use]

     

  • Individual software requests:   9 responses

    Software geared around visualizing fluid dynamic data (FAST, Tecplot, ...) which, however, does not rely on graphics hardware support.

    [...] Could use Tecplot as well if it were there.

    I'm not aware of current offerings. I would like tools for analyzing 1000s of CDF files and for producing isosurface plots.

    Something for adaptive mesh data (actually, I haven't looked myself to see if such tools are available, but I've heard that better stuff might be coming, and I'm not in much of a hurry right now, so I'm waiting).

    AMR visualization library, but we would need to interface it to our own language/compiler, Titanium.

    I'd like to use the NCAR routines on mcurie.

    No. Something similar to Exodus

    Some simple 3D visualization program

    Our codes use DISSPLA and DISSPLA is not available on the CRAYs. If it were, we would run our DISSPLA-based codes on the CRAYs.

     

  • Don't know what I need, don't know what you offer:   7 responses

    an email pointing us to relevant information on the web

    I do not know anything about the capabilities of your tool.

    I am not familiar with NERSC's visualization resources. Coordinates which characterize the processes we are interested in are commonly collective and not easily discerned from pictorial representations. There have been, however, occasions on which the tools available to me for representing data have not been sufficient.

    I don't know right now. I am new to the system and I have been using our own home-grown visualization systems.

    not sure - not educated enough about what's out there.

    I havn't check all the possibilities...

    do not know

     

  • help with AVS:   1 response

    [...] Several topics I could use help on with respect to AVS are:
    (a) An efficient way of writing 3D data out as VRML 2 format for web applications. AVS only has modules (contributed) which write out in VRML 1 format and they write very inefficient forms of VRML. I've seen examples of direct reduction of 3D objects similar to what I work with to VRML 2 (without using AVS) which are much more efficiently rendered in VRML viewers than those I've written through AVS.
    (b) Better ways of getting my data imported to AVS. For example, I often work with non-planar analytically described surfaces. It would be helpful to know if there is software which would help one subdivide general surfaces into triangular elements and then write this out along with nodal data directly into a binary-format AVS UCD data structure. I usually end up either writing out lots of 3D field data and then making an isosurface through it or writing out the surface and then letting AVS convert the field to a UCD (it's converter does not seem to be very efficient).
    c) Another area that I've found challenging is in doing time-series animations of 3D data coming from simulations. One tends to have to write time slices of the data out as many separate files and then set up a loop within AVS to read these in and render them and save the 2D images again as lots of separate files. It would be helpful if someone would create an AVS module which could read multiple time slices out of a single file (or small number of such files) along with the code fragment needed in the simulation code for writing out this file (or files).

     

  • Need more interactive computing:   1 response

    Much of our visualization is interactive using the actual simulation program; more emphasis on interactive computing would help.

Consulting and Account Support

Legend

Satisfaction            Value
Very Satisfied 7
Mostly Satisfied 6
Somewhat Satisfied 5
Neutral 4
Somewhat Dissatisfied 3
Mostly Dissatisfied 2
Very Dissatisfied 1
Topic                                    Satisfaction
                                         No. of Responses   Avg.   '98/Change
Timely response to consulting questions 134 6.64  
Quality of technical advice from consultants 134 6.52 5.88/+0.64
Followup to initial consulting questions 106 6.43 5.57/+0.86
Response to special requests 103 6.28  
Ease of obtaining account information 124 6.26  
Ease of modifying account information 96 6.15  

 

 

Comments and suggestions regarding NERSC Consulting Services - 36 responses

  • 22:     Provides good service
  •   6:     Commented on other topics
      • Provide more hours and more inodes.
      • Setcub is cryptic.
      • 16-bit integer support in the Cray compilers.
      • Problems with lost HPSS data and J90 checkpoint files.
  •   5:     Service is uneven, too slow
    •  
      • Was confronted in a somewhat threatening tone about my misbehaving application.
      • Problems with accounts at the start of FY 2000.
  •   3:     Haven't used
    •  
      • Since most of the information I needed was on the web.
  •   1:     They should know BASIS

Comments and suggestions regarding NERSC Account Support Services - 24 responses

  • 14:     Provides good service
  •   7:     Could be improved
      • Would like to choose my username.
      • Put dates on emails.
      • Confusion about a T3E allocation.
      • ERCAP user lists confusing.
      • Took too long to get an inode increase.
      • Don't make us change passwords so often.
  •   2:     setcub improvements
  •   1:     Commented on other topics
      • Provide script for tracking progress of batch jobs.

Comments and suggestions regarding NERSC Consulting Services - 36 responses

  • Provides good service:   22 responses

    Consulting staff seems to be the best and most helpful I have come across anywhere.

    Consulting stuff is very knowledgeable and very helpful, Thank you.

    We have recently had excellent support in porting a piece of our code that was bottlenecking on the T3E. It is quite unique and pleasant to get this level of support.

    I ask a lot of questions, and get a lot of good answers. The consultants answer in a timely fashion. Occasionally, I have had problems that were not completely resolved, but these are typically very technical problems for which I have found no better source of assistance.

    The consultants are certainly the strength of NERSC. I have found them attentive, professional, and very helpful.

    I have always been very satisfied with the level of support and assistance I've received from NERSC consulting staff.

    I was very satisfied because NERSC support staff sometimes just call you and fix the problems. I would suggest that this is the most efficient way to help users.

    Excellent service.

    Consultants are very good and responsive. I would like to particularly thank Tom DeBoni, he gave us very needed and prompt help every times we ask.

    Doing a great job!! Keep it up, and thanks for all the help!

    I am more than " Very Satisfied" with NERSC , and especially the Consulting Services headed by Ms. Francesca Verdier are the "best" feature of NERSC. In particular , I would like to mention Dr. Jonathan Carter, Dr. Richard Gerber, Dr. Harsh Anand who have been most helpful in resolving problems I encountered while using the supercomputing facilities ( which are second to none!) at NERSC. I am very much indebted to Dr. Carter and Francesca Verdier for their help, advice and guidance without which I could not have achieved much.

    Prompt, timely, friendly and knowledgeable services and staff

    The NERSC consultants have always been professional and friendly when answering my questions.

    They are doing a great job. I have never had a problem reaching someone to help me out. THANKS FOR A GREAT JOB.

    Generally helpful. [...]

    I was exceptionally pleased with Frank Hale.

    Keep up the good work.

    The consulting service is to be highly commended. I have always received A+++ service

    I would especially single out the help given by Jonathan Carter, in helping myself and group members in running quantum chemistry codes and assisting to solve problems relating to software and hardware.

    Great attitude. Nice people to deal with. Very knowledgeable. Very patient.

    Their services are very helpful.

    great, and hope to keep on that.

     

  • Commented on other topics:   6 responses

    More hours of availability please!

    my only real gripe is the i-node quota.

    I never really know what I'm doing with setcub, and the man page is not too helpful. I often have the feeling that units mysteriously appear and disappear.

    It would be nice if there were a 16 bit integer available in the Cray compiler, but apparently that is not possible.

    I lost several times some data stored on HPSS. I also encountered some problems to run jobs on J90 for saving checkpoint files.

    I don't think the PC-Mac support group falls into this category, but the only thing I'm displeased with regarding NERSC services has been the support I've received from that group. I think they may be understaffed, but I've had really slow response times to my requests. I'm not saying that they are not qualified for the job, but just that they take too long to attend requests.

     

  • Service is uneven, too slow:  5 responses

    very uneven response quality, depending on who you talk to.

    Service is very dependent on who it is that answers the phone.

    Some consultants are VERY good, others are not quite into the big league yet

    My initial encounter with consulting services was unfortunate. I was having difficulty managing a complicated application, and was called by a consulting services member who confronted me with a challenging and somewhat threatening tone. (My application, in failing, was wasting CPU cycles [though not many, as I was frequently monitoring its progress]. This was noted by the consultant.) Following this incident, however, I received helpful and respectful advice from another consultant, and was able to improve the application.

    We had some difficulties at the beginning of FY2000 with non-PIs of continuing accounts getting their setcub ceiling reset to zero. Then they were prohibited from running batch. It took several calls and days to get this resolved.

     

  • Haven't used:  3 responses

    I haven't used the consulting services to any significant degree.

    I am a very new user, and have not yet had the opportunity to call upon your services.

    I have not used NERSC consulting services for technical questions since most of the information I needed was available on the web.

     

  • they should know BASIS:  1 response

    Generally helpful. Wished staff had more knowledge of BASIS


Comments and suggestions regarding NERSC Account Support Services - 24 responses

  • Provides good service:  14 responses

    When our proposal was late (due to jury duty), NERSC called and was able to accommodate us. This flexibility and courtesy is terrific.

    Your doing a great job from my perspective.

    Same as above. [I was very satisfied because NERSC support staff sometimes just call you and fix the problems...]

    The account support personnel are straightforward, professional, friendly, available ... what more could a user want?

    Same as consulting. [Doing a great job!! Keep it up, and thanks for all the help!]

    excellent service.

    They are real professionals and iron out problems very fairly and quickly. Congratulations for a work well done!

    Prompt, timely, friendly, approachable and extremely helpful services and staff

    They have always responded quickly when I needed account help.

    I've always been 100% satisfied by their help.

    ibid [They are doing a great job....]

    Fine.

    Keep up the good work.

    Their services are very helpful.

     

  • Could be improved:  7 responses

    I was told I couldn't change my account name from "xxxxx",
    which I find awkward, to "xxxxa", which is what I use everywhere
    else. I use ssh for good security, and having different usernames
    makes this a real hassle for me.

    When you send out multiple form letters with superseding account information, please put the DATES on them so users can look back and figure out what changed when

    Responses from account services are typically rather slow. I spent an afternoon trying to assess the state of our account, having spoken with a member who did not understand the details of T3E allocation. Information of the NERSC webpage was at odds both with his misunderstanding and with the true details.

    The web pages for new allocations this year were somewhat confusing, in terms of listing old/new people and old/new repo accounts.

    for most issues, support has been very forthcoming. But for one issue in particular, everyone in our group cannot seem to get a response. The inode limit on one machine (the T3E) is far too stringent for us to get much work done. Support services claims to be looking into alternatives, but our group waits and waits....

    It would be nice if we did not have to change passwords so often. Fortunately consulting seems to know.

    Improve the above.

     

  • setcub improvements:  2 responses

    It would be nice if the setcub view command showed the units by default so that I don't have to remember if they are minutes or seconds.

    I'd like to have a command within SETCUB that gives me the usage of all members of my repo within a given period of time. Something like: usage repo=xxx since=xxx end=xxx members

     

  •   Commented on other topics:  1 response

    I would like to see a simple to use script that would track the jobs of a given user on the SV1 cluster, instead of having to go looking for them with qstat and other commands that require remembering a number of flags or checking pages of documentation every time. Something like "Show jobs" or "whereare myjobs"
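
    [note: a wrapper along these lines could be built on top of qstat. The sketch below is only an illustration, not a NERSC-provided tool; it assumes qstat is on the user's PATH and that each queued or running job appears on one line of qstat output containing the owner's login name. The actual flags and column layout on the SV1 cluster may differ.]

        #!/usr/bin/env python
        """myjobs: list the current user's batch jobs by filtering qstat output.

        Illustrative sketch only, not a NERSC tool.  It assumes qstat is on the
        PATH and that each queued or running job appears on one output line that
        contains the owner's login name; real flags and columns may differ.
        """
        import getpass
        import subprocess
        import sys


        def my_jobs(user=None):
            """Return the qstat output lines that mention the given (or current) user."""
            user = user or getpass.getuser()
            # "qstat -a" is assumed to list all queued and running jobs; substitute
            # whatever invocation the local batch system actually expects.
            result = subprocess.run(["qstat", "-a"], capture_output=True, text=True)
            return [line for line in result.stdout.splitlines() if user in line]


        if __name__ == "__main__":
            requested_user = sys.argv[1] if len(sys.argv) > 1 else None
            jobs = my_jobs(requested_user)
            print("\n".join(jobs) if jobs else "No batch jobs found.")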

Information Technology and Communication

Legend

Satisfaction            Value        Usefulness            Value
Very Satisfied 7 Very Useful 3
Mostly Satisfied 6 Somewhat Useful 2
Somewhat Satisfied 5 Don't Use 1
Neutral 4  
Somewhat Dissatisfied 3
Mostly Dissatisfied 2
Very Dissatisfied 1

 

HPCF Web Site

Topic                                    Satisfaction
                                         No. of Responses   Avg.   '98/Change
Accuracy of information 94 6.22  
Getting Started Guide 73 6.08 5.54/+0.54
Timeliness of information 90 5.99  
T3E Section 72 5.99 5.48/+0.51
Info on using NERSC-specific resources 87 5.93  
File Storage Section 66 5.82 5.10/+0.72
General programming information 86 5.74  
Search Facilities 83 5.69  
Ease of finding information 117 5.70  
PVP Section 55 5.69 5.34/+0.35

 

How useful are these for keeping informed of NERSC issues and changes?

Topic                                    Usefulness
                                         No. of Responses   Avg.
E-mail notices from NERSC 136 2.63
News and Announcement web pages 120 2.16
MOTD on computers 114 2.09
Phone calls from NERSC 103 1.89
Online web magazine 105 1.54

 

Do you feel you are adequately informed?

Topic                                    Yes    No    %    '98/Change
Do you feel you are adequately informed about NERSC changes? 128 7 95% 82%/+13%
Are you aware of planned outages 24 hours in advance ? 106 10 91%  
Are you aware of major changes at least 1 month in advance? 109 13 89%  
Are you aware of software changes at least 7 days in advance? 87 18 83%  

 

Do you like the new hpcf.nersc.gov web site? 42 responses

We created a new web site just for users of the NERSC High Performance Computing Facility. The URL is http://hpcf.nersc.gov/. Do you like the new web site?

  • 33:     Like it
  •   6:     Could be improved / changed
      • Some info buried too far down (batch queue structures, how to submit batch jobs).
      • More cross-referencing between machines, in general more cross references.
      • Fewer links were required on old site to obtain detailed info (T3E queues).
      • Too many changes are confusing.
      • Having more than one website for NERSC is confusing.
      • Too much clutter for slow-speed connections.
  •   5:     Don't use it, no answer

How could we improve our web site? 25 responses

  •   6:     Improve its organization and searching abilities
      • Add an acronym list.
      • Steer users directly to the most important topics.
      • Improve f90 documentation.
  •   3:     Provide more optimization info
  •   2:     Better software documentation
      • Provide a short reference of the most important commands, queue limits, whatever one tends to forget.
  •   2:     Comments pertaining to www.nersc.gov
      • www.nersc.gov easier to remember than hpcf.nersc.gov
  •   6:     Other suggestions
      • Website should be updated within minutes when machines are unavailable.
      • Ability to setup, submit and monitor jobs via the web.
      • Don't change its design so often.
      • No web passwords.
      • Provide user discussion boards.
      • Improve access over low speed connections.
  •   6:     No specific suggestions

How would you like to keep informed of changes and issues at NERSC? 42 responses

  • 22:     By email (only email mentioned)
  •   4:     By email and motd
  •   2:     Use email more
      • Provide ability to sign up for email announcements on specific topics.
      • Notify us of planned outages by email.
  •   2:     Use email less
  •   6:     Improve the motd / don't like motds
  •   5:     Current methods are good
  •   2:     Keep us better informed of plans
      • Tell us more about planned hardware and software acquisitions before they occur.
      • Have a user comment period before final decisions are made on major changes.
  •   2:     Comments on the online newsletter
      • Didn't know it existed.
  •   2:     Other comments
      • Maintenance times are awkward.
      • Use the "news" system.

Do you like the new hpcf.nersc.gov web site?   42 responses

  • Like it:   33 responses

    yes, very much. Accurate and up-to-date information.

    The new website is an improvement. It is quite good: informative and the information is relatively easy to get.

    Yes, however see how to improve.

    Yes, it looks good.

    yes, it is a good idea

    In general, I think it's a fine web site. [...]

    The site is good and getting better. [...]

    Yes, the new web site certainly has significant improvements in relation to what was available online from the previous year.

    Yes, I do it very much. Thanks.

    Yes indeed. Very helpful. Thanks.

    much better than the old one!

    Yes. It is a little easier to find things.

    I find it very helpful.

    It's OK. I don't give this much thought.

    Yea, it is a good idea.

    Yes, very good.

    yes [17 responses]

     

  • Could be improved / changed:   6 responses

    In general, I think it's a fine web site. I've occasionally noticed that some information that one needs to check fairly often, such as batch queue structures and how to submit batch jobs is buried fairly far down in the hierarchy. Also, there could be more cross-referencing between machines. For example, having just checked the batch queue structure or software availability on the T3E, maybe I would like to see the same thing on the J90. But there is no direct link at this point; I have to go back up to the home page, find the J90 and work my way back down.

    I prefer the old website. Fewer links were required to obtain detailed information on, for instance, the T3E queue system.

    The site is good and getting better. A few more links between pages would be good. For example, in the J90/SV1 page, in the section covering batch, there are no links to the tutorial pages on batch or to the page on special considerations on running batch on the PVPs.

    Too many changes are confusing.

    I find it a little confusing to have more than one site for NERSC.

    There is a lot of clutter which makes it difficult to use through a low speed connection. Huge pictures or many frames just slow things down enormously.

     

  • Don't use it, no answer:   5 responses

    Not sure; don't use it all that much.

    Have not looked at it yet.

    Have not spent time looking it over.

    Haven't used it

    No answer.


How could we improve our web site?   25 responses

  • Improve its organization and searching abilities:   6 responses

    improve the internal searching capabilities of the site

    Add an acronym list. For example, if you didn't know what PE stood for, you'd never find out from your site, docs or search tools...

    The information most important for users should be easily located. The content of the website suffers from, if anything, overabundance. This degrades the quality of searches (often locating irrelevant information). Perhaps a clearer organization of material would be helpful.

    One way is to steer the users directly into the areas, which they are looking for, rather than going though a long windy way, if possible.

    My impression is that it is still somewhat difficult to get specific questions regarding the F90 language/compiler answered without going through reams of Web pages

    Make it as organized and logically-structured as you possibly can.

     

  • Provide more optimization info:   3 responses

    I'd like to see more information on Fortran 90. I would very much like to see optimization information in a more accessible format. Some tricks for improving my codes (such as using E-registers and Cray pointers) were not very easy to figure out from the documentation on the web. The information on SHMEM as it relates to optimizing MPI code is useful but could be expanded and made more accessible.

    Put some more technical info about hardware back on the site; things like vector lengths, etc. SGI's website are fairly useless nowdays for this kind of info, which can often be quite helpful when you need to squeeze out a few more GFLOPs.

    provide direct access to performance tuning aspects

     

  • Better software documentation:   2 responses

    I would appreciate ONE page somewhere on the web containing a short reference of the most important commands and its flags, queue limits and whatever one tends to forget from one login to the other.

    documentation on using the BLAS and LAPACK libraries are a little incomplete (e.g., it took me a while to realize that only "single" precision routines are relevant/available).

     

  • Comments pertaining to www.nersc.gov:   2 responses

    Your old web site: www.nersc.gov still exists. This is somewhat confusing to users why you maintain these two web-sites. Why don't you merge them or at least put a very visible link on www.nersc.gov to redirect us to hpcf.nersc.gov given that most of us find www.nersc.gov is easier to remember than hpcf.nersc.gov.

    If you have a slow link, loading the home page with all of its graphics takes a long while. Reduce the glitz.

     

  • Other suggestions:   6 responses

    It would be nice if the website could be updated within minutes when machines are unavailable. Yesterday, for instance, I was trying to log into mcurie, but I could not, even though it was not a scheduled downtime. I looked at the news section of the website, but didn't see anything. I couldn't figure out if it was my network connection or something else. A message posted in the news section would have been nice.

    The web site could be improved if there were a way to setup, submit and monitor jobs on the NERSC machines via an online HTML interface.

    Some suggestion I have for most web sites I visit......Don't change it's configuration and design so often. Many times, after I find something, it's not in the same place the next time I go to look for it.

    I'd like to see the seminars and talks that you organize better advertised.

    1- Take out web passwords.
    2- Open up user discussion and suggestion boards
    3- Users-help-Users section?

     

    Make access as seamless as possible even over low speed connections

     

     

  • No specific suggestions:   6 responses

    No comment

    No answer.

    See above. [Not sure; don't use it all that much.]

    Probably

    It is really very good. Keep going on.

    na


How would you like to keep informed of changes and issues at NERSC?   42 responses

  • By email (only email mentioned):   22 responses

    Email messages are best for important announcements.

    email is best. Phone calls are nice if I'm in the office but that is sporadic

    I find email is the most efficient way.

    The email notice is the best I can think of

    important messages via e-mail, other on the web

    In general, email notices are the best

    I think e-mail is the quickest method to communicate any changes,etc. because one may not sign daily to NERSC computers and may miss MOT,etc. Of course, sending e-mail can be a problem. But it would work and not useful e-mail could be deleted in due course.

    email is good. [...]

    email is the best way to be kept informed of changes at NERSC

    email with possibly more detailed info on your web site

    I get e-mail messages, and this seems to work fine.

    email [11 responses]

     

  • By email and motd:   4 responses

    e-mail and MOTD messages are the most useful for me. If I have to actively check a web-site for important changes, then I will miss them.

    e-mail notification for anything important, motd (with a pointer to a web page or ? for details) otherwise. I don't regularly consult the web pages unless I'm doing something new or looking for specific information.

    email & motd

    e-mail and MOTD

     

  • Use email more:   2 responses

    [...] I'd like the OPTION of signing onto a mailing list for T3E and HPSS announcements, so I get Emailed separately about each maintenance item.

    Are you aware of planned outages 24 hours in advance? - SHOULD BE NOTIFIED BY E-MAIL.

     

  • Use email less:   2 responses

    fewer emails

    [...] Also, sometimes too much e-mail and I end up not reading a lot of it.

     

  • Improve the motd / don't like motds:   6 responses

    Because I'm lazy...I think MOTD's are useless--everyone ignores them. [...]

    Simplify all the generic logon warnings and add brief bulletins to the logon, with URLs pointing to details. (Fix tcsh so logon material doesn't scroll by twice.)

    I use t shell and most of the daily messages appear to get lost after the first login

    [...]I would like a clearer message of the day (tends to get obscured by other verbage) with pointers to additional info if I want more details. [...]

    [...] The "news' banner that scrolls by at supersonic speed upon login is mostly annoying.

    The message of the day is way, way, way too long. You should have a short MOTD and then put news into the "news" system. The latter is completely neglected and that's a mistake. "News" is a great way to inform users.

     

  • Current methods are good:   5 responses

    Doing a good job on that, you can not know when the machines will crash.

    Present system seems okay.

    Please keep going on

    If I were more active I'm sure I'd keep myself better informed. I expect the methods presently in place are sufficient.

    Present practice is quite adequate.

     

  • Keep us better informed of plans:   2 responses

    It would be nice to know more about planned hardware and software acquisitions before they occur.

    If a truly major change is planned, one month notice might be insufficient in some cases. I suggest it might be useful to have a "comment period" before final decisions are made on truly major changes (e.g. dropping a widely used graphics library, changing the queue system).

     

  • Comments on the online newsletter:   2 responses

    A (bi-)monthly newsletter distributed via email would make it easier to keep up with any changes at NERSC.

    Did not know that there was a "NERSC News Online Magazine."

     

  • Other comments:   2 responses

    The times for maintenance are awkward. On mcurie, 16:00 to 21:00 are just bad times of the day. Also, having the shut down either tuesday or thursday is confusing. The notification is adequate, it would just be easier if always on the same day.

    [...] You should have a short MOTD and then put news into the "news" system. The latter is completely neglected and that's a mistake. "News" is a great way to inform users.

Hardware Resources

Legend

Satisfaction            Value
Very Satisfied 7
Mostly Satisfied 6
Somewhat Satisfied 5
Neutral 4
Somewhat Dissatisfied 3
Mostly Dissatisfied 2
Very Dissatisfied 1

 

Cray T3E - MCurie

Topic                                    Satisfaction
                                         No. of Responses   Avg.   '98/Change
Uptime 93 6.26 5.58/+0.68
Overall 64 6.17 5.20/+0.97
Ability to run interactively 85 5.60  
Batch queue structure 81 5.47 4.51/+0.96
Disk configuration and I/O performance 71 5.23  
Batch job wait time 80 5.04 4.43/+0.61

 

Question                                 No. of Responses   Avg.
Uptime estimate (%) 56 89.6
Batch wait time estimate (hours) 49 14.5
Max. number of PEs used 77 142.2
Max. number of PEs code can effectively use 60 379.4

 

Cray PVP Cluster

Topic                                    Satisfaction
                                         No. of Responses   Avg.   '98/Change
Uptime 73 6.29 5.69/+0.60
Disk configuration and I/O performance 54 5.56  
Ability to run interactively 68 5.18  
Overall 58 5.05 4.92/+0.13
Batch queue structure 60 5.03 4.85/+0.18
Batch job wait time 62 3.95 4.79/-0.84

 

Question                                 No. of Responses   Avg.
Uptime estimate (%) 48 88.6
Batch wait time estimate (hours) 43 43.7

 

HPSS

Topic                                    Satisfaction
                                         No. of Responses   Avg.   '98/Change
Reliability 81 6.46 5.51/+0.95
Uptime 81 6.33 5.39/+0.94
User Interface 72 6.06 4.88/+1.18
Overall 69 6.12 5.09/+1.03
Performance 73 5.90 5.46/+0.44
Response Time 75 5.68 5.29/+0.39

 

Question                                 No. of Responses   Avg.
Uptime estimate (%) 50 91.7
Reliability estimate (%) 45 94.2
Performance estimate (MB/sec) 12 11.8

 

Servers

Topic                                    Satisfaction
                                         No. of Responses   Avg.
Visualization Server - Escher 11 5.45
Math Server - Newton 12 5.25

 

 

Comments on NERSC's Cray T3E - 39 responses

  •   7:     A good machine / get more T3E processors
      • Stable, reliable.
      • Good scalability.
      • Provides lots of production time.
  •   7:     Comments on queue structure
      • 7 asked for longer time limits.
      • 1 asked for fairer turnaround across queues.
  •   6:     Comments on throughput, performance
      • 4 said bad; 2 said good.
  •   6:     Comments on interactive computing
      • 4 asked for more; 2 said good.
  •   5:     Comments on disks, inodes, file systems
      • 4 said inodes too restricted.
      • 1 said /usr/tmp too full.
  •   2:     Comments on memory
      • 1 said enough; 1 said need more.
  •   5:     Other comments
      • need more optimization info.
      • Provide better queue stats (let us know which queue best to use now).
      • PVM errors, I/O bottlenecks.
  •   7:     Don't use it (yet) / no specific comments

Did the C90 retirement go as smoothly as possible? - 42 responses

  • 31:     Yes
  •   2:     Yes, but...
      • Would have liked access to C90 files right away.
      • Too many changes at the same time.
  •   5:     No
      • Too much congestion on the J90s afterwards.
      • We lost cf77 and DISSPLA.
      • Now there is less interactive access.
      • Should not have removed C90 until full SV1 cluster was in place.
      • "As smoothly as possible" should have meant keeping the C90!
  •   4:     Didn't use the C90 / hadn't used it recently

Does the current NERSC PVP cluster meet your vector computing needs? - 52 responses

  • 24:     No
  •   3:     Probably no / am waiting to see
  •   3:     Yes, but... / yes and no
      • Slow throughput / slow processors: 20 responses.
      • Need interactive SV1s / interactive too slow: 8 responses.
      • Shouldn't interactive environment be same as batch environment?
      • Doesn't like charging rate.
      • Problems with multitasking (OpenMP) and C++.
      • Disk system poorly configured.
      • Poor network access.
      • cqstatl response slow
      • Wants prompt to use the arrow keys.
      • Doubts he can run BASIS.
  • 21:     Yes
  •   1:     No answer

Comments on NERSC's Cray PVP Cluster - 26 responses

  • 15:     Slow throughput, slow processors, need more PVP cycles
  •   5:     Need interactive SV1s, interactive too slow
  •   4:     Comments on software, configuration
      • Doesn't like the automatic logout on killeen.
      • Doesn't like f90 compiler.
      • Multitasking (OpenMP) instabilities.
      • Disk system poorly configured.
      • Needs more info on text editors.
  •   4:     They're good machines, satisfied
      • Stable, good software development/testing environment.
      • Excellent accuracy.
      • Running short jobs works well.
  •   3:     Comments on queues
      • Needs a queue with 200 MW memory, 50 GB disk, 100-150 hours.
      • Make scheduling fairer.
      • Too many jobs are allowed for 1 user.
      • Too many job failures.
      • Frustrated when exceeds CPU and disk limits.
  •   1:     Haven't used them

Comments on NERSC's HPSS Storage System - 38 responses

  • 12:     It's a good system, satisfied
      • Can easily handle large (8+GB) files.
      • Happy with reliability.
  • 11:     Would like a better user interface
      • More like Unix, more like NFS (3 responses).
      • Provide the ability to manipulate entire directories (3 responses).
      • Provide better failure recovery (2 responses).
      • Provide better search capabilities (2 responses).
  •   7:     Improve the transfer rate
  •   4:     Command response too slow
  •   3:     Stability problems
  •   2:     Password problems
  •   4:     Don't know what it is, don't use:

Comments about NERSC's auxiliary servers - 10 responses

  •   2:     Like them
  •   1:     Newton needs an upgrade
  •   1:     Provide a data broadcasting web server
  •   6:     Don't use them, no answer

Comments on NERSC's Cray T3E - 39 responses

  • A good machine / get more T3E processors:   7 responses

    the best supercomputer I know of

    Nice machine !

    It is definitely a very stable and reliable machine!!!

    I like the machine, and it is helpful to my work.

    Ideally, it would be good to have more T3E's so that more projects could be awarded greater than 10**5 processor hrs./year.

    Good scalability. [...]

    The machine is extremely useful in the amount of production time that is available on it, however it is out of date as a massively parallel supercomputer.

     

  • Comments on queue structure:   7 responses

    I'd like to have a long 512-processor queue

    Batch queues that run longer than 4 hours would be nice. 12 hours would make many of my jobs easier to run and manage. Some things just can't be done in 4 hour segments.

    Some jobs in the production queue start fairly quickly. The pe128 especially seems to be underutilized. But for long jobs, the wait can be long. In one case I launched a job on gc128 and was disappointed by the slow turnaround. The job started the evening after I submitted, then ran a couple of hours for the next two nights. The results were not ready for 3 days. I would have been better off, I think, if I had submitted multiple jobs in the production queues.
    I've never tried more than 256 processors, but I could. I have some problems that I would like to solve that are too large for the current queue configurations. I am confident that my code will scale past 256.

    Time limits per job seem to be too restrictive for long jobs that require few processors. For example, the T3E system allows a time limit of 4:10 h for jobs with less than 64 processors.

    Climate modeling requires very long runs. In general, the queue structure is inappropriate for these multi-month calculations.

    As a "naive" user, I do not know what the possibilities or alternatives might be with regard to batch turn-around time. I would like to be able to run batch jobs longer than 4 hours with fewer than 64 processors.

    It would be good to be able to run a job for longer time, say 24 hours, maybe with smaller number of processors (4-64).

     

  • Comments on throughput, performance:   6 responses

    Recently the queue wait time has been a real problem.

    The batch job wait time is too long

    Too many users on MCURIE

    [...]Batch jobs are sometimes slow getting into and through the queues. This tends to lead to heavy interactive use to get adequate response time. My jobs tend to grow in time and thus require increasing numbers of processors. This is difficult to manage with slow queues.

    good turnaround; few down times

    Very high throughput compensates for lack of speed. [...]

     

  • Comments on interactive computing:   6 responses

    Interactive and network performance are poor. [...]

    I asked my postdoc. His main complaint was with the inability to run small jobs interactively when two big jobs were doing so. Maybe some sort of queue where a new job can take at most half of the available processors.

    I know it's very hard to set up this architecture to run with multiple jobs like a PVP --- but some interactive use of many processors would help debugging.

    If possible, keep a small number of processors available for debugging after midnight.

    I find the ability to run parallel jobs interactively is particularly useful as I am developing a parallel code and this makes debugging a lot easier compared with some other systems that I use.

    Do a lot of my work on your T3E because I can run small tests interactively, as well as have access to totalview. This is probably THE most important thing for me. Do not like it when mcurie is not available after 4 pm on some weekdays, but can live with it. Don't run batch jobs very often; other people in the group do that.

     

  • Comments on disks, inodes, file systems:   5 responses

    [...] The /usr/tmp/ disks are lately full enough that one has to take special measures with large datasets to avoid having a PE hang in an I/O cycle. Richard Gerber helped me with this problem, and knows what I am talking about.
    The NERSC implementation of AFS is not user friendly. I have given up on it, and find this annoying. I never figured out how (from a NERSC computer) to access AFS directories that belonged to a different user -- sharing code development directories with other users is one of the main reasons I use(d) AFS.

    [...] The limitation on the number of files allowed in the home area is very restrictive as compared to the limitation on memory used.

    Storage restrictions, in particular inode restrictions, seem somewhat more restrictive than is necessary.

    inode limits, especially in temp, seem weird. Also, what exactly do the previous two questions mean??

    In general, the inode limitations on temporary and home disk space are very restrictive. Having limits on the number of inodes used seems excessively limiting. Good configuration of the drive space shouldn't require this type of limitation. [...]

     

  • Comments on memory:   2 responses

    I like that it has 256 MB of memory .. the NERSC T3E has been the only machine I have access to with this kind of memory .. I need that for part of my work.

    [...] The machine needs more memory per PE; the current 256 MB severely limits what I can do on it. I'd need 0.5-1 GB/PE to get larger calculations done.

     

  • Other comments:   5 responses

    [...] Would like more information on single-node performance tuning (e.g. a quick reference on the best set of compile flags for good performance). The Cray documentation is either too verbose or does not give any information at all.

    mcurie has been up and down a lot lately, but mostly it is pretty reliable. It is much better than when new schedulers were being tested. [...]

    The queue stats are not very useful. A useful stat would be something that better informs which queue is best to run on "now," based on history, like a year-long plot of daily Mflops/wait time, perhaps on a per-queue basis or otherwise so we know if it's better to wait or try another queue.

    PVM calls cause my code to stop with errors very often. Resubmits without changes are successful.

    [...] Couldn't use many more than 16 PEs due to IO bottlenecks (so far, NERSC is working on this).[...]
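
One respondent above suggests publishing queue statistics that show which queue is best to submit to "now," based on recent history. A minimal sketch of that idea, in Python, is given below; the accounting log it reads (queue_history.csv, with queue, submit_time, and start_time columns in epoch seconds) is a hypothetical format for illustration, not an existing NERSC file.

      """Sketch of the per-queue "which queue is best right now" statistic a
      respondent asks for.  Assumes a hypothetical CSV log, queue_history.csv,
      with columns: queue, submit_time, start_time (epoch seconds)."""
      import csv
      import time
      from collections import defaultdict

      WINDOW_DAYS = 30  # only consider recent history

      def average_wait_by_queue(log_path="queue_history.csv"):
          cutoff = time.time() - WINDOW_DAYS * 86400
          waits = defaultdict(list)
          with open(log_path, newline="") as f:
              for row in csv.DictReader(f):
                  start = float(row["start_time"])
                  if start < cutoff:
                      continue  # ignore jobs older than the window
                  waits[row["queue"]].append(start - float(row["submit_time"]))
          # Average wait in hours per queue, shortest wait first.
          return sorted(((q, sum(w) / len(w) / 3600.0) for q, w in waits.items()),
                        key=lambda item: item[1])

      if __name__ == "__main__":
          for queue, hours in average_wait_by_queue():
              print("%-10s %6.1f h average wait" % (queue, hours))

Published regularly, such a per-queue summary would let users decide whether to wait or resubmit to a less loaded queue, as the respondent suggests.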

     

  • Don't use it (yet) / no specific comments:   7 responses

    N/A

    Ever so slowly I creep up on porting my major code from SMP architectures to MPP...

    I don't use it.

    Haven't used it yet

    I have not used it yet, but will start in FY00.

    (I haven't fully tested my code, but I plan to try it on a larger number of PEs than I have so far.)

    (This is based on the experience of my group, not personal experience.)


Did the C90 retirement go as smoothly as possible? - 42 responses

  • Yes:   31 responses

    I did not have any problems and found the website and consultants very helpful in storing and/or moving files around to accommodate the retirement.

    Yes I was well prepared for it thanks to your encouragement to get on to the J90s

    The transition from the C90 was handled fine.

    Yes, not bad.

    it was reasonably smooth for me

    Yes - I had moved essentially everything off the C90

    It was fine for me.

    No problems

    No problems for me.

    yes [22 responses]

     

  • Yes, but...:   2 responses

    Yes, it did. It would have been nice to access files right away after the transition rather than waiting until after Jan. 20.

    Essentially, yes. There were too many changes going on all at the same time back then, so it was overwhelming to us here.

     

  • No:   5 responses

    There was a lot of congestion on the J90 for a number of months following this. It didn't really seem like enough capacity was added to handle all the users coming from the C90.

    C90 retirement meant that we lost the very fast f77 compiler and DISSPLA. This has forced me to use the very slow F90 compiler on the J90s and much code conversion to retrofit NCAR graphics. Also, the J90s are slower than the C90 and could stand to have more interactive access.

    Wished the C90 was not shut down until the full PVP cluster was in place. The J90 interim had a very adverse effect on our productivity.

    The C90 was a great machine. "As smoothly as possible" should have meant keeping it on the floor.

    No.

     

  • Didn't use the C90 / hadn't used it recently:   4 responses

    I did not use C90.

    I haven't used the C90 for a long time. I found the turn-around time on the J90's to be superior to the C90.

    I never used it.

    It was perfectly smooth for me. I had nothing on the C90.


Does the current NERSC PVP cluster meet your vector computing needs? - 52 responses

  • No:   24 responses

    No. I have to wait a long time in batch queues

    NO! The batch queuing environment results in wait times (for me) that are HUGE compared to what I was used to on the C90!

    The PVP cluster has been very difficult to use. In FY99 there were many times that jobs I submitted took many days to start.

    Not really. Need higher performance.

    No-no-no!!! if the waiting time to run batch jobs is any indication. I hope the new queue system is a big improvement because the present situation is nearly intolerable.

    The jobs seem to be waiting so long in the queue that I am at present discouraged to run jobs. But I do need this facility for my computing.

    The system seems to be overloaded given the long wait time on the batch queue. More batch machines might help (of course, that requires money). I have heard of a proposal to turn one of the batch machines into an interactive machine. Since there aren't enough batch machines as it is, I am opposed to any reduction in the batch resources.

    No, it is way too slow. I work on many systems all at the same time, and the CRAYs are by far the slowest to get out my results. And I mean by one or more orders of magnitude. However, I am not exploiting the vector capabilities so perhaps this is not fair.

    Very slow-- lots of wall clock time.

    batch queues too slow
    need more capacity

    turnaround slow in recent months, but I recognize the emphasis must be given to MPP, in regard to resource acquisition.

    Based on the very long wait time in the batch queues (sometimes as much as a week or more) there seem to be far too few PVP resources. I also notice very large numbers of jobs submitted by one or a few users. I hope the batch system accounts for this and runs only a few of those jobs at a time; otherwise it is unfair, and possibly wasteful, to encourage submission of many jobs.

    turn-around for large memory jobs is too slow

    Batch is too slow.

    No. They are very slow.

    I do not use it much because my jobs stay in queue for a very long time (It might be related to the end of the year).

    Not exactly; need to wait a long time to obtain the result, and sometimes the compiler (CC) fails to overcome any overflow problems. Also, need the prompt to be customized such that the user can retrieve the previous commands using the arrow keys.

    NO. MORE processors needed. SV1 processors are NOT as fast as they are supposed to be. Account charging makes it even worse. I have no clue why I am charged 3 times more on the SV1 than I was on the J90SEs when my code runs only 2 times faster! The SV1 charging factor should be reduced to 1 or 1.2. What is the benefit to NERSC of charging users so generously? Total allocation awards for FY2000 on the SV1s are less than the total computing power of the cluster. Repositories going over their allocation should automatically borrow time from the DOE reserve, as the process takes time that users don't want to lose. It is a good strategy to urge users to use their allocations early in the year and take some away if they do not.

    No. So far, I have not managed to get better than about 180 Mflops out of a single SV1 CPU. This compares with 500 Mflops for the C90 and 100 Mflops for the J90. I really need a vector machine with the C90's capabilities.
    Multitasking on the J90/SV1 has problems with frequent crashes and sometimes wrong results. Tracking down these problems is almost impossible since there is no way of doing safe multiprocessing, i.e. multiprocessing which gives reproducible results down to the last bit!
    The disk system is poorly configured, with insufficient inodes. It needs to have its file systems rebuilt with of order 1 inode/5Kbytes which is a typical value for workstations/PC's. The old 15000 inode quota was totally inadequate: the new 5000 inode quota is totally ridiculous. Adding a few more disks to the cluster would help. After all, disks are dirt cheap.

    Not a true replacement for the C90. Switching one or two of the SV1's to interactive use would help. They'll be a small part of the total resource, soon enough.

    I do interactive software development and testing now and then on killeen. Sometimes interactive response time is slow in the afternoons. I think the SV1's are currently "batch only." How about giving interactive access to one of them?

    No. There should be one more box (SV1) available for interactive use.

    NO...NOT AT ALL. THIS IS VERY IMPORTANT TO ME AND MY GROUP. We need to have at least 1 of the three machines devoted to pure interactive use. I recommend 2 of 3 with interactive in the daytime and 3 of 3 batch during 11-5 PDT. The J90 is such a dog. Unless the PVP cluster gets significant interactive use it is likely we will not find it very useful and will increasingly turn to other computing resources.

     

    The interactive J90 is very slow. Batch access of late has been terrible.

     

  • Probably no / am waiting to see:   3 responses

    The configuration is OK, but the interactive machine is often too busy. A small persistent annoyance is the long wait to get job status info out of cqstatl. My main problem is my network connection - I have had consulting help a couple of times on this, and it has always ended up with you saying it's the fault of the CUNY network provider and them saying the delays are on the NERSC side.

    Since this is a fairly new cluster - since the SV1 is a new machine - will hold my fire.

    Have not used it. Can I run BASIS codes there? I doubt it.

     

  • Yes:   21 responses

    It is ok.

    My vector computing needs are very small.

    NA. I only run short interactive jobs.

    No complaints

    NO problems so far, but I haven't made any significant demands lately.

    yes [16 responses]

     

  • Yes, but... / yes and no:   3 responses

    Yes, but with the complaint given below that the interactive response of the J90se is often too slow.

    yes, but note that the compilation is done on a J90 while the software is run on an SV-1. Doesn't this slow the code since it is not compiled on the SV-1?

    Yes and no. While it is good to have the faster SV1s available, I don't use the machines as much as I could simply because interactive response on the J90 is very poor. Simple UNIX operations such as 'cd' and 'ls' can take many seconds to happen. Also, I often use my codes in an interactive manner, mixing significant computation with interactive steering. While it would be possible to work this way using batch (i.e. submitting little batch jobs between each bit of steering), the current state of the batch queues (overloaded) makes that completely impractical. (Note that I do use batch when interactive steering is not needed.)
    I strongly urge you to make the interactive machine an SV1. This would ease up the overload on the K machine and make it much easier for us to do our work. It is inevitable that anyone who makes use of the batch machines will also make use of the interactive machine, for testing and debugging purposes. Putting that on the slowest machine available impedes our work. Also, making the [...]

     

  • No answer:   1 response

    N/A


Comments on NERSC's Cray PVP Cluster - 26 responses

  • Slow throughput, slow processors, need more PVP cycles:   15 responses

    The batch response is often slow, presumably due to high demand. The 36 hour estimate above [for batch job wait time] averages 24 over the summer and 48 lately.

    Over the last 5 years I have been solely using the scalar/vector machines at NERSC. In the last 2-3 years, I shifted to the PVP cluster. In the early stages the stability of the machines was very questionable - memory and disk failures + communication between machines (loss of output). Basically I could not run large jobs (memory / disk / time) on this cluster. I have been reduced, at least for the past several years, to running simple short to medium runs on the PVP machines. The point of NERSC was to supply users with state-of-the-art computing resources and thus push the limits of these machines. The PVP cluster has failed to live up to the promise, both in terms of software (OS) and hardware failures. Maybe the SV1 will overcome these problems - see answer to the above [since the SV1 is a new machine - will hold my fire].

    [...] looooong wait in batch queues

    [...] The reconfiguring of the batch queues so that the "nice" value decides when the job starts, rather than its priority during execution was the stupidest idea to come out of NERSC since the decision to replace the C90 with the J90/SV1's. The ideal is a system where the nice value acts merely in the way it is supposed to under the Unix standard, and the user is given the capability to change the nice value up and down over the whole range allowed for the batch queues (say 5-19). This would be similar to the old control command, but without separate slow, medium and fast queues. As it stands, I sometimes have to wait almost a week to get a job to start. I am tempted to submit multiple copies of each job with different nice values.

    It's usable but most people I speak with feel it should be avoided unless absolutely necessary. Mostly because the performance is so poor. [...]

    So far, the batch wait times have been longer under the charging system that started Oct. 1 than they were last year under the old system.

    See above. [MORE processors needed. SV1 processors are NOT as fast as they are supposed to be. Account charging makes it even worse...]

    Currently, the wait time before jobs begin is completely unacceptable. I have to wait 2-3 days for a job to begin. The date today is Sept. 28, 1999. Hopefully, after Oct. 1 when the fiscal year begins anew, the wait time will be less.

    These computers are not very fast. I learned parallel computing techniques to get away from using them. I now use them only for diagnostic post-processors that require NCAR graphics, which is not supported on MCURIE.

    I have found the time waiting in the queue extremely long!!!!

    [...] but wait a long time to get my share of CPU time.

    There was a big backlog of jobs during the summer. I haven't run any jobs recently so am not aware of how things stand at present.

    My only complaint is that in the last few months the turnaround time to perform my simulations has gone way up. When I first started using NERSC computers (circa 1995), it took about a day to cycle a moderate simulation through the 5MW queue. Now, I have had jobs sitting on the queue for several days before they even begin.

    Get more SV1 machines if batch remains over-subscribed.

    I think that my answer to the previous question says most of it [low megaflops, ...] Don't try to argue that I am an isolated case since most of your users seem satisfied. Note that a number of your big users have moved off the PVP cluster entirely.

     

  • Need interactive SV1s, interactive too slow:   5 responses

    slow interactive response; [...]

    could use an interactive PVP machine

    I think you should seriously consider making one of the SV1's the interactive machine rather than the J90se. Many users were expecting that this would be the case before the SV1's arrived. They were disappointed then when there was no improvement in interactive use. There still seem to be many times when the J90se has very slow interactive response. Since this uses up valuable human time staring at a monitor waiting for something to happen it is worth trying to improve.

    [...] only use killeen, wish it was faster; how about making one of the faster machines available for interactive use?

    Maybe more interactive sessions on machines other than killeen.

     

  • Comments on software, configuration:   4 responses

    [...] I also don't like the non-standard aspects of the way they are set up. Most annoying is the automatic logout on killeen - that's just ridiculous and everyone knows how to circumvent it anyway.

    Make the F90 compiler as fast as F77 was on the C90. Inept f90 compiler; lacks graphics software that was on the C90.

    I think that my answer to the previous question says most of it [..., multitasking instabilities, disk system poorly configured, insufficient inodes].

    [...] Need more information about text editors.

     

  • They're good machines, satisfied:   4 responses

    My needs are simple. I only run short interactive post-processing on killeen

    Very stable and good software development/testing environment. [...]

    it's great

    Excellent accuracy, but wait long time [...]

     

  • Comments on queues:   3 responses

    The possibility of running batch jobs with 200 MW of RAM, 50 GB of disk, and a maximum CPU time of about 100-150 hours on the J cluster should be made available in the near future.

    The batch queue system functions but has features that really seem candidates for improvement. The queuing system seems unpredictable and somewhat unreliable. Jobs seem to be scheduled in random order. Of two similar jobs submitted simultaneously, one could run within 12 hours, skipping over several hundred other jobs in the queue; the other could wait many days. So, it does little good to submit jobs in priority order in order to do a systematic series of jobs where the next step depends on previous results. Some users seem to be able to submit 100 jobs and have them all run very quickly. Occasionally, jobs are "failed" because of a system problem and it is necessary to resubmit and wait again several days for the job to start after already having waited several days before the system problem was encountered. When a system problem is causing jobs to fail, often jobs continue to be submitted, all of which fail, causing the problem to multiply. Finally, it [...]

    As a beginner, I experienced frustration with exceeded CPU- and disk space limits (due to a core file of one job). Is there a possibility to automatically (or manually) keep alive or resubmit jobs that crash due to such external problems? The introduction of premium jobs helps greatly to detect errors in the batch script.

     

  • Haven't used them:   2 responses

    I have not used the Cray PVP cluster

    N/A


Comments on NERSC's HPSS Storage System - 38 responses

  • It's a good system, satisfied:   12 responses

    Very good job on maintaining such a monster storage system.

    HPSS is the best mass storage system I have used for the last 20 years.

    I have a lot of data stored. When I need it, it is there, so I am happy.

    mostly works well [...]

    Very nice and efficient system, a great way to store data. Can handle large files (8+GB) easily, which is extremely useful.

    Very good

    Without HPSS, I could not use NERSC computing facilities. It is a sine qua non for my work.

    I use the storage to HPSS thru my batch jobs since I use the stored files for restarting my programs. Performance is not a big thing for me but reliability of HPSS being up to store the data is. I am very satisfied with the reliability.

    very useful, for backing up files from my work computer system

    It's good but [...]

    It works for me.

    useful for our RNC group to store the mass data.

     

  • Would like a better user interface:   11 responses

    It would be nice to have better interface software to HPSS. I'm thinking of something that would let one see the whole (or large portions of) one's directory tree structure in a graphical display. Also, some type of user friendly search capability would be a big help.

    It would be nice if the system were less like using ftp, and more like using an NFS fileserver.

    [...] (Please fix the HSI wildcard char recog.! Seems sensitive to term emulation, backspace, mouse-paste etc.)

    It would be nice to have the masget etc. commands we used to have for the mass storage before. These commands might exist, and I am just not aware of them.

    [...] ftp interface a bit primitive

    Again, because I'm lazy, I'd like the interface to look as much like Unix as possible. I'd like to be able to use foreach loops and wildcards in the same way that I do in tcsh... it's pretty close, but still frustrating at times.

    The FTP interface is a bit clumsy for many problems, and it appears that the FTP restart command is not supported, so there is no way to restart a failed transfer, or just get a portion of a file.

    A more UNIX-like interface would be an improvement.

    It's good but needs some super-simple wrappers to allow one to store away (and retrieve) a whole directory in one fell swoop. I can write these for myself, but really they should be provided. For example, a command called "stash" that just tar's the files in the current directory (not below!) and stuffs that into a like-named directory in HPSS would be a great help. I wrote such a tool for CFS but dare not try using it now without careful attention to the changes needed.

    [...] HPSS needs to be configured to send an error flag to the shell when the transfer fails so that a batch job could be made to crash when a transfer failed. Note, cfs was able to do this.

    It should be easier to search the whole directory structure for files by name, date, etc.
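
One respondent above describes a "stash" wrapper that tars the files in the current directory (not below) and stores the archive in a like-named HPSS directory. A minimal sketch of such a wrapper, using HPSS's ftp interface, is shown below; the host name is a placeholder and the script is an illustration of the idea, not a NERSC-supported tool.

      """stash: sketch of the wrapper a respondent describes -- tar the regular
      files in the current directory (not below) and store the archive in a
      like-named HPSS directory via the ftp interface.  The host name is a
      placeholder; this is an illustration, not a supported NERSC utility."""
      import ftplib
      import getpass
      import io
      import os
      import tarfile

      HPSS_FTP_HOST = "hpss.example.gov"  # placeholder for the site's HPSS ftp endpoint

      def stash():
          dirname = os.path.basename(os.getcwd()) or "root"
          buf = io.BytesIO()
          with tarfile.open(fileobj=buf, mode="w") as tar:
              for name in sorted(os.listdir(".")):
                  if os.path.isfile(name):      # files in this directory only
                      tar.add(name)
          buf.seek(0)

          ftp = ftplib.FTP(HPSS_FTP_HOST)
          ftp.login(input("HPSS user: "), getpass.getpass("HPSS password: "))
          try:
              ftp.mkd(dirname)                  # create the like-named directory
          except ftplib.error_perm:
              pass                              # it probably already exists
          ftp.cwd(dirname)
          ftp.storbinary("STOR %s.tar" % dirname, buf)
          ftp.quit()

      if __name__ == "__main__":
          stash()

A matching "unstash" would retrieve and untar the archive, and either command could sit on top of pftp or hsi instead of plain ftp.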

     

  • Improve the transfer rate:   7 responses

    It's not as fast as NCAR's MSS. [...]

    Improve transfer rate

    I believe my upper bound on the transfer rate is due to the network, but I wish nevertheless that it could be higher. At Brookhaven I have (or used to have) 10 MB/sec.

    FTP is slow

    slow at transferring lots of large files; needs bigger pipe

    At times, the HPSS can be slow.

    Reliability and performance appear to be highly variable. At times both are good for long stretches. At other times, they appear to degrade and remain so for a considerable time.

     

  • Command response too slow:   4 responses

    It might be useful if the ls command did not take so long when deep in a directory structure. This is often the decision maker as to what files are needed.

    HPSS is too slow in responding to directory commands in directories with more than a few thousand files. [...]

    System response seems slow, but I have no expertise by which to judge it.

    I'm not sure how hpss works, but there seems to be a lengthy delay after entering some command like "dir" or "get file" before it begins execution. Probably this is unavoidable due to the size of the storage system.

     

  • Stability problems:   3 responses

    PFTP seems to die transferring large files more often than I would expect/hope.

    The system does not seem to be very stable.

    Reliability and performance appear to be highly variable. At times both are good for long stretches. At other times, they appear to degrade and remain so for a considerable time.

     

  • Password problems:   2 responses

    Right now it takes a long, long time to authenticate my password.

    sometimes it requires the password more than once

     

  • Don't know what it is, don't use:   4 responses

    Don't use it as much as I should as I was a cfs user and have not taken much time to learn how to use the new system.

    I am told that I have an HPSS storage system account. I haven't the faintest idea what that means.

    I have not used the NERSC's HPSS storage system

    I'm not sure if this is an HPSS issue, per se, but I don't understand why i-nodes are so limited on t3e machine. In our work, we have lots of small files, so this has been a problem.


Comments about NERSC's auxiliary servers - 10 responses

  • Like them:   2 responses

    They are powerful, useful, and well maintained.

    Good response times.

     

  • Newton needs an upgrade:   1 response

    Newton needs to be upgraded with more powerful processor(s)

     

  • Provide a data broadcasting web server:   1 response

    You might consider also providing a password protected web-server where users could broadcast data from their runs on the PVP and MPP machines directly to themselves and to their collaborators. It would be particularly useful if such a server could provide some software (e.g., using JAVA) which would gather one's data off of PVP or MPP platforms and then make graphical web-based displays of it. I recently saw a demonstration of such a collaborative graphical simulation environment developed at a Japanese supercomputer center (at JAERI) and it looked like a very useful capability.

     

  • Don't use them, no answer:   6 responses

    don't use these servers

    Not used

    No Answer

    Have not had the time or pressing need to use escher this past year.

    N/A

    I have not used them.

Software

Legend

Satisfaction   Value
Very Satisfied 7
Mostly Satisfied 6
Somewhat Satisfied 5
Neutral 4
Somewhat Dissatisfied 3
Mostly Dissatisfied 2
Very Dissatisfied 1

 

PVP Software

Topic   No. of Responses   Avg. Satisfaction
User environment 52 6.08
Fortran compilers 52 6.04
Programming libraries 33 5.94
General tools and utilities 35 5.89
Accounting tools 38 5.74
Local (NERSC) documentation 44 5.68
Software bug resolution 24 5.62
Application software 26 5.54
Performance & Debugging Tools 35 5.46
C/C++ Compilers 20 5.45
Vendor documentation 31 5.03

 

T3E Software

Topic   No. of Responses   Avg. Satisfaction
Programming libraries 50 6.42
Fortran compilers 66 6.20
User environment 68 6.15
C/C++ Compilers 40 5.97
General tools and utilities 44 5.91
Software bug resolution 34 5.91
Application software 33 5.85
Local (NERSC) documentation 52 5.83
Accounting tools 47 5.72
Vendor documentation 37 5.49
Performance & Debugging Tools 49 5.45

 

Comments about NERSC's software resources, suggested improvements, future needs - 24 responses

  •   8:     Comments on compilers
    •  
      • f90 slow, unforgiving, I/O is slow, data structures cryptic
      • f90 needs to optimize for the SV1
      • KCC slow and doesn't work with totalview
      • wants gcc
      • wants FORTRAN 77
    •  
  •   5:     Comments on graphics
    •  
      • NCAR on the T3E (2 requests)
      • more graphics on the T3E in general
      • wants DISSPLA, PGPLOT
      • wants graphics other than NCAR
      • uses own version of gnuplot
    •  
  •   5:     Comments on debuggers, analysis tools
    •  
      • Problems using totalview on the T3E, doesn't work with KCC.
      • Provide better performance profiling tool on the T3E.
      • Needs easier ways to debug (as easy as on a workstation).
      • Needs a non-graphics based debugging utility.
    •  
  •   4:     Comments on utilities, editors, shells, user environment
    •  
      • doesn't like modules
      • GNU tools and emacs need to be kept up-to-date
      • system doesn't recognize user's terminal type
      • wants easier editors
    •  
  •   3:     Comments on setcub
    •  
      • it's cryptic
      • wants job level accounting
      • doesn't have updated account info
    •  
  •   5:     Other comments
    •  
      • prompt fix of MPI bug
      • need good man pages
      • as a center, should help with software performance improvements
    •  

  • Comments on compilers:   8 responses

    [...] The f90 compiler [on mcurie] is very slow.

    I am somewhat dissatisfied with the KCC compiler on the T3E for 2 reasons.
    1. It takes a long time to compile, and a longer time to link. This makes the debugging cycle extremely painful. I do as little code development as possible on the T3E.
    2. The totalview debugger does not work with KCC. This makes debugging especially difficult.

    It would be nice if the latest gcc were ported to the T3E, but that may be wishful thinking.

    It is very hard to figure out exactly how data structures are laid out. I eventually had to rewrite all my IO in C because the F90 IO is too slow and the layout of F90 structures by the compiler is not helpful. [T3E user]

    The currently installed Cray C++ compiler has a runtime environment that is so archaic and so at odds with the ANSI/ISO C++ standard as to make it exceedingly difficult to use with even the most modest degree of code portability. That might be a Cray problem rather than a NERSC problem, but it is certainly a problem. The availability of KAI's "KCC" C++ compiler ameliorates this somewhat. [...]

    It would be great if NERSC had a standard FORTRAN 77 compiler.

    The FORTRAN compiler on NERSC is very "non-forgiving." I compile my programs on my NT-station (ABSOFT) with no problem, however, upon transferring it to NERSC it will not compile at all. Sometimes, it looks like I am better off rewriting the program on NERSC than to debug it. [...] [PVP user]

    The SV1 needs fortran compilers better able to optimize for its particular architecture. Documentation on how to optimise C90 codes to perform well on the SV1 is needed. The multitasking (OpenMP) software and fortran compilers need a safe mode, i.e. one where the code is guaranteed to produce reproducible results (to the last bit).

     

  • Comments on graphics:   5 responses

    [...] Please load NCAR on the T3E. [...]

    I would like to see graphics packages on the T3E, especially NCAR.

    As mentioned above, having DISSPLA would help. I also think you should have PGPLOT available. NCAR Graphics is a pile of crap.

    I would like additional graphics libraries. I do not like using NCAR graphics because it is difficult to use and the contour plots look bad. When I move codes from other computers or run old codes from NERSC, I always have to first strip off the graphics calls to DISSPLA, PLPLOT, TV80, Quickdraw, etc.

    I use gnuplot, installed in my bin directory.

     

  • Comments on debuggers, analysis tools:   5 responses

    [...] Totalview is usually good [on mcurie], but barfs on complicated (but standard) Fortran-90 constructs, requiring re-writes for some debugging purposes. Totalview does not handle multiple PE debugging with total success very often. The problems can be worked around, but a new version would be nice. Managing memory, checking memory usage (for dynamically allocated arrays), and finding memory errors in my codes is not very convenient. If there are tools available to help with this, please consider getting them.

    [...] The totalview debugger does not work with KCC. This makes debugging especially difficult.

    Better performance profile tool on T3E.

    [...] Also the debugging of FORTRAN does not come close to that of ABSOFT (NT-version). [PVP user]

    As a remote user, I miss having a non-graphics based debugging utility.

     

  • Comments on utilities, editors, shells, user environment:   4 responses

    I am not a fan of the Modules package.....Just thought I'd let you know there is at least one of us.

    [...] Some of the software could stand to be updated more frequently, particularly GNU tools. Emacs, for example, is a pretty fundamental tool for any software developer, yet the T3E version is rather ancient. That said, the system administrators are quite responsive to specific requests for installations and upgrades. I've been holding back on some requests because I don't want to make a pest of myself.
    More shells are needed! "zsh" and "tcsh" should be installed, and it should be possible to set "zsh", "tcsh", or "bash" as one's login shell.

    I have to specify my terminal type at each login.

    Use a more friendly environment and easy editors, such as pico.

     

  • Comments on setcub:   3 responses

    Accounting tools are rather cryptic.

    A quick way to account for how much repo time was used by each job.

    I find that setcub doesn't have updated account information. Twice I've been fooled into thinking I have a larger account balance than I really do, and then I unexpectedly used up our time. Now I'm keeping track on my own (or trying to) of how much time I've used.
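
Until setcub shows up-to-date balances, a user can do what this respondent describes and keep a running tally locally. A minimal sketch is below; the log format (a ~/.repo_usage file with one "jobid hours" pair per line) and the allocation figure are assumptions for illustration, not NERSC conventions.

      """Sketch of do-it-yourself repo accounting: append the CPU hours charged
      by each job to a local log and compare the total against the allocation.
      The log format and the allocation value are assumptions, not NERSC's."""
      import os

      USAGE_LOG = os.path.expanduser("~/.repo_usage")
      ALLOCATION_HOURS = 5000.0  # placeholder: your repository's allocation

      def record_job(jobid, hours):
          """Append one job's charge to the local log."""
          with open(USAGE_LOG, "a") as log:
              log.write("%s %f\n" % (jobid, hours))

      def balance():
          """Allocation minus everything recorded so far, in CPU hours."""
          used = 0.0
          if os.path.exists(USAGE_LOG):
              with open(USAGE_LOG) as log:
                  for line in log:
                      used += float(line.split()[1])
          return ALLOCATION_HOURS - used

      if __name__ == "__main__":
          record_job("example.123", 12.5)
          print("Estimated balance: %.1f CPU hours" % balance())

Such a tally is only an estimate, of course; the authoritative numbers remain with the accounting system.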

     

  • Other comments:   5 responses

    [...] An MPI bug I reported was fixed promptly. Thank you.

    The best one among all I have used.

    I need to learn the basics of how to use a piece of software very quickly and get on with my work. If I can't do this for whatever reason, I probably don't use it unless I absolutely have to. I get the bulk of my info from the on-line man pages, not the web documentation.

    Others in my group have beat through the configuration aspects, so that my code essentially works now out of the box. I gather however that there was a bit of pain getting it there....a long winded way of saying no opinion, I s'pose.

    I think in the next decade, demands for an order of magnitude improvement in the speed of hardware and performance of software, etc. will be mandatory for any supercomputing facility like NERSC. NERSC should start planning NOW how it will tackle this mandate. Dr. Horst Simon is most knowledgeable in this matter and I am sure he will cross the bridge when he is close to it. Dr. Simon, at Boeing (25 years ago) and at NERSC, has done a most magnificent job! He deserves our sincerest thanks for making NERSC what it is today. NERSC is undoubtedly a state-of-the-art supercomputing facility second to none in the western world. Congratulations to Dr. Horst Simon.

Training

Legend

Satisfaction   Value   Usefulness   Value
Very Satisfied 7 Very Useful 3
Mostly Satisfied 6 Somewhat Useful 2
Somewhat Satisfied 5 Not Useful 1
Neutral 4  
Somewhat Dissatisfied 3
Mostly Dissatisfied 2
Very Dissatisfied 1

 

Overall Satisfaction and Usefulness

Topic   No. who have used   Satisfaction (No. of Responses, Avg.)   Usefulness (No. of Responses, Avg.)
Classes 18 21 6.19 21 2.48
Online web tutorials 36 40 6.17 34 2.68
Class slides on web 19 22 5.95 22 2.45
Teleconference lectures 7 9 5.78 13 1.77
Streaming AV on web 8 10 5.70 14 1.93

 

 

Comments about training. In what area should we focus our attention? - 11 responses

 

  •   5:    Provide concise web documentation and online tutorials

    Easily accessible, concise information.

    Regarding online information, I prefer "cookbook recipes" rather than discourses.

    Brief documents for every command and option, plus examples.

    Training is excellent for getting started. Please keep the number of mouse clicks small if one just wants to refresh one's memory concerning a command or important number (->"I have read this once, but where is this page...")

    I've been happy with the on-line tutorials and NERSC specific documentation. It should be a high priority to maintain them. It also might be useful to provide PDF files with tables of contents and page numbers of the longer tutorials because they are more useful as printed reference materials.

     

  •   2:    Provide web-based training

    I have not used any of these classes in the recent past. I occasionally download some slides if I need to look into some particular issue.

    web training .. I usually cannot go to classes .. I live in Ohio.

     

  •   1:    Provide classroom training

    Off-site training held at major centers around the nation (ORNL,BNL,etc.) would make it easier to learn about the most efficient ways to utilize the NERSC resources.

     

  •   1:    Need SP training

    We'll need training soon concerning the IBM-SP

     

  •   1:    Continue as before

    Continue as before.

     

  •   1:    Don't use

    Haven't had a chance to use

Comments about NERSC

What does NERSC do well? - 91 responses

  • 40:     Consulting and support staff, good people
  • 38:     Provides cycles, access to high performance computing, good hardware or software support, reliable service
  • 12:     Everything
  •   8:     T3E support
  •   7:     Web documentation, training, online tutorials
  •   3:     Allocations process
  •   2:     HPSS support
  •   1:     Support for interactive computing
  •   1:     Support for remote users

What should NERSC do differently? - 53 responses

  • 16:     Comments on computers, cycles, downs
    •  
      • provide better turnaround, better processors (7 responses)
      • PVP improvements (6 responses)
      • T3E improvements (4 responses)
      • fewer downs (2 responses)
      • more forethought in planning changes; listen to users more (2 responses)
      • stay abreast of latest technology (1 response)
      • HPSS more dependable and efficient (1 response)
      • enhance symbolic math server (1 response)
    •  
  • 11:     Comments on queues
    •  
      • longer T3E queues (4 responses)
      • unspecified T3E queue configuration change (1 response)
      • queuing system not straightforward (3 responses)
      • need short turnaround queues on the PVPs (1 response)
      • limit number of jobs one user can run in batch on the PVPs (1 response)
      • go back to nice scheduling for PVP jobs (1 response)
    •  
  •   9:     Comments on software
    •  
      • improve setcub (3 responses)
      • don't change software so often (2 responses)
      • didn't like that request for PGPLOT wasn't approved
      • didn't like loss of DISSPLA and other engineering software
      • provide portability for PC-based programs
      • if vendors don't have a solution then provide NERSC solutions
    •  
  •   8:     Comments on web documentation, training:
    •  
      • better orientation for new users (2 responses)
      • better web training for those who can't attend onsite training
      • Classes are offered more often in the West; East users disadvantaged
      • easier/quicker access to the "information I need"
      • more info on performance tuning
      • write better web code (want responses to web forms to persist thru window resizing, following links etc.)
      • stop modifying website
    •  
  •   6:     Comments on interactive computing
    •  
      • increase interactive resources (4 responses)
      • don't increase interactive resources (1 response)
      • write software to enforce the T3E interactive policy
    •  
  •   4:     Comments on the allocations process
  •   3:     Other comments
  •   2:     Make the survey shorter
  •   2:     Don't Know, No changes

What additional services would you like NERSC to provide? - 29 responses

  • 11:     Software requests
  •   8:     Web services, dissemination of information
  •   4:     More computers, cycles, inodes
  •   3:     Visualization requests
  •   3:     Don't know / no specific requests

How does NERSC compare to other centers you have used? - 55 responses

  • 26:     NERSC is best, better than
  • 12:     Favorable evaluation, only use NERSC
  • 10:     NERSC is the same as; mixed evaluation
    •  
      • Less flexible than Caltech's CACR.
      • Provide a T3E 24hr queue as Pittsburgh does.
      • I can often get bigger jobs through faster at EMSL.
      • The amount of CPU time that we can get there is much smaller than at several other centers.
      • Slower turnaround time for batch jobs but otherwise comparable to SDSC, NCSA, PSC, NASA Goddard.
    •  
  •   4:     NERSC is less good
    •  
      • NERSC's allocation structure and queues less flexible than Ohio Supercomputer Center.
      • Smaller centers have better throughput and no accounting.
      • NPACI sends out email when a system goes down unexpectedly & gives estimate of uptime.
      • NCSA has more flexibility in time limit and PE limit for batch jobs.
    •  
  •   3:     No comparison provided

What does NERSC do well? - 91 responses

  • Consulting and support staff, good people:   40 responses

    Qualified people and good support hotline.

    I have found the consulting services to be quite responsive, friendly, and helpful. At times they went beyond the scope of my request which resulted in making my job easier.

    People to people issues are done very well.

    I appreciate the help from the consulting staff. I have used NERSC computers for years, and rarely asked for help until this year, when I made the transition to parallel computing. I have been happy with the assistance I received.

    [...] The interaction with your staff is a particular pleasure.

    I am impressed with the rapid response that I have seen in all areas: hardware problems, software questions, and new account creation. I have not run into many problems, and my experience so far leads me to believe that if I ever have a problem it will be attended to very quickly.

    Very good consultant knowledge when I have a question.

    As a very new user, I have not yet interacted with many NERSC employees. But those who I have worked with have all been wonderful. You have a great set of people up there.

    NERSC consulting and accounting services are good. In general, I get good and rapid response to my inquiries.

    As PI I am not as up on things that happen as I would like. I have a tendency to forget passwords, but NERSC staff have always been helpful

    good support staff

    friendliness and promptness of user services. Francesca is really good. [...]

    I appreciate very much the consulting staff: they are very reliable and competent

    Human interaction is great.

    [...] The ability to quickly contact consultants, support, and operations staff is nice. They're always friendly and helpful.

    [...] Good consulting

    Outstanding phone support!

    Service and consulting

    User support. [...]

    [...] - mostly knowledgeable consultant staff

    consulting services

    [...] Personally, I am very satisfied with the software availability and Consulting services, [...]

    [...] and friendly consulting personnel.

    Gets someone on consultant questions immediately.

    We've been very happy with the support (we're not heavy users though). It seems we wait a long time in queue for batch jobs on the J90s (longer than last year). Also, the machines were down for a while over the summer.

    [...] and response of staff is great

    The management of accounts [...] is very professional.

    User support - efficiency

    Tech. support was very good

    NERSC is strong in user support, consulting [...]

    [...] and help with parallelization.

    The consultants are great.

    [...] good consulting
    [Visualization] "What's New" page is dated

    Interact with people. Technically sound. Keep emphasis on the end user.

    performance of [...] and support excellent.

    consulting, keeping us informed of changes

    Providing valuable [...] technical assistance when needed.

    Consulting staff and user support are excellent.
    Majority of the time NERSC keeps the user abreast of status and availability of machines.

    [...] The help I have received from the consultants is excellent.

    [...] The consulting services are also excellent.

     

  • Provides cycles, access to high performance computing, good hardware or software support, reliable service:   38 responses

    NERSC provides a well-run supercomputer environment that is critically important to my research in nuclear physics.

    Provides reliable machines, which are well-maintained and have scheduling policies that allow for careful performance and scaling studies.

    NERSC is a good facility. It provides a very great service to users who need the supercomputers to do their research.

    Provides a stable, user-friendly, interactive environment for code development and testing on both MP machines and vector machines.

    Provide an extremely powerful computing resource in a very reliable, usable way.

    Support and maintain one of the best high performance computer facilities in the world.

    NERSC provides access to good machines and supports them well. I look forward to using the IBM SP.

    Provide state-of-the-art high speed computing

    The computers and software are very reliable.

    NERSC offers the best computational resources that I have ever found. [...]

    Provide a very nice facility for us to run lattice QCD code.

    For the 16 years that I have been using NERSC, it has provided the most computational power accessible to me by far.

    Deliver a lot of cycles to users on state-of-the-art machines.

    The computers and related resources run very smoothly. [...] It is the best supercomputer facility I have used.

    system maintenance, notification of scheduled down time, availability of many software packages, libraries, tools

    Provides resources for large computing problems which are difficult to implement on a workstation.

    NERSC provides vital computing cycles to my (Magnetic Fusion) community in as advanced and reliable a computing environment as its resources allow.

    Overall, NERSC is doing pretty good in providing valuable computing resources to science community. Personally, I am very satisfied with the software availability [...]

    Provide reliable high performance CPU cycles.
    [...] Take care of software and hardware maintenance in a professional manner. [...]

    NERSC maintains the usability of the computing resources very well. I have had no major problems related to system crashes, corrupt output, etc. I am pleased with this aspect.

    Fast and efficient, but long waits in queues.

    [...] Computer maintenance.

    NERSC has the good computers, and they are up most of the time.

     

    Machine reliability and availability [...]

    uptime [...]

    The management of accounts and hardware is very professional.

    provides satisfactory and reliable access to the computers with appropriate software over the ethernet.

    For me, it gives me the ability to use large computer resources in a relatively comfortable environment.

    [...] and delivering reliable service to a variety of users

    Deliver cycles [...]

    provide high performance computing

    provides lots of CPU cycles, [...]

    Delivers a lot of cycles [...]

    performance of computing resources and [...] excellent.

    Providing valuable computer resources and [...] when needed.

    provides high performance computing

    Provide high performance computing cycles to high priority projects.

    For me, it provides large-scale computing facilities in a reasonable way.

     

  • Everything:   12 responses

    As far as I have seen, everything.

    Seems to be everything to me.

    Quite a lot --- basically an excellent center.

    NERSC is a supercomputing facility par excellence. As I have checked "Very Satisfied" (I should have checked "excellent" but no such option exists in this questionnaire), NERSC does excellently in all phases of its activity, from consulting to guidance, help, and advice so as to get the most important work done, viz. supercomputing. My sincerest thanks to all of you at NERSC for giving me the opportunity to use NERSC facilities for my research.

    almost everything

    plenty

    I think you do everything pretty well. [...]

    just about everything

    I think it is doing very well.

    The general organization is good.

    I twice filled out this form in its entirety, only to have Netscape crash each time. Sorry for the sparsity of comments. Summary: NERSC does most everything very well. I would like the mass storage (HPSS/migration) to be more reliable and to have some inspection capability (if I've waited 30 minutes for a file to de-migrate, is it because there's a tape stuck, a daemon not running, or just in a long line).

    I am overall satisfied with NERSC.

     

  • Good T3E support:   8 responses

    queue system on T3E

    For the T3E, NERSC provides good cycle time and turnaround.

    NERSC does a good job of providing massively parallel hardware and the software necessary to use it to a large scientific community.

    The machine is up, it works and an effort is made to make it useful. [T3E user]

    Throughput on the T3E is fantastic.

    - only a few crashes on the T3E
    - high uptime % [...]

    Very stable environment from parallel point of view. Good maintenance [...]

    NERSC has traditionally provided an environment aimed at the big production user. There have been some moves away from this recently, but nothing that can't be corrected. Management of the T3E has been good.

     

  • Good web documentation, training, online tutorials:   7 responses

    [...] as well as web based training (online manuals and references).

    Documentation on the web.

    [...] and user documentation to get started

    I find the information on the website quite useful. Although I did not get around to taking part in the training yet (I've been using NERSC only for a few months), the things offered by NERSC (web tutorials, teleconference lectures) seem to be quite useful.

    [...] The documentation/training while not perfect is certainly better than anywhere else I have used.

    [...] Provide training resources on timely topics
    Good web site [...]

    [...] Online documentation of compilers, software, tutorials.

     

  • Good allocations process:   3 responses

    [...] I was pleased that I could get additional time in July-August when I had used up my time at the end of the 3rd quarter. It is difficult to live within the allocation structure.

    [...] - generous additional CPU "donation" for busy users year around

    Allocation process using the web.

     

  • Good HPSS support:   2 responses

    I mainly use the HPSS file storage system and find that the system works very well for my "modest" needs.

    [...] and excellent storage capacity.

     

  • Good support for interactive computing:   1 response

    I find that working interactively goes very well. [...]

     

  • Good support for remote users:   1 response

    Support remote users. Other large supercomputing facilities that I've worked with have been very poor at this.


What should NERSC do differently? - 53 responses

  • Comments on computers, disks, inodes, cycles, downs:   16 responses
    • provide better turnaround, better processors (7 responses)
    • PVP improvements (6 responses)
    • T3E improvements (4 responses)
    • fewer downs (2 responses)
    • more forethought in planning changes; listen to users more (2 responses)
    • stay abreast of latest technology (1 response)
    • HPSS more dependable and efficient (1 response)
    • enhance symbolic math server (1 response)

    It would be nice if there were fewer users, so turn-around time could be faster.

    More processors. Less down time.

    less downtime, more forethought in planning changes (e.g., CFS and the A-machine were removed at the same time. That took place during a critical time when everybody was scrambling to finish other things before the holidays. Add to that the changes in the allocation process, etc. As a NERSC Account manager, I ended up spending 20% of my time on NERSC business. This was excessive.)

    NERSC should try listening to the users more. Among my co-workers, there is rampant dissatisfaction with the CRAYs' interactive and batch performance, but nothing seems to change over many years. [...] [PVP user]

    Improve the PVP disk system to give at least an order of magnitude more inodes. Some increase in the total disk space should be provided.

    A better handle on the PVP machines - the J90's were appalling - put more pressure on the hardware vendor for quicker solutions.

    Up to the demise of the C90, I was very happy with NERSC. Forcing us onto the J90, without the PVP cluster in place, actually caused us to look for other computing resources, in-house and at the San Diego Supercomputing Center. If you get the PVP cluster returned to significant interactive use, we would be much happier. [...]

    I primarily run Gaussian 98 jobs. Lately, the jobs just sit in the database and never run. I can run jobs faster on my Pentium III PC. [...] [PVP user]

    Improve I/O on T3E

    Some changes in the hardware and queue configurations. Jobs are a bit slow getting through the queues and the inode limits are too restrictive. [T3E user]

    More memory per PE on the T3E.
    Get a distributed memory/disk machine (the SP addresses that)

    I was pleasantly surprised with the success of porting our 3-D plasma fluid turbulence code to the T3E (Bill Dorland of the Univ. of Maryland did this for us). However, as you know, typical performance on cache-reliant RISC-based MPP's is only about 5-10% of the theoretical MFLOP/CPU, while on the Cray C-90 it was common to get 50% of the theoretical MFLOP/CPU rate. I am concerned that the bandwidth to local memory, and the communications bandwidth to remote memory, are not keeping up with CPU speed, and that the promise of true multi-TeraFLOP performance has not been realized. Perhaps there is nothing NERSC could have done about it, but it is a little disappointing that the IBM SP-2 that was purchased doesn't seem to be very different in capability from the older T3E. We have to communicate to our sponsors and to hardware vendors that many important scientific codes require high bandwidth, and that sticking lots of PC's on an ethernet might be okay for some problems but is completely inadequate for other important problems. It is worth paying the extra money needed for specialty high-bandwidth parallel supercomputers at centralized high-performance computing centers like NERSC. I would prefer a 500-processor MPP with a very high speed per processor to an MPP with 20000 slow processors.

    To a user, it seems that the management of hardware could be handled in a way that impacts users less severely. The scheduled Tuesday and Thursday afternoon shutdowns are interruptive and frustrating, particularly because mcurie usually goes down at least once a week anyway. However, as a system administrator for a workstation cluster, I recognize that maintenance is much more difficult than is usually apparent to users.

    NERSC needs to stay abreast of the latest hardware technology available.

    Make HPSS more dependable and efficient.

    enhance symbolic math server

     

  • Comments on queues:   11 responses
    • longer T3E queues (4 responses)
    • unspecified T3E queue configuration change (1 response)
    • queuing system not straightforward (3 responses)
    • need short turnaround queues on the PVPs (1 response)
    • limit number of jobs one user can run in batch on the PVPs (1 response)
    • go back to nice scheduling for PVP jobs (1 response)

    On the T3E I would like to be able to run jobs longer than 4 hours with fewer than 64 processors

    A 4-hour limit for batch runs and a 0.5-hour limit for interactive runs are too short. [T3E user]

    Queues in the T3E system should be more flexible in time limit restrictions.

    more long batch jobs [T3E user]

    Some changes in the hardware and queue configurations. Jobs are a bit slow getting through the queues [...] [T3E user]

    I had some trouble with queue structures ... when the job just did not fit into anything, i.e., was just a little too long (10 min. or so). Don't be so strict with queue structures!

    Queuing system is not straightforward. The qstat command does not display job IDs.

    make the batch queue process more transparent (i.e., understandable) to the general user. This thing about finding your batch job via the web is cumbersome & the result ('pending') pretty much useless.

    An improvement in the batch queue, I think, would be very important; jobs that wait for 4-7 days in the queue might be needed on short notice. [PVP user]

    [...] I would suggest limiting the number of jobs that each user can put in the batch queue. [PVP user]

    [...] Go back to an execution-based priority system for PVP batch jobs. [...]

     

  • Comments on web documentation, training:   8 responses
    • better orientation for new users (2 responses)
    • better web training for those who can't attend onsite training
    • classes are offered more often in the West; East Coast users are disadvantaged
    • easier/quicker access to the "information I need"
    • more info on performance tuning
    • write better web code (responses to web forms should persist through window resizing, following links, etc.)
    • stop modifying website

    Basic orientation for new users needs to be improved. I still have no idea what HPSS is or does, or what it means that I have an "HPSS account".

    I would like a few pages written as a welcome, together with some start-up info, instead of only the email. [...]

    Better docs/web training for people who cannot regularly attend NERSC talks.

    People in the east do not have much access to the workshops and training sessions, which are mostly offered in the west.

    I would like easier/quicker access to the "information I need." Sometimes I don't know what I need, oftentimes you do not either. It's a difficult job trying to read the minds of your users.

    Provide better web links to performance tuning.

    write better web code -- for instance, I had filled out more than half of this form when I clicked on a link and came back to find all my answers gone -- hence the minimal response

    Stop modifying the NERSC website (most importantly its structure) continuously. There was a period in the last 12 months when every single time I went to the NERSC website, it looked different. Even if it is getting "better" every time you touch it, it is very annoying to figure out where the important piece is located now, within the new and different structure. Updating it too often defeats the purpose of easy and fast access to info.

     

  • Comments on software:   9 responses
    • improve setcub (3 responses)
    • don't change software so often (2 responses)
    • didn't like that request for PGPLOT wasn't approved
    • didn't like loss of DISSPLA and other engineering software
    • provide portability for PC-based programs
    • if vendors don't have a solution then provide NERSC solutions

    The accounting system mystifies me.

    I like the change to default yearly accounting, the monthly accounting was a headache!

    As a NERSC PI, I would appreciate it if you could provide access to better accounting information. For example, I find a number of things I don't like about setcub. It would be helpful if it would explicitly print out in the "user time remaining" and "charge time" columns what units are being used (i.e., PE-minutes, PE-hours) and over what time period the time remaining units apply (e.g., month's or year's allocation). This is confusing to users and tends to make them ignore setcub. Another capability which would be helpful to account managers would be if you would provide access to time history information by account and user; i.e., usage on a daily basis through the whole year which I could download into a spreadsheet for plotting and to extrapolate at what rate we're using up our time and check which users are burning it up most rapidly and if usage patterns have changed over time. Obviously, I could run setcub every day and collect this information myself, but that's a pain.

    [...] Avoid changes in the compiler software if at all possible. Improve notification when it happens

    I/we typically run FORTRAN codes at NERSC and port the data back to the office for plotting. We like to see minimal changes in compilers, graphics routines, software, etc... Once a code is working, we do not like to debug it or have to troubleshoot it.

    [...] Also, I once requested that PGPLOT be installed on the CRAYs. This is free software, super-easy to install, a terrific package for simple graphics, and generally a wonderful library. But I got beaten down and told it was junk and would not be installed. In my opinion, NCAR Graphics is junk, yet you support it and promote it at the expense of simpler, better alternatives like PGPLOT. In this case, it really doesn't matter since I was able to install it myself in my own area in five minutes. But I'd rather use official, system-installed software whenever possible.

    I deplore the erosion of the engineering software, and particularly the loss of DISSPLA graphics nine months ago.

    work on portability between PC-based programs and NERSC

    [...] In the 'good old days' NERSC remedied deficiencies in vendor-supplied software by writing software of their own. This should be revived.

     

  • Comments on interactive computing:   6 responses
    • increase interactive resources (4 responses)
    • don't increase interactive resources (1 response)
    • write software to enforce the T3E interactive policy

    Interactive computing with 16-32 processors can be very helpful for some applications, even for production. I much prefer such computing to batch submissions, but I realize batch is a necessity. So do what you can for interactive computing.

    My work requires that I run interactively. I may execute a program thirty to forty times a day, with each run requiring information from the previous one. With the availability of only one interactive machine (Killeen), NERSC seems to be strongly discouraging interactive use. There are several batch machines and those are always the ones that get the improvements and upgrades first. If it is NERSC's policy to minimize interactive computing, then that policy and reasons for it should be clearly stated.

    NERSC should try listening to the users more. Among my co-workers, there is rampant dissatisfaction with the CRAYs' interactive and batch performance, but nothing seems to change over many years. [...] [PVP user]

    I would very much appreciate it if you could extend the interactive use hours. We would like to have at least some processors at night, between 10 PM and 1 AM.

    I think NERSC should heavily penalize people who use the facility interactively. There has been some recent suggestion that it be made more interactive and I strongly oppose that move.

    Enforce the interactive usage policy on mcurie automatically (i.e., with software).

     

  • Comments on the allocations process:   4 responses

    Allocations

    simplify application process

    [...] At the same time, an explanation of how my proposal has been judged and why this amount of computing time has been reserved would be useful for my next proposals. I can't say much about the system itself, since I've just received the login and now I'm optimizing my codes. So, basically I haven't used the system yet.

    Make the ERCAP request form somewhat shorter and the Web interface more bulletproof. Do not penalize T3E users so heavily for not using their time in the first two quarters of the year (some researchers have a lot more personal time/student assistants to accomplish more in the summer quarter than the previous 3 quarters).

     

  • Other comments:   3 responses

    Have consultants respond immediately rather than leave a message and have the consultant call back

    Since I'm sited at LBL, I'd like there to be more seminars such as the ones CASC at LLNL hosts. That is, of course, peripheral to NERSC's charter, but maybe the center can help.

    This is difficult to say from the user's point of view, as there are, I am sure, many constraints on the NERSC staff and personnel who ensure the smooth operation of NERSC. I know it is only a personal matter, but the discontinuance of toll-free telephone access from Canada has made it harder for me to contact NERSC by phone when I need to. However, if I am alone in this position, I would go along with this policy as I have been doing for over a year now.

     

  • Shorter survey:   2 responses

    Your survey should be shortened if you want many people to respond.

    Get the "importance" part of the survey to default to "Somewhat Important" :-). Make the survey shorter...I've nothing to say about most of these subjects. Maybe a checkbox at each section, "Section Does Not Apply" (e.g. training, have taken none from NERSC).

     

  • Don't Know, No changes:   2 responses

    Too early for me to tell.

    None.


What additional services would you like NERSC to provide? - 29 responses

  • Software requests:   11 responses

    Better critical evaluation of state-of-the-art math/scientific/computational/visualization software, i.e., put together a sort of "consumer reports" evaluation of the available packages to assist users in making choices.

    software support
    make users better aware of support services in general-- perhaps a list and brief description could be made available to new users

    The ability to use rsh, rlogin, rcp, ssh, would greatly improve the access to the NERSC systems.

    MOLPRO, Gaussian 98 [PVP user]

    software porting services [...]

    I would like NERSC to implement ssh-2 (secure shell - 2) as soon as possible. We are running it on some of our local workstations. Of course, ssh-1 is better than no ssh at all.

    I would like to have a non-graphics debugging tool.

    I hope Fortran77 is still supported

    The PVP CRAYs really need to have GNU enscript installed. This is software I can hardly live without. Also, GCC is needed but I hear there is a problem with that. In general, you should have as much of GNU as you can get working.

    T3E NCAR.

    Allow me to put whatever software that I need onto the public space

     

  • Web services, training, dissemination of information:   8 responses

    User message board

    Web server for users?

    see below [email notification of downs]

    Perhaps the ability to check how jobs are running from a web page, rather than having to log in. Would be especially useful if trying to connect from an unreliable service.

    Get me the "information I need to do my work on your machines effectively and efficiently," quickly and concisely.

    [...] meeting with scientist users

    how about running classes on selected topics off-site (e.g., here at MIT)? You guys did a wonderful job when we converted from CTSS to unix.

    Many production codes spend most of the CPU time in a small section of the code. NERSC experts could help users optimize these critical parts.

     

  • More computers, cycles, inodes:   5 responses

    A 2048-node T3E-1200 ...

    more computer time

    More inodes in T3E tmp storage space :-)

    More PVP capacity

    Significant Beowulf cluster (bigger than the current one, say 100-400 CPUs)

     

  • Visualization requests:   3 responses

    More visualization tools

    A few basic graphics programs: IDL, gnuplot, ...

    Develop additional in-house (not dependent on LBL's graphics group) 3D/movie generation expertise and provide good Web pages (and/or links to tutorials by other graphics groups at other institutions) in these areas. Present pages are not of uniformly high quality and/or seem to be frozen in time. For example, the "What's new" page is dated 5 June 1998 (as of 27 Sept. 1999). Also, there appears to be an emphasis on the Khoros/XPRISM tools as opposed to (possibly) more mainstream ones such as IBM's Data Explorer, which has become more freely available (at least on workstations).

     

  • Don't know / no specific requests:   3 responses

    Don't know at this time.

    Service is not the issue. I just want to run my jobs.

    Keep updating machines and software (very general).


How does NERSC compare to other centers you have used? - 54 responses

  • NERSC is best, better than: 26 responses

    Excellent.
    I use [name deleted].
    NERSC is far superior...

    NERSC provides more computation cycles than NCSA or SDSC.

    The best I have used. Others I have used include: (1) Argonne, (2) Pittsburgh, and (3) San Diego.

    NERSC is easier to get in and out of than LANL/ACL, which is the other center I use. Most of my experience is at NERSC, so I can't say much more than that.

    Much better machines and support (ORNL CCS, SNL to a lesser extent).

    Big advantage of NERSC: no limit on the number of jobs you can submit at any one time. It doesn't have no-queue-flood requirements.
    North Carolina Supercomputing Center.

    Much better than the [name deleted].

    NERSC is the best I know of.

    I'm using SDSC's SP and UGA's supercomputer (O2000, IBM SP). NERSC has by far the best turnaround time of all of these, the SDSC hardware is better for my application but waiting time is too long.

    Better.

    Much better than anywhere else (SDSC, LANL, NCEP (NOAA))

    the best service and computers

    The best one

    I stopped using [name deleted] quite some time ago because your allocation process is much easier and your staff is so helpful.

    By far the best.
    LLNL, ORNL, LANL ACL, EPA NESC, etc.

    THIS IS THE BEST. I have used Canadian supercomputing facilities at U of Toronto, and Eagan Supercomputing Center.

    It is more responsive to users needs (compared to [name deleted] for instance)

    NERSC appears to be less bureaucratic than [name deleted], and I hope it will stay this way.

    Much, much better in almost every way than [name deleted].

    NERSC has tended to provide better production facilities than other centres I have used, and has been more stable (2 of the centres I can compare it with, PSC and SCRI, are now defunct).

    better than any other used [names deleted]

    NERSC ranks at the top. Have used ULCC and MCC (UK Centres).

    very well compared to [name deleted], where I work

    The computational power simply does not compare to any other I've used.
    [NERSC offers the best computational resources that I have ever found.]

    I used the IBM SP at the Cornell Theory Center from 1995-1997. The batch queueing system at NERSC is a lot better than what they had there.

    Compared to Maui SP and the Berkeley NOW, the NERSC machines are much more reliable.

     

  • Favorable evaluation, only use NERSC: 12 responses

    Have not used any other resource than NERSC, as I could not find a better place to run my projects.

    In spite of some of my answers, I think that overall NERSC does an outstanding job and meets the bulk of my requirements. It is one of the very few (only?) computer centers that provides a useful interactive environment. Some of your consultants are absolutely outstanding, others unfortunately are not. The other centers I use are primarily NAS, GFDL, and LLNL.

    Very well.

    Pretty well.

    I've predominately used NERSC so have nothing else to compare to.

    Have not used other centers.

    Pretty good

    It's been so long since I used anyone else...

    Excellent center

    NERSC does very well.

    You are very good!

    I have not used other centers.

     

  • NERSC is the same as; mixed evaluation: 10 responses

    Comparable.
    The only other centers I have used are:
    Lawrence Livermore National Laboratory Computing Center (LLNL LC).
    NASA Goddard

    Better than ORNL's CCS. On par with MHPCC (Maui). Less flexible than Caltech's CACR (but also much larger).

    About as good or slightly better than NCAR.

    NERSC is comparable to the other supercomputing center I am using (San Diego Supercomputing Center). SDSC has a better vector machine (t90), but that will not matter for long; SDSC will phase out the t90 soon.

    NERSC compares well with other centers. I would only appreciate a 24hr queue on T3E as it exists at Pittsburgh. The other computer centers I have used are: Pittsburgh, NCSA and National Laboratory.

    Compares well with other computer centers (EMSL, MCS here at Argonne, the HPC centre at Auckland University, and CSC in Espoo, Finland), with maybe the only disadvantage being that I can often get bigger jobs through faster at EMSL - this is not a big issue for me, however.

    NERSC is probably the best center in terms of user support, meaning consulting and account services. Probably as a result of this, there are a large number of users competing for time at NERSC, so the amount of CPU time that we can get there is much smaller than at several other centers.
    I am comparing NERSC to the NSF centers at San Diego and Illinois, to the former DOE center at ORNL, and to the Pittsburgh Supercomputer Center.

    San Diego Supercomputer Center - about the same

    NERSC and the LLNL centers compare very favorably in quality of services, performance, and attitude.

    Slower turnaround time for batch jobs but otherwise comparable to SDSC, NCSA, the late lamented PSC, and NASA Goddard (NCCS). As a remote user with my own codes I don't find any very significant difference, aside from turnaround and the connectivity problem above.

     

  • NERSC is less good: 4 responses

    OSC: like their flexible allocation structure (allocation committee meets every 2 months)
    more flexible as far as queues are concerned

    I can only compare with centers consisting of fewer individuals, and that would not be fair. Last year I had access to a 16 processor SGI, very lightly loaded. MPI was part of the operating system and codes were extremely easy to run. And there was no accounting!

    NPACI (not really a center but a collection of centers) sends out e-mail when a system goes down unexpectedly and also projects when the affected system will be available again. Perhaps not everyone would like to be on such an e-mail list, but I would appreciate that kind of information about the T3E.

    Compared to NERSC, the NCSA supercomputer system (in Urbana-Champaign) has much more flexibility in time limit and PE limit for batch jobs.

     

  • No comparison provided: 3 responses

    Emerson Center (Emory University). This group's PC cluster.

    SDSC, NAVO

    North Carolina Supercomputing Center

All Satisfaction Questions and FY 1998 to FY 1999 Changes

Legend

Satisfaction   Value
Very Satisfied 7
Mostly Satisfied 6
Somewhat Satisfied 5
Neutral 4
Somewhat Dissatisfied 3
Mostly Dissatisfied 2
Very Dissatisfied 1
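
Each response maps to the numeric value above; a topic's average in the tables below is presumably just the mean of those values over its N responses:

\[ \bar{s} = \frac{1}{N}\sum_{i=1}^{N} v_i, \qquad v_i \in \{1,\dots,7\} \]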

 

How Satisfied are you?

Topic   No. of Responses   Average Satisfaction
Consulting: Timely response 134 6.64
Consulting overall 154 6.58
Consulting: Quality of technical advice 134 6.52
HPSS: Reliability 81 6.46
Consulting: Followup 106 6.43
Account support 136 6.39
HPSS: Uptime 81 6.33
PVP: Uptime 93 6.29
Consulting: Response to special requests 103 6.28
Account support: Ease of obtaining account info 124 6.26
T3E: Uptime 93 6.26
Overall satisfaction with NERSC 176 6.25
Web: Accuracy 94 6.22
Training: classes (attendees) 21 6.19
Software: Programming Libraries 50 6.18
Training: Online Tutorials 40 6.17
Network connectivity 143 6.17
T3E: Overall 64 6.17
Account support: Ease of modifying account info 124 6.15
Software: User Environment 68 6.12
Software: Fortran Compilers 68 6.12
HPSS: Overall 69 6.12
Web: Getting Started Guide 73 6.08
Mass storage overall 120 6.06
HPSS: User interface 72 6.06
Web: Timeliness 90 5.99
Web: T3E Section 72 5.99
Available software 129 5.99
Available computing hardware 138 5.96
Training: Online class slides 19 5.95
Web: NERSC-specific info 72 5.93
Software: General tools and utilities 44 5.90
HPSS: Performance 73 5.90
Software maintenance and configuration 114 5.89
HPCF web site overall 134 5.87
Allocations process 118 5.87
Web: File Storage Section 66 5.82
Software: C/C++ Compilers 40 5.81
Training: Teleconference lectures 9 5.78
Software: Bug resolution 34 5.77
Software: Local documentation 52 5.76
Web: Programming Info 86 5.74
Software: Accounting tools 47 5.73
Hardware management and configuration 121 5.71
Web: Ease of navigation 117 5.70
Training: Streaming AV 10 5.70
Software: Application software 33 5.69
Web: Searching 83 5.69
Web: PVP Section 55 5.69
HPSS: Response Time 75 5.68
T3E: Ability to run interactively 85 5.60
PVP: Disk configuration and I/O performance 93 5.56
T3E: Batch queue structure 81 5.47
Software documentation 117 5.46
Software: Performance and Debugging Tools 49 5.45
Visualization Server: Escher 11 5.45
Software: Vendor Documentation 37 5.26
Math Server: Newton 12 5.25
T3E: Disk configuration and I/O performance 71 5.23
Web-based training 69 5.19
PVP: Ability to run interactively 68 5.18
PVP: Overall 58 5.05
T3E: Batch wait time 71 5.04
PVP: Batch queue structure 60 5.03
Training classes (all responses) 59 4.85
Visualization services 57 4.37
PVP: Batch wait time 62 3.95

 

 

FY 1998 to FY 1999 Changes

The following is a comparison for questions common to the FY98 and FY99 user surveys.
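
The Change column is the FY99 average minus the FY98 average; for the allocations process, for example:

\[ \Delta = 5.87 - 4.60 = +1.27 \]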

Topic   FY99 Satisfaction   FY98 Satisfaction   Change
Allocations process 5.87 4.60 +1.27
HPSS: User Interface 6.06 4.88 +1.18
HPSS: Overall 6.12 5.09 +1.03
T3E: Overall 6.17 5.20 +0.97
T3E: Batch Queue Structure 5.47 4.51 +0.96
HPSS: Reliability 6.46 5.51 +0.95
HPSS: Uptime 6.33 5.39 +0.94
Mass Storage overall 6.06 5.13 +0.93
Consulting: Followup 6.43 5.57 +0.86
Overall Satisfaction 6.25 5.43 +0.82
Account Support 6.39 5.67 +0.72
Web: File Storage Section 5.82 5.10 +0.72
Consulting overall 6.58 5.87 +0.71
T3E: Uptime 6.26 5.58 +0.68
Consulting: Quality of technical advice 6.52 5.88 +0.64
T3E: Batch Wait Time 5.04 4.43 +0.61
PVP: Uptime 6.29 5.69 +0.60
Web: Getting Started Guide 6.08 5.54 +0.54
Web: T3E Section 5.99 5.48 +0.51
Network Connectivity 6.17 5.70 +0.47
Web: PVP Section 5.69 5.69 +0.35
HPSS: Performance 5.90 5.46 +0.44
HPSS: Response Time 5.68 5.29 +0.39
HPCF Website 5.87 5.54 +0.33
PVP: Batch Queue Structure 5.03 4.85 +0.18
PVP: Overall 5.05 4.92 +0.13
PVP: Batch Wait Time 3.95 4.79 -0.84
