NERSC: Powering Scientific Discovery Since 1974

2000 User Survey Results

User Comments

What does NERSC do well? 58 responses

34 user support
29 stable, well managed production environment; good hardware
9 everything / nothing singled out
7 documentation
6 software, tools
6 storage environment
5 well managed migrations and upgrades
3 allocations process
3 announcements to users

What should NERSC do differently? 49 responses

18 provide more cycles, improve turnaround time
7 inodes/storage improvements
6 software enhancements
4 manage systems differently
4 provide different hardware
3 accounting/allocations improvements
3 batch improvements
3 better documentation
2 networking/bandwidth improvements

How does NERSC compare to other centers you have used? 49 responses

25 NERSC is the best / better than
9 NERSC is good / only use NERSC
7 NERSC is the same as / mixed response
6 NERSC is less good



What does NERSC do well? 58 responses

User support

I have been very satisfied with most NERSC services and competencies. Great response time and quality answers to my questions/requests. Also, I find the web page well done.

[...]Very responsive consulting staff that makes the user feel that his problem, and its solution, is important to NERSC. [...]

consulting is awesome

People to people contact is excellent. General attitude from Horst, to Kramer, to Verdier, to account support and consulting is outstanding with respect to dealing with the users and their issues.

listen to users and effect changes in service

[...] Gives users good access to consultants.

Responds to users needs promptly and effectively.

The consultants are especially helpful.

Consulting, web, availability of machines.

Once I established a good rapport with the consultants, they were helpful. At first it was difficult to get straight answers.

Customer support is always timely and accurate.

[...] 2. User services (i.e. consulting and account support) are excellent.

Consulting service is excellent!

Good response from the consultants and sysadmins.

The consultant and account services are superb.

Consulting is good but very little else.

Information to users, maintenance.

Consulting team is very excellent.

Stable, well managed production environment; good hardware

Provides stable production environment with excellent support services and first rate hardware for scientific computation.

Provide state-of-the-art computation, maximum speed, processors, capacity

Keep everything working smoothly. Excellent computer personnel.

Good management and accounting of a few big machines; good effort at maintaining WWW pages, help with standard questions, etc.

Keep allowing interactive time. Consultants helpful at times. Pretty good access to hardware. Pretty good tools.

SP. Batch turnaround time. I/O space. Mass storage

Provide good hardware, respond well to users.

Overall availability of resources and waiting times are quite predictable and constant through the year.

Typically tries to provide an adequate production environment. [...]

1. Provide world-class supercomputing resources. [...]

Provide access to high-performance computers with a variety of different architectures.

Provide excellent computing resources with high reliability and ease of use. [...] My work requires interactive use and the conversion of SEYMOUR was extremely helpful and welcomed. However ... see next box...

Good provision of flops and support.

NERSC is doing a very good job of giving us a very good computing environment. I am very satisfied overall.

Documentation; announcements

[...] The announcement managing and web-support is very professional.

Warn us of scheduled downtime.

I'm very impressed with the friendliness and helpfulness of the consulting staff. I also find the e-mails about down-times helpful.

Nersc provides good support services, documentation, etc.

High availability of machines. Good online documentation. Responsive support team.

Software, tools

NERSC is a very well-managed Supercomputer Center. It provides excellent Fortran compilers and run-time environment on the Crays. NERSC is a most valuable resource for my research in nuclear structure theory.

Support of software. Have knowledgable staff to assist researchers with computer difficulties - both hardware and software aspects.

Maintenance of hardware and software is excellent. [...]

NERSC maintains the most up-to-date hardware and software, which are very user-friendly.

Storage environment

Manages large simulations and data. The oodles of scratch space on mcurie and gseaborg help me process large amounts of data in one go.

Storage, connectivity.

ease of use of mass storage, access time to stored data

Executes the jobs, stores and transfers the data

Well managed migrations and upgrades

In general you are to be congratulated on the transition from 1980's supercomputing to Y2K multiprocessing. Machines are generally up and the storage facilities seem good (from my perspective as a fairly light user).

NERSC has been the most stable supercomputer center in the country particularly with the migration from the T3E to the IBM SP

keeps machines up. Upgrades facility in a timely fashion.

everything

yeah, NERSC does well

Most everything. A first-class operation.

Almost every aspect. Hardware, software, and consulting. I am really happy to see the ongoing efforts to keep improving the current system.

It is the best among all I have used. I give it five stars.

Makes supercomputing easy.

Provide timely computational service up to expectations.

NERSC undoubtedly is the best supercomputing facility that I have used over the years. NERSC has become available to academics all over the world, with resources which are unthinkable in any academic environment. Credit must go to a major extent to Dr. Horst Simon and his associate Directors for this achievement and success! Ms. Francesca Verdier and her staff, especially those mentioned above in Consulting Services, have done an excellent job of helping users utilize the unmatchable resources at NERSC for solving major scientific and engineering problems. I sincerely express my thanks to all at NERSC for making it a great pleasure for me to use the facilities at NERSC from a remote site [name omitted]. I look forward to using the NERSC facilities in FY2001.

NERSC does a very good job.

yes

Allocations process

Consulting was very good. Allocation service was very helpful.

User support and response. System allocation of resources.

The web-based allocation procedure is very convenient.

Other

I hope to solve my problems with MPI so my code can compile and run well on both SP and T3E.

access, consultants, visualization help

Training, consulting, web pages, making bleeding edge hardware available.

I think that the support is very good. The new IBM SP was a very welcome addition.



What should NERSC do differently? 49 responses

Provide more cycles, improve turnaround time

NERSC is doing a wonderful job. My great need is just for more resources (more time on the machines, more user resources, and greater storage speed).

Wait time in large job batch queues is long, which costs DOE programs a lot of money. Need to increase throughput.

Much more work needs to be done on providing greater resources to the community.

[...] Also NERSC really needs many more processors given the demand.

Give sole access of all machines to me.

Find a way to shorten batch queues (!) [...]

DOE should put new machines in NERSC rather than other places if DOE wants new machines.

The batch queue system on the PVP cluster does not fit my needs. I get most throughput on the slowest machine.

more PVP machines

Shorten the time it takes a job to run on the PVP.

provide more pvp cycles, particularly this year

Add more capacity for the heavy work loads.

It takes too long to run a big memory job on the PVP.

Improve on its vector computing. [...]

Inodes/storage improvements

The user file limits are unrealistically low on the IBM and Cray systems. NERSC seems unfriendly to users with large data/memory requirements.

[...] Improve on its disk resources, especially its inode resources.

I do not like the "inode" business in user file quota. I think it is outdated now and should be removed.

My only complaint (this is the same from year to year) is the I-node limit.

Tailor to individual requests. [What I meant was that the allocation of file space (and other restrictions) should consider individual needs. Please do not take this as criticism. I am doing well within the allocated space.]

Software enhancements

Improve the global working environment for remote sites by installing AFS on the IBM SP. This way, for example, the same CVS repository can be used by several users at different sites.

[...] Some support for heterogeneous computing and more support for code steering on the T3E and SP.

Build computer systems comparable to those available at Los Alamos and LLNL. Put more effort into using software tools such as GLOBUS as a model for remote computing using NERSC resources.

I am very satisfied with NERSC. If I could ask you for one favor it would be to make the Nedit text editor available on the Crays (open source software).

More support for Mathematica is appreciated.

Keep investing in adding quality software in chemistry (and other) applications. For example, Jaguar...

Manage systems differently

NERSC should reduce the number of interactive machines. It should encourage batch submission and give more "credit" for use of more processors.

Interactivity and wait times for batch jobs at times can get very poor on your systems. Instead of aiming for maximum utilization of CPU cycles, you ought to find ways to maintain better "headroom" between the resources you have available and the user demand.

You need to make new systems available on a more rapid time scale once they are installed. NERSC seems to take a much longer time to make new systems available to users than other computer centers I've used (with no apparent improvement in functionality resulting from this slow acceptance process).

NERSC sometimes makes bad choices in how it sets up its systems. For example, the way Fortran-90 modules have to be handled on the IBM SP is very time-inefficient for users who are developing codes. Apparently, from my experience with other IBM SPs, the awkward way NERSC chose to do this is completely unnecessary, since others have not chosen to use this configuration.

Because NERSC makes supercomputing easy, it is somewhat a victim of its own success. By this I mean that truly large computational tasks suffer because resources are used by smaller tasks. Climate runs often take many hundreds of hours to execute, even in highly parallel configurations. The successful climate modeling centers (none are domestic...) all are able to access dedicated resources. It is difficult for the US climate modeling community to compete with European and Japanese groups if it must further compete with other fields for needed computational resources. As this situation is controlled by forces external to NERSC, I don't see much relief soon.

Managerial types claim that NERSC is a "capability" center. From my limited experience this is not really so, however. Looking at gseaborg, e.g., there's a 200-node job that's been in the regular queue waiting for a 6-hour slot for a week and a half, but the machine is full of 1-, 3-, 4-, 8-, and 16-node jobs. None of the jobs can run longer than 6 hours, and they all presumably have a tight limit on the number of files they can generate as output.

Provide different hardware

save money: dump the Crays (or put them into a museum), get an O2000-class box as an alternative.

Provide more middle of the road computing resources

The PVP machines are at the end of their line, it seems. NERSC should help users learn how to migrate away from these machines in the coming year.

It would be great to have alternative platforms, such as a large scale linux cluster.

Accounting/allocations improvements

Information about remaining budget should be attached to each output file.

I'm not very happy with the new GETNIM versus SETCUB command. Also, having to go to the NERSC web page to consult the remaining detailed allocation is clearly *not* progress. I do not understand why this change happened. PS- Sorry not to have more time to fill in the rest of the survey in detail.

Improve the allocation process to reflect likely results of hardware changes, such as the conversion of SEYMOUR to interactive. It costs six (6) times as much to compute interactively on SEYMOUR, with only a factor of 2 or so improvement in execution time. My 2001 allocation was based on KILLEEN usage for most of FY2000 ... hence I used only 30-40% of my 2000 allocation. As a result my 2001 allocation was reduced to 1/3 of the 2000 allocation. Now in FY2001 I cannot use SEYMOUR at all, as it will deplete my allocation in a few months. I could make very good use of SEYMOUR to expedite my work, but that is now not an option. Hence, the availability of SEYMOUR will not help me AT ALL in FY2001 ... just because of the shortcomings of the allocation process.

Batch improvements

I used NERSC only for computation and for me the time available and the time for the job to stay in a queue are the most important. And it was OK. The way to monitor a job can be improved.

Should consider increasing the debug queue time limit on IBM and T3E.

I would like to run still longer jobs, but this is in conflict with the point above, I suppose ... [Overall availability of resources and waiting times are quite predictable and constant through the year.]

Better documentation

I would like a more friendly interface. [What I mean is that when I encounter some problem programming in FORTRAN or shell script, I cannot find help quickly online. For example, online help for "nertug", "ja", "$If DEF, BACK", "#QSUB", or some FORTRAN functions such as "SSUM(...)" cannot be found.]

describe access to and usefulness of HPSS a little better

maybe you should consider an FAQ on questions to consultants in areas such as programming, UNIX utilities, etc which come up repeatedly or would be useful for active users to be aware of.

Networking/bandwidth improvements

Improve connectivity from outside labs (e.g., Los Alamos National Lab) that also have firewalls.

Greatly improve the ease and speed of very large dataset transfers between NERSC and other labs. Security, finger pointing, and multi point of contact are impeding research.

Other

Don't bring the machines down for maintenance at 4 pm. Re-do section 6 on this Web form.

Use a survey with many fewer and less vague and overlapping questions.

Nothing comes to mind.

Keep up the excellent job you all are doing at NERSC even after some machines get transferred to a building in Oakland.

The consultant help with specific machines is sometimes weak. We had lots of problems porting a well tested code that ran on another IBM SP3 with a somewhat different architecture.

The recent stability problems with mcurie have gone on long enough to make me wonder whether something is wrong somewhere. I have no idea if the problem is mostly with NERSC or elsewhere, but I am unpleasantly surprised every day or two by another crash. Yuck.

Return to the way things were at Livermore.

Time difference is somewhat of an issue - relocate to the east coast :-)



How does NERSC compare to other centers you have used? 49 responses

NERSC is the best / better than

NERSC is generally superior to all others I have used. Hence, I don't care to use others much anymore.

NERSC is probably the best center I have been using. It has a very good assistance service and resources.

It is superior in its consulting, account support and training to [site name omitted].

Much better support than provided at UCSB, where an Origin 2000 is available, but there is basically no support for using it. Machines are changing so rapidly that it is impossible for the researcher to keep up with the changes without the sort of help NERSC can provide. You are performing a vital service to the research community.

Much better than any other centers. [4 site names omitted]

As I said, it is the best; it is better than others I have ever used, such as computer centers at [3 site names omitted].

Much better!

Unmatchable!

I would say NERSC's IBM SP runs better than SDSC's BH SP.

Much better [site name omitted]

NERSC is the best of the centres I have used.

NIC in Juelich, Germany. NERSC allows more flexible handling of the budget, and the budget generally enables more calculations.

Better than SDSC/NPACI in terms of system (IBM) reliability and throughput. Most of our effective computing is done at NERSC. My NAS account is too recent to compare NAS to NERSC in a fair manner.

Better than BNL, CERN (Switzerland), JINR (Russia), IHEP (Russia)

The hardware (file systems especially) on gseaborg seems to be much more reliable than that on the IBM SP bluehorizon at NPACI/SDSC. The gpfs nodes at NPACI are suddenly unavailable on occasion.

Much better than [site name omitted]

I have used [site name omitted] in the past (about 6 years ago). You are doing much better. Keep up the good work.

In my opinion, NERSC does better than most other centers that I used, such as [2 site names omitted].

Although I haven't really used some other centers, except that I had an account at NCAR 5 years ago, I should say NERSC is doing the best.

Compared to [site name omitted], how could you not be superb? Relative to the LLNL NERSC of the early 90's, things are far better overall.

NERSC is better. [2 site names omitted]

Compared to: [2 site names omitted] NERSC has the BEST consultants. Their web pages are easily superior.

Top of the list. [site name omitted]

The allocation procedure at NERSC is more convenient than the one at NCSC.

Best center. Easier to access than LANL or LLNL. More responsive than NCAR. Keep up the good work.

NERSC is good / only use NERSC

I only use NERSC, so I cannot make a comparison.

NERSC is pretty good compared to other centers.

Great. (SDSC, German Max Planck Center)

Very well. CCS at ORNL, Maui.

Very, very good! The other center I have used: Livermore computing

It is very good. I use LLNL and NAS also, but spend a good deal of my time on NERSC machines. Keep up the good work!

Very well. Maui, SDSC.

Very well.

Hi

NERSC is the same as / mixed response

Principal other experience is with the LLNL center, which is also excellent. In distant past, used several others which offered mostly cycles but little infrastructure.

Compared to NCAR, machines at NERSC go down more regularly and jobs are killed more often. Compared to LANL, NERSC is more stable.

Apart from NERSC, I have used NCSA and Argonne National Lab machines. NERSC is comparable in service to these centers.

san diego supercomputer center. nersc is better except as indicated in one instance above [maximum batch job running times should be 24 hours. it is 18 hours at san diego supercomputer center]

NERSC is competitive with other major facilities, such as ERDC (DoD).

I only have LLNL LCC to compare to (and lots of the LLNL NERSC staff who stayed at LLNL). Both are outstanding.

Roughly the same as San Diego, NASA Ames and Goddard.

NERSC is less good

DoD systems seem more oriented to the large user. I have used systems at NAVO and ERDC (Cray and IBM).

I have never developed a code on a NERSC machine, since this is quite inconvenient due to long wait times. In that respect, the experience I had last year at a different large center (Forschungszentrum Juelich, Germany) was quite different and more pleasant.

I prefer modi4 (at NCSA) as it has a longer wallclock limit, and still has quite reasonable queue waits.

I used also SDSC SP2 and Blue Horizon and University of Texas Cray SV1 and SP2. Blue Horizon was the best (just the best hardware).

Compared to DoD's CWES site, the limits on output files and the queue systems are just too much geared toward the little guy. Running at NERSC requires far too much babysitting of my runs: resubmitting, running high priority, tarring up output, etc.

I also use the eagle machine at the DOE High Performance Computing Research Center at Oak Ridge National Laboratory. The interactivity and turnaround time for batch jobs is much better there than on GSeaborg. Also, I like the fact that they have configured their system so that one doesn't have to go through the unusual contortions with Fortran-90 modules (i.e., putting them off in special disk areas which are not permanent) that NERSC requires of us. NERSC should learn how they have set up their IBM SP in this respect and do similar things.

Other

LANL open supercomputing, Argonne, local clusters

ACL at Los Alamos National Lab and QCDSP at Columbia University.