Comments about NERSC
What does NERSC do well? 69 responses
|Responses||Category|
|35||Stable, well managed production environment / good hardware|
What should NERSC do differently? 50 responses
|Responses||Category|
|10||Provide more cycles / improve turnaround time|
|8||Provide different hardware / keep PVP cluster|
|5||Provide more training|
|4||File storage improvements|
|3||Manage systems differently|
|3||Better interactive services|
|3||Longer batch queues|
|3||No need for change|
|2||Accounting / allocations improvements|
|2||Authentication / password improvements|
How does NERSC compare to other centers you have used? 49 responses
|Responses||Category|
|28||NERSC is the best / better than|
|8||NERSC is the same as / mixed response|
|5||NERSC is good / only use NERSC|
|4||NERSC is less good|
|4||No comparison made|
What does NERSC do well? 67 responses
- Stable, well managed production environment / good hardware:
The availability of the hardware is highly predictable and appears to be managed in an outstanding way.
It provides a reliable, professionally managed computing resource, which is greatly appreciated. I have had few problems with machines not working properly, which, given the flaky nature of parallel machines, is very impressive.
Processors provided are reliable. Job scheduling is fair.
provides stable computing environment. [...] uninterrupted service.
NERSC makes it possible for our group to do simulations on a scale that would otherwise be unaffordable.
provide stable and reliable hardware and software for supercomputing.
Provide both capacity and capability. [...]
Delivering capacity and capability. [...]
Having a focal point for DOE computational research in the university community is useful. Of course, having the raw resources is very useful. Having them available to non-US citizens is special among the DOE large-scale facilities.
I have only good experience with running series of jobs on the vector machines. The system is very reliable, fortran compilers cause no problems. Very good place to produce results. In the future I would also like to use NERSC T3E machine.
NERSC has moved aggressively to provide high end computing services to users. I hope that it will continue to do so, as I expect user needs to continue to grow rapidly. I know mine will. NERSC runs its machines very well, [...]
Lots of Computers and Disk
Provide me with access to Crays.
Keep the machines going [...]
Keep good access to the fastest boxes available. Being a little bit undersubscribed is good. It means real work can be done, not just the background stuff.
Provides high-performance computing hardware, [...]
Changes are occurring rapidly in all areas. It sometimes is difficult to keep up with everything and still do research! But certainly the advances in the last few years have been substantial. With a few years of stability, much can be achieved.
You operate faster computers than exist at GA
[...] provides state-of-the-art hardware
Running machines. ;)
lots of cpu cycles; [...]
providing tremendous computing power
Supercomputer support! But useless to me! [PDSF user]
NERSC has significant computing and data-storage. [...]
Provides excellent computational resources
[...] and efficient use of resources.
usability, uptime, [...]
Provides a reliable production service.
Have an ideal configuration for "seaborg" and [...]
Provides reliable and very usable high performance computing power.
Building and maintenance of high performance computers.
Massive computing [...]
Good system support
Keep the computers running smoothly. [...]
- User support:
Consulting Service. Excellent!
Consulting by telephone and e-mail. Listens to users, and tries to set up systems to satisfy users and not some managerial idea of how we should compute
[...] I have also found the user support staff to be very helpful and responsive (I particularly appreciate how rapidly they respond to both e-mails and phone calls.)
[...] This combined with strong consulting has been a tremendous resource for my research.
Provides computing resources in a manner that makes it easy for the user. NERSC is well run and makes the effort of putting the users first, in stark contrast to many other computer centers.
[...] and respond to our needs promptly and in a fully satisfactory manner.
They are always there to help you out.
[...] is responsive to users, and provides outstanding consulting services.
[...] consultant help
[...] excellent support. [...]
Your consultants and [...] is very, very good. [...]
Provides excellent [...] and support
Consulting and [...]
support is very well done. [...]
Accounting operations and support is superb.
[...] user support
account support is exceptional, with prompt responses.
[...] Timely and effective help [...]
[...] Providing first rate consulting services. Providing first rate training.
Supports a large community
Interacting with and helping its users.
Good [...] and consulting. NERSC responds well to users needs.
Consulting and account services are very helpful whenever I have a problem.
[...] Generally responsive and proactive to user needs.
Consulting support is excellent and offered in a timely manner.
Interface with users is excellent. Francesca and her people do a great job. Account support is also great. Senior management (Simon and Kramer) are excellent.
- Documentation / announcements:
[...] Very good web site with well arranged information
The web page, hpcf.nersc.gov, is well structured and complete. Also, information about scheduled down times is reliable and useful.
Training and information on web pages are excellent.
Web management, especially NIM is great advantage for the users.
[...] I find the web documentation very good too. Incidentally, these are the only resources I use, so I cannot comment on anything else.
[...] good documentation, [...]
the NERSC web pages [...]
[...] Their web site is especially a great place.
[...] your web site is informative.
[...] Maintain a very good and useful web site.
It is great that NERSC does everything related to supercomputing well! [...]
As for me, NERSC does everything well.
Most parts, including hardware and software capabilities, online documentation and information.
NERSC is great at support, web pages, and keeping well equipped machines running efficiently, with good, responsive software. When machines are up, it is quite pleasant to work on NERSC facilities.
PDSF is as close to a perfectly run facility as I have ever experienced. Clear strategic planning, highly competent technical support, intelligent management. Don't change it.
Pretty much everything I need. In particular, compilers (f90), running interactive and in batch. Storage works very well for me and seems very reliable.
Nearly every thing.
Most things are fine.
NERSC is a state-of-the-art, second-to-none supercomputing facility available to thousands of scientists all over the world. To run such a complex operation smoothly and efficiently requires at least dedication, intelligence and adeptness in public relations management. Horst Simon and his colleagues at NERSC deserve heartfelt thanks from thousands of scientists all over the world who have not only used the supercomputing facilities at NERSC but enjoyed using this most user-friendly supercomputing facility. Need any more be said? I feel very assured that the management of NERSC is in the best of hands, and I look forward to continuing my research with great joy at NERSC in FY 2002.
- Software, tools:
[...] Keep all the software up to date. [...]
[...] excellent software maintenance
[...] and choice of libraries is very, very good. [...]
[...] reasonable software support
[...] generally good software support
[...] software, [...]
- Storage environment:
The mass store system is fast and easy to use. [...]
[...] Your archiving system is pretty good, but somewhat slow.
reliable file storage on hpss, ease of access to stored data
[...] allows mass storage
This is too long. Anyway, I have only recently gotten an account and haven't used it much yet. I will have more opinions once I do.
Have only just gotten my account, can't really comment on any of the services/facilities at this point.
I only got my account about 2 weeks ago and have no experience using the machine yet. I believe that it is going to be great but I lack the information to complete a meaningful response to your survey and have thus left most questions blank.
What should NERSC do differently? 48 responses
- Provide more cycles / improve turnaround time:
Don't become oversubscribed. I'm worried that SciDAC will push for oversubscription, please don't go there.
Get more hardware. DOE is falling way behind NSF.
My only partial dissatisfaction is caused by the long batch job wait time for large memory jobs on the PVP cluster.
Batch job waiting period sometimes is too long.
Increase the capability of its computing facilities even more rapidly than it is presently doing.
Stop oversubscribing machines. If this means limiting quotas, so be it.
No complaints. It would be nice if the batch queues moved faster.
More hours available ... (I'm joking)
Keep the machines up longer.
- Software enhancements:
Push Cray harder to support certain software: g++, C++ STL, gdb (TotalView is very good, but sometimes has problems).
I would appreciate having netCDF tools (nco) available on killeen.
[...] Should support commercial packages such as CodeWarrior.
[...] better debuggers, more compilers to offer choices
Install zsh, please.
more debugging and optimization support for MPP platforms like seaborg
[...] also better debug tools. It might also be useful to have access to applications such as electronic structure codes, MD codes, quantum chemistry codes, etc.
[...] I would like the GNU gcc/g77 compiler on the Cray PVP's, if only to test against some of the Cray f90's idiosyncrasies. I've found that Cray binary files are *very* difficult to read elsewhere; a standalone utility (whose source can be exported anywhere - not run only on the Cray) to translate them into more common forms is needed. I was unable to find any useful documentation on the *detailed* structure of the Cray binary files.
This may be something that you already have. I find TotalView to be slow since I am always passing graphic information back and forth from here in Michigan. If there is a text-only debugger, this would be useful to me.
- Provide different hardware / keep PVP cluster:
I want something 10 times faster than Killeen but not MPP
[...] Find a replacement for the PVP systems.
More access to capability machines that let long jobs of 32-64 pes go for 8 hours or more. Although many applications can use a lot of processors, science studies often ramp up and down in size as one walks through parameter spaces. Having a complement of smaller parallel machines to match the big one is very useful. These smaller machines do not need to scale much past 64 pes.
It just seems that your (Cray PVP) CPU power hasn't stayed up there w.r.t. PC's. [...]
Supply mid-level computing resources to people to take the load off of the SP3 and T3E. I get the sense many people only need something which could be handled by a Beowulf cluster.
Keep capacity engines around
The PVP cluster should not go away.
Provide both capacity and capability. [...]
- Better documentation:
Better maintained NERSC online consulting answers page - perhaps a more comprehensive FAQ type page. [...]
[...] I've had lots of odd problems with the batch system; I've tried to follow the web directions, but still can't seem to get it right. [...]
Batch queue structure improved or explained in more detail
Better indexing of the sprawling website. Finding, e.g., compiler options or queue limits takes some knowledge.
Better organization of web based documentation and tutorials
orient webpage more towards beginners to supercomputing with detailed discussion of issues such as optimization etc.
- Provide more training:
Training for users far from the site would be beneficial.
SAS should have the online tutorial installed; need better training in NERSC usage
I'd like to see better online training tools, [...]
More training classes so that I can effectively use my NERSC time. I'm constantly worried that I'm wasting MPP time with memory leaks, code inefficiencies and the like.
How about having some of NERSC's people involved in profiling and performance tuning of some of the major codes running on Seaborg? I really liked the workshop with the ACTC guys a year and a half ago, where we learned many important details about performance tuning on the SP. I think it would be time for an updated version of this workshop...
- File storage improvements:
[...] Sort out the mess of different home user space on the SP3 and mcurie....
Be more liberal with disk allocations. Choose file systems with better ratios of inodes to disk space -- 1 inode per 5-10 Kbytes would be ideal. [...]
[...] Some system of notification of when temporary space is to be cleared.
Get rid of some weird limits (number of files?)
- Manage systems differently:
NERSC needs to improve their productivity at accepting new hardware; they take way too long at doing this. In spite of the time they take, the hardware is often unstable after it is released (for example, the experiences with early Seaborg use and the addition of the extra 50 nodes have not been good, i.e., the whole machine has been down too much, users have been able to log in, but find that they cannot access their files, etc.). [...]
NERSC appears to be chasing big money and large initiatives. Where these are consistent with the support of a big center, they do not necessarily lead to an efficient, flexible computing environment. Strategic investment in small/mid sized efforts can be as important to the success of NERSC as a few large projects. I would increase the LDRD budget for serious scientific pursuit.
Provide more support for using PDSF and HPSS for outside users.
- Better interactive services:
better debugging & development support - interactive jobs with fast response time, [...]
Improve interactive response of the machines. Even better, provide some facility for large-scale interactive jobs that would allow direct user control during the run. [...]
[...] Your [PVP] interactive stuff is pretty good, but trying to actually use it from here (on the east coast) is not really practical - it is just too slow to use easily. I now try to do all code development locally. [...]
- Longer batch queues:
[...] Lengthen the wall-clock time-limit for jobs.
Expand real time limits.
Work aggressively in getting a checkpointing scheduler on the SP similar to that on the T3e. This should allow for longer queues to be run and make more efficient use of the machine.
- No need for change:
Can't think of anything.
Nothing I can think of now.
More of the same.
- Accounting / allocations improvements:
[...] The NERSC allocation process needs to be streamlined. The ERCAP form asks for too much overlapping information. It's often difficult for reviewers to evaluate what is written because of too much detail, and information overload over too many proposals. Perhaps all of this is needed to justify NERSC resources to the outside world, but it seems like it uses up more human time (which is also costly) than is productive.
Not charge 2.5X on seaborg for a "regular" job.
- Authentication / password improvements:
Sort out the mess where I need to remember 3 or 4 different passwords and change them at different times on different machines. [...]
The expiry of passwords did create some problems.
- Networking improvements:
Does not interface PCs and Macs with NERSC. Too concerned with perfect security to support network access. Need to have a Windows interface to NERSC. [...]
Offer a 6to4 gateway? Not much NERSC specifically can do... I'm rather dissatisfied with the general state of high-performance computing, but that's why I'm in research.
[...] Have a spell checker for the comments boxes.
A bug was discovered in the Cray FFT subroutines (scfft2d/csfft2d); this is not very good. The bug has not been fixed as of now (i.e., 11/1/2001, more than 6 months later); this is bad. As far as I know, there has been no announcement to warn users about the bug; this is ridiculous!
How does NERSC compare to other centers you have used? 49 responses
- NERSC is the best / better than:
The best. Rarely do I bother with others anymore.
I'm using SDSC, Texas ACC, ORNL (CCS), and they are all top-notch, but I think the NERSC NIM is a great tool and the NERSC web site is more timely. Also, the motd on the NERSC machines is more informative.
Top of the heap. SDSC is the main other, but I've used other NPACI centers.
NERSC is better than NPACI in the following regards. Better access to more competent consultants. Listens more to users in making policies. Supplies more permanent disk space on its systems. Allows remote access to HPSS.
More up-to-date webpages than the ASCI platform webpages.
You guys are infinitely better than [name omitted]. Every time I visit [that center], I want to send you flowers.
In recent years I have computed at the San Diego Supercomputer Center, The National Center for Supercomputer Applications, The Pittsburgh Supercomputer Center, the Cornell Theory Center, Oak Ridge National Laboratory, Los Alamos National Laboratory, and smaller centers at Boston University and the University of New Mexico. I rate NERSC at the very top of this list.
The mass store system is excellent compared to NCAR's.
It is the best of all centers I have used so far.
Allocation process and utility in NERSC are better than NCSC.
NERSC is more egalitarian than LLNL, where one has to have the right connections to get any sizeable allotment; NERSC will throw you in with everybody else, and if you are persistent you will get enough resources.
Compared to [name omitted], NERSC is a superior place.
It's the best. My group is using SDSC (BH), UGA's UCNS (IBM SP and O2000) and several French centers (e.g., PSMN). NERSC compares very well with all of those. NERSC has also improved in all aspects during the last few years by a HUGE factor; it is now by far the best supercomputer center that we are using. This is in part due to the hardware it has, but in other respects too it has dramatically improved compared to, say, 5 years ago.
The best I have experienced in 20 years of experimental physics work (am I that old?). The CERN Computer Center is a close second, however.
Los Alamos (1985): NERSC is much more user friendly. I can actually talk to a consultant. Fermilab (1988-1998): NERSC is much more user friendly. The consultants contact me before terminating my jobs or stalling them.
The up time seems to be greater than at RCF, and the network is more reliable.
NERSC compares well with respect to NPACI centers we have interacted with, and substantially better compared to DoD supercomputing Centers.
The npaci center (www.npaci.edu) has a rather limited web site compared to yours.
Much better than any I've used in the past. SDSC, Cornell, OU
For quantum Monte Carlo, NERSC is the absolute ideal center. It is much more efficient than any of the unclassified machines at Lawrence Livermore National Labs. Both the wait time and efficiency are much better. The charging of time, though, is too aggressive.
I previously worked at only one large computer center (IDRIS in France). I found that NERSC is much more comfortable for computing.
The best! Compared to [name omitted] (a true horror) and NPACI (medium to okay). LLNL has good machines but documentation can be highly elusive.
# 1. I have used supercomputing facilities at Eagan Falls Cray Centre, MN, USA; IBM, Kingston, N.Y., U.S.A.; etc.
by far the best
NERSC compares well to most other centers
Best all-round. NCAR has capacity (now), but capability has been slipping. ORNL is informal and responsive, but unreliable.
My initial impression is that working at NERSC is going to be much better - more user friendly, better documentation, etc. - than using the ASCI machines at LLNL. Those were my prior experience with large parallel platforms.
- NERSC is the same as / mixed response:
User services are of comparable excellence to those provided by DoD supercomputing centers.
I can compare NERSC only to the NIC in Juelich, Germany. NERSC allows me to work on the same systems and use practically the same type of resources. Maybe only the elapsed time for jobs is longer.
Seems more powerful, but also more complicated, more waiting time, slower interactive response, than the local supercomputer center (at University of Texas at Austin, "TACC") that I also use.
NERSC generally does a good job of serving computational users. Clearly better than [name omitted], probably better than NCSA, and maybe not quite as good as PSC.
NERSC and LLNL LC compare favorably.
NERSC has very powerful hardware compared to other centers, such as NCSA and NPACI San Diego. But the very long wallclock limit at NCSA is useful for some of our calculations.
ARSC is good because until recently it had no restriction on computation time.
I worked for a little while on the computers at SDSC. I would say that both centers are comparable, although some people at SDSC may be doing more exploratory work (for example, they tried out the beta version of the 64-bit MPI on the SP more than a year ago).
- NERSC is good / only use NERSC:
NERSC seems very well organised!
I only use NERSC so I can't comment.
NERSC is a unique resource and provides an essential environment of MPP simulation.
Sorry, you're the only one!
- NERSC is less good:
Acceptance of new hardware seems to be a slower process; even when it is accepted it is still sometimes not too stable. Allocation process is too complicated. Takes up too much human time.
I didn't use the MasPar much when it was at LBL, but at least it was simple to use if one just wanted the default Fortran parallelization. We need something like that for NERSC in at least Fortran and C/C++.
UCSD has the same IBM SPII and it has better communication between nodes. Our program runs faster there beyond 16 nodes (faster by ~ 50% on 64 nodes.)
The resources at the NPACI center (SDSC) were much more useful than the ones at NERSC. It's not about how powerful the resources are, but about how wide a variety is available, from vector computers to massively parallel. The reason I switched from vector computers was the limitation on the number of CPU hours I could get, but now with the SP Seaborg, I'm facing a bigger problem with the time limit.
- No comparison made:
BNL, CERN, JINR, IHEP, LBNL
EPCC MSCF @ PNNL