
2003 User Survey Results

Comments about NERSC

What does NERSC do well?

[Read all 119 responses]

69 Good hardware management, good uptime, access to HPC resources
62 User support, good staff
51 Generally happy, well run center
17 Job scheduling / batch throughput
11 Documentation
10 Software / user environment
6 Good network access
5 Allocations process

What should NERSC do differently?

[Read all 75 responses]

The area of greatest concern is job scheduling: 14 users expressed concern about favoring large jobs at the expense of smaller ones, and six wanted more resources devoted to interactive computing and debugging. Next in concern is the need for more hardware: more compute power overall, different architectures, mid-range computing support, and vector architectures. Seven users pointed out the need for better documentation, and six wanted more training.

24 Seaborg job scheduling / job policies
16 Provide more/new hardware; more computing resources
7 Better documentation
6 General center policies
6 More/better training
6 PDSF improvements
5 Seaborg software improvements
4 Other Seaborg improvements
3 Network improvements
3 No need for change
2 HPSS improvements
1 Shorter survey

How does NERSC compare to other centers you have used?

[Read all 65 responses]

Reasons given for preferring NERSC include good hardware, networking, and software management; good user support; and better job throughput. The most common source of dissatisfaction with NERSC is job scheduling.

41 NERSC is the best / overall NERSC is better / positive response
11 NERSC is the same as / mixed response
7 NERSC is less good / negative response
6 No comparison made

 

What does NERSC do well?   119 responses

Note: individual responses often include several response categories, but in general appear only once (in the category that best represents the response). A few have been split across response categories (this is indicated by ...). The response categories are color-coded:

  • Good hardware management, good uptime, access to HPC resources   69 responses
  • User support, good staff   62 responses
  • Generally happy, well run center   51 responses
  • Job scheduling / batch throughput   17 responses
  • Documentation   11 responses
  • Software / user environment   10 responses
  • Good network access   6 responses
  • Allocations process   5 responses

  Generally happy, well run center:   51 responses

Powerful and well maintained machines, great mass storage facility, and helpful and responsive staff. What more could you want?

Fast computers, infinite and accessible storage, very helpful staff. I think it is the good relationship between users and staffers that sets NERSC apart.

NERSC is a very high quality computing center with regard to hardware, available software and most important highly trained and motivated consulting staff.

Everything. Both the hardware, and the user support, as well as organization and management, are outstanding. I am very pleased with interactions with the NERSC personnel.

As Apple would put it .... "it just works". I get my work done and done fast. Seaborg is up and working nearly all the time. Network, storage, it's all there when I need it. That is what matters most and NERSC delivers.

NERSC simply is the best run centralized computer center on the planet. I have interacted with many central computer centers and none are as responsive, have people with the technical knowledge available to answer questions, and have the system/software as well configured as does NERSC.

I am a very satisfied user of PDSF. The NERSC staff on PDSF are excellent: highly competent, very responsive to users, and forward thinking. NERSC is making major scientific contributions through PDSF. Don't mess with success!

NERSC offers a very fast and powerful computer in a professional and timely way. Uptime is excellent and the service is excellent.

Organization and accessibility.

NERSC has had a good tradition of catering to the computing needs of the scientific community. NERSC has managed to provide more than adequate archival storage. NERSC has been good in handling informal requests for supplemental allocations.

- provides reliable access to state of the art parallel computing resources
- good training opportunities

- keeps their systems well stocked with software resources

- excellent web site

- excellent consulting help

- excellent public visibility of your management decisions and broad-based involvement of user group input

So far, NERSC has stayed far less oversubscribed than the NSF centers. Deficiencies notwithstanding (see below), seaborg is a very stable platform, and the total number of available nodes is sufficient for my current needs.

Overall, I think that NERSC does an excellent job in carrying out its mission. Both hardware and support are first-rate.

NERSC is the most important source of computational time that I have, which is very important in my research. And all the options that they offer make this work easy.

- keeping the systems up and stable. NERSC is absolutely the best in this category I have seen.
- getting response from vendors on issues. (The obstacle course at system acceptance time is exasperating for the vendors, but it ultimately leads to highly usable systems for the user community.) Please continue to stay vigilant!
- procuring systems that are capable of doing computing at the forefront. Although I have issues with the way prioritization of jobs takes place (see my previous comments), the systems are at least capable of doing leading science. This is important and sets it apart from most of its pretenders.

Consulting, storage, and basic computing capability is good. The ERCAP submission process continues to steadily improve.

I am most pleased with the quality of service offered by the support staff - they are very quick and efficient in solving problems. I was also very pleased with the consistency of pdsf and the minimal down-time. My work was never held up due to NERSC problems.

My experience with NERSC was positive in all respects.

The resources seem to be well matched to the demands on them.

Consulting services. Well maintained resource for extensive all-purpose computing. Advances in technology and current maximum limits of performance.

The PDSF cluster is well maintained, and the admins are aware of what's going on. The support staff are extremely helpful.

- the large quota in the $SCRATCH system for temporary data
- mass storage resources (quota, performance)
- consulting service
- web pages with documentation

For my purposes, I have no complaints with any of NERSC services.

Large facility is well maintained and reliable.

I haven't had any problems with NERSC with the exception of some recent issues with a disk going down.

I am overall satisfied with NERSC.

The user service and response is excellent and the quality of the computing resources offered is special.

Very similar to home Linux environment => can immediately compile and run at NERSC.

The facility is organized in a very professional manner. This makes it highly reliable.

NERSC has an excellent organization.

I like the facility as a whole, and have been very pleased with the little use I have made of it so far, through globus and its LSF batch manager.

The operation of the high end computing resources, archival storage facilities, consulting services and allocations process are all outstanding.

NERSC provides large scale computational resources with minimal pain and suffering to the end user. These resources are vital for our research.

Reliability, consulting, speed, storage ease.

Consulting. Emphasis on capability computing is really welcomed. Interest in architectures also.

I was happy with the ease of getting set-up and starting work. The systems were always capable of running my work with a reasonable wait.

Excellent facility. Excellent consulting service. Excellent communication with users.

I am most pleased with all aspects of the NERSC facility, as checked in all my answers above. I am grateful that I have access (I am from Vancouver, B.C., CANADA) to a world-class facility which is second to none! I have been using NERSC for ~5 years, and whatever I have achieved in my research (and that is quite substantial) is totally due to the access NERSC has given me. I am sure thousands of overseas users of NERSC feel the same way as I do, and we all thank you for the golden opportunity the US DOE has offered to so many scientists for research which most of us could not even dream of carrying out anywhere else but at NERSC. Thank you again DOE and NERSC.

Catering to the high end user at the expense of less resource gobbling calculations which still yield important physics results.

Excellent computing facilities. Excellent website. Excellent Fortran-95 compiler.

Consulting and user services are excellent. Overall up time and quality of facilities are excellent. NERSC attitude is excellent.

I think that NERSC has a very powerful computer with a good web page and very good consultants.

very "customer oriented" - quick creation of accounts, easy to do things on-line, good machines & good turn-around time

Consulting, hardware & software configuration

Excellent center! The center handles large jobs effectively.

The performance of the hardware is very good, the available software and support is quite good.

The machine works! Web pages are very good. Telephone help is available.

Hardware (speed, uptime, storage), consulting, allocation procedure, informative website

The consulting help and the overall management of the computing environment are very good.

Well managed for such a big system, the administrators are always responsive. Can concentrate on getting work done rather than worrying about computing issues.

powerful resources, account services, and web announcements for any changes applied.

  Good hardware management, good uptime, access to high performance computing resources:   69 responses

... HPSS is a great tool, which really makes difference for large projects like ours (the large-scale structure of the Universe).

mass storage; tech support

NERSC makes our analysis easier by providing a well-maintained and powerful computing system for our research use.

high performance and the fact that /scratch directories are not automatically deleted, gives the user more freedom to manage files!

It is up more often than RCF at Brookhaven

Provide fairly direct access to high performance computing resources

The uptime and reliability of SP2.

mass storage

It is very good that there is almost no down time.

NERSC consistently keeps its hardware up and efficiently running. ...

lots of machines

Provides good uptime, fast response to problem reports, reliable service.

Excellent configuration and maintenance.

Computing power. Turnaround time.

great access to lots of data, with lots of computing power.

NERSC does well keeping its CPUs up and running. ...

Seaborg has a good uptime, and it is reliable. HPSS is excellent. ...

Performance computing. Large CPU and large RAM.

pdsf

faster than rcf and more reliable

pdsf !

Hardware operations, uptime, user support.

Hardware availability. Consulting services.

PDSF

... Seaborg is nice hardware.

Provide computing resources for scientific work.

Ability to request specific computers for jobs. Fast--once program is running.

The power of the seaborg machine

The SP speed is satisfactory (much better than the old T3E)

Hardware performance, technical consulting

  User support, good staff:   62 responses

Short response time, efficiency, high professional level of staff

Service and support are great!

The staff has always been very helpful in resolving problems and redirecting me to the correct non-NERSC person when appropriate.

I am most pleased with the timely support provided by the consultants at PDSF.

Consulting has been really efficient.

NERSC staff goes out of their way to provide the necessary tools for scientists to achieve their scientific objectives. I have experienced this with both the HPSS and the PDSF groups.

help response and network connection

Quality of technical advice from consultants

running large simulations and helping find and solve problems

Support & CPU

Consulting services and the website

Excellent responsiveness to requests and stellar uptime performance.

Consulting staff are first class and have generally benefited our group

I really appreciate the help from consulting service.

Consultants are great.

NERSC has a good connection between hardware and consulting. I have found that the consultants can usually actually solve problems for me without too much bureaucratic overhead. It's good to give the consultants enough control to actually change things. The consultants have a good attitude and I can tell that they try to help.

Consultants are the greatest.

... Also, its consulting services have always been helpful even with the dumbest questions.

interaction with customers;

Consultants are extremely helpful.

Good user support. PDSF runs very reliably.

... The user support that helps with any problem.

Consulting service is very good. They reply very fast. They are very helpful to me. Jobs can be run interactively. Fast network.

Consultants are great. ...

The consultants are quite personable and really try to help (even if you do something stupid, they are nice about it).

Especially pleased with consultants - excellent!

Reliable and fast support, problem resolution

  Job scheduling / batch throughput:   17 responses

NERSC has a very short turn-around time for small and very large jobs. This makes it easy to debug large jobs and then to run them. ...

NERSC has a very short turn-around time for small and very large jobs. The time required to run a job is quite adequate. The support staff is great too!

... I also think the queuing structure works effectively.

Big jobs start running quickly and run for a long time.

The queue structure seems nearly ideal. Short/small jobs can be run almost instantly, making debugging much easier than on some other systems. Large jobs (at least in terms of processors -- none of my jobs take very long) generally seem to start within 24 hours.

queue throughput on seaborg and long queue time limit. this is what really matters to me most.

The queuing system works very well. Very few problems when porting a program to NERSC.

We are very pleased with the ability to use several 1000 CPUs ...

The queuing system and the waiting time before a job runs are excellent.

  Documentation:   11 responses

... I've also found that compiling my codes (F90, in my case) and getting up and running was very painless, due to the good online documentation concerning compiling and the load-leveler.

Web site

Well documented resources.

  Software / user environment:   10 responses

I was using g98 and g03 and they were running very well. It has been very useful.

... The selection of software and debugging tools is quite good.

  Good network access:   6 responses

It's easy to connect to NERSC, since there are less security hassles (in comparison to LLNL, say). ...

Openness of computing environment and network performance.

I think easy access and storage are the strongest features of NERSC.

  Other:

Too early to say

 

What should NERSC do differently?   75 responses

  Seaborg job scheduling / job policies   24 responses

NERSC's new emphasis favoring large (1024+ processor) jobs runs contrary to its good record of catering to the scientific community. It needs to remember the community it is serving --- the customer is always right. The queue configuration should be returned to a state where it no longer favours jobs using large numbers of processors. ...
[user's jobs use 16-1,152 processors; most typical use is 192]

As indicated previously, I'm not in favor of giving highest priority to the extremely large jobs on all nodes of seaborg. I think that NERSC must accommodate capacity computing for energy research that cannot be performed anywhere else, in addition to providing capability computing for the largest simulations.
[user's jobs use 16-1,600 processors; most typical use is 128]

My only concern is the steadily increasing focus on the heavy-duty "power" users --- small research groups and efforts may get lost in the shuffle and some of these could grow to moderately big users. This is more a political issue for DOE Office of Science --- the big users right now are most needed to keep Congress happy.
[user's jobs typically use 16-128 processors]

Smaller users are the lowest priority, and that can be (predictably!) frustrating for smaller users. That said, we know that NERSC exists for the larger users, and our requests for small amounts of additional time are always honored if the time is available. So things always seem to turn out OK in the end.
[user's jobs typically use 16-128 processors]

The queue is horrendously long, making the computer resources at NERSC essentially worthless. I've had to do most of my calculations elsewhere - on slower machines - due to the extremely long time spent waiting in the queue. (Over a week, on average!)
[user computes on 1 node]

queue time for "small" jobs (using a relatively small number of nodes)
[user computes on 1 node]

Alternative policy for management of batch jobs. [Wait time for short runs with many processors sometimes takes too long.]
[user's typical job uses 256 processors]

Change the queue structure to make it possible for smaller jobs to run in reasonable time. A 48 hour wait for 8 hours of computing time, which is typical for a 64 processor job on the regular queue, makes the machine extremely difficult to use. I wind up burning a lot of my allocation on the premium queue just so I can get more than 24 hours of compute time per job per week.
[user's jobs typically use 32-256 processors]

differentiate the queues [waiting time for small jobs is quite frequently unreasonably long; there should also be a queue for small but non-restartable jobs with a longer than 24 hour limit]
[58% of job time spent at 1,024 processors; 25% in "regular_long"]

Sometimes the only problem is the time that we have to wait for a job to start, especially if we submit it at low priority (sometimes more than one week).
[user's jobs typically use 32-80 processors]

It takes a very long time to get a queued job to run on the SP. There are many things to be taken care of just in order to submit a job.
[user's typical job uses 96 processors]

keep working on job queues so that both big-node users and those who can't use big-node jobs have fair turnaround for their jobs. ...
[user's jobs typically use 16-128 processors]

Some jobs such as climate models do not scale well so it is difficult to use large numbers of processors.
[user's jobs typically use 16-224 processors]

Recently the queue is very long, and a long wait is needed to get a program to run. More nodes are needed, or the queue system should be optimized.
[user's jobs typically use 16 (5%) to 2,048 (35%) processors]

It would be great if the interactive/debug queue response time can be improved.

... As mentioned earlier, allow more interactive use on seaborg

A higher memory limit for interactive jobs would be nice.

Queue waiting time. It would be even better if interactive jobs could run for 1 hour.

Estimate how long it will take for a job to be started once it is pending in the queue. Ensure that interactive sessions will not "hang" so much.

more flexible user policy ... [Should reduce the waiting time for debug queue]

The time limits placed on jobs are very restrictive. The short times mean that I have to checkpoint and restart after only a few iterations. This can increase the time it takes me to get results by an order of magnitude. My program also runs more efficiently if allowed to run for more iterations (as each new iteration is a refinement of previous steps) - continual stopping (due to time limits) and restarting can cause problems, sometimes resulting in incorrect results.
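
[Editor's note: the workaround this respondent describes is the usual checkpoint/restart pattern. The following is an editor's sketch in Fortran 90, not this user's code; the file name, array size, and checkpoint interval are hypothetical.]

    ! Minimal checkpoint/restart sketch: resume from the last saved
    ! iteration if a checkpoint file exists, and save state periodically
    ! so the run can survive a batch wall-clock limit.
    program checkpointed_run
      implicit none
      integer, parameter :: max_iter = 100000, ckpt_every = 500
      integer :: iter, start_iter
      real :: state(1024)
      logical :: have_ckpt

      inquire(file='ckpt.dat', exist=have_ckpt)   ! hypothetical file name
      if (have_ckpt) then
         open(10, file='ckpt.dat', form='unformatted', status='old')
         read(10) start_iter, state
         close(10)
      else
         start_iter = 0
         state = 0.0
      end if

      do iter = start_iter + 1, max_iter
         state = 0.5 * (state + cshift(state, 1))   ! stand-in for one refinement step
         if (mod(iter, ckpt_every) == 0) then
            open(10, file='ckpt.dat', form='unformatted')
            write(10) iter, state
            close(10)
         end if
      end do
    end program checkpointed_run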

... Wallclock time limit is too short.

Limit users to those that run parallel jobs.

If you want to encourage big parallel jobs, you might consider giving a discount for jobs over 1024 processors.

  Provide more/new hardware; more computing resources:   16 responses

NERSC should move more aggressively to upgrade its high end computing facilities. It might do well to offer a wider variety of architectures. For example, the large Pentium 4 clusters about to become operational at NCSA provide a highly cost-effective resource for some problems, but not for others. If NERSC had a greater variety of machines, it might be able to better serve all its users. However, the most important improvement would be to simply increase the total computing power available to users.

more computing power :-)

Computational research is becoming an essential need, and our need for computer time increases constantly, since we want to tackle increasingly complex problems. Keeping updated in hardware and software and increasing the hardware capacity will benefit the whole academic community.

Get new hardware.

I would like to see NERSC have some faster processors available. Also if NERSC had test boxes of the newest hardware available for benchmark purposes, it would be useful in helping me make my own purchasing decisions for local machines.

In addition to the IBM SP a system which allows combined vector and parallel computing would enable a challenge to the Japanese Earth Simulator. The program I mostly use would benefit from vector inner loops and parallel outer loops.

Stop putting all of your eggs in the IBM basket. If you want to compete with the Earth Simulator, you won't do it with SPs. Reason: poor (relative to Crays) scalability and poor (<15% of peak, sustained) single-cpu performance.

Not much - perhaps the next generation of SP would be nice.

Having more than one mainframe of one type is wise. That way when one is down one can still get some useful work done. A native double precision mainframe would be nice. Having kept some Cray hardware would have resulted in less time wasted porting valuable codes to a less code-friendly platform like the SP.

It would be great if NERSC could again acquire a supercomputer with excellent vector-processing capability, like the CRAY systems which existed for many years. The success of the Japanese "Earth Simulator" will hopefully cause a re-examination of hardware purchase decisions. Strong vector processors make scientific programming easier and more productive.

I would like to have access to machines not specifically dedicated to parallel jobs.

I'd like to see some queues and/or machines to support legacy codes that have not or can not yet efficiently utilize multiple processors.

As mentioned earlier, please find a way of getting users who are not doing true supercomputing to find a more cost-effective solution to their computing needs. (I mentioned the possibility of smaller NERSC-managed Linux clusters.) Because of the clogging that occurs with these small jobs, the turnaround time for large runs can be unacceptably long. ...

Memory/processor upgrades should be considered.

These are more hardware issues, but: - More large-memory nodes ...

Have a machine with better memory usability and better bandwidth

  Better documentation:   7 responses

I would like to see better web docs. Sometimes I think what is there is overkill. Simpler is better. Make the full details available (deeper down the tree) to those who want and need it ... but try to keep everything concise and straightforward. Try to anticipate the basic user questions and make the answers easy to find and easy to understand. Always keep in mind, the user just wants to do blank ... he doesn't want to be an expert with HSI or loadleveler or whatever. The look and feel of NERSC web pages is a little stark ... and the pages all blend together. It helps to remember where you were (3 months later) if the page stands out. For example, when I was looking for blank I remember finding the answer on the bright green page with big banner on top. Nearly all NERSC pages look the same. Imagine trying to find your way home in a city where all the streets look nearly identical.

The web pages can use improvement. There have been recent improvements on the HPCF website; I hope that other pages will improve as well (finding information, not just by searching, but also by browsing).

Make it easier to find specific info on system commands/functionality.

I don't see any major problems, although I often have a hard time finding information on the website.

Web pages should be more informative, and information should be easier to find.

... Provide information to help those with models that can't use 64 nodes to best use MPI and OpenMP to maximize the number of nodes while retaining efficiency. Using 64 nodes at 1 percent efficiency would be a big waste of both computer time and nodes.
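
[Editor's note: the hybrid MPI/OpenMP approach this user asks to see documented looks roughly like the sketch below; it is an editor's illustration, not NERSC documentation. On 16-CPU SP nodes one might, for example, run 4 MPI tasks per node with 4 OpenMP threads each; all counts here are hypothetical.]

    ! Hybrid MPI+OpenMP sketch: MPI tasks across nodes, OpenMP threads
    ! within a node. Assumes an MPI library providing the 'mpi' module
    ! (older systems may need: include 'mpif.h').
    program hybrid_hello
      use mpi
      use omp_lib
      implicit none
      integer :: ierr, rank, nranks, tid

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

      !$omp parallel private(tid)
      tid = omp_get_thread_num()
      print '(a,i4,a,i4,a,i3)', 'MPI task ', rank, ' of ', nranks, ', thread ', tid
      !$omp end parallel

      call MPI_Finalize(ierr)
    end program hybrid_hello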

Update the webpage! Make the information more accessible to less trained users.

  General center policies:   6 responses

Reduce the security level. Not have passwords expire.

The overhead on account managers still seems a bit much for what we're getting. I still find the ERCAP process onerous (i.e., more information requested than should be necessary). Also, most of the codes we are using are changing much more from year to year in a scientific sense than in a computational sense, so it becomes repetitious to have to keep evaluating them computationally each year. You need to keep in mind that most of us are being funded to do science rather than computational research.

Measure success on science output and not on size of budgets or quantity of hardware.

Can't think of anything. Satisfying both capacity and capability missions remains a challenge.

NERSC should be attuned more towards the individuals that ask for help and less towards the masses. I receive too many e-mails!!!

The level of functionality in data handling / data management services is quite low.

  More/better training:   6 responses

It would be nice if NERSC can provide more tutorials.

I have not yet participated in a training session - perhaps that should be more strongly encouraged. We could also use advice in tuning our codes to make better use of the facilities.

Make training either more accessible and/or better known as to what is available. Possibly an index of resources could help.

... grid classes.

Please offer more on-line video/on-site courses

Make training lectures more accessible to users at remote sites. (Video lectures will be a big help to users who have no access to Live DOE grid lectures)

  PDSF improvements:   6 responses

Disk vault tricky to use

more interactive nodes on pdsf, ...

Would prefer faster CPUs at PC farm.

make it faster and provide bigger disk space

Better check on 'crashed' batch nodes, i.e. LSF shouldn't submit jobs to these nodes. Faster recovery from disk vault 'crashes' (= unavailability of data sitting on pdsfdvxx).

some of the disks we write to could be upgraded - it limits the number of processors I can use and my code could run a little faster

  Seaborg software improvements:   5 responses

possibly add simple visualization software to Seaborg (xv, xmgrace)

Software availability could be better, especially regarding C++ libraries and tools.

... The compilers and debuggers need to be improved.

I think the software environment should be broader, include python by default for example.

... more responsive to software bugs, etc.

  Other Seaborg improvements:   4 responses

Allow larger scratch space per user.

... Although its disk policies have improved since the days of the CRAYs, there is still room for improvement.

I would like to see the SP running for months without shutdown. Maintenance should only affect the few troubled nodes, not the whole machine. ...

Reliability of seaborg has been problematic over the summer, especially with GPFS. ...

  Network improvements:   3 responses

faster connection to East coast

... Faster WAN connectivity to the external world

better interactive connectivity

  No need for change:   3 responses

As Garfield would say, "Don't change a thing."

Nothing

All is well.

  HPSS improvements:   2 responses

access to HPSS without a special utility but with a migration/robot system

Hire a few programmers to write remote file system drivers to access HPSS as a native disk instead of the monstrosity that is hsi (keep the ftp access, though...it's invaluable for offsite use).

  Shorter survey:   1 response

I find this survey too detailed, especially after being told it would take "only a few minutes." You should strive to consolidate it to about 1/2 of its present length.

 

How does NERSC compare to other centers you have used?   65 responses

Parts of the responses have been color-coded using the same scheme as the What does NERSC do well? section.

  NERSC is the best / overall NERSC is better / positive response:   41 responses

NERSC is one of the best centers

I used computer facilities at the University of Western Ontario in Canada (1990-96), at Auburn University, AL (1997-99), at the University of South Florida, FL (2000), and at the Engineering Research Center at Mississippi State University, MS (2001-present). I think NERSC is the exemplary organization from which many centers can learn how to operate efficiently, facilitating progress in science.

Very well, number one compared with: Los Alamos, Mano/ETH Switzerland!

NERSC is the best. I'm comparing to SDSC, NCSA, and a few smaller places at various Univ. Best uptime ever. Jobs get through the queues faster on NERSC. Bluehorizon is bogged down and takes forever to get through the queue. And, it seems to be up and down a lot. Seaborg is very steady. I personally don't like SGI machines ... so I'm staying away from places that have mostly SGI's. Machines are o.k. ... but the software (i.e. compilers) are not as nice as XLF. I'm probably not the best person to survey ... since I'm nearly 99.9% happy.

superior

NERSC does better at serving the customer than NPACI which introduced an emphasis on large jobs somewhat earlier, and it has better turnaround. NERSC allows external access to its HPSS, which NPACI doesn't. Other centres I use are much smaller operations, so a comparison makes little sense.

I've used NCSA for many years. I've tried to use the Pittsburgh TeraScale system, which was not a good experience. I have limited access to centers in Italy (CINECA) and in Germany (LRZ, Munchen). NERSC is an outstanding facility.

As compared to other centers, I like NERSC most because of the hpcf web site and the consulting service. They are making things really different.

NERSC is much better compared to Los Alamos ASCI

It's the best. Ever since NERSC got IBMs, it's on top of the list of computing centers in terms of turnaround, speed and overall setup/performance/configuration. HLRN has a nice Power4, but the setup stinks and they seem to have massive setup/configuration problems; NERSC is doing a fabulous job with those. The people at NERSC seem to have a real idea about supercomputing. Most supercomputer facilities don't.

See above. NERSC works well. Most computer centers limp along. I compare NERSC with SDSC, NCSA, our local supercomputer consortium (OSCER) and supercomputer centers in France, Germany, and Sweden.

The computers appear to be run more effectively than my previous experience with Maui.

I think NERSC performs a little better than the ORNL facilities at present.

NERSC is doing the right job compared to the RHIC Computing Facility; it's hard to even compare the two. The staff at both facilities seem to have a vastly different approach to their job; I hope that NERSC can keep up the good work. I have tried to use a 500-node Linux farm at LSU, but gave up because of the large amount of downtime (~8hrs/wk or more) and taking the whole cluster down with little advance warning, making running long-term jobs very painful. PDSF on the other hand is of similar size, yet has hardly any downtime.

Quick response, easy access. Brookhaven National Lab. (tight security access, firewall)

Better than RCF (Brookhaven lab): much better support and more CPU

usually it's faster than RHIC computing at BNL

I use Eagle and Cheetah at Oak Ridge. They are smaller machines. So the throughput is not good. The jobs wait for too long. The satisfaction from NERSC is great!

NERSC is bigger (in terms of computer size) and faster (in terms of job turnaround time) than SDSC, and I always seem to receive prompt, personal service when I have a problem with my account. Very nice.

I've used "super"computers in Los Alamos, Leeds and Spain. NERSC is by far the best.

NERSC is one of the very best, both in terms of the amount of work I can accomplish and the responsiveness of the staff. Other places I have run are: PSC, SDSC, NCSA, Minnesota, and Oak Ridge.

NERSC does an excellent job. We have also had some time in San Diego.

Improvement of single cpu performance and increment of memory, if possible. Overall, the SP at NERSC is the best for large scale simulations, code debugging and profiling, or trying software, as compared with other computational facilities (ASCI Blue, Intel Linux Cluster, TC2k, MCR...) at LLNL.

NERSC is enormously better than RCF (see previous comments about heavy-handed security at that facility). The other large facility I have worked at is CERN but these are hardly comparable. NERSC via PDSF is doing fine.

I have also used LSU's new super-cluster, they are a miserable wreck compared to NERSC, due almost entirely to poor administration. I also use a home-grown cluster, but the queuing system at NERSC is much better than the queuing system on this local cluster.

NERSC is as good and probably better than other centers I have used.

As I said in a previous section NERSC beats RCF hands down. I like working at pdsf. All the resources I need are available. Even when I submit a trouble ticket I am confident in the people who are responding to them and my issues always get resolved and followed up.

Compares very well as compared to Oklahoma State University and to commercial software vendors.

I think NERSC is doing much better compared to a number of NPACI and DoD sites.

superior to bnl/rcf.

Much better in terms of reliability and easier accessibility (no freaking out about security, which messes up the whole system). I'm comparing to RCF at BNL.

It compares very well with other centers I have used (NCSA, ASC, ARL)

NERSC compares very favorably to the Pittsburgh Supercomputing Center, the National Center for Supercomputing Applications and the San Diego Supercomputer Center.

Blows their doors off! PSC (Pitt. SC Center) was so challenging to use we spent weeks just trying to finish one simulation.

Very stable environment to work with.

The other farms I've used don't provide any consulting support at all.

I found NERSC easier to use than some of the other sites.

I mainly compare NERSC with the CERN computing facilities, and several other university computing centres. NERSC's system of sharing resources and accounting for usage seems to be logical and works well. [PDSF user]

NERSC is a very good computing center; I haven't used other similar centers, so I have no comparison.

I use the BCPL (Bergen Computational Physics Lab.), CSC (Frankfurt Center for Scientific Computing), and GSI Linux cluster. NERSC performs nicely.

I have also used Mike at LSU (a very young system that makes me appreciate how smoothly NERSC operates), so I have little basis for comparison.

  NERSC is the same as / mixed response:   11 responses

The machines are better, faster, and seem to be well maintained (more uptime, fewer killed jobs). But the wait time in the queue is absolutely horrible. This is in comparison to the GPS and ILX clusters at LLNL and local beowulf clusters here at UC Davis. I'd love to use NERSC more, but the time I spend in the queue makes the resource impractical.

NERSC is better in almost all categories, with the exception of interactive capabilities on seaborg. The lack of disk space and scratch space that I complain about seems to be a problem almost everywhere. I am comparing NERSC to Oak Ridge, SDSC, NCSA, as well as some secure computing facilities and vendor systems.

ANL LCRC - JAZZ, APAC in Australia, VPAC in Australia. The online documentation of NERSC is better, but the others are more flexible with wall-time limits and/or queuing/prioritizing of large parallel jobs.

SDSC. The performance of NERSC and SDSC is comparable. Both are excellent!

Most of the other centers I use have POWER4 based machines (NAVO, MHPCC) which are nice. The quality of the consulting seems to be comparable. The websites at the DOD sites are worse than the NERSC site.

NERSC has more software and consulting support, and is also more stable than the ccsl at ORNL. However, the ccsl at ORNL has access to the file system even when the big machine is down. Can NERSC also implement this type of system? Meanwhile, for the queue system, can NERSC also let small jobs using a few nodes run a maximum of 48 hrs even though the policy encourages big scalable jobs? This is because not all of our codes are scalable to large numbers of processors under all physical conditions. You could give low priority to small long-running jobs.

I have used the San Diego Supercomputer Center and the Texas Supercomputer Center. Compared to them, NERSC has a higher performance machine, generally better uptime, a better selection of programming and debugging tools, and a more useful help system. NERSC's queueing system, in contrast, is worse because it is less flexible. While SDSC also favors large jobs in its queue, it has tools to allow users with smaller jobs to work around the large jobs. Thus, for example, there is a tool called "showbf" to allow users to determine when the next job is scheduled to start, and to fit their job into the backfill window created by processors going idle before the large job starts. Similarly, the queue system gives users an estimate of how long it will take for jobs to start, allowing them to adjust the number of nodes and amount of time requested to make jobs start sooner. The flexibility that tools like this provide makes it possible to be a small-node user without resorting to the premium queue. NERSC lacks comparable facilities.
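
[Editor's note: the backfill idea described above reduces to simple arithmetic: a small job can start immediately if it needs no more processors than are currently idle and will finish before the queued large job is scheduled to start. The sketch below is an editor's illustration with hypothetical numbers, not an actual scheduler tool.]

    ! Backfill-window check: would this small job fit in front of the
    ! next large job? All inputs are hypothetical examples.
    program backfill_check
      implicit none
      integer :: idle_procs, job_procs
      real    :: window_hours, job_hours

      idle_procs   = 96     ! processors idle until the large job starts
      window_hours = 5.0    ! hours until the large job is scheduled to start
      job_procs    = 64     ! processors our small job requests
      job_hours    = 4.0    ! wall-clock hours our small job requests

      if (job_procs <= idle_procs .and. job_hours <= window_hours) then
         print *, 'Fits the backfill window: job can start now.'
      else
         print *, 'Does not fit: job waits behind the large job.'
      end if
    end program backfill_check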

All centers, such as PSC, NCSA, NERSC, and ORNL are managed well.

It is very complicated to start with. Once the batch files are set up and one gets used to the structure, it becomes much more accessible.

NERSC and LLNL LC compare very favorably.

NERSC is on a par with Oak Ridge and NCAR; however, it does suffer from very large numbers of users. This leads to very long waiting times.

  NERSC is less good / negative response:   7 responses

I think seaborg should be accompanied by a cluster of serial machines such as the AMD cluster morpheus at the University of Michigan.

NIC Juelich, Germany has the above-mentioned robot system to migrate data to tapes (in the Cray cluster, planned for the new IBM SP)

ok (NCSA), although the queuing time at NERSC seems longer

1. Eagle: ORNL; 2. IBM SP: North Carolina Supercomputing Center (now decommissioned). The only major thing I dislike at NERSC is the inability to run jobs interactively for even a short period of time, which can be quite frustrating if I am trying to debug a code.

I'm comparing to our IBM-SP computer centers here at Oak Ridge (Eagle and Cheetah) that require much less of a regular application process for resources. The trade-off is that I don't always get the same regular access to resources as the higher priority projects (i.e., SciDAC, etc.). But even being a spare cycle user can give me significant computing on these systems over time.

I said this a year ago: PLEASE do what the San Diego Supercomputing Center does and provide shared-access nodes for interactive use. Debugging a code on Seaborg can take an order of magnitude more time than it should because of 'access denied due to lack of available resources' errors. SDSC discovered a simple solution -- WHY HAVEN'T YOU IMPLEMENTED IT???

HPCC of USC. No limit on walltime.

  No comparison made:   6 responses

No previous experience.

RCF, SLAC, FNAL

I haven't really used any other center, besides the OU OSCER facility here, so I can't really compare NERSC to other big centers.

Edinburgh Parallel Computing Centre; Pittsburgh Supercomputing Center; NCSA

Variety of mainframes, like at Pittsburgh

I have used resources at Cornell and SDSC, but that was quite a while ago.