Survey Results

Users are invited to provide overall comments about NERSC. The survey results are organized as follows:
- Respondent Demographics
- Overall Satisfaction and Importance
- All Satisfaction, Importance and Usefulness Ratings
- All Usefulness Topics
- Hardware Resources
- Visualization and Data Analysis
- HPC Consulting
- Services and Communications
- Web Interfaces
- Comments about NERSC
Comments about NERSC

What does NERSC do well?

In their comments, users most often praised NERSC's reliable hardware, knowledgeable consulting, and well-run center. Their responses have been grouped for display as follows:
- Provides access to multiple HPC resources / well managed center
- Provides access to powerful, stable computing resources; parallel processing; enables science (focus on hardware and cycles)
- Good support services and staff
- Good software / easy to use environment
- Used to be good / disappointed
- Survey too long
What should NERSC do differently?
In previous years the greatest areas of concern were dominated by queue turnaround and job scheduling issues. In 2004, 45 users reported dissatisfaction with queue turnaround times. In 2005 this number dropped to 24, and this year only 5 users made such comments. NERSC has made many efforts to acquire new hardware, to implement equitable queueing policies across the NERSC machines, and to address queue turnaround times by adjusting the duty cycle of NERSC systems, and this has clearly paid off. The top three areas of concern this year are job scheduling, more compute cycles, and software issues.
- 10: Change job scheduling / resource allocation policies
- 10: Provide more / new hardware; more computing resources
- 10: Software issues
- 9: No suggestions / Satisfied
- 7: Allocations issues
- 7: Fix / improve hardware
- 6: Data Management / HPSS issues
- 5: Improve queue turnaround times
- 4: Provide different resources / resources for smaller jobs
- 3: Account and Accounting issues
- 2: Improve consulting services
How does NERSC compare to other centers you have used?
- 41: NERSC is the best / overall NERSC is better / positive response
- 11: NERSC is about the same / mixed response
- 4: NERSC is less good / negative response
- 11: No comparison made
What does NERSC do well? 113 responses
- Provides access to multiple HPC resources / well managed center
Network connectivity is good. HPSS is reliable. Bassi and Seaborg are reliable. This makes post-processing large runs less of a headache than other places.
Fast computers with the software we need.
The computers are stable and always up. The consultants are knowledgeable. The users are kept well informed about what's happening to the systems. The available software is complete. The NERSC people are friendly.
Keeping the most advanced hardware available in a stable environment with easy access.
Availability of resources is good. Performance of the computers is good. Documentation is good.
The facilities are good, queue times are shorter than at other facilities, and the administration is responsive and prompt at allocating time.
Discount charging program for large jobs is great.
Good computing infrastructure and excellent support.
NERSC runs a reliable computing service with good documentation of resources. I especially like the way they have been able to strike a good balance between the sometimes conflicting goals of being at the "cutting edge" while maintaining a high degree of uptime and reliable access to their computers.
Variety of hardware. Long term support for hardware (even if newer generation hardware is already available).
NERSC is doing a very good job. It is very important to me, since I need to analyze a large amount of data. NERSC is fast and stable.
NERSC has excellent hardware and software resources, which are very important. I am most pleased with our request and acquisition of allocation hours, and with the outstanding Help support (timeliness and accuracy).
Provides reliable HPC resources - hardware and software. Long term time allocations and a sensible time allocation application process both provide a good match to ambitious long term scientific programs. Straightforward and transparent account policies and procedures.
The best part about computing at NERSC is the support and the reliability of the computers. I could use our local computers (LLNL) but the support is not nearly as good nor are the machines as stable.
One of the great benefits for us of using NERSC is the fact that the HPSS and PDSF systems are available. I think that the combination of the two is very powerful for experimental particle physics. We do not use the other resources offered by NERSC because they are not suitable for the type of analysis we do. However, being able to read a large data set from HPSS and process it on PDSF in a finite amount of time is very valuable. I also think that in general, the switch to GPFS as the filesystem of choice for NERSC has been an excellent decision.
I am also impressed by the ease with which one can request (small) resources for a start up project. I recently requested some computing resources for a new project we are planning for and was up and running in a few days. This helps us tremendously in trying to reach our scientific goals. Having worked with a number of computer centers, I have to say that NERSC does this very well.
I also think that NERSC is very sensible with the current overall computer security approach (see also below).
Furthermore, I am glad to hear that NERSC has decided to setup an open source software group. I hope that this group will work on some of the open source software that is in use at NERSC and build up detailed expertise using that software. One of the projects that I hope can be looked at is the Sun Grid Engine (SGE) - the batch queue software in use at PDSF. Perhaps this software can also be used on some of the other computer clusters.
The PDSF specific support staff are very good; they need more help.
HPSS can hold a lot of data.
Access to NERSC computing via ssh and scp is crucial for its overall usability. Please do not go to a keycard/kerberos/gridtoken etc. authentication. This would break much of the automation ability which is vital for large collaborative projects.
The computing resources are very good.
CPU-time allocation process is quick (also for additional time).
NERSC is a very well managed center. The precision and uniformity of the user environment and support is outstanding. I am fairly new to NERSC (INCITE award) but it compares very favorably indeed with NSF centers.
Our research is totally dependent on very large scale computation. I hope we will be able to work with NERSC in the future.
Excellent hardware and software and good communications.
Computing at NERSC is reliable. Documentation is complete and any information needed can be found online.
We compute at NERSC because it has computing resources that far exceed those of our home site. NERSC's support staff has provided very timely responses to our inquiries, and has resolved the few issues we've encountered very quickly. NERSC's support staff has constantly monitored our quotas and usage, and has adjusted allocations for our project in proportion to our usage. The response time for these adjustments is very fast! NERSC's support staff has definitely added to the efficiency and productivity of our project.
NERSC is very well managed and operated.
Generally, NERSC does capacity computing very well, servicing a large community of users; it also has (or soon will have) excellent capability platforms for many jobs, both small and large.
There is no other supercomputing facility in the world where I can carry out my theoretical and computational research in the physics and chemistry of superheavy elements. I have been using the facility for ~10 years and I am most satisfied with the hardware, software, consultants, etc., and my first choice would be to use the NERSC facility.
NERSC is very important for my research. Its computer power, support facilities, and the reliability are far better than those provided by other super computer centers.
Good facilities with good support. I've had good turnaround on Jacquard (less good on Seaborg). But since our code is better suited to Jacquard, this is not a problem.
NERSC has plenty of computing power, very good software configuration, and great support and consulting staff.
Excellent management and support of integral high performance computing resources.
NERSC offers unique high performance computing capabilities that enable new avenues in scientific computing, such as our "reaction path annealing" algorithm to explore conformational changes in macromolecules in atomic detail.
NERSC is continuing to improve their computing capabilities and support to users.
Staff has been very helpful. Proposal process is efficient. These resources are a tremendous help for our research. In fact, we could not do everything that we're currently doing without these resources.
Large parallel machines, turn around time, consultant support.
I am familiar with NERSC, and I think you guys provide a good, universal service with emphasis on HPC.
NERSC provides excellent, world-class HPC resources in almost all aspects, from hardware to software. What distinguishes it most from other supercomputing centers is, in my opinion, its superior user support, in both consulting and services, although there is still room for improvement. That has made our scientific work more productive, and that's why NERSC is important to me.
NERSC is the most reliable computational center on which I have ever run large parallel calculations. The systems are stable and the support people are competent; in most cases they come back with a solution, and they show that they take user problems seriously. A very professional team. As I'm working on developing parallel scientific applications, I always need to test and produce data on reliable machines.
Queue management has been greatly improved recently, and things seem to move well. Networking is very good. Consulting is very helpful in resolving issues. The machines run well.
I am extremely pleased with NERSC. The resources have always been available when I needed them, they keep me well informed of changes, the machines have been reliable and have performed well, and they have been very quick to solve my problems when I had them (usually expired passwords, which is my fault).
Excellent overall picture. People are trying really hard to satisfy users' requests.
I am most pleased with the services provided by NERSC staff. NERSC is important to me because it provides computing power that we do not have at our home institution.
I have been using NERSC (or MFECC) for 26 years. It always has been and remains the best run supercomputer center in the world. The staff responds to requests and is very helpful in general.
Aside from the sheer number of CPU hours available, NERSC's strengths are its knowledgeable and responsive staff, and its comprehensive list of well-maintained and up-to-date software libraries and compilers. I also appreciate the timely updates about outages, the low numbers of such outages, and the queueing policies that make it possible to run many instances of codes that require 100s of processors for 100s of hours as well as those that use 1000s for 10s.
NERSC provides significant resources and support for those with a minimum of hassle. It is an excellent example of a "user facility," with a sense that it really serves the users, not the people that manage it.
Excellent high-performance computing access, very professionally managed. High reliability.
Good variety of computer architectures and helpful consultants.
I remain quite satisfied with queue times and ease of use.
NERSC is a window on the whole world for me. It is a part of my academic life that I cannot do without. I am very grateful to everyone at NERSC for their continuing good services.
Very satisfied with consulting, machine accessibility, ...
Large scale computations (can not be done locally)
I trust the expertise on technical issues and the reliability of the availability of the resources (hardware, software, people).
I think NERSC is very well supported, with a very logical layout, and nearly all the tools I would need. This has allowed me to learn the system, and get useful work done in a relatively short time.
I think the NERSC machines are generally well supported and that the organization is solid. Applications are generally well handled, and the organization gives an impression of running a "tight ship".
I think the NERSC consulting service and the ACTS software service are the best in the US.
1. (a) For the robust, stable computing environment. (b) I compute at NERSC as certain problems require the memory of 1000 processors.
2. I'm pleased with the fair batch queuing system and prompt replies to inquiries.
NERSC offers state-of-the-art computing platforms and the necessary software for conducting scientific research. I am very satisfied with the support of NERSC in carrying out my research projects.
NERSC provides excellent computational facilities and excellent support.
Since the late 80's NERSC has provided all computational resources for my research activity.
- Provides access to powerful, stable, computing resources; parallel processing; enables science (focus on hardware and cycles)
NERSC has a lot of computational power distributed in many different platforms (SP, Linux clusters, SMP machines) that can be tailored to all sorts of applications. I think that the DaVinci machine was a great addition to your resource pool, for quick and inexpensive OMP parallelization.
NERSC systems are very stable, and are thus an excellent place for developing code.
Lots of available computing time; easy to get nodes.
There's a lot of computing power at PDSF and the system works. I like things that work.
Running climate models requires large resources; a single Linux box, or a couple of them, just does not have the compute power. Bassi has much better turnaround than Seaborg, and if your application can only use fewer processors effectively, it is much better for the work/CPU-time ratio.
Interactive runs on Seaborg - this is the ONLY useful means of debugging my MPP code available on NERSC or NCCS supercomputers.
There are machines that fit my calculations, and there is the possibility to raise (time and space) quotas to perform these very large, very long jobs. I could not do these runs on any other resource that I have access to.
NERSC provides a stable computing environment for work that I could not do anywhere else. Much of my design and analysis work in the area of accelerator modeling would not have been possible without NERSC computing power.
NERSC makes possible for me extensive numerical calculations that are a crucial part of my research program in environmental geophysics. I compute at NERSC to use fast machines with multiple processors that I can run simultaneously. It is a great resource.
I use NERSC because I have access to a lot of processors on Seaborg.
NERSC provides me with tremendous computing power and availability. I have been a little disappointed with problems on Jacquard, related most likely to the MPI implementation. NERSC consulting was able to help me with that, but it was still a considerable decrease in the usability of the machine.
Computer resources are much better than other center (see below).
Just one little comment from my short experience of using NERSC: running interactive jobs to test my own ideas before launching longer/larger jobs is a little bit inconvenient.
Seems to be a reliable, well-maintained system. We take advantage of the parallel processing resources at NERSC.
It is one of the few places where I can do the computations I need to do.
Bassi is really fast; DaVinci's unlimited quota is my favorite.
I use NERSC because Seaborg has significantly more RAM per node than other clusters I have access to.
Capability of massive parallelization.
Highly efficient clusters.
We enjoy the sizable computing resources in multi-way SMP nodes, in particular the 8-CPU and larger nodes. The large number of nodes permits us to do large simulation batches. We find it possible to do considerable amounts of code performance enhancement on this hardware, thanks to acceptable queue times on the debug and interactive queues and passable performance monitoring tools.
NERSC has the only resources available to complete my computation in a timely manner.
NERSC is important to me because I don't have enough computer resources in my group to perform the computation I need to do for my projects.
NERSC is important to me for the computation power, and that is the main reason why I compute there.
NERSC is important to me because it allows me to run relatively big parallel jobs I cannot run anywhere else.
NERSC is very important for me to accomplish important research projects.
maintenance (uptime and stable operation of computing nodes)
It is a reliable computing center; e.g., Seaborg is regularly up, and by today's standards it is still a powerful parallel computing tool (we will be using Bassi more in the future though).
I do electronic structure using quantum Monte Carlo, so having a robust, large computer is of extreme importance to me.
Provides access to large machines.
NERSC is extremely useful for my computing needs. I can effectively run my production jobs at NERSC.
Machine is easy to access.
Very helpful and important for my research
Once on the system, I like the ability to run a job any time of day as well as how fast my program runs on the machine...
I work here (at Berkeley Lab) and computing with NERSC is my job.
Most pleased with: short job waiting time, ample CPU resources. Important to me: ample CPU resources.
Different architectures are available with a uniform "layout"; the huge number of available nodes allows one to test one's own codes against different amounts of data.
The waiting time on Seaborg has been getting worse and worse. Adding new machines such as Bassi and Jacquard was adequate. Currently I extensively use Jacquard, which shows very reliable performance. However, I feel that the home quota (5 GB) on Jacquard is rather small, even though I have the option to use NFS.
The computing resources at our university (University of Utah) are limited. We need more computing resources to finish our projects in time.
Availability of many processors (>64).
Large memory jobs possible (with 64 bit compilation)
"Minimal" down time
Jobs "eventually" get done.
I compute at NERSC since one of my programs needs a lot of memory and nodes and runs long. NERSC is, for me, the next step up from the Ohio Supercomputer Center, which does not have the same machine capability. So I use OSC when developing code or with smaller code, and for anything more I need to come to NERSC. And I am happy with that.
- Good support services and staff
Consultant support at NERSC is very good - I rate it more highly than other supercomputer centers I have experienced.
Your consultants are great; without them, NERSC wouldn't be very useful to us.
User services is great!
Consulting services are very good.
Consulting at NERSC is very good, and the consultants are kind and responsive to my needs.
Fairly good consultant support.
As an experimentalist who requests NERSC time in order to collaborate with theorists, I am not a hands-on user (and therefore left most of this survey blank). Because my knowledge of the NERSC computer system is negligible compared to the average user, I expected it to be difficult or confusing to request time and manage an account. Yet the NERSC staff has always been very helpful and have made the process as easy and simple as possible. Thank you!
NERSC provides very good user support. I am very satisfied with the way that NERSC people handle users' questions and requests; they are very professional. Also, the NIM website is probably one of the most organized online management systems I have experienced.
- Good software / easy to use environment
The preinstalled application packages are truly useful to me. Some of these applications are quite tricky to install by myself.
I really think the NERSC team is doing a great job of keeping the Fortran compilers and the math libraries working properly. I have used other clusters and I have had tons of headaches. On Seaborg and Bassi, my experience compiling codes has been really smooth.
NERSC allows for large quantum chemistry jobs to be run quickly with MOLPRO.
Implements experiment software
The software environment always works as expected. The time from uploading my code and data to having a working production environment is very competitive.
NERSC provides the easiest MPP access for many of us in the DOE sphere. For those of us who do science and do not program 12+ hours a day, the NERSC "interface" is relatively easy to use once you become familiar with it.
- Used to be Good / disappointed
PDSF used to be wonderful - always up, easy to use, lots of user support, etc.
Frankly, NERSC has been an utter disappointment. I thought I would be able to run big jobs quickly and get a lot of science done but instead I've spent all my time trying to figure out how to compile stuff. I only compute at NERSC because it was easy to get the time and the queues are short.
- Survey too long
This survey is much too long. Please try to streamline it next time.
Your email promised this would take only a few minutes. I have run out of time and must leave. Sorry. You should have put these questions first if they are important.
What should NERSC do differently? (How can NERSC improve?) 72 responses
- Change job scheduling / resource allocation policies: 10 responses
Slightly more favorable policies for smaller jobs.
You need a much better queue structure! Not every job runs effectively on a vast number of processors, and those of us with long running jobs that need relatively few processors should be granted a way to use the time that we're allocated.
NERSC response: The best machines to run long running jobs on smaller processor configurations are Jacquard, which has a 48 hour wall limit for 1-32 processor jobs (see Jacquard Batch Queues) and Bassi, which has a 24 hour wall limit for 1-120 processor jobs (see Bassi Batch Queues).
Seaborg is configured to favor jobs using large numbers of processors, and is therefore not the best platform for long running small processor count jobs.
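As an illustration of the limits described in this response, a long-running, small-processor-count job on Jacquard might be submitted with a PBS script along the following lines (the queue name, node geometry, and executable are illustrative assumptions, not the actual Jacquard configuration; consult the Jacquard Batch Queues page for the real values):

```shell
#!/bin/sh
#PBS -q batch                 # assumed queue name; see the Jacquard Batch Queues page
#PBS -l nodes=4:ppn=2         # 8 processors total, within the 1-32 processor range
#PBS -l walltime=48:00:00     # Jacquard's 48-hour wall limit for small jobs
#PBS -j oe                    # merge stdout and stderr into a single log file

cd "$PBS_O_WORKDIR"           # run from the directory the job was submitted from
mpirun -np 8 ./my_app         # my_app is a placeholder for the user's executable
```

A script like this would be submitted with `qsub job.pbs`.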
The CPU limit on interactive testing is often restrictive, and a faster turnaround time for a test job queue (minutes, not hours) would help a lot.
Have more processors for interactive runs
The only thing I can think of is the time allocated for debug-class jobs. It should be larger.
Longer batch times on Bassi would be very helpful!
I think the best way would be to have a few nodes where the walltime limit is larger than 48 hrs, especially for the 64 GB nodes on Seaborg: if a calculation needs that amount of memory, it is also likely that it needs a longer walltime!
Shorten the queue, or make the wait time more consistent. Sometimes a job starts within 1 minute of submitting; sometimes it takes 24-48 hours. It's very difficult to plan jobs and choose which computer to use if the queue is so unpredictable. It would be really helpful if the computer could give me an estimate, after I've submitted something, of how long it will take to get through the queue. Even a crude estimate would help: 1 hour or 48 hours?
It would be good to have a queue for long jobs at PDSF.
The queue structure, which allows very few running jobs per user, heavily favors people running many-node jobs, not people running embarrassingly parallel problems. From a queueing perspective, it should not make a difference whether I do my science through one 128-CPU job or 128 1-CPU jobs. There is nothing saying "better science" is done with large jobs; in fact, embarrassingly parallel jobs likely have less overhead and better efficiencies in the queue. This could be accomplished by changing the queue structure so that it is not the number of JOBS that is limited but the number of NODES, i.e., one user can have at most x nodes at a time, regardless of how he chooses to use those nodes.
- Provide more / new hardware; more computing resources: 10 responses
Better/faster supercomputers. Lower point-to-point latency message passing.
More and faster machines! ...
Upgrading to faster machines will be a nice improvement.
To improve the speed
continue to expand resources
It is always necessary to keep increasing the number and speed of processors as possible, ...
The move now is to large numbers of CPUs with relatively low amounts of RAM per CPU. My code is moving in the opposite direction. While I can run larger problems with very large numbers of CPUs, full 3-D simulations require large amounts of RAM per CPU. Thus NERSC should acquire a machine with, say, 1024 CPUs but 16 or 32 GB RAM per CPU. This would be as much RAM as is available on Franklin!
Get more modern hardware. Seaborg, for instance, is quite old.
Get Franklin online ASAP
Seaborg is getting old and slow; it would be nice to have a new computer of a similar size to Seaborg.
- Software issues: 10 responses
It would be desirable to get rid of the PathScale compilers and the MVAPICH package on Jacquard and replace them with better options. They are bug-prone, and there is no obvious reason why they were chosen in the first place. ...
More variety of compilers on machines like Jacquard. It lacks a Fortran 2003 compiler, for example.
It would be nice to have more quantum chemistry programs available on NERSC like ACESII or QCHEM 3.0.
I would like to see more quantum chemistry support, mostly in the form of keeping the existing software up-to-date.
Support for software in the field of molecular dynamics / biophysical chemistry could be somewhat improved, but the existing offerings already provide a basis to work with.
... I sporadically have problems with F90 compilers, and the documentation on the compiler options is mostly incomprehensible.
... also to keep the math libraries updated, but I think that is done properly.
better grid support
Improved reliability - software changes break my codes or change the results too often ...
Do you have a large amount of memory available to MATLAB on DaVinci?
- No suggestions / Satisfied: 9 responses
I am already quite satisfied with NERSC.
Nothing, keep up the good work.
It is very very fine as it is.
mainly keep doing what it is doing: bringing on new systems while keeping the old ones available long enough to make the transition easy.
no suggestions - very happy
Difficult to improve an already excellent organization
I really don't know. I'm very satisfied.
Hard to say...
- Allocations issues: 7 responses
More adequate and equitable resource allocation, based on what the user accomplished in the previous year.
The applications process should allow for submission of a file rather than online text boxes. ...
... NERSC could also implement a clearer and fairer computing time reimbursement/refund policy. For example (Reference Number 061107-000061 for online consulting), on 11/07/2006 I had a batch job on Bassi interrupted by a node failure. LoadLeveler automatically restarted the batch job from the beginning, overwriting all the output files from before the node failure. Later I requested a refund of the 1896 MPP hours wasted in that incident due to the Bassi node failure, but my request was denied, which I think is unfair.
One anxiety I've had is that my project typically only requires the submission of large jobs for short periods of time, followed by periods of testing new codes and working on other projects. But I still feel obligated to find ways to use my allocation, since it will be taken away otherwise, and failing to use a requested allocation puts me at risk for not having it renewed the next year.
I understand it might require additional administrative work, but it might be nice to have single-project applications where a specific block of additional allocation hours could be requested for a particular task.
I would prefer a less formal, more flexible allocation process. I find my need for computational resources can vary significantly through the year and is not always easy to predict in advance. Being required to commit to a certain level of usage a year in advance (with the implication that if it is not used, it will be difficult to get back to that level in subsequent years) seems likely to lead to a certain amount of wasteful computing when averaged over all users. This type of allocation system has been in place at NERSC for many years, and I don't claim to have a detailed suggestion as to how to change it. However, it seems timely to consider going to some type of non-allocation-based approach to accessing resources, as is used at other computing centers.
The Allocations process should make more use of external review and be more open to researchers not currently funded by the DOE.
Make it easier to get allocated resources for people who need to do their jobs on NERSC. For instance, I do not know one year in advance whether I will get support for doing certain computations. The startup account just does not have enough resources for the computation jobs I need to do. NERSC would mean nothing to me if I could not use it. Also, please be fair to people who use Monte Carlo algorithms. It may be trivial to parallelize a Monte Carlo code, but that does not necessarily mean it can tolerate high latency or low bandwidth on a home-brewed cluster when scaling to hundreds of processors. And in some computational problems, Monte Carlo is the way to obtain the most accurate solution.
- Fix / improve hardware: 7 responses
Seaborg needs improvement; it keeps crashing.
Fix the login problems on Bassi!
... PDSF reliability is poor (bad nodes draining jobs, periodic slowdowns, etc.) ...
I am worried that the PDSF cluster is not scaling up very well. As more experiments start running code on PDSF, and potentially more machines get installed in the cluster, I think that more operational support will be necessary. I also have the feeling that in the past year the system admins of PDSF have become more cavalier in their approach to the whole cluster. ...
The biggest problem I have is with using the PDSF interactive nodes. They often become unresponsive or slow. I commented on this earlier in the survey.
... PDSF interactive responsiveness is poor even on a good day. It can take several seconds to start a vi session, source a script that sets env variables, etc. Login delays of 10s of seconds are common.
It is striking to me that the primary things that I am ranking poorly this year are the same things I complained about last year -- the HPSS software interface is still terrible, production accounts still don't exist, and PDSF is still understaffed/undersupported. The conversion of NFS diskvaults to GPFS based systems is the only thing I can think of that has actually improved at NERSC for me over the past year (and that was a huge improvement, to be fair).
Fix whatever is wrong with interactive use of PDSF. I cannot believe that the backup of user home directories can be having that big an effect on interactive use (what I was told when I filed a support request). I have asked if my problems are related to sl302 vs. rh8 and I am always told that I should not go backward to rh8, that everyone will be transitioned to sl302 soon. It seems like it has been ~1 year since I was first told that. I cannot even use emacs effectively in sl302, and I was advised to use rh8 instead! Conflicting advice, poor performance... if it doesn't improve soon, I'll stop using it.
- Data Management / HPSS Issues: 6 responses
The mass storage system, with two servers and hsi/ftp access, is uncomfortable to use.
Migrating the HOME file system to disk in the background is easier to handle and allows faster access.
... HPSS should have better interface options. ...
I'd like to see an expanded set of hsi commands, like md5, chksum, gzip, bzip2 and pkzip. I'd also like to be able to use htar to make a compressed archive either with gzip or bzip2, like the tar -j or tar -z options on gnu tar.
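Until such options exist, one workaround in the spirit of this request is to compress and checksum the archive locally before storing the single file in HPSS. The sketch below makes assumptions: the file names are made up, and the final `hsi put` line (commented out) only works from a machine with HPSS access configured.

```shell
# Compress locally, then store one file in HPSS; htar itself has no -z/-j option.
mkdir -p results && echo "sample output" > results/run01.dat  # stand-in data
tar -czf results.tar.gz results/             # gzip-compressed archive (GNU tar -z)
md5sum results.tar.gz > results.tar.gz.md5   # local checksum, since hsi offers no md5
# hsi "put results.tar.gz"                   # NERSC-only: store the archive in HPSS
```

On retrieval, the checksum file can be used to verify that the archive survived the round trip before unpacking it.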
... Reliability of global filesystems would reduce wasted jobs.
Increased storage resources would be very helpful. Global file systems have been started and should be continued and improved.
NERSC needs to improve the disk space available to users. Some users like me need a large disk space where daily generated huge files are stored. But HPSS seems to be quite a headache for storing files, since one can't open and read files there, and transferring back and forth between it and other machines seems to be quite a slow process.
- Improve queue turnaround times: 5 responses
The waiting time for jobs to run is still on the high side.
One problem has been the long wait in the batch queue for large jobs. This was certainly a problem in 2005 (at some point I complained to Francesca Verdier about this). This year I have not been running as much, but it looks like the situation has improved.
... Improved batch queue throughput for medium/large jobs
The reduction of queue time definitely will improve my productivity and hence the advancement of science itself.
Jobs using a large number of nodes tend to have too long wait times. I know this is not a simple problem to solve, but perhaps there is room for improvement.
- Provide different resources / resources for smaller jobs: 4 responses
I am a little dismayed that NERSC is replacing Seaborg with the Cray XT. I should say that I am a strong supporter of Cray in general, but many centers are moving to "cluster-like" systems that have very weak capability on a per-node basis (for example, limited per-node memory) and the XT is at quite an extreme (but with an outstanding interconnect). Personally, I would have loved to see Seaborg replaced with a large Power6 with very large per-node memory. I expect the XT4 will have superior scaling at high processor count, but I doubt we will be able to use it at all due to its limited per-node memory. Pity.
I think the emphasis on really large machines is not a good thing. The reason is that *most* of the time the large queues are not being used. I think more smaller machines should be built that focus on the queue sizes that are most used.
In the big push to petascale computing, I would urge NERSC not to abandon the needs of smaller-scale codes (those most efficient on 1000 processors or less), which do not max out the capabilities of next-generation machines, but which do good science that cannot be effectively or economically replicated at smaller facilities or clusters.
Don't squeeze out the small (today) users completely in favor of the "big guys" (today) --- small projects occasionally grow into big ones.
- Security Issues: 4 responses
I don't understand why NERSC doesn't use one-time passwords. It makes me a little nervous that access is not controlled more like at other large computer centers.
Do a better job on security. Value users' time and effort. ... In the event of an unplanned outage, give users time to make a backup plan. Can't just lock the whole system out without any notice.
The long SEABORG outage a month or so ago was pretty inconvenient for me; the timing was awful. ...
Notify users promptly by email if their passwords have been reset or deactivated.
- Account and Accounting issues: 3 responses
... The charge factors for newer machines should be reduced.
Having production accounts for collaborations would be quite helpful. ...
... I would like to see the possibility of having group production accounts. This is something that we have requested for the past couple of years. I fully realize that there are security and accounting implications, but there are ways of solving this issue in a way where it is fully trackable who submitted what when. I thought that there was a solution that was going to be implemented, but somehow this never happened.
- Improve consulting services: 2 responses
The consulting people should put more time into solving the customers' questions.
Would you send the plumber to fix your roof? No. Why then does NERSC force, for example, a chemist, to do all the system administration and software support he's not very good at, instead of having tech support fix the problem quickly? I really can't fathom why no one at NERSC is willing to help me get my code running. I'm four months into my work at NERSC and have yet to run anything non-trivial.
- Visualization improvements: 2 responses
I hope NERSC can improve its visualization software and hardware.
... So far I have been enjoying the various visualization software options (mostly AVS and IDL) available on DaVinci. However, one of the major simulation codes I have been recently using, the NIMROD code, has been designed to have its data output format work mostly with the visualization package Tecplot. Tecplot is a commonly used commercial visualization package that is well known for its easy accessibility and short learning curve. Unfortunately it is not available on DaVinci. I requested consideration of the installation of Tecplot on DaVinci about a year ago, based not only on the need from my own project, but also on the more important fact that the installation of Tecplot would benefit a large pool of NERSC users who are also users of the NIMROD code, which is one of the two major fusion MHD codes supported by the DOE Office of Fusion Science. Yet my request is still under "evaluation" after nearly a year. I would like to take the opportunity of this annual survey to restate my request and my concern about this request. ...
- Web improvements: 2 responses
Improve web documentation for software tutorials, etc.
Better information on the collection of software and promoting new tools for use in the scientific community. Help to simplify the use of computers.
- Other suggestions: 2 responses
It would be valuable to some of my applications to enable network access directly to and from the compute nodes from other DOE SC centers.
Enable connectivity with PCs. Allow a way to use commercial tools found on a PC with the CPU power of NERSC.
How does NERSC compare to other centers you have used? 67 responses
- NERSC is the best / overall NERSC is better / positive response: 41 responses
Your staff is more user friendly and that is crucial to success...
NERSC is more professional than other centers that I have used
NERSC has super people, not only very knowledgeable but also very friendly. Thank you so much for all of the people there.
We use ERDC and NAVO systems of DoD. NERSC has better websites, explanations .... for users.
NERSC is my favorite center.
I've used AHPCRC (the Army High Performance Computing Research Center) and the supercomputers at the Minnesota Supercomputer Center. NERSC is much better about communicating problems/changes/new information.
I am also a user of RCF at BNL. I prefer to work on NERSC (PDSF), which is faster than RCF.
NERSC does very well in terms of allocation of CPU-time compared to NIC.
A mass storage system with migration, like the NERSC HPSS file system, is simple to use.
cf. COSMS, Cambridge, UK: NERSC has much better facilities, and is considerably more stable.
It is better than ORNL-NLCF and PSC.
Compared to ABCC, NERSC has a faster job turnover rate, shorter job waiting times, and many more CPUs to use.
NERSC is the best one.
NERSC has greater amount of computational resources, easier access and constant availability through the year.
NERSC is better than NCCS at ORNL.
Compared to Jazz at Argonne National Lab, I find NERSC provides much better computing resources in terms of availability and performance.
NERSC is the best computing center I have used. The user support and system administration is top-notch. This is compared to a big cluster I attempted to use at LSU, the SNO grid computers, and locally administered mini-clusters.
I would rank NERSC as one of the top centers among all centers that I have used. The quality of service that NERSC provides is comparable (or even better) to that of the Minnesota Supercomputer Institute or the National Center for Computational Sciences (ORNL). In my opinion, significant expansion of the current computing facility to accommodate grand challenges in science is probably the most important next step that NERSC should consider. The arrival of the new Franklin machine will definitely narrow the gap, and we are looking forward to testing and porting our codes to the Franklin platform as existing NERSC users.
In terms of reliability and user support it is very good compared with NCCS at ORNL.
Hands down the best. Much better than SDSC, OSCER (our local center), or HLRN
Easy to get large jobs run. (PSC, ARSC)
In addition to NERSC, I have made use of the NCCS at Oak Ridge as well as local clusters at my home institution. Compared to the NCCS, NERSC has far better reliability and software support, a less overtaxed and better-informed support-and-consulting staff, and much more informative web resources, with timelier information about system outages, upgrades, and similar issues. I also find NERSC's queueing policies more congenial, and its security measures less of an impediment to productivity.
NERSC is generally better than most other centers
Very good center, very reliable. I ran at NCEP; they had stability problems.
It is as good if not better than all facilities I utilize.
Good. ERDC MRSC
See response to the first question in this section. [NERSC provides excellent, world-class HPC resources in almost all aspects, from hardware to software. What distinguishes it most from other supercomputing centers is, in my opinion, its superior user support, in both consulting and services, although there is still room for improvement. That has made our scientific work more productive, and that's why NERSC is important to me.]
NERSC is the best of all.
Compared to Juelich and DESY: NERSC's documentation is far more complete. I also find the queuing system on Seaborg (debug class etc.) much more efficient.
LLNL, SDSC, PSC, NCSA, TACC.
NERSC is by far the best compared to ORNL NCCS, ARSC, SDSC, etc.
NERSC is indeed an excellent facility, compared to the Oak Ridge supercomputing center.
NERSC compares very favorably to other centers (eg ORNL, LANL).
NERSC is very good compared to ORNL, OSC, PSC, SDSC, and the LCF at ORNL. It compares well in consulting, although ORNL is getting better with the years.
Very favorably, in particular as far as the development of a long-term research program is concerned, unlike some other supercomputing centers often aiming at quick benefits at low cost (short allocation periods, difficulties with extensions, etc.). E.g. MareNostrum at Barcelona SC.
NERSC provides by far the best support. Compared to LLNL.
I have used the RHIC Computing Facility at BNL and I believe that NERSC compares very favorably with that cluster. One of the reasons for this is that NERSC appears to have a more pragmatic approach towards the security burden. Most users realize that in these times we need to be careful with computer security, on the other hand, this should not overburden the user. I know that at RCF, some users can no longer do their work because of the security situation there. I think that NERSC has mostly solved this by careful network monitoring and isolation of machines.
I have used Jazz at ANL. The cluster there is smaller, ~300 CPUs. However, the math libraries (i.e. SCALAPACK, FFTW, etc.) and the Fortran compilers are not as well integrated as they are on Seaborg. As a physicist I am more worried about the science than software issues; therefore, the experience of porting codes to Seaborg has been smoother.
- NERSC is the same as / mixed response: 11 responses
as good as the best compute centers I've used
NERSC is among the best.
We have used LBL's SCS (scientific cluster support) service in the past. NERSC support's response time and quality of cluster maintenance (both hardware and software-wise) is definitely in a higher league. This is likely due to the fact that SCS has many different cluster setups with different hardware and customized software, so issues are more complex. The only thing I can think of is that SCS's clusters seem to have a more secure gateway, namely that you need a secure key provided by a handheld device to log onto the system.
In terms of production runs, NERSC is doing very well, especially with the machine Bassi, compared to SDSC. However, data visualization still needs improvement.
I also use NCCS at ORNL. NERSC hardware is more stable and performs better. On the other hand, NCCS staff assist users in more direct ways to improve the performance of applications on their machines.
NERSC machines are up more reliably than NCAR & ORNL
We have fewer hardware and software problems with NCAR machines.
We get better turnaround on ORNL and NCAR machines
ORNL has much more responsive consultants.
I have used computing clusters at Fermilab, SLAC, CERN, and BaBar experiment clusters around the world. Generally the computing power at NERSC is better on paper, but the ease of use (production accounts, HPSS software, usable uptime/stability, etc.) seems worse at NERSC. Other centers allow production accounts for processing of collaboration data.
The Seaborg processors are quite a bit slower (~factor of 3-4) than the current Intel Xeon processors we have on our local parallel clusters. The availability of a large number of processors at Seaborg is attractive.
Local clusters can have extensive down times (several days).
I have been using NCSA, PNNL, OSC and the Cornell facilities. NERSC compares well with all other centers. Its strength is reliability and access to large numbers of processors. Its weaknesses are long wait time on seaborg and the missing Fortran 2003 compiler.
The TJNAF computer farm (>100 Linux boxes) has a much faster turnaround for small (test) jobs, but there are no real consultants available.
I have been extremely pleased with NERSC and it compares with the top centers I have used. I have used resources at LLNL, LANL, Sandia National Laboratory.
- NERSC is less good / negative response: 4 responses
RCF used to be terrible and PDSF was the model site. They have switched places, in my opinion. I also use Livermore Computing, and that system is also better managed and operates more smoothly than PDSF these days.
Hands down the worst I've seen. Other centers give their users timely and effective tech support, even if it means actually spending some time on support requests. If I struggle for more than week installing a code at PNNL, they install it for me so I can get on with my project.
The NASA advanced supercomputing division Columbia machine, which has much more flexible queueing policies than NERSC and quicker turnaround. (It also has less users, which isn't NERSC's fault.)
The allocation process at other centers (such as NCSA) is simpler.
- No comparison made: 11 responses
NERSC is the main center I use
No other centers.
I have allocations also at San Diego and Pittsburgh. San Diego has a queuing policy that favors large jobs. Both of them operate strategic user programs to help the large users develop efficient codes.
OSC has more different architectures around to test things out. that is great -- though I would not recommend NERSC doing the same -- NERSC needs to concentrate on big machines and run them well.
UCSD has a smaller number of processors available.
Minnesota Supercomputing Institute, Minneapolis, MN
Not really sure.
I only use NERSC facilities. No knowledge of other centers.