

Minutes of the ERSUG/EXERSUG Meeting 

UCLA Faculty Center
Jan 12-13, 1994


Wednesday, Jan 12, 1994  

Morning Session

Tom Kitchens: View from Washington

Staff of OSC:
Dave Nelson, John Cavallini, Fred Howes, Dan Hitchcock,
George Seweryniak, Greg Chartrand, Bob Aiken, Wally Ermler,
Gary Johnson, Jim McGraw, Linda Twenty, Melea Fogle, Jane Hiegel.
Gary Johnson leaves in October.

Gloomy expectations were presented of yet further cuts and rescissions
in the OSC budget (worse yet in the out years). In Kitchens' words, "In these
times if you're flat, you're fat":

Effect of budget cuts on ER and OSC:
trimming 2%
general reduction 4%
direction costs .1%
rescission 3%
congressional mandates 2%

John Cavallini plans to apply the cuts through management decisions rather than
across the board. These three items have the highest priority to keep flat (i.e., no cuts):

Access program (Nersc)

The FY94 HPCC report is just out, 11.5 months late. The briefer FY95 report
(called the timely report) is the one for which our input has been requested.
It will be out in about one month!

Reinventing government (7% cut in FY96):
National Science and Technology Council
-- replaces at least 3 councils for 3 problem areas, rolled up into one.
Federal Coordinating Council Science Engineering and
Technology (FCCSET)
Space Council etc.
-- members from major agencies (e.g., O'Leary)
-- chaired by the President himself, with Gore as vice-chair -- raises visibility a lot!
-- Kitchens worried that politics will render it ineffective

There still exists the HPCCIT (High Performance Computing, Communications,
and Information Technology)
-- now 10 agencies. NSA now a full member
-- 5 topics
1) NREN (internet in here)
4) IITA (National Information Infrastructure/IITA)
-- evidently this is a very confused arrangement with several players:
-Brown, Dept of Commerce
-White House
-DOE role still not as certain as other agencies

The OFE request to move its funding of NERSC over to the OSC budget has
been denied, by some high-level congressional committee(?). This puts
a premium on EXERSUG lobbying OFE to act to protect this piece of NERSC's funding.


1) There is a need for ERSUG to show its needs and requirements.
EXERSUG needs to get out an annual report. It must be presented
to the right people or else it is not worth the paper it is written
on. What goes into such a report?
--budget for NERSC in one section
--push advanced applications of NERSC hardware/software
-- use of C-90 as development platform for MPP machines, e.g.,
Sydora (UCLA), Williams (LLNL) MPP PIC gyrokinetic codes for the
grand challenge Numerical Tokamak Project.
To whom is it addressed?
-- OSC, most extensive detailed report
-- Martha Krebs, director of Energy Research (?), less detailed
-- even up to Crumbley (?) level, very brief, only highlights
There is a need to find/make the opportunity to go to Washington and
do a presentation to these people.

2) SAC (Supercomputing Access Committee) has real discretion on computer
access and allocation -- it really controls it. More contact is needed
with them, and they should be invited to our meetings. SAC can help get
access to important individuals such as Krebs, Gribled, M.A. Scott (she
uses input as another channel) and Lewiss(?).


Bruce Griffing

Bruce introduced Moe Jette as group leader in production computing.
Steve Louis is now responsible for advanced storage to replace CFS.
Keith Fitzgerald now manages CFS.


Moe Jette - Production Computing Advances

--Centralized User Bank (CUB) installed in Oct 1993.
--Portable Batch System is under development.
--Secondary Data Storage is now directly available with 103 megawords of
solid state disk.
--NQSTAT now estimates job turnaround times.
--Batch execution is less costly than interactive - results in more
efficient memory use and less idle time.
--Multitasking jobs have grown more common, thanks in part to the SPP
program, NERSC multitasking assistance and the recognition of
improved throughput.

Portable Batch System:
--POSIX compliant batch system - portable to any UNIX platform
--Cooperative development effort with NASA Ames and Livermore
Computer Center
--Scheduled for beta testing in Spring of 1994
--Improved resource controls
--Routing of jobs between hosts permitted
--Site specified policy modules give tremendous flexibility
--Based upon client - server model

Idle Time (as a percentage of all CPU time)
June 1993 Dec 1993
Cray/A 14% 3.7%
Cray/C 11% 2.9%
Cray/F 11% 0.6%

UNICOS 8.0
--Available in Spring of 1994
--Asynchronous Swapper - improves interactivity
--Multi-threaded Kernel - reduces system overhead
--Unified Resource Manager - improves control of resources
--POSIX compliant shell and utilities - improves interoperability
--Kerberized clients and utilities (klogin, krsh, kcp, kftp, etc.)

Interactive Checkpoint
--Checkpoints interactive session upon graceful shutdown
--Interactive session restart possible at login time
--Available in UNICOS 8.0, but NOT in the initial release (fall 1994)
--NERSC will beta test in January
--Need to assess disk space use and effectiveness of restart

On Disk Space
We purge the oldest files first and purge only enough to
keep adequate disk space available. There are two problems
with this: data migration and the minimum purge time. Data
migration is our first course of action to free disk space,
so by the time disk space gets low enough for purging to
begin, the files being purged have mostly been migrated to
CFS already. The purge therefore fails to free much disk
space and, in practice, will eliminate all files older than
the minimum purge time at one stroke, with little effect on
the available disk space. The minimum purge time is 30 days
for users with no active or queued NQS jobs, 90 days
otherwise. To make disk space available in large quantities,
a drastic reduction in purge time would be required. Some of
our clients will touch their files daily, have some program
keep them open continually, or do whatever is necessary to
prevent having their files purged. These actions take place
now, and a reduction in the purge time or charging only for
"old" files may encourage more abuse.
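The purge policy described above (oldest files first, with a minimum purge
time that depends on NQS activity) can be sketched as follows. The function
name, data layout, and thresholds-as-constants are illustrative assumptions,
not NERSC's actual implementation.

```python
from datetime import datetime, timedelta

# Minimum purge times from the policy above (illustrative sketch only)
MIN_PURGE_DAYS_NO_NQS = 30    # users with no active or queued NQS jobs
MIN_PURGE_DAYS_WITH_NQS = 90  # users with active or queued NQS jobs

def purge_candidates(files, has_nqs_jobs, now=None):
    """Return files eligible for purging, oldest first.

    `files` is a list of (path, last_access_time) tuples; a file becomes
    eligible once its age exceeds the owner's minimum purge time.
    """
    now = now or datetime.now()
    min_age = timedelta(days=MIN_PURGE_DAYS_WITH_NQS if has_nqs_jobs
                        else MIN_PURGE_DAYS_NO_NQS)
    eligible = [(path, t) for path, t in files if now - t > min_age]
    # Oldest files are purged first, so sort by ascending access time
    return sorted(eligible, key=lambda item: item[1])
```

Note how the sketch reproduces the problem in the text: every file past the
threshold becomes eligible at once, so the purge is all-or-nothing rather
than gradual.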


Steve Louis - Solutions for Storage - Present and Future

Where We Are Now

A December CFS Statistical Snapshot
- 2,837,745 files
- 10.689 terabytes
- 5,090 root directories
- over 60,000 cartridges
Additional Personnel for Production Storage Environment
- new group leader: Keith Fitzgerald
- additional systems administration support
- additional operations support
Impact of Quota Introduction
- 300,000 files deleted
- growth rate of CFS slowed "for a while"
- notification at login if over 90% of group quota
- plans for a per-user quota system later in the year
CFS Version 61
- tape bitmap removed for unlimited tape capacity
- will use 512 MB/tape to start, 2 GB/tape eventually
- block ID searching for faster positional access
- running now on data migration machine
MVS/XA Operating System Changes
- better addressing to allow more tasks to run
- new software to utilize 3490E device types
Cray Client Error Recovery Improvements
- additional retry logic in CFS interface
- enhanced communication protocols
- better handling of segmented files
FTP Gateway Status
- limited local availability on SAS machine
- still evaluating usefulness and performance
StorageTek 36-Track Upgrade
- in progress: installed on migration system
- full production by end of January
- all 22 drives and 3 controllers to be upgraded
Use of Test System for Data Migration
- fully dedicated to DMF for all machines (August '93)
- has helped reduce direct CFS load
- files less than 72 hours old not migrated
- files less than 0.5 megabyte not migrated

Where We Will Be Soon

National Storage Laboratory Update
- New Participants and Directions
- Argonne/Cal Tech (Scalable I/O initiative)
- U.S. Patent and Trademark Office
- NSL-UniTree/OpenVision Software Merge
- Eventual goal is one commercial version of UniTree
- High Performance Storage System (HPSS) Status
- Almost 20 FTEs working on HPSS
- Delivery 1 - July '94
- Delivery 2 - July '95
- Delivery 3 - 1996
- New Related Activities
- National Storage Industry Consortium (NSIC)
- National Storage System Foundation (NSSF)
- National Information Infrastructure Testbed (NIIT) Work Group

NERSC NSL Technology Base System ("Mini-NSL")

- Plans and Schedules
- acquisition in FY '94 - monies allocated
- Benefits and Advantages to NERSC
- off-load large users and large files
- Base System Description
- utilize NSL-proven technologies
- UNIX workstation server
- 50-70 GB disk cache
- 1-2 TB archive capacity
- HIPPI network attachments
- Relationship to HPSS
- planned as interim production solution
- provides natural transition to HPSS software
- provides base for HPSS supported peripherals

Newly Expanded Production AFS Capacity

- Plans and Schedules
- C-90 and SAS clients available now
- expansion planned in FY'94 - monies allocated
- Benefits and Advantages to NERSC
- off-load from CFS of smaller users and files
- helps integrate host disk systems and archival storage
- supports distributed collaborative research
- Barry Howard will describe AFS in detail later

Where We Will Be in 1995-1996...

- Growth Paths toward HPSS
- New Disk Subsystem Acquisitions
- HIPPI-speed arrayed disk subsystems
- Silo Conversion Possibilities
- silos may be moved to Mini-NSL and HPSS
- additional performance upgrades possible
- CFS Phase-Out Tasks
- elimination of 3380 disk farm
- elimination of 4381 mainframes
- directory transferred to Mini-NSL or to HPSS
- CFS formatted tape reading capabilities
- New Archival Device Possibilities
- high-performance helical-scan tape technologies
- high-performance linear-recording tape technologies
- high-performance optical disk/tape technologies

- HPSS Future Deliverables and NERSC User Requirements
- Enforcement of storage quotas by user/group
- Integration with NERSC AFS capabilities
- Security audit records and security alarms
- Effective data/file migration and caching
- Comprehensive statistics and accounting
- Management of system by operations staff
- Efficient repacking of physical volumes
- Sophisticated data recovery utilities/tools
- Conversion paths from CFS and MINI-NSL
- User input needed by mid-February '94

Into the Next Millennium

- Multiple gigabyte data rates to/from storage
- Multiple petabytes of data in archival storage
- Highly-distributed, geographically-spread devices
- Non-file-oriented data access (e.g., digital libraries)
- Storage and I/O recognized as key to production

Steve Louis on storage solutions, near-term and long-term.
--Shorter term.
-Lokke: NSL, previously announced with a finite time plan, is now getting
fuzzed in a positive way; something at the NSL will continue to exist.
-HPSS is part of NSL; many labs are planning to run it as a replacement for CFS
or UniTree as a more commercial version becomes available, July '95 -> '96.
Louis needs input from users on future storage requirements for HPSS.
-Mini NSL, 1-2 TB (small compared to 30-40TB in CFS). Money allocated.
-Newly expanded AFS capability. Offload smaller files.
-Kendall: many of his PNL users and also those of Argonne use workstations
for production. But then big jobs increase pressure on NERSC, especially
for more storage.

--Midterm, '95-96
If $ available, a new archival subsystem -- would give an order of magnitude
improvement in capacity and transfer rate.
-- silo replacement or upgrade is a possibility, depending on technology.

--Longterm, year 2000+. Steve thinks the advanced technology will be there.
Not clear whether the $ for NERSC to buy this stuff will be there.
What is clear to all: Storage and I/O are the keys to productivity.
This point needs great attention and lobbying efforts in Washington.

--Much discussion with Potter on storage needs of climate modelling group,
especially on their data bottlenecks.
-Silos the problem?
-A big issue is how to get data from the archival devices.
-Louis said Potter could get much better data rates on a system with an
adequate disk caching capability. CFS does not really use disk as
a caching device, mostly just a temporary space prior to writing to tape.
If active data is on a disk cache, delays are much less.
-$ a concern for buying very fast tape devices (15-30 MB/sec) compared
to the current data rates of CFS 3480 drives (1-3 MB/sec).
Costs $200K for a large helical-scan (DD-2) drive, even
without any robotics.
-If a program can buy an archival device for a limited set of users,
NERSC would be able to incorporate it if device drivers exist.
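To put the quoted data rates in perspective, a back-of-the-envelope sketch;
the 10 GB dataset size is an illustrative assumption, not a figure from the
discussion.

```python
# Rough transfer times at the tape data rates quoted above.
def transfer_hours(size_mb, rate_mb_per_sec):
    """Hours to move size_mb megabytes at a sustained rate_mb_per_sec."""
    return size_mb / rate_mb_per_sec / 3600.0

dataset_mb = 10 * 1024  # hypothetical 10 GB climate dataset
hours_3480 = transfer_hours(dataset_mb, 2.0)   # CFS 3480 drives, ~1-3 MB/sec
hours_dd2 = transfer_hours(dataset_mb, 20.0)   # helical-scan DD-2, ~15-30 MB/sec
print(f"3480: {hours_3480:.2f} h; DD-2: {hours_dd2:.2f} h")
# prints: 3480: 1.42 h; DD-2: 0.14 h
```

The order-of-magnitude gap in drive speed translates directly into the
archival retrieval delays the climate group was complaining about.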


Moe Jette - Centralized User Bank

Centralized User Bank Progress
-Full production use began on Oct 1, 1993
-Good reliability, but some problems found
-Development continuing, resulting in some instability

Central User Bank Advantages
-Centralizes allocations, accounting and user management
-Suspends interactive jobs (no longer killed)
-Restarts NQS jobs automatically when resources become available
-Restores user shell when resources become available
-Principal investigator controls resource infusions

Central User Bank Future Plans
-Complete X window interface for clients
-Implement CFS quota controls by user
-Complete centralized resource accounting (report generation)
-Support Kerberos v5 for unified logins (secure remote commands
without additional passwords)
-Port client daemons to other machine types (T3D, HP workstations)
-Implement disk charging
-Support users being in multiple repositories
-Permit users to change their login name
-Provide resource accounting by process
-Support sub-repositories

-Others have expressed interest in collaborating on the Central
User Bank, which can leverage our efforts
-Livermore Computer Center
-San Diego Supercomputer Center
-NASA Ames


Moe Jette - File System Reconfiguration Proposal

Local File Storage Dilemma
-Cray disks are almost always filled to capacity, with the exception
of /workbig
-Additional Cray disks are very expensive
-Unnecessary files are not being deleted by their owners
-Many files migrate to CFS and are purged after 90 days
-Large file systems provide for more efficient disk utilization
-Large file systems are more vulnerable to failure

Local File Storage Solution
-Provide disk space where it is most valuable, the Cray/A C90
-Develop other storage systems: AFS, CFS, NSL and HPSS
-Charge for disk space to encourage efficient use
-Purge the /tmp/wk# file systems more aggressively
-Establish a large file system which can be very aggressively purged

Current Configuration
% of resources
Machine CPUs Memory User Disk CRUs disk
Cray/F 4 128 MW 76 GB 9 38
Cray/C 8 128 MW 48 GB 16 24
Cray/A 16 256 MW 76 GB 75 38
totals 28 512 MW 200 GB 100 100

User disk includes the /tmp and /tmp/wk# file systems.
User disk also includes the /workbig file system on the Cray/F
and Cray/A at 19 GB per machine.

Proposed Disk Reconfiguration
-Remove the /workbig and /tmp/wk3 file systems from the Cray/F
-Establish a 38 GB /usr/tmp file system on the Cray/A
-The /usr/tmp file system will be purged aggressively
-TMPDIR is purged upon job termination
-Purge files over 24 hours old as space is needed
-No data migration from /usr/tmp to CFS
-Programs will require minor modification to use /usr/tmp
-The /usr/tmp file system is ideal for scratch files, symbolic links
can provide easy access to input files
-Only the Cray/A will have a large /usr/tmp file system
-NQS disk limits will be increased substantially, but the additional
disk space will be in /usr/tmp
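The scratch-plus-symlink pattern suggested above might look like this; the
paths, file names, and helper function are hypothetical examples of ours,
not a NERSC utility.

```python
import os
import tempfile

def make_scratch(base="/usr/tmp", input_file="/u/someuser/inputs/run42.dat"):
    """Create a per-job scratch directory under `base` and symlink an input
    file into it: scratch output lands on the aggressively purged,
    never-migrated file system, while the input file itself stays put."""
    scratch = tempfile.mkdtemp(prefix="job.", dir=base)
    link = os.path.join(scratch, os.path.basename(input_file))
    os.symlink(input_file, link)  # a link, not a copy: no extra disk used
    return scratch
```

Because /usr/tmp is purged and never migrated to CFS, only true scratch data
belongs there; the symlink trick keeps long-lived inputs on safer storage.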

Proposed Configuration
% of resources
Machine CPUs Memory User Disk CRUs disk
Cray/F 4 128 MW 39 GB 9 20
Cray/C 8 128 MW 48 GB 16 24
Cray/A 16 256 MW 113 GB 75 56
totals 28 512 MW 200 GB 100 100

User disk includes the /tmp and /tmp/wk# file systems.
User disk also includes the /workbig file system at 19 GB and
the /usr/tmp file system at 38 GB on the Cray/A.

File Purging in /tmp/wk#
-Files in the /tmp/wk# file systems are purged as space is needed
-No file under 30 days old is purged
-If a user has an active or queued NQS job, none of their files
under 90 days old are purged
-A reduction in these lifespans makes it easier to maintain free disk space
-We propose reducing file lifespans to 14 days if no NQS jobs are
present and 30 days otherwise

Future Options
-The /workbig file system on the Cray/A could be merged to the
/usr/tmp file system
-Effective utilization of the new /usr/tmp file system must first
be established
-We will reexamine the Session Reservable File System

Other Accounting Issues - Disk Space
-Disk space is in critically short supply
-Many users do not delete unnecessary files
-Disk space charges could result in more efficient use
-NERSC proposes reducing the CPU charge by 10% and instituting disk
space charges to recover those charges
-Disk space charges would be uniform across all Crays on the /usr/tmp and
/tmp/wk# file systems
-The rate for disk space would be 1.08 CRU per gigabyte-hour. The CPU
speed factor on Cray/A changes from 3.50 to 3.15
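As a sketch of the proposed trade-off: only the 1.08 CRU per gigabyte-hour
rate and the 3.50 -> 3.15 speed factor come from the proposal; the job
figures below are illustrative assumptions.

```python
# Proposed Cray/A charging: CPU speed factor cut 10% (3.50 -> 3.15),
# with disk space on /usr/tmp and /tmp/wk# charged at 1.08 CRU/GB-hour.
OLD_FACTOR, NEW_FACTOR = 3.50, 3.15
DISK_RATE = 1.08  # CRU per gigabyte-hour (proposed)

def proposed_charge(cpu_hours, disk_gb, disk_hours):
    """CRUs charged to a job under the proposed scheme."""
    return cpu_hours * NEW_FACTOR + disk_gb * disk_hours * DISK_RATE

# Hypothetical job: 10 CPU hours, holding 5 GB of scratch for 2 hours.
current = 10 * OLD_FACTOR             # 35.0 CRUs under the current scheme
proposed = proposed_charge(10, 5, 2)  # 31.5 + 10.8 = 42.3 CRUs
```

Under these assumed numbers, a job that frees its scratch space quickly pays
less than today, while one that sits on disk pays more, which is exactly the
incentive the proposal is after.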

Other Accounting Issues-Memory Charges

-Large memory jobs without multitasking cause idle CPUs
-Multitasking results in improved throughput, but increases the CPU
time and charge
-Memory charges are one method of providing an incentive to multitask
-Multitasking jobs have grown more common, thanks in part to the SPP
program, NERSC multitasking assistance and the recognition of improved
throughput
-Interactive execution discouraged by charging algorithm - batch job
scheduling more efficient than interactive
-NERSC does not believe that further alterations in the charging
algorithm are necessary at this time

Idle Time
June 1993 Cray/A 14% ; Cray/C 11% ; Cray/F 11%
Dec 1993 Cray/A 3.7% ; Cray/C 2.9% ; Cray/F 0.6%
Idle time as a percentage of all CPU time.

The Hard Choices
-Moving disks from Cray/F to Cray/A
-Establishing /usr/tmp file system on Cray/A
-Reducing purge time on /tmp/wk#
-Charging for disk space


On disk space-- 4 proposals
1)Move some of fma disks to ama
2)create a new TMPDIR, very aggressively purged,
NO data migration here, take place of workbig
3)reduce purging time of /tmp/wk? from 30days to 14 days
4)implement charging for disk space.

Sense of the participants was that 1) and 2) should be done.
Big differences of opinion on 3) vs 4).
-- LLNL(Crotinger), also GA (Turnbull)
strongly against 3), support 4).
-- JNLeboeuf strongly against 4).

McCurdy suggested default for upcoming few months:
-- small, (5% ?) disk charges to be implemented.
-- No changes yet in purging algorithms.
-- Gather data on impact, report back, make judgements.

There was, to say the least, a lively discussion of charging for disk
space and/or reducing the lifetime of files on the work filesystems.

A committee was formed to discuss 3) and 4),
gather opinion via e-mail from the user community, and report back:
Jean-Noel Leboeuf
Rick Kendall
Rick Nebel
Alan Turnbull
Keith Fitzgerald
Moe Jette
Mike Minkoff
Jerry Potter
A discussion of this will follow on the exersug and ersug reflectors.

AFS should help preserve the small files

Moe Jette: There is a problem with the proposal to never purge small files:
you then even more seriously fragment the disk.


Adjourn for lunch

EXERSUG-only lunch-hour meeting:

--Concluded EXERSUG needs to change its member selection process to something
closer to what it used to be: EXERSUG members nominate new members and then
ask for that program's SAC member's approval. The point is to help ensure that
SAC has continuing confidence in EXERSUG. Recent follow-up with SAC via
Tom Kitchens indicates that it is not necessary to change our charter.
SAC is willing (and has even started) to make suggestions for new
members. SAC's main concern is that EXERSUG not give the appearance
of acting in any way as "advisory" to DOE/SAC.
--Concluded that next meeting, June-July 94, needs to be in Washington,
near or in Germantown. The point is to invite SAC members, to encourage
closer interaction and building of trust between Exersug and SAC.
We can consider the next meeting a SAC site visit!
--JNLeboeuf said we will need help setting up the meeting -- the hotels
don't do much for you. He said ESSC has a secretary go beforehand to make
sure of arrangements, etc. ?
--McCurdy made a strong case for more help from exersug on the lobbying job
for Nersc in DOE. The need is especially great in these times of flat
and declining budgets.
--Byers will remain EXERSUG chair one more year and Hingerty will succeed
him at that point.
--McCurdy on the ongoing Sprint-ATT battle and the NERSC contract to get T3 lines:
Upshot: Sprint chosen again, story still not finished. He concluded by
saying that one way or another he expects to get T3 by the end of this year.
--Byers noted an annual report is needed and that it should be seen by the
right people in Washington.


Resume meeting after lunch

Bill McCurdy - The Center for Computational Science and Engineering (CCSE)
A short presentation was made describing this new center at LLNL and how
it will collaborate with NERSC on various computer projects. It will also
be involved with external funding such as CRADAs and grants.

ESNET status: Last year ATT objected to the award to US Sprint. The
contract has again been won by US Sprint, but it is under protest. DOE and
Congressional committees are communicating on it.

T3D Machine: This is a CRADA involving LLL, LANL, Cray, and TMC. It has
128 nodes and both labs are to get a T3D. While the award is for specific
projects 15% of the time will be available through NERSC. Initial
access will begin in April/May.


Mike McCoy - Status of the Procurement of a Massively Parallel Processor
Team Leaders: Brent Gorda - Hardware
Tammy Welcome - Software
Bruce Curtis - Performance

The Foundation
Recommendation of the user workshop (August 1992) incorporated into the
OSF Blue Book (February 1993):
-Recommended procurement of an MPP capable of 200-300 GF (peak) by 3rd
quarter of 94
-Integration of MPP with appropriate mass storage facilities and
distributed computing environment
-R&D of components of software environment required for production
-Integration of software achievements of HPCRCs into the production
environment and user access to the HPCRCs for transition

The Objective - The central focus of the Blue Book is the production
environment. In point of fact, this exists nowhere today.

Production Environment Criteria
Machine must support multiple (many) users simultaneously
-robust scheduler (time and space shared)
-advanced visual tools... debugger and performance analysis
-variety of programming models, efficiently implemented
-applications base
-single system image
-tools, GNU, applications, etc
A balanced architecture
-High speed communications to disk and the network
-Flexible dynamic partitioning of the resources
-Balance between cpu peak speed, communications, memory to
minimize bottlenecks
-Flexible hardware, minimal down time
Vendor Support:
Is he real?
-Can he be responsive to NERSC user requirements/complaints
-Does our vision of a production environment interest him?
-Does he have depth in the ranks?
-Can he supply experienced analysts?
-What are the odds the vendor still exists in 1996? 1997? 1998?
-Does he have an integrated business and technical plan?
-Does Arthur Andersen believe his controller?

The Strategy
The procurement drives the production environment. RFP written for 2 systems.
-1995 Pilot Early Production (PEP) System
-1996 Fully Configured Machine (FCM)
PEP System
- >256 PE's, >59 MB usable memory/PE
- must support an initial production environment
- Milestones must be met in the year after delivery
FCM System
- >1024 PE's, >250MB/PE
- an optional upgrade

Structure of the RFP
Divided into 5 major areas
-Company Profile and Management Structure
-Training and Technical Services

Vendors have the draft now or soon
Presolicitation Conference at the end of January 94
RFP out in March
Responses back by June
On site visits by June 15
Evaluation complete by July 10
Award by October (?)
Delivery of PEP before Spring, 1995
Delivery of FCM one year later
Release of FCM.... 1999


Mike McCoy - The High Performance Parallel Processing (HPPP) Initiative

CRI, TMC, and LLNL/LANL submitted a proposal to DOE (DP).
-Laboratory scientists to develop applications in collaboration
with industrial partners (for US competitiveness)
-Two CRI 128 PE T3Ds: one at LANL and one at LLNL (NERSC)
-There will be significant access to NERSC users
Award: $26,500,000 over 3 years (tentative)

The Configuration
T3D to be placed logically at Center for Computational Sciences and
Engineering (CCSE) and physically at NERSC.
Each T3D:
-128 PEs (150 MHz Alpha)
-64 MB/PE
-90 GB disk
-NERSC users (only) will have access to CFS
System Administrator and installation paid by HPPP

Access to High Performance Computing Research Centers
As was recommended in the Blue Book, the OSC has negotiated some
access to Oak Ridge Paragon.
Access to ACL-CM5 and CCSE T3D to occur somewhat later
PI's must:
-Submit a short proposal asking for access to one or more sites
-Users with MPP Grand Challenge allocations should not apply
-Access will be limited, due to limited resources
-OSC (with limited NERSC involvement) will arbitrate access
NERSC will provide consulting support

MP Access Path For NERSC User Base
Oak Ridge allocation has been used (to date) for benchmarks
-Paragon time could be available as early as March
ACL CM5 should be available by Spring 1994 (must be negotiated)
T3D available by mid-Summer 1994
PEP available by early 1995
FCM available by early 1996


Bruce Griffing - Status of Special Parallel Processing (SPP) Effort

SPP's Inaugural Year

Approximately 20,000 CRUs were allocated to 9 accounts beginning
in mid-April 1993. By early December over 20,000 CRUs had been charged.
This was probably more the result of a few users moving additional
time into their SPP accounts (a practice which is prohibited),
a change in charge rates, and an occasional rebate than of the
codes achieving 100% efficiency. Much credit goes to Bruce Curtis,
who helped refine the algorithms and scheduled the jobs during nights
and weekends. Also credit Bruce Kelly, Clark Streeter, Moe Jette,
and others who helped resolve NQS and related system issues.

SPP in 1994 or "What's different from 1993"

More time is being allocated. The target is 66,000 CRUs to be delivered
between February and October. The schedule is being expanded to accommodate
the allocations. We are looking at three 11-hour shots each week.
Last year it was two 8-hour shots each week. So far, 19 users have been
preparing entries using the test queues. Graceful shutdown routines
are being provided to reduce the amount of manual scheduling required.
Initially we will terminate an SPP job at the end of the day's run.
Later this may be relaxed somewhat if we can.

What Issues Remain?

SPP jobs are vulnerable to delays in Data Migration Service. We intend
to put DMGETs into Stage-In scripts to reduce potential delays. This
problem is also partially eased by the graceful shutdown process.
Interference from interactive jobs can still occur. The Fair Share
Scheduler from CRI did not solve this problem. We haven't given up
trying to find ways to reduce the potential impact. Disk space can
still be a problem, but you have seen the plans for helping ease
that on the A machine. I/O necessary in the programs and I/O
necessary for graceful shutdown will continue to affect performance.
CRI's FFT multi-CPU performance was a problem for Hammett. He
solved it by writing his own. CRI is aware of the problem.

Meeting adjourned for dinner

Thursday, January 13, 1994

Note: Thanks go to Kirby Fong for preparing additional notes for this
part of the meeting that I was unable to attend (Brian Hingerty)

Barry Howard's AFS and distributed computing presentation
Where We Are With AFS
Andrew File System
-Enables use of appropriate CPU/OS for each task
-Provides a single repository of data which
-is available to every host with an AFS client
-is addressed using a single path (afs/
-is accessed using standard UNIX file system commands

Two Main Thrusts
-Integrate host disk systems and archival storage
-Support a Distributed Collaborative Research Environment

Integration of Storage Policies
-Disk management policies on NERSC hosts are changing
-local disk used for temporary work space
-AFS used for home directory, active source trees and smaller files (<2GB)
-Archival storage used for inactive source trees and larger files (>2 GB)

Integration of Storage Activities
-AFS clients on NERSC hosts
-C90 and SAS clients available now; Cray-2 client coming
-CFS gateway available for testing
-type "cfs" on SAS
-supports both CFS and FTP commands
-handles 210 MB files now, 500 MB later
-Multi-resident AFS server from PSC
-Adds archive to distributed file system
-currently being evaluated using NSL
-AFS expansion planned to accommodate additional users
-DFS test cell coming soon
-Distributed File Service in OSF/DCE

Collaboration Support
-Trial licenses being used for evaluation by ER sites
-NERSC provides consulting support
-On-line AFS users documentation
-Installation tools developed for client and server
-Follow-up with AFS users
-Shared atmospheric chemistry data
-DIII-D equilibrium code data
-shared source code tree
-We are looking for documentation projects
-Collaborators with shared data requirements
-Floating AFS licenses for selected projects


Barry Howard

Rationale for the Formation of a
Distributed Computing Committee for Energy Research

-Formation Task Force formed in June 1993
-Task: Define goals, approach and structure for a joint DC committee
-One meeting in July 1993:
EXERSUG - Barry Howard and Bill Herrmannsfeldt
SCIE - Mark Wiesenberg
ESSC/ESCC - Jim Leighton and Roy Whitney
DOE/OSC - Bob Aiken
-"Reduce the user's perception of disjointedness in the resources
required to complete the task"
-Distributed Collaborative Research Environment
-Generate preliminary list of interest areas which potentially offer
best return on investment
-Form committee to refine this list and start pilot projects
-Use pre-defined metrics to assess impact on collaborators
-Committee structure proposed

Comments on Barry Howard's presentations:

Two years ago an ESNET committee recommended developing a distributed file
system. NERSC brought up the Andrew File System (AFS) as a pilot project. The
system worked well enough that NERSC decided to move it into production use.
AFS is proprietary, but the cost is modest. Members of the ERSUG audience
were confused by Barry's VUgraph about CFS with FTP commands on SAS. Barry
was trying to say that SAS, as well as the Crays, is moving toward a three-level
storage system: (1) local files, (2) AFS files, (3) archival files. The
remarks about CFS were addressing the archival level. People thought he was
saying that AFS client commands would be able to access CFS files, but this is
not so. Archival files and the commands for reading and writing them are and
will continue to be separate from AFS. This caused another diversion where
Jean Shuler asked whether the document about CFS on SAS should even mention
that it supports FTP style commands as well as the one and only style it
supports on the Crays. It would be a lot of work to install FTP style
commands in the Cray version of CFS; therefore, it is unlikely to be done.
The SAS CFS is not really the same CFS that runs on the Crays; it is a program
that talks to a gateway machine that runs the real CFS program. Jean's
question revolved around the issue of whether we should be encouraging the
use of CFS commands on SAS that don't exist in the Cray version of CFS. There
was no answer. Mary Ann Scott asked when a DFS (this is the Open Software
Foundation's Distributed File System) implementation would be available. Barry
said a DFS implementation would be available soon. Steve Louis asked if there
were any truth to the rumor that OSF was going to give up on DFS in favor of
a new version of NFS (Sun Microsystem's Network File System). Barry said OSF
has denied the rumor. Transarc, who markets AFS, will market OSF/DFS as well.
Moe Jette said Cray Research will include DFS in UNICOS 9. NERSC has arranged
trial copy licenses from Transarc so NERSC can help users install and try AFS
for evaluation. AFS does have some problems (such as random access) which DFS
handles better. Another problem is that AFS is not available for VAX VMS
systems. Rick Nebel asked where AFS files actually reside. At NERSC they
reside on storage devices managed by three Sun server machines. Maureen
McCarthy asked if NERSC AFS has the 100MB volume limit. No answer. Barry
said some pieces of third party software like C++ don't work with AFS files.
Barry went on to say there is a need for a user group to focus on distributed
computing. Such a group was formed and met in July. The presentation was
unclear here, but it sounded like the group backed Jim McGraw's effort to hold
a workshop on collaborative computing. Getting back to AFS, Jack Byers pointed
out that AFS required major modifications to UNICOS; therefore, it is clear
that NERSC is capable of performing such surgery upon UNICOS. Moe responded
that Pittsburgh Supercomputing Center had done all the work and that while the
work was major, it was localized in a few places of UNICOS. Jack's point was
raised in connection to an earlier statement that NERSC was reluctant to
implement NASA Ames' session reservable file system (SRFS) to cure the problem
of getting enough disk space before starting big batch jobs due to the
extensive changes needed in UNICOS and because CRI did not want to support
SRFS. Moe said SRFS modifications to UNICOS are not localized; they are
spread throughout the system and are therefore time consuming to retrofit into
future releases of UNICOS. Changing the subject, Moe pointed out to people
that jobs which have AFS files open cannot currently be checkpointed. NERSC
learned this when it tried to bring up the interactive checkpointing
capability. NERSC has pointed this out to CRI. NERSC does not know whether
CRI has thought about how to checkpoint jobs that use DFS, DFS being the
successor to AFS that CRI plans to support. Victor Decyk asked whether AFS
files survive if the job using them dies in a system crash. Rick Kendall
said the system can do a cache recovery upon restart.

Additional Comments from Byers:

AFS is being touted as the best thing since sliced bread.
It should help a lot in preventing loss of source files via purging.
Moe Jette puts essential files on AFS and uses symbolic links. (?)
Rick Kendall(PNL) also very high on AFS.
-- in contrast to NERSC, he reports they have had NO trouble with CVS.
-- PNL has been using AFS for 2 years, a big saving grace,
both for admin and for users.
- Admin backup is a lot more reliable. They had more problems
with disk space on local machines than with disk space on AFS.
- User advantage: can login to any machine onsite and it looks the same,
can have same environment across platforms.
(What extra is needed to really achieve this?)
-- PNL reported that they are now demoing the SGI version of AFS.
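The symbolic-link arrangement Moe Jette mentions might look like the
following sketch. This is a hypothetical illustration, not documented NERSC
practice: temporary directories stand in for an AFS home directory and a
purgeable local work area.

```shell
#!/bin/sh
# Hypothetical sketch: keep the real copy of an essential file on AFS
# (backed up, not subject to purging) and leave only a symbolic link in
# the purgeable local directory.  mktemp directories stand in for both.
AFS_HOME=$(mktemp -d)     # stands in for a directory on AFS
WORK_DIR=$(mktemp -d)     # stands in for purgeable local disk

echo "essential source" > "$AFS_HOME/prog.f"
ln -s "$AFS_HOME/prog.f" "$WORK_DIR/prog.f"   # local name, AFS storage

# The job sees the file under its local name:
cat "$WORK_DIR/prog.f"

# Even if the purger removes the local link, the file survives on AFS:
rm "$WORK_DIR/prog.f"
cat "$AFS_HOME/prog.f"                        # -> essential source
```

The point of the trick is that a purge of local disk costs only the link,
not the data.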

Crotinger says there is still trouble with C++ and CVS in AFS. Howard says C++ fixed?
Barry Howard says NERSC is working with Transarc (?) to get vendors to fix
the problems. Can he really expect resolution this way?
Evidently the problem with C++ and CVS is that they were designed to work
with NFS, and there are some problems getting them to work with AFS.

Shared data files on AFS-- users report ease of use: Bill Meyer (LLNL)
brings data up from VAX at GA; he has a better response than users at GA
trying directly from the VAX. (?)
Evidently though, there are some basic problems in that VMS and AFS don't mix,
so many fusion sites with Vaxes will have trouble. (?)

Jette on Checkpointing:
There are problems checkpointing jobs with open AFS files. This
can be addressed by copying files from AFS to on-line disks when
the job begins and reversing the process when the job completes.
Jobs without open AFS files will checkpoint without difficulty.
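The stage-in/stage-out workaround described above can be sketched as a shell
fragment. The paths are hypothetical: temporary directories stand in for the
user's AFS directory and for local on-line disk, and `tr` stands in for the
real application.

```shell
#!/bin/sh
# Hypothetical sketch of the checkpointing workaround: copy files from AFS
# to local on-line disk before the job starts, run with only local files
# open (so the batch system can checkpoint the job), and copy results back
# to AFS when the job completes.
AFS_DIR=$(mktemp -d)      # stands in for the user's AFS directory
WORK_DIR=$(mktemp -d)     # stands in for local, checkpointable disk
echo "input data" > "$AFS_DIR/input"

cp "$AFS_DIR"/* "$WORK_DIR"       # stage in before the job begins
cd "$WORK_DIR"
tr 'a-z' 'A-Z' < input > output   # the "job": touches local files only
cp "$WORK_DIR"/* "$AFS_DIR"       # stage out when the job completes

cat "$AFS_DIR/output"             # -> INPUT DATA
```

While the job runs it holds no open AFS files, so the checkpoint restriction
does not apply.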


Jette on AFS:
Any AFS client can access any AFS server (given appropriate
authorization). NERSC clients and servers can access ANL clients
and servers transparently (other than network delays for I/O).

Multitasking taking off:

--not only in SPP but also even in daytime one can see 8-16 processor jobs running.
--JNLeboeuf--with lots of multitasking, system calls can cause jobs to slow to
a crawl. Moe thought this was caused by system calls being single-threaded through
the system. UNICOS 8.0 should help this a lot.

Portable Batch System:

--in beta test; designed to solve problems of NQS
--hopefully not so many problems of jobs running out of disk space (?)

UNICOS 8.0

--Asynchronous swapper improves interactivity-- swapping can be helped a lot.
--Multithreaded--many system calls will be processed in parallel; some won't.
Won't be on Cray 2's.
--Turnbull asked about AFS problems. Moe thought his problems were
caused by the single-threading-- a lot of system calls, with lots of I/O
delays going to disk.
--POSIX compliant. This is a big deal; it will cause users pain and will
affect everybody.


Byers Chairs Site Concerns:

Rick Kendall's Site Report (Pacific Northwest Laboratory)

-Computing Environment at PNL
-Offsite systems
-CSCC (Intel Touchstone Delta, Paragon upgrade)
-Argonne National Laboratory (IBM SP1)
-Los Alamos - CM5
-ORNL - Paragon
-Molecular Science Computing Facility (MSCF)
-High Performance Computing Laboratory (HPCL)
-Experimental Computing Laboratory (ECL)
-High Performance Visualization Laboratory (VisLab)

-16 node Intel iPSC/860, 16 Mbytes Memory, 5.5 GB disk


-80 nodes
-32 Mbytes Memory per node
-50 Gbytes disk

-IBM RS6000 980
-8GB SCSI-2, 30GB, HIPPI-RAID (summer 1994)
-250 GB Exabyte Robot system (54 tapes)
-700 GB VHS Metrum Robot (48 tapes)

-SGI Stereo
-AVS Visualization Package

Why the KSR-2?
-Procurement based purchase and KSR won!
-System with most available programming models
-"mature" OS
-collaborative agreement

Workstation Environment at PNL (EMSL/MSRC)
-50+ UNIX Workstations
NCD X-stations
HP cluster
DEC cluster (March 1994)
-70+ Macintosh PCs
-30+ IBM Compatible PCs
-Support Staff FTEs
-2.0 Unix
-2.0 Mac
-2.0 IBM

-~20GB cell with 5 Sun system managers
-exported to SUN, IBM, HP, beta test for SGI
-Gaussian, Hondo, and other Chemistry Codes
-software development in CVS repositories
-X11R4, R5
-Mosaic (Reach out and touch the World)

-Disk space utilization
-No way to reserve disk space
-Queue structure does not always match job mix

-NERSC staff and management very helpful
-Open dialog for change
-Changes pending


PNL is collaborating in a consortium to work on scalable parallel I/O for
working files. They're leaving it to the National Storage Laboratory to work
on scalable I/O for tertiary storage devices. PNL got a KSR-2 in their
procurement. PNL has used AFS for over two years and has not had any problem
with CVS dealing with AFS files. (Barry had mentioned that CVS (a source code
maintenance system) had been observed at NERSC to be unable to handle AFS
files.)


Rick Sydora's Site Report (UCLA)

Their codes generate a large amount of output which in compressed ASCII form
is still 10 to 20GB. They download the compressed output to workstations
at UCLA where they use IBM Data Explorer or AVS to visualize the output. It
can take hours, or even most of a day, to move the compressed output from NERSC
to UCLA. They've tried using the CM-5 at Los Alamos for their calculations, but
the machine is not in full production status yet. The IBM SP-1 at UCLA does
not have enough disk space for a restart file much less for output files. The
C-90 at NERSC is the only usable parallel machine for their code at this time.
They have found X Windows based debugging tools to be essential for parallel
code development.

Byers comments:

Sydora from UCLA site report:
-- very impressive on ease of parallelizing large gk pic code
on the c-90
-- can typically get 12+ processors
-- thinks he should get 15-16?
-- 10 usec/p-dt
Requirements for useful run:
133hrs single proc -> 133/12 = 11 hrs for 12 proc

This says even the charge scatter-gather is parallelized.
Sydora said even this was ***easy*****

Sydora's and others' experience emphasizes what a good machine the C-90 is,
especially in this dedicated mode, and many urged that the amount of time
available for this be expanded.


Mike Minkoff's Site Report (Argonne National Laboratory)
ANL's HPCC Program
-A consortium of Labs, industry and universities
-Mission: to enable scientific and industrial use of massively parallel
(MPP) computing systems
-Grand Challenge applications focus areas
-software technologies and methods
-parallel computing facilities

ANL/IBM High-Performance Research Consortium
-focus on 5 application areas
-computational chemistry
-computational biology
-materials science
-global change
-stellar astrophysics
-development of software tools and systems software
-CRADA with IBM on MPP computer systems
-design, implementation and deployment of educational programs

Software Technologies and Methods
-emphasis on portability
-parallel programming tools
-parallel I/O systems
-parallel algorithms, numerical methods
-parallel libraries and templates
-scientific visualization systems
-performance modeling tools and display systems
-large-scale object-oriented scientific database software

MPI: Message Passing Interface Standard
-portable standard for message passing systems
-support existing functionality
-easy to implement, but provide a growth path
-vendor supported
-designed rather than evolved
-ANL is producing the reference implementation
-available via FTP

Current Application Focus Areas
-computational chemistry
-computational biology
-global change, regional weather modeling
-materials science (superconductivity modeling)
-stellar astrophysics
-vehicular industry modeling (structural mechanics, cfd ...)

GC Example - Computational Chemistry
-highly accurate electronic structure computations
-computational studies of gas, liquid and solid chemistry
-focus on environmental chemistry problems
-models of toxic waste combustion
-bioremediation in soils and water
-CFC substitute modeling
-collaboration: ANL, PNL, Exxon, DuPont,
Allied Signal, Amoco, Philips Petroleum

ADIFOR: Automatic Differentiation in Fortran
-tools for computing derivatives of fortran codes
-used in gradient evaluation
-can be used for automatically producing "adjoint codes"
-applications: aerodynamics, structures, ground water,
climate modeling, etc
-ANL, Rice, NASA LaRC

Nearly everything he said is covered in his VUgraphs with the additional
comment that he views favorably the association of the Center for
Computational Sciences and Engineering with NERSC as a step toward having
applied math and numerical methods people working directly with users on
their scientific computing problems (as he does at ANL). His division
(Mathematics and Computer Science) is collaborating with IBM to develop
parallel numerical subroutines.


Jean Shuler's Report - Software Issues

Strategies for Engineering and Scientific Applications and UNIX Software
-UNIXWARE Program - Jean Shuler
-Coordination, installation and support of the LATEST commercial and
free software on NERSC UNIX machines.
-Response to some user concerns
-Policies for adding and removing software
-Applications Software Usage - Jean Shuler
-Report on monitoring and usage of application codes by our users
-Applications Software - Kirby Fong
-Proposed procedure for evaluation and selection of software

-Software that runs on UNIX platforms
-Vendor supplied
-Public domain
-NERSC supplied
-We are most interested in non-commercial, public domain, user-supported

Why was the UNIXWARE Program established?
-Offer LATEST software to NERSC customers
-Establish method for requesting software
-Establish policy for users to follow based on resources (disk space,
personnel, maintenance, funding)
-Eliminate unnecessary efforts
-Assign software responsibility
-Systematic approach to requesting, acquiring, installing, maintaining,
documenting and monitoring software

What has been accomplished?
-UNIXware program charter written and presented at previous ERSUG meeting
-MAN Page documentation requirements established
-Introductory MAN page for UNIXware
-MAN page for each UNIXware product
-Directory structures reorganized
-Personnel assigned to investigate requested software
-Libg++, TeX, BASH, ZIP
-Many software products installed:
-Free Software Foundation (GNU tools), Tk/Tcl, Mosaic
-NERSC.RFC.SOFTWARE bulletin board established

What remains to be done?
-Utilize NERSC.RFC.SOFTWARE bulletin board
-Add Mosaic form for users to request software
-Modify/enhance the MAN Page for UNIXware
-Setup database for maintenance of software, owner, makefiles,
source, history, revision information
-Streamline and automate process

Applications Software Acquisition
(Greg Tomaschke and Jean Shuler)

Monitoring and Usage of Applications Software
-Software monitoring plan developed
-Policies and procedures established
-Statistics and usage charts generated
-Standard application interface installed
-Application usage tracking activated
-Database of software usage established
-Future plans charted

Software Monitoring Plan Developed
-To determine support level for applications
-Full, limited, no support
-Manpower, documentation, examples
-To base decisions on facts
-To eliminate multiple versions of codes
-To manage cost of software

Policies and Procedures Established
-Responsibilities of applications group assigned
-Directory structure reorganized
-Standard interface to codes provided
-Software updates scheduled monthly
-MAN page documentation required
-Quality assurance testing required

Standard Application Interface Installed
-Provides standard interface to nearly all applications
-Implements additional features/options
-Help package
-Examples package
-Gives access to old, new and production versions
-Fortran program replaces Korn shell script
-Easily maintained
-Greater flexibility and control
-Reduces entries in ACCTCOM file

Applications Usage Tracking Activated
-Application logging in place on A,C,F machines
-User number, application, machine, CPU usage (expand to SAS in future)
-Library logging in place on A,C,F machines
-User number, library, machine load time logging of libraries
(expand to SAS in future)
-Performance logging enabled on A machine
-Uses hardware performance monitor (available on A machine only)
-Targets applications requiring optimization

Database of Software Usage Established
-Currently: separate files maintained
-Applications logging: 4 files (A,C,F,SAS)
-Library logging: 4 files (A,C,F,SAS)
-Performance logging: 1 file (A)
-Flat, ASCII, difficult to manipulate
-Solution: establish database
-Applications and performance usage database

Future Plans Charted
-Identify application codes and libraries for enhanced support
-Work with vendors and NERSC users to optimize, enhance or port
libraries and applications
-Convert to run on different platforms



The point of the UNIXWARE effort is to address user requests for having the
same UNIX tools on the Crays that users have on their own UNIX workstations.
The NERSC UNIXWARE program intends to deal primarily with free or user
supplied software rather than the expensive or licensed software. The
moderated bulletin board for requesting UNIXWARE was just
installed so users have not started posting requests yet. After Jean showed
the charts on applications usage that revealed how much Gaussian is used,
Maureen McCarthy asked whether NERSC is getting the DFT version of Gaussian.
We didn't know at the time of the ERSUG meeting, but after the meeting we
verified that Alan Riddle already has the source for the DFT version and has
been installing and testing it. NERSC is not aware of any parallel version of
Gaussian coming along. It would be pretty much up to Gaussian, Inc., which has
close relations with the Pittsburgh Supercomputing Center, to come up with such
a version on the available PSC machines. Jean-Noel LeBoeuf wanted to know if
the source for PDE2D plotting routines was available for modification. We
found out after the ERSUG meeting that PDE2D uses the NCAR Plotting library
for its drawings.


Jean Shuler's added comments for the minutes

Jean Shuler re GNU-licensed software

The LLNL legal interpretation of the GNU-license for NERSC is that
should NERSC distribute any portion of a GNU-licensed program
or software derived from a program received under this license,
NERSC must do so at no charge; however, NERSC should not set itself
up as a point of distribution of this software.

This is how NERSC is handling the GNU issue. We get software off the net,
and port it to Unicos. If we make changes, regardless of where it is from,
it is given back to the authors and we request they incorporate our changes.
Thus, these sources are distributed. If any of our users request this
software, we will give it to them.

Jean Shuler on Unixware and applications codes:

We are developing tools to organize the large number
of products that NERSC has installed on the CRAYS and SAS.
ls /usr/local/pub
gives a list of all applications codes NERSC has installed.
(e.g., MAFIA, NIKE, DYNA, etc.)

ls /usr/local/bin
gives a list of the UNIX tools that NERSC has installed
on the CRAYS and SAS :

This is software installed by NERSC staff. Some of it was
written at NERSC (cfscost, cgmfix, ctou, cvtv80, f2cgm, nqstat,
vplot, etc), some was ported from other platforms and computer
centers (SDSC, NCSA).

Status of libg++, math libraries for GNU C++:
Steve Buonincontri(NERSC) and Jim Crotinger(MFE,LLNL) tried porting
libg++ without success. NERSC would be happy to install it if a user
could successfully compile and test it for us. Our C++ person says he
can get equivalent and better functionality from third party sources,
already running under Unicos and better maintained. NERSC has the option
of installing (on a 90 day trial) C++ math libraries from Rogue Wave.
The cost will be $16,000 for the two libraries so the information will be
posted on the RFC Board to see how many NERSC customers are interested.
This is where the real power of the board is realized. If many users
request it, then it will be up to NERSC to decide if the resources are
available.

There is an old version of the C++ compiler on the CRAY 2's;
the purchase order for the updated version
has been stalled for some time but is anticipated to
be processed soon.

Additional requests are:
* We have been asked to put on as many GNU tools as appropriate (>100).
* Netware tools currently on SAS (10 to 100)
* TeX as appropriate (?). We are investigating the need for TeX on the Cray.
* Postscript tools for converting text:
LPTOPS and NENSCRIPT are examples of postscript tools to be installed


Byers comments:

Jean Shuler on unixware

Jean's presentation showed news group RFC request for comments
mechanism that they have set up for general unix requests.
--this is evidently a standard unix technique, so this aspect
I would think the UT people or others would not complain about.
--But, it puts another layer in between the requests and anybody actually
taking action, even on what should be fairly obvious high-priority items.
And in order to get the best feedback, the most knowledgeable people
in our user community have to be reading this news group-- many probably
don't have the time.
--It sounds to me NERSC needs some additional mechanism to get feedback
to check the RFC "votes".

Jean also said that the users are now satisfied with the state of
the C++ compiler--
both Kirby Fong and I replied that we knew of users that were not satisfied.
Kirby and I were referring to Crotinger. I think I could
include Haney and possibly others like Bill Meyer (ask him) in this.

I suggest that NERSC provide for the minutes at least a list of the Unixware
that has been adopted to date. And also a list of major software that is not
yet working, along with its status.


Kirby Fong's Report- Third Party Applications Software

-Draft of proposed policy on 3rd party applications software advertised
after last ERSUG meeting
-ERDPs reopened in September for late software requests
-Context of responses not always clear, some interpretation used in
tabulating the results

Next Steps
-Contact PIs who have requested software we already have
-Sort requests by category and have groups of users and NERSC personnel
discuss priorities by e-mail
-Chemistry: UNICHEM, DGAUSS, DMOL, etc
-Math: MINOS, Simulink, Axion, etc
-Structures: ABAQUS, ADINA, etc
-EXERSUG and NERSC merge prioritized requirement lists
-Refine policy and process for next year's cycle

Appendix - List of current third party application codes supported
-------- at NERSC (appendix at end of minutes).


The first VUgraph pretty much speaks for itself. The interpretive problems
in reading the ERDP responses are that PIs are not always clear about asking
for software at NERSC. Sometimes it is clear that they are referring to
Florida State University or their own workstations, and sometimes it is
ambiguous. On the second VUgraph, the second and third bullets describe a
slightly different proposal than the one made at the Oak Ridge ERSUG meeting.
NERSC is proposing to organize e-mail evaluation teams for the requested
software without EXERSUG involvement. EXERSUG is brought in after the review
groups have looked over and prioritized the requests. There was no objection
to the change. Jack Byers said it might be difficult for EXERSUG to merge
prioritized lists; however, the consensus was that users, not NERSC, should
play the major role in deciding what software is most important to acquire
and that perhaps the Supercomputer Access Committee should at least be informed
of the choices being made if they do not wish to participate in the process


Chris Anderson's Presentation - Developments in Graphics Applications

NCAR Graphics 3.2
-Available on all Crays and SAS
-Direct X window support
-Many features overhauled
-New and improved documentation
-NCAR Graphics Fundamentals 3.2
-Contouring and Mapping Tutorial

Q: How to get rid of GRAFLIB?
A: Just do it.

FY 1994:
-Convert NERSC applications
-Develop conversion tools and strategy

FY 1995:
-Assist users in converting codes

GRAFLIB will be available only as long as it compiles and runs
(no maintenance)

X Windows Support
-X Windows will be the preferred mode for graphics:
-Widely accepted means for transport of graphics
-Graphical User Interface (GUIs)
-Majority of new products use X Windows (some exclusively)
-Recommended System:
-Monochrome (B&W) Useful for majority of tools
-Color - preferred but more costly - AVS requires color capabilities
-Now available on PC's and Mac's
-Use of X Remote & modem for those without a network connection
-How can NERSC help you to obtain X Window capabilities?

Future Plans
-A Paradigm shift in computing and visualization
Stand Alone to Distributed
-Data and Dimensions Interface (DDI)
-Real Time Visualization
-MPP Visualization efforts

DDI is a tool designed to save time in visualizing large data files:
-Reads & writes DRS, GrADS, HDF & NetCDF files - may be used to transfer
variables between files and formats
-Provides attribute editing & dimension operations - Allows you to
extract only the data you need
-Has interfaces into AVS, Collage, Explorer, IDL, and PVWave -
introduces data into package with little effort
-Runs in distributed mode - DDI may run on the Cray and send data to a
visualization package on a workstation
-Available for Cray, Hewlett-Packard, Silicon Graphics, and Sun computers
(requires X windows)

Real Time Visualization
The follow-on to DDI will be a package that allows data from an application
to be sent directly to a visualization package.
-NetCDF style routines for writing attributes and data; uses network
interfaces to the visualization package on other platforms.
-Data is buffered external to the application and sent directly to a
visualization package - allows your application to run faster
-To be developed in FY 1994-95

MPP Visualization Efforts
MPP computers will provide the power to create GIGANTIC data sets -
where will they go?
-Into (NetCDF style) files - used with DDI - Where will they be stored?
-Real Time Visualization - the data lands on a disk. Good for
applications with low volume or data output rate.
-Sustained high volume output will be visualized on the MPP computer.
We are entering into collaboration with CCSE to acquire/develop
visualization capabilities for Gigabyte data sets on MPP computers.


While Chris was talking about shifting away from Tektronix toward X Windows
based graphics, Bill McCurdy asked if ERSUG had any problem with NERSC going
to an entirely X Windows based documentation system. Victor Decyk said he
works at home and needs the current ASCII based documentation. Rick Kendall
then mentioned that NCD makes reasonably priced X terminals using their
proprietary X Remote protocol to compress data. With 14.4 kbps modems in
addition to compression, the X terminals work fairly well over telephone lines.
Mike McCoy said that if the dial-up X terminal is used only for black & white
ASCII text, 9600 baud is adequate. Jerry Potter asked how many modems NERSC
would have for dial-up X terminals. NERSC has 48 USRobotics 14.4 modems.

Byers comments:

Chris Anderson-
--replacing tymnet, should give 14.4k, expect to be really popular
--x-windows default for future for transferring graphics
--want to get rid of Tektronix emulation progs on pcs and macs
--there are progs that allow x-window to run on pcs and macs
--kendall had lots to say on this
--even x-based terminals are very useful
--mike mccoy using what from home? has x, can run emacs, can get 2D plots

Anderson and Kendall had a lot to say about what x-window stuff was available
for users at home. This information could be very useful to a wide
spectrum of our users. Even our national labs users could use help here--
many of them have no employer-supplied hardware for use at home. Again,
detailed information is necessary here.


Other Items (Byers)

Jack Byers said the next ERSUG meeting will be near DOE headquarters this
summer so that SAC members could attend. Bill McCurdy wants to broadcast
the next ERSUG meeting over MBONE so that those not able to attend the
meeting can still participate in it. Jack also said EXERSUG would like to
return to the former method of selecting EXERSUG members, namely that nominee
names be submitted to appropriate SAC officers for approval.


The meeting was adjourned


Concluding Comment by Byers:

In my opinion the just-concluded ERSUG meeting at UCLA was very definitely
a success. The level of information from the users and the NERSC response
was far better than we achieved at the last meeting at Oak Ridge. The credit
is mainly shared by NERSC staff and by the users who volunteered site reports.
The PNL-NERSC interaction and back-and-forth feedback prior to the meeting
helped considerably in exposing issues and most importantly in having
concrete suggestions for changes that were presented at the meeting.



List of application codes available at NERSC

This is a list of application codes available at NERSC for the general
scientific and/or engineering field. Most codes are sufficiently general
purpose to be of interest to the general user in chemical, nuclear,
physics, electrical engineering, and mechanical engineering fields. Each
one of these codes is described in more detail via their individual man
page. This list was obtained from "man applications".

Status of application codes. KEYS:
NAME : Name of the executable code.
A : Implemented on A-machine (Cray-C90)
C : Implemented on C-machine (Cray-2)
F : Implemented on F-machine (Cray-2)
S : Implemented on SAS (Supercomputing Auxiliary Service)
- : Not implemented on the machines whose names are missing.
e.g., A-F- Code implemented only on A and F machines.
ACFS Code implemented on all machines, A, C, F, and SAS.

BRIEF DESCRIPTION: Short description of the code.
N-XXX : The level of support offered by NERSC is indicated at the
end of the code name line by N-XXX where N indicates:
? : Not classified, see the man page for code for more information
1 : Full support, one or more local experts available.
2 : some limited support is offered.
3 : no support, use at your own risk.
and XXX is the designated support person:
ACP : Arthur C. Paul
AR : Alan Riddle
BC : Bruce Curtis
GT : Greg Tomaschke
JM : Jose Milovich
MAN : See the man page for code for more information on support
SM : Susarla Murty
VD : Vera Djordjevic

The following codes are available from /usr/local/pub on the A, C, F, and SAS
machines as indicated by the AVAIL key. Further information about a given
code is available from the "man pages" via the standard UNIX
man code_name
command. See the end of this man page for a detailed description of the
use of the applications code driver script.


ACM ---- ACM-CALGO collected algorithms. 3-
AGX ---S C callable graphics library. 1-
AMBER -CF- Force-field molecular dynamics application code. 2-AR
Solves molecular dynamics of complex molecules.
AIM ---- AMPX module for BCD-Binary conversion.
AJAX ---- Combine selected nuclides in master interface format.
ALPO ---- Produce ANISN libraries from working libraries.
BONMI ---- Bondarenko resonance self-shielding calculation.
CHOX ---- Prepare coupled neutron/gamma ray interface.
DIAL ---- Edit master cross section interface.
MALOC ---- Collapse cross sections.
NITWL ---- Resonance self-shielding and working library production.
RADE ---- Check cross sections for errors.
ANISN ACF- A one dimensional multi-group discrete ordinates 3-GT
code for neutron transport.
ANSYS A--- A general purpose structural analysis code. 2-SM
Driver ansys for 4.4A, Driver ansys5.0 for 5.0
ARGUS ACF- Family of 3D codes for particle in cell simulation 2-SM
in transient or steady state, time domain and
frequency domain electromagnetics and electrostatics.
AVS ---S Application Visualization system. 1-
COG ---- Particle transport code designed to solve deep ( )
penetration problems.
CPC ---- Computer Physics Communications (CPC) modules 1-VD
are extracted from CFS via the getcpc script.
DANT - Series 2-SM
ONEDANT ---- 1D neutron/photon transport.
TWODANT ---- 2D neutron/photon transport.
THREEDANT ---- 3D neutron/photon transport.
DISCOVER A--- Force field molecular dynamics application code 2-AR
DYNA2D ACF- 2D hydrodynamic finite element code. See MDG 2-SM
DYNA3D ACF- 3D finite element code. See MDG 2-SM
EBQ - Series Driver EBQ 1-ACP
EBQ A--- Transport of space charge beams in axially symmetric
devices. Allows external electric and magnetic fields.
EBQPLOT A--- Plotting post processor.
EFFI - Series Driver EFIPCK 3-GT
EFFI ACF- Calculates magnetic flux lines, fields, forces,
inductance for arbitrary system of coils of rectangular
cross section conductor.
EIG ACF- Pre-processor, break magnet geometries into basic 1-VD
arcs and straight line segments for input to EFFI.
EFIBRK ACF- Post-processor for EIG and pre-processor for OHLGA. 1-VD
OHLGA ACF- Plots brick elements picture of magnet set from EIG 1-VD
EGUN - Series Driver EGUN 1-ACP
EGUN ACF- Charged particle trajectory program for electrostatic
and magnetostatic focusing systems including space
EGUN10K ACF- Larger dimensional problems.
EGUN50K ACF- Larger dimensional problems. Input magnetic vec-potential
EGUNPLOT ACF- Plotting post processor.
ESME - Series Driver ESME. 2-ACP
ESME ACF- Beam dynamics code for synchrotrons and storage
rings tracking longitudinal phase space.
ESMEPLOT ACF- Plotting post processor.
EXPLORER ---S Visualization system on order from IBM 1-
FORGE - ACFS Parallelization tool, fortran analysis utility 3-BC
FORORL - Series Driver FORORL 3-GT
FORIG ACF- (old name ORIGEN2) calculates radionuclide generation
and depletion in fusion and fission reactors.
ORLIB ACF- Code to produce a one-energy-group, time- and spatially-averaged
neutron cross-section from the TART output.
FRED3D - A--- LLNL 3D free-electron laser simulation program ?-MAN
written to analyze and design optical free-electron
laser amplifiers. Runs both xf3d and xplt3d codes.
GAMDAT ACF- Data base, gamma ray cross sections for TART 3-GT
GAMESS A--- General Atomic and Molecular Electronic Structure 2-AR
System code - a quantum chemistry code.
GAUSSIAN 90 -CF- A connected system of programs for performing 2-AR
semi-empirical and ab initio molecular orbital calculations.
GAUSSIAN 92 ACF- A connected system of programs for performing 2-AR
semi-empirical and ab initio molecular orbital calculations.
GEANT3 ---- Monte Carlo detector design program. 3-SM
GEMINI ACF- 2 and 3 dimensional linear static and seismic 3-GT
structural analysis code.
GFUN3D ---- Calculates magnetic fields for a system of -ACP
conductors and non-linear (variable permeability)
magnetic materials in 3 dimensions.
HARWELL ACFS Harwell sparse matrix library as distributed by 1-
NAG. (We also have most of the Harwell Subroutine
library in source form without support).
HDF ACFS Hierarchical Data Format Library. 2-
IDTS - Series Driver IDTS. 3-GT
ALC ACF- Provides updating and editing of libraries
BNDRYS ACF- Selects boundary fluxes for subsequent use as internal
boundary sources.
DORT ACF- 2-D neutron and photon transport code. Ver 2.1.19
DRV ACF- Coordinates execution of problems
GIP ACF- Prepares cross section input for DORT from card or tape
GRTUNCL ACF- Prepares first collision source due to a
point source in RZ geometry (on or off axis).
RTFLUM ACF- Edits flux files and converts between various
flux file formats
TORT ACF- 3D discrete ordinates code to determine the flux of
neutrons generated as a result of particle interactions.
IMCONV ---S Convert between image file formats 3-
IMSL ACFS IMSL mathematics, statistics, special functions 1-
fortran libraries.
IMSLEGC ---S IMSL C callable Exponent Graphics Library in 1-
IMSLCMATH ---S IMSL C callable math library in 1-
IMSLEXP ACFS IMSL Fortran Callable Exponent Graphics Library in 1-
IMSL.IDF ---S IMSL Interactive Documentation Facility in Fortran 1-
Mathematics, statistics, special functions libraries.
IMSLIDL ---S IDL with IMSL enhanced math capabilities. Will 1-
possibly be upgraded to PV-WAVE/ADVANTAGE. in
ISLAND ---S IslandDraw, IslandWrite, IslandPaint. Located in 1-
ITS - Series Driver ITS. Integrated TIGER Series (ITS) of codes for 3-GT
coupled electron/photon Monte Carlo transport
calculations. The TIGER series is a group of
multimaterial and multi-dimensional codes designed to
provide a state-of-the-art description of the
production and transport of the electron/photon cascade.
TIGER ACF- 1D multilayer code.
ACCEPT ACF- 3D transport code using combinatorial geometry.
GEN ACF- Cross section generation program.
TIGERP ACF- Includes ionization/relaxation model from SANDYL.
ACCEPTP ACF- 3D with ionization/relaxation model.
GENP ACF- Cross section generation code.
ACCEPTM ACF- Ionization/relaxation with macroscopic electric and
magnetic fields.
ITS3.0 - Series Driver ITS3.0 Same as ITS series but version 3.0. 3-GT
TIGER ACF- 1D multilayer code.
CYLTRAN ACF- 3D particle trajectory axisymmetric cylindrical code
for electron or photon beams.
ACCEPT ACF- 3D transport code using combinatorial geometry.
GEN ACF- Cross section generation program.
TIGERP ACF- Includes ionization/relaxation model from SANDYL.
ACCEPTP ACF- Enhanced ionization/relaxation (p-codes).
CYLTRANP ACF- Includes ionization/relaxation model from SANDYL.
GENP ACF- Cross section generation program.
CYLTRANM ACF- Combines the collisional transport of CYLTRAN with
transport in macroscopic electric and magnetic fields
of arbitrary spatial dependence.
ACCEPTM ACF- Includes macroscopic fields.
JASON ---- Solves general electrostatic problems having -ACP
either slab or cylindrical symmetry.
KHOROS ---S Image and signal processing system. 2-
LAPACK ACFS New public domain linear algebra package replacing 3-
LINPACK and EISPACK. Parts of LAPACK are supported
LATTICE -CF- Program calculates the first order characteristics of
synchrotrons and beam transport systems using matrix
MACSYMA ---S Symbolic algebra system 1-
MAFIA - Series MAFIA is a family of codes. Solves Maxwell's 2-SM
equations for static resonant and transient fields.
The mesh generator, postprocessor, and the units that
handle the physics are separate modules within the
family of codes
XE31 ACF- XE31.150K and XE31.1M. Eigenvalue solvers.
XE32 ACF- XE32A.1M and XE32B.1M. Eigenvalue solvers.
XM3 ACF- and XM3.1M. Mesh generator.
XP3 ACF- XP3.150K and XP3.1M.
XR3 ACF- XR3.150K and XR3.1M. Frequency domain solver.
XT3 ACF- and XT3.1M. Time domain solver.
XW3COR ACF- Post processor for the above solvers.
XURMEL ACF- and XURMEL.1M. Finds symmetric and asymmetric resonant
modes in cavities and frequencies of longitudinally
homogeneous fields in waveguides for cylindrically
symmetric accelerating structures.
XURMELT ACF- and XURMELT.350K. Similar to URMEL, includes ferrite
and dielectrics. Calculates the TE0 modes.
XTBCI ACF- and XTBCI.1M. 2D time domain version of T3. Interaction
between bunched beams of charged particles and symmetric
structures. Beams may be off axis.
MAFCO - Series Driver MAFCO 1-ACP
MAFCO ACF- Magnetic field code for handling general current
elements in three dimensions.
MAFCOPLOT ACF- Plotting post processor.
COILPLOT ACF- Plot coils from the MAFCO data.
MAFCOIL ---- Cosine wound coil generator for MAFCO.
MAPLE ---S Version V of the MAPLE symbolic algebra and numeric 1-
computation package.
MATHCAD ---S Mathematical scratchpad and documentation tool. 1-
MATHEMATICA ---S Symbolic algebra and numeric computation package. 1-
MATLAB ---S Interactive numerical computation and graphics 1-
MATXS ---- Generalized material cross section library. 2-SM
MCNP ACF- Neutron/photon transport code by Monte-Carlo method 3-GT
ENDF5P1 ACF- Neutron photon cross section data base libraries.
ENDL851 ACF- "
MDG - Series Driver MDG. 2-SM
DYNA2D ACF- 2D hydrodynamic finite element code with interactive
rezoning and graphical display.
DYNA3D ACF- 3D finite element code for analyzing the large
deformation dynamic response of inelastic solids and
NIKE2D ACF- Vectorized implicit, finite deformation finite element
code for analyzing the static and dynamic response of
2D solids with interactive rezoning and graphics.
NIKE3D ACF- Nonlinear, implicit 3D finite element code for solid
and structural mechanics.
TOPAZ2D ACF- 2D finite element code for heat transfer analysis,
electrostatics, and magnetostatics.
TOPAZ3D ACF- 3D finite element heat transfer code.
INGRID ACF- 3D mesh generator for modeling nonlinear systems
MAZE ACF- Input generator for DYNA2D and NIKE2D
ORION ACF- Interactive post-processor for 2D finite element codes.
TAURUS ACF- Interactive post-processor for the analysis codes
FACET ---- A radiation view factor computer code for axisymmetric,
2D planar, and 3D geometries with shadowing.
MESA A--- Chemistry code 3-
MODSAP -CF- Modified version of structural analysis program 2-SM
SAP IV for static and dynamic response of linear and
localized nonlinear structures.
MOPAC ---- Semi-empirical quantum chemistry application code 2-AR
to study chemical structures and reactions. Used in
the electronic part of the calculation to obtain
molecular orbitals, heat of formation, and its
derivative with respect to molecular geometry.
MORSE ---- Neutron/photon transport code by Monte-Carlo method. ( )
NAG ACFS Numerical Algorithms Group Math Library 1-
NASTRAN A--- A-Machine only. A large scale general purpose code 1-AR
to solve a wide variety of engineering problems by the
finite element method for general purpose structural analysis.
NCAR ACFS NCAR plotting library and utilities. 1-
NETCDF ACFS Network Common Data Format library. 1-
NIKE2D ACF- finite deformation finite element code. See MDG 2-SM
NIKE3D ACF- Nonlinear, 3D finite element code. See MDG 2-SM
NJOY ---- Complete nuclear cross section data processing 2-SM
OI (oi) ---S GUI class library. 1-
ORACLE ---S Relational data base manager 1-
ORION ACF- Interactive post-processor codes. See MDG. 2-SM
PARMELA - ---- Drift tube linac Electron beam dynamics code. -ACP
PARMILA - ACF- Drift tube linac ION beam dynamics code. 1-ACP
PATRAN3 - ---S Finite element analysis pre- and post-processor. 1-VD
PDE2D ACFS General purpose 2-dimensional partial differential 1-
equation solver.
POIFISH - Series Driver POIFISH. LAACG-Los Alamos Accelerator Code Group. -ACP
AUTOMESH ACF- Auto mesh generator for LATTICE code.
LATTICE ACF- Generates irregular triangular mesh and problem
POISSON ACF- Solves Poisson's or Laplace's equation by successive
over-relaxation with nonlinear isotropic iron
(dielectric) and electric current (charge) for problems
with 2D cartesian or 3D cylindrical symmetry.
PANDIRA ACF- For problems with permanent magnets, similar to
POISSON above.
PSFPLOT ACF- Plots physical mesh generated by lattice.
FORCE ACF- Calculates forces and torques on coils and iron regions
from POISSON and PANDIRA solutions of the potential.
MIRT ACF- Optimizes magnet profiles, coil shapes, and current
densities based on a field specification defined by user
SUPERFISH ACF- Solves for the TM and TE resonant frequencies and field
distributions in an RF cavity with 2D cartesian or 3D
cylindrical symmetry.
SF01 ACF- Calc auxiliary quantities in cavities and drift tubes.
SHY A--- Calculates and prints magnetic and electric fields in a
specified region for cylindrical problems.
PANT ACF- Uses SUPERFISH output to calculate temperatures on a
cavity surface and internally.
POISSON - Series (LBL) Lawrence Berkeley Laboratory. Driver POISSON 1-ACP
AUTOMESH ACF- See above.
LATTICE ACF- See above.
POISSON ACF- See above.
TEKPLOT ACF- See above.
SAM-CE ---- Neutron/photon transport by Monte Carlo ( )
method using combinatorial geometry.
SANDYL ---- Calculates combined photon and electron transport ( )
in complex systems.
SAP4 ---- Structural analysis code. ( )
SASSI - Series Driver SASSI. 3-GT
COMBIN ACF- A system for analysis of soil-structure interactions.
ANALYS ACF- Modules consist of house, analys, point2, point3,
HOUSE ACF- site, combin, stress, motion, and motor.
SLATEC ACFS SLATEC Common Math Library. 1-
SOTRM ACF- Code to generate first and second order matrix 1-ACP
elements by tracking charged particles in a
specified magnetic field.
SPICE - Series Driver SPICE.
SPICE ACF- General purpose circuit simulation code Ver. 2G.5 3-ACP
for nonlinear DC, transient, and linear AC analyses.
Circuits may contain resistors, capacitors, inductors,
current and voltage sources, semiconductor devices, etc.
SPICE3 A--- Similar to SPICE, Version 3C1, April 1989.
NUTMEG A--- Spice post-processor
SCONVERT A--- Convert spice formats.
STANSOL -CF- Axisymmetric solenoid structural code. Solves for 1-VD
stress for Lorentz, thermal and pressure loadings.
TART ACF- Monte-Carlo neutron/photon transport code. Files 3-GT
cross, gamdat, tartnd, and tartppd. See FORORL.
TAURUS -CF- Interactive post-processor for NIKE, DYNA, TOPAZ; see MDG
TOPAZ2D ACF- 2D finite element code for heat transfer analysis, 2-SM
electrostatics, and magnetostatics.
TOPAZ3D ACF- 3D finite element heat transfer code. 2-SM
TOSCA - Series Driver TOSCA, version 4.0. 2-ACP
TOSCA ACF- Version 4.0. Finite element program for solution
of magnetostatic and electrostatic fields in 3D.
XMESH ACF- Cray version of Pre-processor for TOSCA 4.0.
PE-2D ---- 2D axisymmetric static and dynamic electromagnetic
analysis package.
SCARPIA ---- Pre-processor for TOSCA. VAX Ver. Superseded by xmesh
OPERA2 ---S Pre and Post processor for TOSCA.
TRAJ ACF- Calculates orbits in a given two dimensional 1-ACP
magnetic field in polar or rectangular coordinates
simultaneously integrating the differential equations
for the first order ion optic transfer matrix
TRANS -CF- Program for designing charged particle beam
transport system including third order optics.
TRAN2 ACF- Program for designing charged particle beam
transport system including space charge.
TRAN50K ACF- TRAN2 with large data array 50,001 numbers, for
12,501 magnetic elements.
TRAN22 ACF- Program for designing charged particle beam
transport system including second order optics.
TRANPLOT ACF- Plot post processor.
GLOBAL ACF- Plot post processor.
ELLIPSE ACF- Post processor, generates phase space graphics
TURTLE ACF- Ray trace code for generating histograms and 1-ACP
scatter plots for the "TRANS" series data
TRANSX ---- Neutron/photon cross section table production, 2-SM
read MATXS libraries.
TRIDENT-CTR ---- 2-D X-Y and R-Z geometry multi-group neutral 2-SM
particle transport code for toroidal calculations.
UEDGE A--- Unified Tokamak Edge Physics Modeling Code ?-MAN
Solves fluid equations describing transport and sources
and sinks of charged and neutral species in the edge
region of an axisymmetric tokamak plasma.
URMEL -CF- see MAFIA group
URMELT -CF- see MAFIA group
WAVE ---S Precision Visuals, Inc.'s PV-WAVE visualization 1-
and data analysis system. Command Language and Point-
and-Click version both installed.


All application codes are called into execution via a driver script.
This script provides for 1) file name substitutions, 2) help on usage,
3) accounting, 4) code history, 5) version substitution, 6) examples,
and 7) directory cleanup. These will now be explained in more detail.
The generic driver invocation is via the name of the application code
to be run. This name is the column one name given in this man page.
This driver may in fact be used to run the one or more codes listed.
The code to be run is specified by the -m (for module) parameter.

name [-vers][-history][-examples][-m module][-i input][-o output] ....

or the older (obsolete and discouraged) form

name [-vers][-history][-examples][m=module][,][i=input][,][o=output] ....

where module is the name of the application to be executed and name is the
name of the driver script. This execution line may extend over several
lines by use of the UNIX continuation symbol, the backslash, as

name -new -m tran2 \
-i inname -o outname -t8 t8name -t99 t99name .....
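
The flag handling a driver might perform can be sketched as follows. This is a hypothetical illustration (the function name and its internals are invented here); the actual NERSC driver source is not reproduced in this document.

```shell
# parse_driver_args: minimal sketch of the -m/-i/-o flag parsing a
# driver script might do.  Hypothetical; not the real driver code.
parse_driver_args() {
    module="" input="" output=""
    while [ $# -gt 0 ]; do
        case "$1" in
            -m) module=$2; shift 2 ;;   # module (code) to run
            -i) input=$2;  shift 2 ;;   # input file name
            -o) output=$2; shift 2 ;;   # output file name
            *)  shift ;;                # flags this sketch does not model
        esac
    done
}
```

A real driver would additionally handle -vers, -history, -examples, and the per-code tape flags such as -t8 and -t99.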

The script allows for file name substitution. The default input file name
for the transport code, tran2, is intran. If you have a data input file
in your local directory by the name of data_case1, you may run it by

transport -m tran2 -i data_case1

generating the default output file names outran, tape8 and tape99.
The output file names may also be changed via the script, for example

transport -m tran2 -i inname -o outname -t99 tape99_name

will generate the output file by the name outname, and the tape99
post-processing plotting file by the name tape99_name.
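
The substitution rule just described — use the supplied name if one is given, otherwise fall back to the code's default — might be sketched as below. The helper name is invented for illustration; the defaults shown (intran, outran) are those quoted above for tran2.

```shell
# default_name: return the user-supplied file name if non-empty,
# otherwise the code's default.  Hypothetical helper.
default_name() {    # $1 = user-supplied name (may be empty), $2 = default
    if [ -n "$1" ]; then
        echo "$1"
    else
        echo "$2"
    fi
}
```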

If the script cannot find the given input file, or if the code has been
passed incomplete options and/or parameters, the script will write limited
help obtained from the appropriate man page.

The vers parameter instructs the script to run the module from the
indicated /usr/local/ directory. The default directory is pub, executing
the codes from the directory /usr/local/pub. vers may take the values new,
pub, or old. For example, to run the module tran2 from /usr/local/new a
user would execute:

transport -new -m tran2
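
The mapping from the vers flag to a /usr/local/ subdirectory could be sketched as follows (hypothetical helper; per the text, unrecognized or missing values fall back to pub, the default):

```shell
# vers_dir: map a vers value (new, pub, old) to the directory the
# module is executed from.  Sketch only, not the driver's actual code.
vers_dir() {
    case "$1" in
        new|pub|old) echo "/usr/local/$1" ;;
        *)           echo "/usr/local/pub" ;;   # default directory is pub
    esac
}
```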

The history parameter instructs the script to display the appropriate
history file if it exists. For the transport code the invocation would be

transport -history -m tran2

generating output to your screen giving the modification history of the
transport code package as implemented here at NERSC.

Many of the application codes have example data available. For some of the
codes the data is stored on CFS as explained in the appropriate man page.
For other codes the example data is part of the applications directory
and is immediately available on line. These examples are extracted via
the driver script parameter "examples". For the transport code this would be

transport -examples -m tran2

You then would be given a list of the available examples and the option to
interactively extract desired examples into a local file or to list the
example to your terminal screen.

On termination, the script will examine your local directory and remove
any empty files that have been generated by this application.
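
The cleanup step might amount to something like the following (a guess at the mechanism, not the driver's actual code):

```shell
# cleanup_empty: remove zero-length files left in the current
# directory after a run.  Sketch of the cleanup described above.
cleanup_empty() {
    find . -maxdepth 1 -type f -empty -exec rm -f {} +
}
```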

In order to help the NERSC staff concentrate on the most frequently and
heavily used applications, the script maintains an accounting file of
code usage.
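
One plausible shape for such accounting is a one-record-per-invocation append; both the file location and the record format below are invented for illustration.

```shell
# log_usage: append one accounting record (date, user, module) per
# driver invocation.  Hypothetical; the real accounting file's path
# and format are not documented here.
log_usage() {    # $1 = module name
    printf '%s %s %s\n' "$(date +%Y%m%d)" "$(id -un)" "$1" \
        >> "$HOME/.driver_usage"
}
```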

CONTACT 1-800-66-NERSC (1-800-666-3772)
National Energy Research Supercomputer Center
Lawrence Livermore National Laboratory
7000 East Ave., P.O. Box 808, Livermore, CA 94550