NERSC: Powering Scientific Discovery Since 1974

Open Issues

Fortran buffered I/O with Intel compilers is no longer enabled by default on Edison

March 16, 2017

We used to set the following environment variable on Edison to enable buffered I/O for Fortran codes built with Intel compilers.

Read the full post

The /scratch3 performance degradation after the GridRaid upgrade

April 7, 2016

The Edison /scratch3 file system was upgraded from MDRaid to GridRaid during the move to the CRT building (December 2015). The vendors recommended this upgrade because GridRaid greatly reduces drive rebuild time and improves write performance compared with the traditional MDRaid previously deployed on /scratch3 (the /scratch1 and /scratch2 file systems still use MDRaid). Unfortunately, we observed roughly a factor-of-two drop in I/O bandwidth after the upgrade. We opened a bug with the vendors (Cray/Xyratex); a fix has been worked out and will be released soon. We will apply it to the /scratch3 file system during one of the upcoming maintenances as soon as it is released.

Read the full post

MPI-3 Atomic Performance Degradation since cray-mpich/7.3.0

March 7, 2016 by Thorsten Kurth

Apparently, after integrating MPI-3.2 into cray-mpich, Cray also had to implement a workaround that significantly degrades the performance of short-message MPI-3 atomics.

Read the full post

Link error from craype/2.5.0

January 13, 2016 by Woo-Sun Yang

If you build a code with craype/2.5.0 using a script named 'configure', the Cray build tools assume that you want the 'native' link mode (e.g., gcc defaults to dynamic linking) and add '-Wl,-rpath=/opt/intel/composer_xe_2015/compiler/lib/intel64 -lintlc' to the link line. This causes a link error:

Read the full post

MPI errors from cray-mpich/7.3.0

January 6, 2016 by Ankit Bhagatwala

A change in the MPICH2 library now strictly enforces the MPI standard's requirement that send and receive buffers in MPI collectives not overlap, so MPI applications that pass overlapping buffers may fail at runtime. One affected routine, for example, is MPI_ALLGATHER. There are several possible fixes.

Read the full post