Jeff,
I've tried moving the backing file and it made no difference. I can say that
PGI 14.7 + Open MPI 1.8.1 does not show this issue: I can run that combination
on 96 cores just fine. Heck, I've run it on a few hundred.
As for the 96 cores, they are either on 8 Westmere nodes (each with two
6-core sockets) or 6 Sand
Have you tried moving your shared memory backing file directory, like the
warning message suggests?
I haven't seen a shared-memory backing file on a network share cause
correctness issues before (just performance issues), but I could see how
that could be in the realm of possibility...
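For reference, the usual way to relocate Open MPI's session directory (which is where the shared-memory backing file lives) is via the TMPDIR environment variable or the orte_tmpdir_base MCA parameter. A minimal sketch, assuming /tmp is node-local storage on your cluster and using a placeholder executable name:

```shell
# Point Open MPI's session directory (and thus the shared-memory
# backing file) at node-local storage instead of a network share.
# /tmp is an assumption -- use whatever node-local filesystem your
# cluster provides; ./geos5.x is a placeholder executable name.
export TMPDIR=/tmp
mpirun --mca orte_tmpdir_base /tmp -np 96 ./geos5.x
```

Either mechanism should work; the MCA parameter has the advantage of being propagated by mpirun to the remote daemons regardless of how your scheduler handles environment variables.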
Also, are you r
Open MPI Users,
I work on a large climate model called GEOS-5 and we've recently managed to
get it to compile with gfortran 4.9.1 (our usual compilers are Intel and
PGI for performance). In doing so, we asked our admins to install Open MPI
1.8.1 as the MPI stack instead of MVAPICH2 2.0 mainly beca