Re: [OMPI users] build failure with NAG Fortran

2016-01-26 Thread Gilles Gouaillardet
Dave, This is a known issue, and it is being discussed at https://github.com/open-mpi/ompi/issues/1284; it was initially reported at http://www.open-mpi.org/community/lists/devel/2016/01/18470.php. For the time being, you can refer to the blog post for a workaround. Cheers, Gilles On Wednesday,

[OMPI users] Fortran features and interfaces (was: Strange behaviour OpenMPI in Fortran)

2016-01-26 Thread Dave Love
"Jeff Squyres (jsquyres)" writes: > The following from the v1.10 README file may shed some light on your question: > > https://github.com/open-mpi/ompi-release/blob/v1.10/README#L370-L405 Thanks; I should have remembered this. However, it's not generally true, as that says, that a non-GNU F

Re: [OMPI users] build failure with NAG Fortran

2016-01-26 Thread Nick Papior
Try adding this flag to the nagfor compiler: -width=90. It seems this may be related to a line-length limit? 2016-01-26 16:26 GMT+01:00 Dave Love : > Building 1.10.2 with the NAG Fortran compiler version 6.0 fails with > > libtool: compile: nagfor -I../../../../ompi/include > -I../../../../ompi/in

Re: [OMPI users] cleaning up old ROMIO (MPI-IO) drivers

2016-01-26 Thread Dave Love
Rob Latham writes: > We didn't need to deploy PLFS at Argonne: GPFS handled writing N-to-1 > files just fine (once you line up the block sizes), so I'm beholden to > PLFS communities for ROMIO support. I guess GPFS has improved in that respect, as I think it benefited originally. Is it known w
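For readers wondering how "lining up the block sizes" is done from MPI code: the usual mechanism is MPI-IO hints passed at file-open time. A minimal sketch follows; the hint names are reserved ROMIO/MPI-IO keys, but the 4 MiB values are purely illustrative assumptions and would need to match the actual GPFS block size.

    #include <mpi.h>

    /* Sketch: open a shared file for N-to-1 writes with hints that align
     * ROMIO's collective buffering to a (hypothetical) 4 MiB file system
     * block size.  Values are illustrative, not recommendations. */
    int open_aligned(MPI_Comm comm, const char *path, MPI_File *fh)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_unit", "4194304");   /* match FS block size */
        MPI_Info_set(info, "cb_buffer_size", "4194304");  /* collective buffer size */
        int rc = MPI_File_open(comm, path,
                               MPI_MODE_CREATE | MPI_MODE_WRONLY, info, fh);
        MPI_Info_free(&info);
        return rc;
    }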

[OMPI users] build failure with NAG Fortran

2016-01-26 Thread Dave Love
Building 1.10.2 with the NAG Fortran compiler version 6.0 fails with libtool: compile: nagfor -I../../../../ompi/include -I../../../../ompi/include -I. -I. -I. -I../../../../ompi/mpi/fortran/use-mpi-tkr -c mpi_comm_spawn_multiple_f90.f90 -PIC -o .libs/mpi_comm_spawn_multiple_f90.o NAG For

[OMPI users] many return codes not checked in the source

2016-01-26 Thread Dave Love
If you build with gcc -Wall, e.g. with the default RHEL rpm flags, you'll see a lot of warnings about ignored return values of functions, and a good fraction of the ones I've checked look as if they should be fixed. The most frequently-reported is asprintf, where the code checks for errors by look
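For reference, asprintf reports failure through its return value; on glibc the contents of the pointer argument are undefined after an error, so testing only the pointer is not a reliable check. A minimal sketch of the pattern being advocated, with hypothetical names:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>

    char *make_label(int rank)
    {
        char *label = NULL;
        /* Check the return value: asprintf returns -1 on failure, and the
         * contents of 'label' are then undefined, so testing the pointer
         * alone is not sufficient. */
        if (asprintf(&label, "rank-%d", rank) == -1) {
            return NULL;          /* allocation failed */
        }
        return label;             /* caller must free() */
    }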

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-26 Thread Dave Love
Jed Brown writes: > It would be folly for PETSc to ship with a hard dependency on MPI-3. > You wouldn't be able to package it with ompi-1.6, for example. But that > doesn't mean PETSc's configure can't test for MPI-3 functionality and > use it when available. Indeed, it does (though for differe
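One common way to use MPI-3 functionality only when available, in the spirit of the configure-time test described above, is to guard it with the MPI_VERSION macro. A minimal sketch, not PETSc's actual configure logic:

    #include <mpi.h>

    /* Sum 'n' doubles across all ranks, using the MPI-3 non-blocking
     * collective when the implementation provides it, and falling back
     * to the blocking call otherwise. */
    void sum_all(double *buf, int n, MPI_Comm comm)
    {
    #if MPI_VERSION >= 3
        MPI_Request req;
        MPI_Iallreduce(MPI_IN_PLACE, buf, n, MPI_DOUBLE, MPI_SUM, comm, &req);
        /* ... overlap other work here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    #else
        MPI_Allreduce(MPI_IN_PLACE, buf, n, MPI_DOUBLE, MPI_SUM, comm);
    #endif
    }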

Re: [OMPI users] Error building openmpi-v2.x-dev-1020-ge2a53b3 on Solaris

2016-01-26 Thread Gilles Gouaillardet
When you make a PR, I will be happy to build it on Solaris (I downloaded a VM from Oracle and installed the Oracle Studio compilers). Cheers, Gilles On Tuesday, January 26, 2016, Edgar Gabriel wrote: > you are probably right, the code in io_ompio was copied from fs_lustre > (and was there for a lo

Re: [OMPI users] Error building openmpi-v2.x-dev-1020-ge2a53b3 on Solaris

2016-01-26 Thread Edgar Gabriel
You are probably right: the code in io_ompio was copied from fs_lustre (and has been there for a long time), but if the Solaris system does not support Lustre, it would not have shown up. The generic ufs component actually does not have that sequence. I will prepare a patch, just not sure how to tes

Re: [OMPI users] Error building openmpi-v2.x-dev-1020-ge2a53b3 on Solaris

2016-01-26 Thread Gilles Gouaillardet
Paul Hargrove builds all rc versions on various platforms that do include Solaris. The faulty lines were committed about 10 days ago (use romio instead of ompio with Lustre) and are not fs specific. I can only guess several filesystems are not available on Solaris, so using a Linux statfs never caus

Re: [OMPI users] Error building openmpi-v2.x-dev-1020-ge2a53b3 on Solaris

2016-01-26 Thread Edgar Gabriel
I can look into that, but just as a note, that code has been in master for roughly 5 years now, in *all* fs components, so it's not necessarily new (it just shows how often we compile on Solaris). Based on what I see in opal/util/path.c, the function opal_path_nfs does something very similar, but
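For readers following the thread: the portability problem is that Linux statfs(2) identifies the file system through a numeric f_type field, while Solaris exposes statvfs(2) with an f_basetype name string, so code written against Linux statfs does not compile there. A minimal sketch of the difference, illustrative only and not the actual ompio or opal_path_nfs code; the Lustre magic number below is the commonly documented value and should be treated as an assumption:

    #include <string.h>

    /* Return non-zero if 'path' lives on a Lustre file system. */
    #if defined(__linux__)
    #include <sys/vfs.h>
    #define LL_SUPER_MAGIC 0x0BD00BD0            /* assumed Lustre magic number */
    static int is_lustre(const char *path)
    {
        struct statfs buf;
        if (statfs(path, &buf) != 0) return 0;
        return buf.f_type == LL_SUPER_MAGIC;     /* numeric type on Linux */
    }
    #elif defined(__sun)
    #include <sys/statvfs.h>
    static int is_lustre(const char *path)
    {
        struct statvfs buf;
        if (statvfs(path, &buf) != 0) return 0;
        return strcmp(buf.f_basetype, "lustre") == 0;  /* name string on Solaris */
    }
    #endif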

Re: [OMPI users] Error building openmpi-v2.x-dev-1020-ge2a53b3 on Solaris

2016-01-26 Thread Gilles Gouaillardet
Thanks Siegmar, the recent updates cannot work on Solaris. Edgar, you can have a look at opal/util/path.c; statfs "oddities" are handled there. Cheers, Gilles On Tuesday, January 26, 2016, Siegmar Gross <siegmar.gr...@informatik.hs-fulda.de> wrote: > Hi, > > yesterday I tried to build openmpi-v2.

[OMPI users] Error building openmpi-v2.x-dev-1020-ge2a53b3 on Solaris

2016-01-26 Thread Siegmar Gross
Hi, yesterday I tried to build openmpi-v2.x-dev-1020-ge2a53b3 on my machines (Solaris 10 SPARC, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with gcc-5.1.0 and Sun C 5.13. I was successful on my Linux machine, but I got the following errors on both Solaris platforms. tyr openmpi-v2.x-dev

Re: [OMPI users] openmpi-1.10.2 cores at mca_coll_libnbc.so

2016-01-26 Thread Gilles Gouaillardet
I am not aware of any other reason. Please send a program that evidences the issue and I will have a look at it. Cheers, Gilles On 1/26/2016 3:44 PM, Eva wrote: No. I didn't use MPI_Type_free Is there any other reason? 2016-01-26 13:35 GMT+08:00 Eva >: op

Re: [OMPI users] openmpi-1.10.2 cores at mca_coll_libnbc.so

2016-01-26 Thread Eva
No, I didn't use MPI_Type_free. Is there any other reason? 2016-01-26 13:35 GMT+08:00 Eva : > openmpi-1.10.2 cores at mca_coll_libnbc.so > > My program is transferred from 1.8.5 to 1.10.2. But when I run it, it > cores as below. > > Program terminated with signal 11, Segmentation fault. > #0 0x0

Re: [OMPI users] openmpi-1.10.2 cores at mca_coll_libnbc.so

2016-01-26 Thread Gilles Gouaillardet
Hi, Are you using derived datatypes that are freed (MPI_Type_free) *before* the non-blocking communication completes? This is a known issue we are currently working on (but it was already present in 1.8.5). Can you write and post a simple program that evidences this issue? Cheers, G
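A minimal sketch of the pattern being asked about, with hypothetical buffers and counts; this shows the shape of code that hits the known issue, not a program from this thread:

    #include <mpi.h>

    void gather_rows(double *sendbuf, double *recvbuf, int nrows, int ncols,
                     int root, MPI_Comm comm)
    {
        MPI_Datatype row;
        MPI_Request req;

        MPI_Type_contiguous(ncols, MPI_DOUBLE, &row);
        MPI_Type_commit(&row);

        MPI_Igather(sendbuf, nrows, row, recvbuf, nrows, row, root, comm, &req);

        /* Problematic pattern: the derived type is freed while the
         * non-blocking collective is still in flight.  This is the case
         * described above as a known libnbc issue; the workaround is to
         * move this call after MPI_Wait. */
        MPI_Type_free(&row);

        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }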

[OMPI users] openmpi-1.10.2 cores at mca_coll_libnbc.so

2016-01-26 Thread Eva
openmpi-1.10.2 cores at mca_coll_libnbc.so My program was moved from 1.8.5 to 1.10.2, but when I run it, it cores as below. Program terminated with signal 11, Segmentation fault. #0 0x7fa3550f51d2 in ompi_coll_libnbc_igather () from /home/work/wuzhihua/install/openmpi-1.10.2rc3-gcc4.8/