Re: [OMPI users] Randomly long (100ms vs 7000+ms) fulfillment of MPI_Ibcast

2014-11-05 Thread Steven Eliuk
OpenMPI: 1.8.1 with CUDA RDMA… Thanks sir and sorry for the late response, Kindest Regards, — Steven Eliuk, Ph.D. Comp Sci, Advanced Software Platforms Lab, SRA - SV, Samsung Electronics, 1732 North First Street, San Jose, CA 95112, Work: +1 408-652-1976, Work: +1 408-544-5781 Wednesdays, Cell: +

Re: [OMPI users] ipath_userinit errors

2014-11-05 Thread Friedley, Andrew
Hi Michael, From what I understand, this is an issue with the qib driver and PSM from RHEL 6.5 and 6.6, and will be fixed for 6.7. There is no functional change between qib->PSM API versions 11 and 12, so the message is harmless. I presume you're using the RHEL sourced package for a reason

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Jeff Squyres (jsquyres)
On Nov 5, 2014, at 12:23 PM, Dave Love wrote: > Is the issue documented publicly? I'm puzzled, because it certainly works in a simple case: There were several commits; this was the first one: https://github.com/open-mpi/ompi/commit/d7eaca83fac0d9783d40cac17e71c2b090437a8c A few more were re

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Dave Love
"Jeff Squyres (jsquyres)" writes: > Yes, this is a correct report. > > In short, the MPI_SIZEOF situation before the upcoming 1.8.4 was a bit > of a mess; it actually triggered a bunch of discussion up in the MPI > Forum Fortran working group (because the design of MPI_SIZEOF actually > has some

Re: [OMPI users] change in behaviour 1.6 -> 1.8 under sge

2014-11-05 Thread Dave Love
Ralph Castain writes: > I confirmed that things are working as intended. I could have been more explicit saying so before. > If you have 12 cores on a machine, and you do mpirun -map-by socket:PE=2, we will execute 6 copies of foo on the node, because 12 cores / 2 PEs per proc = 6 procs. For
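(To see that arithmetic in practice, here is a sketch assuming a single 12-core node; foo is a placeholder executable. Open MPI's --report-bindings option prints the cores each rank is bound to:)

    mpirun -map-by socket:PE=2 --report-bindings ./foo

With 2 processing elements bound per process, 6 processes fill the 12 cores, which is exactly the layout --report-bindings should confirm.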

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Jeff Squyres (jsquyres)
Meh. I forgot to attach the test. :-) Here it is. On Nov 5, 2014, at 10:46 AM, Jeff Squyres (jsquyres) wrote: > On Nov 5, 2014, at 9:59 AM, wrote: >> In my sharedmemtest.f90 code just sent to you, I have added a call to MPI_SIZEOF (at present it is deactivated, because of

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Jeff Squyres (jsquyres)
On Nov 5, 2014, at 9:59 AM, wrote: > In my sharedmemtest.f90 code just sent to you, I have added a call to MPI_SIZEOF (at present it is deactivated, because of the missing Ftn-binding in OPENMPI-1.8.3). FWIW, I attached one of the tests I put in our test suite for SIZEOF issues af
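(For readers without the attachment: the sketch below is not Jeff's actual test, only a minimal illustration of the kind of check such a suite might contain. It compares MPI_SIZEOF's answer with the Fortran 2008 intrinsic STORAGE_SIZE:)

    program sizeof_check
      use mpi
      implicit none
      integer :: ierr, sz
      real(8) :: x
      call MPI_INIT(ierr)
      call MPI_SIZEOF(x, sz, ierr)        ! size of x in bytes
      if (sz /= storage_size(x) / 8) &    ! STORAGE_SIZE returns bits
           print *, 'MPI_SIZEOF mismatch:', sz
      call MPI_FINALIZE(ierr)
    end program sizeof_check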

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Michael.Rachner
Dear Mr. Squyres, In my sharedmemtest.f90 code just sent to you, I have added a call to MPI_SIZEOF (at present it is deactivated, because of the missing Ftn-binding in OPENMPI-1.8.3). I suggest that you activate the two respective statements in the code and use the program yourself

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Michael.Rachner
Dear Mr. Squyres, Dear MPI users and MPI developers, Here is my small MPI-3 shared-memory Fortran 95 test program. I am glad that it can be used in your test suite, because this will help to keep the shared-memory feature working in future OpenMPI releases. Moreover, it can help any MPI user (in

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Jeff Squyres (jsquyres)
Yes, I would love to have a copy of that test program, if you could share it. I'll add it to our internal test suite. On Nov 5, 2014, at 5:08 AM, wrote: > Dear Gilles, > My small downsized Fortran test program for testing the shared memory feature (MPI_WIN_ALLOCATE_SHARED, MPI_WIN_SHARED_QUERY

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Jeff Squyres (jsquyres)
Yes, this is a correct report. In short, the MPI_SIZEOF situation before the upcoming 1.8.4 was a bit of a mess; it actually triggered a bunch of discussion up in the MPI Forum Fortran working group (because the design of MPI_SIZEOF actually has some unintended consequences that came to light w

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Gilles Gouaillardet
Michael, Did you recompile with the gfortran compiler, or relink only? You need to recompile and relink. Can you attach your program so I can have a look? You really need one MPI install per compiler, and more if compiler versions from the same vendor are not compatible. Modules are useful to make
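(To illustrate that advice: with environment modules one selects the matching compiler/MPI pair explicitly before building. The module names below are hypothetical and site-specific:)

    module load intel/14.0.2 openmpi/1.8.3-intel
    mpif90 sharedmemtest.f90 -o sharedmemtest   # recompile AND relink against that install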

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Michael.Rachner
Sorry, Gilles, you might be wrong: the error also occurs with gfortran-4.9.1 when building my small shared-memory test program. This is the linker's answer with gfortran-4.9.1: sharedmemtest.f90:(.text+0x1145): undefined reference to `mpi_sizeof0di4_' and this is the answer with int
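(For reference, a minimal program of the following shape reproduces that undefined reference against the 1.8.3 mpi module; this is a sketch, not Michael's actual test:)

    program sizeof_repro
      use mpi
      implicit none
      integer :: i4, sz, ierr
      call MPI_INIT(ierr)
      ! this call is what the compiler turns into a reference to
      ! mpi_sizeof0di4_ (apparently the scalar 4-byte-integer specific)
      call MPI_SIZEOF(i4, sz, ierr)
      call MPI_FINALIZE(ierr)
    end program sizeof_repro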

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Gilles Gouaillardet
Michael, the root cause is that OpenMPI was not compiled with the Intel compilers but with the GNU compiler. Fortran modules are not binary compatible, so OpenMPI and your application must be compiled with the same compiler. Cheers, Gilles On 2014/11/05 18:25, michael.rach...@dlr.de wrote: > Dear OPENMPI
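(A quick way to check which compilers an OpenMPI installation was built with is ompi_info, which lists them in its configuration output:)

    ompi_info | grep -i compiler

If the Fortran compiler reported there is not the one compiling the application, the mpi module will not match.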

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Michael.Rachner
Dear Gilles, Sorry, the source of our CFD code is not public. I could share the small downsized test program, but not the large CFD code. The small test program uses the relevant MPI routines for the shared-memory allocation in the same manner as is done in the CFD code. Greetings Michael Rachner

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Michael.Rachner
-----Original Message----- From: users [mailto:users-boun...@open-mpi.org] On Behalf Of michael.rach...@dlr.de Sent: Wednesday, November 5, 2014 11:09 To: us...@open-mpi.org Subject: Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Gilles Gouaillardet
Hi Michael, the bigger the program, the bigger the fun ;-) I will have a look at it. Cheers, Gilles On 2014/11/05 19:08, michael.rach...@dlr.de wrote: > Dear Gilles, > My small downsized Fortran test program for testing the shared memory feature (MPI_WIN_ALLOCATE_SHARED, MPI_WIN_SHARED_QUERY, C_F_POINTER

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Michael.Rachner
Dear Gilles, My small downsized Fortran test program for testing the shared memory feature (MPI_WIN_ALLOCATE_SHARED, MPI_WIN_SHARED_QUERY, C_F_POINTER) presumes for simplicity that all processes are running on the same node (i.e. the communicator containing the procs on the same node is just MPI_COMM_WORLD
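(For readers following along without the attachment: the sketch below is not Michael's program, just a minimal illustration of the MPI-3 pattern he describes, under his same single-node assumption. Rank 0 allocates the shared segment; every rank queries its address and maps it onto a Fortran pointer:)

    program shm_sketch
      use mpi
      use, intrinsic :: iso_c_binding
      implicit none
      integer, parameter :: n = 1000
      integer :: ierr, rank, win, disp_unit
      integer(MPI_ADDRESS_KIND) :: winsize
      type(c_ptr) :: baseptr
      real(8), pointer :: shared(:)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      ! only rank 0 contributes memory; the other ranks attach to it
      winsize = 0
      if (rank == 0) winsize = int(n, MPI_ADDRESS_KIND) * 8
      call MPI_WIN_ALLOCATE_SHARED(winsize, 8, MPI_INFO_NULL, &
           MPI_COMM_WORLD, baseptr, win, ierr)

      ! every rank asks where rank 0's segment lives and maps it
      call MPI_WIN_SHARED_QUERY(win, 0, winsize, disp_unit, baseptr, ierr)
      call C_F_POINTER(baseptr, shared, [n])

      if (rank == 0) shared(1) = 42.0d0
      call MPI_BARRIER(MPI_COMM_WORLD, ierr)
      call MPI_WIN_FREE(win, ierr)
      call MPI_FINALIZE(ierr)
    end program shm_sketch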

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Gilles Gouaillardet
Michael, could you please share your test program so we can investigate it? Cheers, Gilles On 2014/10/31 18:53, michael.rach...@dlr.de wrote: > Dear developers of OPENMPI, > There remains a hang observed in MPI_WIN_ALLOCATE_SHARED. > But first: thank you for your advice to employ

[OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-05 Thread Michael.Rachner
Dear OPENMPI developers, In OPENMPI-1.8.3 the Fortran bindings for MPI_SIZEOF are missing, both when using the mpi module and when using mpif.h. (I have not checked whether they are present in the mpi_f08 module.) I get this message from the linker (Intel-14.0.2): /home/vat/src/KERNEL/mpi_ini.