[OMPI users] Bug in OpenMPI-1.8.1: missing routines mpi_win_allocate_shared, mpi_win_shared_query called from Ftn95-code

2014-06-05 Thread Michael.Rachner
Dear developers of OpenMPI, I found that when building an executable from a Fortran95 code on a Linux cluster with OpenMPI-1.8.1 (and the Intel-14.0.2 Fortran compiler), the following two MPI-3 routines do not exist: /dat/KERNEL/mpi3_sharedmem.f90:176: undefined reference to `mpi_win_allocate_shared_' /

Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Michael.Rachner
Dear people, Following up on the hint from Damien, who suggested using MPICH on Win7: MPICH stopped supporting Windows some time ago. MPICH recommends using MS-MPI for Windows, which is a derivative of MPICH2. You may download the binary (for free) from the landing page fo

Re: [OMPI users] latest stable and win7/msvc2013 and shared memory feature

2014-07-18 Thread Michael.Rachner
Dear Mr. Tillier and other MPI developers, I am glad to hear that MS-MPI development is still active and interested in user feature requests. You want user feature requests for your further MS-MPI development? Here is my request (I have been doing Fortran CFD-code development for decades now under

[OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-10-24 Thread Michael.Rachner
Dear developers of OPENMPI, I am running a small downsized Fortran test program for shared memory allocation (using MPI_WIN_ALLOCATE_SHARED and MPI_WIN_SHARED_QUERY) on only 1 node of 2 different Linux clusters with OPENMPI-1.8.3 and Intel-14.0.4/Intel-13.0.1, respectively. The program sim
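
The test program itself is truncated in the archive. The allocation pattern it names (MPI_WIN_ALLOCATE_SHARED, then MPI_WIN_SHARED_QUERY plus C_F_POINTER to map the segment) looks roughly like the following sketch; all names, the real(8) element type, and the segment size are illustrative assumptions, and the TYPE(C_PTR) overload of the mpi-module bindings is presumed available:

   program shm_sketch
     use mpi
     use, intrinsic :: iso_c_binding, only: c_ptr, c_f_pointer
     implicit none
     integer :: ierr, myrank, win, disp_unit
     integer(MPI_ADDRESS_KIND) :: winsize
     type(c_ptr) :: baseptr
     real(8), pointer :: arr(:)
     integer, parameter :: n = 100000    ! element count, arbitrary for the sketch

     call MPI_INIT(ierr)                 ! all ranks assumed to run on one node
     call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)

     ! Rank 0 provides the whole segment; the other ranks request size 0.
     winsize = 0
     if (myrank == 0) winsize = int(n, MPI_ADDRESS_KIND) * 8
     disp_unit = 8
     call MPI_WIN_ALLOCATE_SHARED(winsize, disp_unit, MPI_INFO_NULL, &
                                  MPI_COMM_WORLD, baseptr, win, ierr)

     ! Every rank queries rank 0's segment and maps it to a Fortran pointer.
     call MPI_WIN_SHARED_QUERY(win, 0, winsize, disp_unit, baseptr, ierr)
     call c_f_pointer(baseptr, arr, [n])

     if (myrank == 0) arr(1) = 42.0d0    ! a store by one rank ...
     call MPI_BARRIER(MPI_COMM_WORLD, ierr)
     print *, 'rank', myrank, ': arr(1) =', arr(1)   ! ... is seen by all
     call MPI_WIN_FREE(win, ierr)
     call MPI_FINALIZE(ierr)
   end program shm_sketch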

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-10-27 Thread Michael.Rachner
Dear Mr. Squyres, We will try to install your bug-fixed nightly tarball of 2014-10-24 on Cluster5 to see whether it works or not. The installation however will take some time. I will get back to you when I know more. Let me add the information that on the Laki each node has 16 GB of shared memory (t

[OMPI users] FW: Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-10-27 Thread Michael.Rachner
Dear developers of OPENMPI, We have now installed and tested the bug-fixed OPENMPI nightly tarball of 2014-10-24 (openmpi-dev-176-g9334abc.tar.gz) on Cluster5. As before (with the OPENMPI-1.8.3 release version), the small Ftn test program runs correctly on the login node. As before, the program abort

Re: [OMPI users] FW: Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-10-27 Thread Michael.Rachner
-Original Message- From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gilles Gouaillardet Sent: Monday, 27 October 2014 14:49 To: Open MPI Users Subject: Re: [OMPI users] FW: Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHAR

Re: [OMPI users] FW: Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-10-27 Thread Michael.Rachner
Dear Gilles, This is the system response on the login node of cluster5:

   cluster5:~/dat> mpirun -np 1 df -h
   Filesystem   Size  Used  Avail  Use%  Mounted on
   /dev/sda31   228G  5.6G   211G    3%  /
   udev          32G  232K    32G    1%  /dev
   tmpfs         32G     0    32G    0%  /dev/shm
   /dev/sda11

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-10-31 Thread Michael.Rachner
Dear developers of OPENMPI, There remains a hang observed in MPI_WIN_ALLOCATE_SHARED. But first: Thank you for your advice to employ shmem_mmap_relocate_backing_file = 1 It indeed turned out that the bad (but silent) allocations by MPI_WIN_ALLOCATE_SHARED, which I observed in the past
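
For reference: an MCA parameter like the one named above is typically set on the mpirun command line or in the user's parameter file. A sketch, with the executable name and process count as placeholders:

   mpirun --mca shmem_mmap_relocate_backing_file 1 -np 4 ./sharedmemtest

   # or persistently, in $HOME/.openmpi/mca-params.conf:
   shmem_mmap_relocate_backing_file = 1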

[OMPI users] OPENMPI-1.8.3: missing Fortran bindings for MPI_SIZEOF

2014-11-05 Thread Michael.Rachner
Dear OPENMPI developers, In OPENMPI-1.8.3 the Ftn-bindings for MPI_SIZEOF are missing when using the mpi module and when using mpif.h. (I have not checked whether they are present in the mpi_f08 module.) I get this message from the linker (Intel-14.0.2): /home/vat/src/KERNEL/mpi_ini.
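
Background on why mpif.h is involved here: MPI_SIZEOF is unusual in that it requires an explicit, type-generic interface, so in practice it is only usable through the mpi (or mpi_f08) module, not through mpif.h. A minimal usage sketch (variable names are illustrative):

   program sizeof_demo
     use mpi              ! MPI_SIZEOF needs the generic interface from the module
     implicit none
     integer :: ierr, isize
     real(8) :: x
     call MPI_INIT(ierr)
     call MPI_SIZEOF(x, isize, ierr)   ! returns the size of x in bytes (8 here)
     print *, 'MPI_SIZEOF(real(8)) =', isize
     call MPI_FINALIZE(ierr)
   end program sizeof_demo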

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Michael.Rachner
Dear Gilles, My small downsized Ftn test program for testing the shared memory feature (MPI_WIN_ALLOCATE_SHARED, MPI_WIN_SHARED_QUERY, C_F_POINTER) presumes for simplicity that all processes are running on the same node (i.e. the communicator containing the procs on the same node is just MPI_
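
In the general multi-node case, the node-local communicator would first be derived instead of being assumed equal to the world communicator. A minimal sketch of the standard MPI-3 idiom (variable names are illustrative):

   program split_demo
     use mpi
     implicit none
     integer :: comm_node, ierr
     call MPI_INIT(ierr)
     ! Group the ranks by shared-memory domain, i.e. by node:
     call MPI_COMM_SPLIT_TYPE(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                              MPI_INFO_NULL, comm_node, ierr)
     ! comm_node now holds exactly the ranks on the calling node and is the
     ! communicator one would pass to MPI_WIN_ALLOCATE_SHARED.
     call MPI_COMM_FREE(comm_node, ierr)
     call MPI_FINALIZE(ierr)
   end program split_demo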

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Michael.Rachner
-Original Message- From: users [mailto:users-boun...@open-mpi.org] On Behalf Of michael.rach...@dlr.de Sent: Wednesday, 5 November 2014 11:09 To: us...@open-mpi.org Subject: Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Michael.Rachner
Dear Gilles, Sorry, the source of our CFD-code is not public. I could share the small downsized test program, but not the large CFD-code. The small test program uses the relevant MPI routines for the shared memory allocation in the same manner as is done in the CFD-code. Greetings Michael Rachner

Re: [OMPI users] OPENMPI-1.8.3: missing Fortran bindings for MPI_SIZEOF

2014-11-05 Thread Michael.Rachner
Sorry, Gilles, you might be wrong: the error occurs also with gfortran-4.9.1 when running my small shared memory test program. This is the answer of the linker with gfortran-4.9.1: sharedmemtest.f90:(.text+0x1145): undefined reference to `mpi_sizeof0di4_' and this is the answer with int

Re: [OMPI users] Bug in OpenMPI-1.8.3: storage limitation in shared memory allocation (MPI_WIN_ALLOCATE_SHARED) in Ftn-code

2014-11-05 Thread Michael.Rachner
Dear Mr. Squyres, dear MPI users and MPI developers, Here is my small MPI-3 shared memory Ftn95 test program. I am glad that it can be used in your test suite, because this will help to keep the shared-memory feature working in future OPENMPI releases. Moreover, it can help any MPI user (in

Re: [OMPI users] OPENMPI-1.8.3: missing Fortran bindings for MPI_SIZEOF

2014-11-05 Thread Michael.Rachner
Dear Mr. Squyres, In my sharedmemtest.f90 coding just sent to you, I have added a call of MPI_SIZEOF (at present it is deactivated because of the missing Ftn-binding in OPENMPI-1.8.3). I suggest that you activate the 2 respective statements in the coding and use the program yourself

Re: [OMPI users] OPENMPI-1.8.3: missing Fortran bindings for MPI_SIZEOF

2014-11-06 Thread Michael.Rachner
Dear Mr. Squyres, a) When looking in your mpi_sizeof_mpifh.f90 test program I found a little thing: You may (but need not) change the name of the integer variable size to e.g. isize, because SIZE is an intrinsic function in Fortran (you may see it already if you have an edi
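
The pitfall alluded to here is scope shadowing; a tiny sketch (hypothetical code, not taken from the test program):

   subroutine shadow_demo
     implicit none
     integer :: size      ! the local variable shadows the intrinsic SIZE
     integer :: a(10)
     size = 5             ! fine as a plain variable
     ! print *, size(a)   ! would no longer compile: size is now a scalar
                          ! variable, not the array-inquiry intrinsic
   end subroutine shadow_demo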

Re: [OMPI users] OPENMPI-1.8.3: missing Fortran bindings for MPI_SIZEOF

2014-11-06 Thread Michael.Rachner
Dear Mr. Squyres, I fully agree with omitting the explicit interfaces from mpif.h. It is an important fallback for legacy codes. But in the mpi and mpi_f08 modules, explicit interfaces are required for all(!) MPI routines. So far, this is not fulfilled in the MPI versions I know. I want to poi
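
What an explicit interface buys is compile-time argument checking; a minimal sketch, assuming only that MPI_COMM_RANK has an explicit interface in the mpi module:

   program iface_demo
     use mpi              ! explicit interfaces: argument types are checked
     implicit none
     integer :: myrank, ierr
     call MPI_INIT(ierr)
     call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
     ! With the explicit interface, a mistyped call such as
     !    call MPI_COMM_RANK(MPI_COMM_WORLD, 3.14, ierr)
     ! is rejected at compile time; with mpif.h it would compile silently
     ! and misbehave at run time.
     print *, 'rank =', myrank
     call MPI_FINALIZE(ierr)
   end program iface_demo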

Re: [OMPI users] OPENMPI-1.8.3: missing Fortran bindings for MPI_SIZEOF

2014-11-06 Thread Michael.Rachner
Dear Mr. Squyres, Thank you for your clear answer on the state of the interfaces in the mpi modules of OPENMPI. A good state! And I have coded enough bugs myself, so I do not become too angry about the bugs of others. If I should stumble upon missing Ftn-bindings in the future, I will sen

Re: [OMPI users] Fortran and OpenMPI 1.8.3 compiled with Intel-15 does nothing silently

2014-11-18 Thread Michael.Rachner
It may possibly be a bug in Intel-15.0. I suspect it has to do with the CONTAINS block and with the fact that you call an intrinsic subroutine in that CONTAINS block. Normally this must work. You may try to separate the influence of both: What happens with these 3 variants of your code: variant a
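
The three variants are truncated in the archive. For orientation, a minimal sketch of the construct under suspicion (hypothetical code, not the poster's): an internal procedure after CONTAINS that calls an intrinsic subroutine:

   program contains_demo
     implicit none
     call wrapper()
   contains
     subroutine wrapper()
       character(len=8) :: d
       call date_and_time(date=d)   ! intrinsic subroutine called inside CONTAINS
       print *, 'date = ', d
     end subroutine wrapper
   end program contains_demo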

Re: [OMPI users] Fortran and OpenMPI 1.8.3 compiled with Intel-15 does nothing silently

2014-11-18 Thread Michael.Rachner
Tip: Intel Fortran compiler problems can be communicated to Intel there: https://software.intel.com/en-us/forums/intel-fortran-compiler-for-linux-and-mac-os-x Greetings Michael Rachner From: users [mailto:users-boun...@open-mpi.org] On Behalf Of John Bray Sent: Tuesday, 18 November 2014 11:0

Re: [OMPI users] Open MPI SC'14 BOF slides: mpif.h --> module mpi

2014-11-21 Thread Michael.Rachner
Dear community, Slide 92 of the Open MPI SC'14 slides describes the simple migration from mpif.h to use mpi in a Fortran application code. However, the description is not correct. In a Fortran routine, the USE statements (if there are any) must come before (!) any other statements, i.e. you cannot
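
The placement rule in question, as a sketch (routine names are illustrative): a USE statement must be the first statement after the routine header, while the textual include of mpif.h may sit among the declarations, so migrating usually means moving the line up:

   ! Legacy form: the include line may appear among the declarations.
   subroutine solver_old(n)
     implicit none
     integer, intent(in) :: n
     include 'mpif.h'            ! legal at this position
     integer :: ierr
     call MPI_BARRIER(MPI_COMM_WORLD, ierr)
   end subroutine solver_old

   ! Migrated form: USE must precede IMPLICIT NONE and all declarations.
   subroutine solver_new(n)
     use mpi                     ! must come first
     implicit none
     integer, intent(in) :: n
     integer :: ierr
     call MPI_BARRIER(MPI_COMM_WORLD, ierr)
   end subroutine solver_new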

[OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with Intel-Ftn-compiler

2015-11-19 Thread Michael.Rachner
Dear developers of OpenMPI, I am trying to run our parallelized Ftn95 code on a Linux cluster with OpenMPI-1.10.0 and the Intel-16.0.0 Fortran compiler. In the code I use the module MPI ("use MPI" statements). However, I am not able to compile the code because of compiler error messages like this: /

Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with Intel-Ftn-compiler

2015-11-19 Thread Michael.Rachner
Sorry, Gilles, I cannot update to more recent versions, because what I used is the newest combination of OpenMPI and Intel-Ftn available on that cluster. When looking at the list of improvements on the OpenMPI website for OpenMPI 1.10.1 compared to 1.10.0, I do not remember having seen this

Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with Intel-Ftn-compiler

2015-11-19 Thread Michael.Rachner
Thank you, Nick and Gilles, I hope the administrators of the cluster will be so kind as to update OpenMPI for me (and others) soon. Greetings Michael From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gilles Gouaillardet Sent: Thursday, 19 November 2015 12:59 To: Open MP

Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with Intel-Ftn-compiler

2015-11-23 Thread Michael.Rachner
Dear Gilles, In the meantime the administrators have installed (thanks!) OpenMPI-1.10.1 with Intel-16.0.0 on the cluster. I have tested it with our code: it works. The time spent for MPI data transmission was the same as with OpenMPI-1.8.3 & Intel-14.0.4, but was ~20% higher than with IMPI-5.1.