Volker,

Thanks, I will have a look at it.

Meanwhile, if you can reproduce this issue on a more mainstream
platform (e.g. Linux + gfortran), please let me know.

Since you are using ifort, Open MPI was built with the Fortran 2008
bindings, so you can replace

    include 'mpif.h'

with

    use mpi_f08

and that might solve your issue.
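
For illustration, here is a minimal sketch of such a check written against
the mpi_f08 bindings. It is not your appended test program: it uses
MPI_COMM_WORLD instead of the code's own communicator and omits the error
argument, which the F08 bindings allow.

program check_in_place_f08
   ! Minimal sketch: does MPI_ALLREDUCE with MPI_IN_PLACE sum correctly
   ! when only rank 0 holds nonzero data?
   use mpi_f08
   implicit none

   integer, parameter :: n_data = 10
   double precision :: test_data(n_data)
   integer :: myid

   call MPI_Init()
   call MPI_Comm_rank(MPI_COMM_WORLD, myid)

   ! Seed: rank 0 holds 1.d0, all other ranks hold 0.d0, so after the
   ! sum every entry should be 1.d0 on every rank.
   if (myid == 0) then
      test_data(:) = 1.d0
   else
      test_data(:) = 0.d0
   end if

   ! With mpi_f08 the communicator is a derived type and the error
   ! argument is optional; MPI_IN_PLACE is passed exactly as before.
   call MPI_Allreduce(MPI_IN_PLACE, test_data, n_data, &
                      MPI_DOUBLE_PRECISION, MPI_SUM, MPI_COMM_WORLD)

   ! The seeded values are exact, so an exact comparison is safe here.
   if (any(test_data /= 1.d0)) then
      write(*,'(a,i0,a)') ' * Rank ', myid, ': MPI_IN_PLACE appears broken.'
   else
      write(*,'(a,i0,a)') ' * Rank ', myid, ': MPI_IN_PLACE works as intended.'
   end if

   call MPI_Finalize()
end program check_in_place_f08

It compiles and runs the same way as your test (mpif90, then mpirun).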


Cheers,

Gilles

On Wed, Jul 26, 2017 at 5:22 PM, Volker Blum <volker.b...@duke.edu> wrote:
> Dear Gilles,
>
> Thank you very much for the fast answer.
>
> Darn. I feared it might not occur on all platforms, since my former Macbook
> (with an older OpenMPI version) no longer exhibited the problem, a different
> Linux/Intel Machine did last December, etc.
>
> On this specific machine, the configure line is
>
> ./configure CC=gcc FC=ifort F77=ifort
>
> ifort version 17.0.4
>
> blum:/Users/blum/software/openmpi-3.0.0rc1> gcc -v
> Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr
> --with-gxx-include-dir=/usr/include/c++/4.2.1
> Apple LLVM version 8.1.0 (clang-802.0.42)
> Target: x86_64-apple-darwin16.6.0
> Thread model: posix
> InstalledDir:
> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
>
> The full test program is appended.
>
> Compilation:
>
> mpif90 check_mpi_in_place.f90
>
> blum:/Users/blum/codes/fhi-aims/openmpi_test> which mpif90
> /usr/local/openmpi-3.0.0rc1/bin/mpif90
>
> blum:/Users/blum/codes/fhi-aims/openmpi_test> which mpirun
> /usr/local/openmpi-3.0.0rc1/bin/mpirun
>
> blum:/Users/blum/codes/fhi-aims/openmpi_test> mpirun -np 2 a.out
>  * MPI_IN_PLACE does not appear to work as intended.
>  * Checking whether MPI_ALLREDUCE works at all.
>  * Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.
>
> blum:/Users/blum/codes/fhi-aims/openmpi_test> mpirun -np 1 a.out
>  * MPI_IN_PLACE does not appear to work as intended.
>  * Checking whether MPI_ALLREDUCE works at all.
>  * Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.
>
> Hopefully there are no trivial mistakes in the test case. I just spent a
> few days tracing this issue through a fairly large code, which is where
> the issue originally arose (and leads to wrong numbers).
>
> Best wishes
> Volker
>
>
>
>
>> On Jul 26, 2017, at 9:46 AM, Gilles Gouaillardet
>> <gilles.gouaillar...@gmail.com> wrote:
>>
>> Volker,
>>
>> I was unable to reproduce this issue on Linux.
>>
>> Can you please post your full configure command line, your GNU compiler
>> version, and the full test program?
>>
>> Also, how many MPI tasks are you running?
>>
>> Cheers,
>>
>> Gilles
>>
>> On Wed, Jul 26, 2017 at 4:25 PM, Volker Blum <volker.b...@duke.edu> wrote:
>>> Hi,
>>>
>>> I tried openmpi-3.0.0rc1.tar.gz using Intel Fortran 2017 and gcc on a
>>> current macOS system. For this version, it seems to me that MPI_IN_PLACE
>>> returns incorrect results (while other MPI implementations, including
>>> some past Open MPI versions, work fine).
>>>
>>> This can be seen with the simple Fortran example shown below. In the
>>> test, all entries of the array “test_data” should be 1.0d0 if the
>>> behavior were as intended. However, the version of Open MPI I have
>>> returns 0.d0 instead.
>>>
>>> I’ve seen this behavior on some other compute platforms in the past, so
>>> it was not new to me. Still, I thought that this time I’d ask. Any
>>> thoughts?
>>>
>>> Thank you,
>>> Best wishes
>>> Volker
>>>
>>>    ! size of test data array
>>>    integer :: n_data
>>>
>>>    ! array that contains test data for MPI_IN_PLACE
>>>    real*8, allocatable :: test_data(:)
>>>
>>>    integer :: mpierr
>>>
>>>    n_data = 10
>>>
>>>    allocate(test_data(n_data),stat=mpierr)
>>>
>>>    ! seed test data array for allreduce call below
>>>    if (myid.eq.0) then
>>>       test_data(:) = 1.d0
>>>    else
>>>       test_data(:) = 0.d0
>>>    end if
>>>
>>>    ! Sum the test_data array over all MPI tasks
>>>    call MPI_ALLREDUCE(MPI_IN_PLACE, &
>>>         test_data(:), &
>>>         n_data, &
>>>         MPI_DOUBLE_PRECISION, &
>>>         MPI_SUM, &
>>>         mpi_comm_global, &
>>>         mpierr )
>>>
>>>    ! The value of all entries of test_data should now be 1.d0 on all MPI
>>> tasks.
>>>    ! If that is not the case, then the MPI_IN_PLACE flag may be broken.
>>>
>>>
>>>
>>>
>>>
>>>
>>> Volker Blum
>>> Associate Professor
>>> Ab Initio Materials Simulations
>>> Duke University, MEMS Department
>>> 144 Hudson Hall, Box 90300, Duke University, Durham, NC 27708, USA
>>>
>>> volker.b...@duke.edu
>>> https://aims.pratt.duke.edu
>>> +1 (919) 660 5279
>>> Twitter: Aimsduke
>>>
>>> Office: 1111 Hudson Hall
>>>
>>>
>>>
>>>
>
> Volker Blum
> Associate Professor
> Ab Initio Materials Simulations
> Duke University, MEMS Department
> 144 Hudson Hall, Box 90300, Duke University, Durham, NC 27708, USA
>
> volker.b...@duke.edu
> https://aims.pratt.duke.edu
> +1 (919) 660 5279
> Twitter: Aimsduke
>
> Office: 1111 Hudson Hall
>
>
>
>
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
