Thanks!

I tried ‘use mpi’, which compiles fine.

Same result as with ‘include mpif.h’, in that the output is

 * MPI_IN_PLACE does not appear to work as intended.
 * Checking whether MPI_ALLREDUCE works at all.
 * Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.

Hm. Any other thoughts?

Thanks again!
Best wishes
Volker

> On Jul 26, 2017, at 4:06 PM, Gilles Gouaillardet 
> <gilles.gouaillar...@gmail.com> wrote:
> 
> Volker,
> 
> With mpi_f08, you have to declare
> 
> Type(MPI_Comm) :: mpi_comm_global
> 
> (I am afk and not 100% sure of the syntax)
> 
> A simpler option is to
> 
> use mpi
> 
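> Roughly (untested, from memory), the mpi_f08 variant would look something
> like the sketch below; with plain ‘use mpi’, your existing integer
> declarations can stay as they are:
> 
>   program f08_comm_sketch
>     ! illustrative sketch only, not taken from your code
>     use mpi_f08
>     implicit none
>     ! with mpi_f08, the communicator is a derived type, not an integer
>     type(MPI_Comm) :: mpi_comm_global
>     integer :: n_tasks, myid, mpierr
> 
>     call MPI_Init(mpierr)
>     mpi_comm_global = MPI_COMM_WORLD
>     call MPI_Comm_size(mpi_comm_global, n_tasks, mpierr)
>     call MPI_Comm_rank(mpi_comm_global, myid, mpierr)
>     call MPI_Finalize(mpierr)
>   end program f08_comm_sketch
> 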
> Cheers,
> 
> Gilles
> 
> Volker Blum <volker.b...@duke.edu> wrote:
>> Hi Gilles,
>> 
>> Thank you very much for the response!
>> 
>> Unfortunately, I don’t have access to a different system with the issue 
>> right now. As I said, it’s not new; it just keeps cropping up unexpectedly 
>> on different platforms. What puzzles me is that I’ve encountered the same 
>> problem, at low but recurring frequency, over a period of more than five 
>> years now.
>> 
>> We can’t require Fortran 2008 in our application, unfortunately, since the 
>> standard is still too new. We maintain a large application that has to run 
>> on a broad range of platforms, and Fortran 2008 would not work for many of 
>> our users. In a few years this will be different, but not yet.
>> 
>> On gfortran: in our own tests, unfortunately, Intel Fortran has consistently 
>> produced much faster executable code in the past. That may change someday, 
>> but for us the performance difference was an important constraint.
>> 
>> I did suspect mpif.h, too. I’m not sure how best to test this hypothesis, 
>> however. 
>> 
>> Just replacing 
>> 
>>> include 'mpif.h'
>>> with
>>> use mpi_f08
>> 
>> did not work for me. 
>> 
>> This produces a number of compilation errors:
>> 
>> blum:/Users/blum/codes/fhi-aims/openmpi_test> mpif90 
>> check_mpi_in_place_08.f90 -o check_mpi_in_place_08.x
>> check_mpi_in_place_08.f90(55): error #6303: The assignment operation or the 
>> binary expression operation is invalid for the data types of the two 
>> operands.   [MPI_COMM_WORLD]
>>   mpi_comm_global = MPI_COMM_WORLD
>> ----------------------^
>> check_mpi_in_place_08.f90(57): error #6285: There is no matching specific 
>> subroutine for this generic subroutine call.   [MPI_COMM_SIZE]
>>   call MPI_COMM_SIZE(mpi_comm_global, n_tasks, mpierr)
>> ---------^
>> check_mpi_in_place_08.f90(58): error #6285: There is no matching specific 
>> subroutine for this generic subroutine call.   [MPI_COMM_RANK]
>>   call MPI_COMM_RANK(mpi_comm_global, myid, mpierr)
>> ---------^
>> check_mpi_in_place_08.f90(75): error #6285: There is no matching specific 
>> subroutine for this generic subroutine call.   [MPI_ALLREDUCE]
>>   call MPI_ALLREDUCE(MPI_IN_PLACE, &
>> ---------^
>> check_mpi_in_place_08.f90(94): error #6285: There is no matching specific 
>> subroutine for this generic subroutine call.   [MPI_ALLREDUCE]
>>   call MPI_ALLREDUCE(check_success, aux_check_success, 1, MPI_LOGICAL, &
>> ---------^
>> check_mpi_in_place_08.f90(119): error #6285: There is no matching specific 
>> subroutine for this generic subroutine call.   [MPI_ALLREDUCE]
>>      call MPI_ALLREDUCE(test_data(:), &
>> ------------^
>> check_mpi_in_place_08.f90(140): error #6285: There is no matching specific 
>> subroutine for this generic subroutine call.   [MPI_ALLREDUCE]
>>      call MPI_ALLREDUCE(check_conventional_mpi, aux_check_success, 1, 
>> MPI_LOGICAL, &
>> ------------^
>> compilation aborted for check_mpi_in_place_08.f90 (code 1)
>> 
>> This is an interesting result, however … what might I be missing? Another 
>> use statement?
>> 
>> Best wishes
>> Volker
>> 
>>> On Jul 26, 2017, at 2:53 PM, Gilles Gouaillardet 
>>> <gilles.gouaillar...@gmail.com> wrote:
>>> 
>>> Volker,
>>> 
>>> Thanks, I will have a look at it.
>>> 
>>> Meanwhile, if you can reproduce this issue on a more mainstream
>>> platform (e.g. Linux + gfortran), please let me know.
>>> 
>>> Since you are using ifort, Open MPI was built with the Fortran 2008
>>> bindings, so you can replace
>>> include 'mpif.h'
>>> with
>>> use mpi_f08
>>> and, who knows, that might solve your issue.
>>> 
>>> 
>>> Cheers,
>>> 
>>> Gilles
>>> 
>>> On Wed, Jul 26, 2017 at 5:22 PM, Volker Blum <volker.b...@duke.edu> wrote:
>>>> Dear Gilles,
>>>> 
>>>> Thank you very much for the fast answer.
>>>> 
>>>> Darn. I feared it might not occur on all platforms: my former MacBook
>>>> (with an older Open MPI version) no longer exhibited the problem, while a
>>>> different Linux/Intel machine did last December, etc.
>>>> 
>>>> On this specific machine, the configure line is
>>>> 
>>>> ./configure CC=gcc FC=ifort F77=ifort
>>>> 
>>>> ifort version 17.0.4
>>>> 
>>>> blum:/Users/blum/software/openmpi-3.0.0rc1> gcc -v
>>>> Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr
>>>> --with-gxx-include-dir=/usr/include/c++/4.2.1
>>>> Apple LLVM version 8.1.0 (clang-802.0.42)
>>>> Target: x86_64-apple-darwin16.6.0
>>>> Thread model: posix
>>>> InstalledDir:
>>>> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
>>>> 
>>>> The full test program is appended.
>>>> 
>>>> Compilation:
>>>> 
>>>> mpif90 check_mpi_in_place.f90
>>>> 
>>>> blum:/Users/blum/codes/fhi-aims/openmpi_test> which mpif90
>>>> /usr/local/openmpi-3.0.0rc1/bin/mpif90
>>>> 
>>>> blum:/Users/blum/codes/fhi-aims/openmpi_test> which mpirun
>>>> /usr/local/openmpi-3.0.0rc1/bin/mpirun
>>>> 
>>>> blum:/Users/blum/codes/fhi-aims/openmpi_test> mpirun -np 2 a.out
>>>> * MPI_IN_PLACE does not appear to work as intended.
>>>> * Checking whether MPI_ALLREDUCE works at all.
>>>> * Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.
>>>> 
>>>> blum:/Users/blum/codes/fhi-aims/openmpi_test> mpirun -np 1 a.out
>>>> * MPI_IN_PLACE does not appear to work as intended.
>>>> * Checking whether MPI_ALLREDUCE works at all.
>>>> * Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.
>>>> 
>>>> Hopefully there are no trivial mistakes in the test case. I just spent a few
>>>> days tracing this issue through a fairly large code, which is where it
>>>> originally arose (and leads to wrong numbers).
>>>> 
>>>> Best wishes
>>>> Volker
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> On Jul 26, 2017, at 9:46 AM, Gilles Gouaillardet
>>>>> <gilles.gouaillar...@gmail.com> wrote:
>>>>> 
>>>>> Volker,
>>>>> 
>>>>> I was unable to reproduce this issue on Linux.
>>>>> 
>>>>> Can you please post your full configure command line, your GNU
>>>>> compiler version, and the full test program?
>>>>> 
>>>>> Also, how many MPI tasks are you running?
>>>>> 
>>>>> Cheers,
>>>>> 
>>>>> Gilles
>>>>> 
>>>>> On Wed, Jul 26, 2017 at 4:25 PM, Volker Blum <volker.b...@duke.edu> wrote:
>>>>>> Hi,
>>>>>> 
>>>>>> I tried openmpi-3.0.0rc1.tar.gz using Intel Fortran 2017 and gcc on a
>>>>>> current macOS system. For this version, it seems to me that MPI_IN_PLACE
>>>>>> returns incorrect results (while other MPI implementations, including some
>>>>>> past Open MPI versions, work fine).
>>>>>> 
>>>>>> This can be seen with a simple Fortran example code, shown below. In the
>>>>>> test, all entries of the array “test_data” should end up as 1.d0 if
>>>>>> MPI_IN_PLACE behaves as intended. However, the version of Open MPI I have
>>>>>> returns 0.d0 instead.
>>>>>> 
>>>>>> I’ve seen this behavior on some other compute platforms in the past, too,
>>>>>> so it wasn’t new to me. Still, I thought that this time I’d ask. Any
>>>>>> thoughts?
>>>>>> 
>>>>>> Thank you,
>>>>>> Best wishes
>>>>>> Volker
>>>>>> 
>>>>>>  ! size of test data array
>>>>>>  integer :: n_data
>>>>>> 
>>>>>>  ! array that contains test data for MPI_IN_PLACE
>>>>>>  real*8, allocatable :: test_data(:)
>>>>>> 
>>>>>>  integer :: mpierr
>>>>>> 
>>>>>>  n_data = 10
>>>>>> 
>>>>>>  allocate(test_data(n_data),stat=mpierr)
>>>>>> 
>>>>>>  ! seed test data array for allreduce call below
>>>>>>  if (myid.eq.0) then
>>>>>>     test_data(:) = 1.d0
>>>>>>  else
>>>>>>     test_data(:) = 0.d0
>>>>>>  end if
>>>>>> 
>>>>>>  ! Sum the test_data array over all MPI tasks
>>>>>>  call MPI_ALLREDUCE(MPI_IN_PLACE, &
>>>>>>       test_data(:), &
>>>>>>       n_data, &
>>>>>>       MPI_DOUBLE_PRECISION, &
>>>>>>       MPI_SUM, &
>>>>>>       mpi_comm_global, &
>>>>>>       mpierr )
>>>>>> 
>>>>>>  ! The value of all entries of test_data should now be 1.d0 on all MPI tasks.
>>>>>>  ! If that is not the case, then the MPI_IN_PLACE flag may be broken.
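>>>>>> 
>>>>>> (For context, the snippet above sits inside an otherwise standard MPI
>>>>>> wrapper, roughly like the following sketch; the scaffolding of my actual
>>>>>> test program may differ in detail, and it aggregates the check across
>>>>>> tasks before printing:)
>>>>>> 
>>>>>>  program check_mpi_in_place
>>>>>> 
>>>>>>    implicit none
>>>>>>    include 'mpif.h'
>>>>>> 
>>>>>>    ! size of test data array
>>>>>>    integer :: n_data
>>>>>> 
>>>>>>    ! array that contains test data for MPI_IN_PLACE
>>>>>>    real*8, allocatable :: test_data(:)
>>>>>> 
>>>>>>    integer :: mpierr, myid, n_tasks, mpi_comm_global
>>>>>> 
>>>>>>    call MPI_INIT(mpierr)
>>>>>>    mpi_comm_global = MPI_COMM_WORLD
>>>>>>    call MPI_COMM_SIZE(mpi_comm_global, n_tasks, mpierr)
>>>>>>    call MPI_COMM_RANK(mpi_comm_global, myid, mpierr)
>>>>>> 
>>>>>>    n_data = 10
>>>>>>    allocate(test_data(n_data), stat=mpierr)
>>>>>> 
>>>>>>    ! seed test data array for the allreduce call below
>>>>>>    if (myid.eq.0) then
>>>>>>       test_data(:) = 1.d0
>>>>>>    else
>>>>>>       test_data(:) = 0.d0
>>>>>>    end if
>>>>>> 
>>>>>>    ! sum the test_data array over all MPI tasks, in place
>>>>>>    call MPI_ALLREDUCE(MPI_IN_PLACE, test_data(:), n_data, &
>>>>>>         MPI_DOUBLE_PRECISION, MPI_SUM, mpi_comm_global, mpierr)
>>>>>> 
>>>>>>    ! every entry of test_data should now be 1.d0 on every task;
>>>>>>    ! if not, MPI_IN_PLACE may be broken
>>>>>>    if (any(test_data(:) .ne. 1.d0)) then
>>>>>>       write(*,'(A,I0)') ' * MPI_IN_PLACE does not appear to work on task ', myid
>>>>>>    end if
>>>>>> 
>>>>>>    deallocate(test_data)
>>>>>>    call MPI_FINALIZE(mpierr)
>>>>>> 
>>>>>>  end program check_mpi_in_place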
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>> 
>> 

Volker Blum
Associate Professor
Ab Initio Materials Simulations
Duke University, MEMS Department 
144 Hudson Hall, Box 90300, Duke University, Durham, NC 27708, USA

volker.b...@duke.edu
https://aims.pratt.duke.edu
+1 (919) 660 5279
Twitter: Aimsduke

Office: 1111 Hudson Hall




_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
