Folks,

I am able to reproduce the issue on OS X (Sierra) with the stock gcc (which is actually clang) and ifort 17.0.4.


I will investigate this now.


Cheers,

Gilles

On 7/27/2017 9:28 AM, George Bosilca wrote:
Volker,

Unfortunately, I can't replicate it with icc. I tried on an x86_64 box with the Intel compiler chain 17.0.4 20170411, to no avail. I also tested the 3.0.0-rc1 tarball and the current master, and your test completes without errors in all cases.

Once you figure out an environment where you can consistently replicate the issue, I would suggest attaching to the processes and checking:
- that the MPI_IN_PLACE seen through the Fortran layer matches what the C layer expects (a sketch for this check follows below)
- which collective algorithm Open MPI uses
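
For the first check, a minimal sketch (program name illustrative; it relies on the non-standard LOC() extension, which both ifort and gfortran provide): print the address the Fortran layer associates with MPI_IN_PLACE and compare it, e.g. under a debugger attached to the same process, with the sentinel address the C layer tests against.

program print_in_place
  use mpi
  implicit none
  integer :: mpierr
  call MPI_INIT(mpierr)
  ! LOC() is a vendor extension that returns the address of its argument.
  ! As far as I understand, the Fortran MPI_IN_PLACE is a variable whose
  ! address (not its value) is what the bindings compare against, so printing
  ! it and comparing with the C-side sentinel is a quick sanity check.
  print *, 'Fortran-side MPI_IN_PLACE address: ', loc(MPI_IN_PLACE)
  call MPI_FINALIZE(mpierr)
end program print_in_place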

I have a "Fortran 101" level question. When you pass an array a(:) as argument, what exactly gets passed via the Fortran interface to the corresponding C function ?

  George.

On Wed, Jul 26, 2017 at 1:55 PM, Volker Blum <volker.b...@duke.edu> wrote:

    Thanks! Yes, trying with Intel 2017 would be very nice.

    > On Jul 26, 2017, at 6:12 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
    >
    > No, I don't have the Intel compilers (or didn't use them where they were
    > available). I used clang and gfortran. I can try on a Linux box with the
    > Intel 2017 compilers.
    >
    >   George.
    >
    >
    >
    > On Wed, Jul 26, 2017 at 11:59 AM, Volker Blum <volker.b...@duke.edu> wrote:
    > Did you use Intel Fortran 2017 as well?
    >
    > (I’m asking because I did see the same issue with a combination of an earlier
    > Intel Fortran 2017 version and Open MPI on an Intel/InfiniBand Linux HPC
    > machine … but not with Intel Fortran 2016 on the same machine. Perhaps I can
    > revive my access to that combination somehow.)
    >
    > Best wishes
    > Volker
    >
    > > On Jul 26, 2017, at 5:55 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
    > >
    > > I thought that maybe the underlying allreduce algorithm fails to support
    > > MPI_IN_PLACE correctly, but I can't replicate the failure on any machine
    > > (including OS X) with any number of processes.
    > >
    > >   George.
    > >
    > >
    > >
    > > On Wed, Jul 26, 2017 at 10:59 AM, Volker Blum <volker.b...@duke.edu> wrote:
    > > Thanks!
    > >
    > > I tried ‘use mpi’, which compiles fine.
    > >
    > > Same result as with ‘include mpif.h’, in that the output is
    > >
    > >  * MPI_IN_PLACE does not appear to work as intended.
    > >  * Checking whether MPI_ALLREDUCE works at all.
    > >  * Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.
    > >
    > > Hm. Any other thoughts?
    > >
    > > Thanks again!
    > > Best wishes
    > > Volker
    > >
    > > > On Jul 26, 2017, at 4:06 PM, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
    > > >
    > > > Volker,
    > > >
    > > > With mpi_f08, you have to declare
    > > >
    > > > Type(MPI_Comm) :: mpi_comm_global
    > > >
    > > > (I am afk and not 100% sure of the syntax)
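    > > >
    > > > A minimal sketch of those declarations, assuming standard mpi_f08 syntax
    > > > and the variable names used in the test (program name illustrative):
    > > >
    > > > program f08_handles
    > > >   use mpi_f08
    > > >   implicit none
    > > >   ! With mpi_f08, communicators are derived-type handles, so the
    > > >   ! assignment from MPI_COMM_WORLD needs a type(MPI_Comm) variable.
    > > >   type(MPI_Comm) :: mpi_comm_global
    > > >   integer :: n_tasks, myid, mpierr
    > > >   call MPI_Init(mpierr)
    > > >   mpi_comm_global = MPI_COMM_WORLD
    > > >   call MPI_Comm_size(mpi_comm_global, n_tasks, mpierr)
    > > >   call MPI_Comm_rank(mpi_comm_global, myid, mpierr)
    > > >   call MPI_Finalize(mpierr)
    > > > end program f08_handles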
    > > >
    > > > A simpler option is to
    > > >
    > > > use mpi
    > > >
    > > > Cheers,
    > > >
    > > > Gilles
    > > >
    > > > Volker Blum <volker.b...@duke.edu> wrote:
    > > >> Hi Gilles,
    > > >>
    > > >> Thank you very much for the response!
    > > >>
    > > >> Unfortunately, I don’t have access to a different system with the issue
    > > >> right now. As I said, it’s not new; it just keeps creeping up unexpectedly
    > > >> on different platforms. What puzzles me is that I’ve encountered the same
    > > >> problem, at low but non-negligible frequency, over a period of more than
    > > >> five years now.
    > > >>
    > > >> We can’t require Fortran 2008 in our application, unfortunately; the
    > > >> standard is still too new. Because we maintain a large application that has
    > > >> to run on a broad range of platforms, Fortran 2008 would not work for many
    > > >> of our users. In a few years, this will be different, but not yet.
    > > >>
    > > >> On gfortran: In our own tests, unfortunately, Intel Fortran consistently
    > > >> produced much faster executable code in the past. The latter observation may
    > > >> also change someday, but for us, the performance difference was an important
    > > >> constraint.
    > > >>
    > > >> I did suspect mpif.h, too. Not sure how best to test this hypothesis,
    > > >> however.
    > > >>
    > > >> Just replacing
    > > >>
    > > >>> include 'mpif.h'
    > > >>> with
    > > >>> use mpi_f08
    > > >>
    > > >> did not work for me.
    > > >>
    > > >> This produces a number of compilation errors:
    > > >>
    > > >> blum:/Users/blum/codes/fhi-aims/openmpi_test> mpif90 check_mpi_in_place_08.f90 -o check_mpi_in_place_08.x
    > > >> check_mpi_in_place_08.f90(55): error #6303: The assignment operation or the binary expression operation is invalid for the data types of the two operands.   [MPI_COMM_WORLD]
    > > >>   mpi_comm_global = MPI_COMM_WORLD
    > > >> ----------------------^
    > > >> check_mpi_in_place_08.f90(57): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_COMM_SIZE]
    > > >>   call MPI_COMM_SIZE(mpi_comm_global, n_tasks, mpierr)
    > > >> ---------^
    > > >> check_mpi_in_place_08.f90(58): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_COMM_RANK]
    > > >>   call MPI_COMM_RANK(mpi_comm_global, myid, mpierr)
    > > >> ---------^
    > > >> check_mpi_in_place_08.f90(75): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_ALLREDUCE]
    > > >>   call MPI_ALLREDUCE(MPI_IN_PLACE, &
    > > >> ---------^
    > > >> check_mpi_in_place_08.f90(94): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_ALLREDUCE]
    > > >>   call MPI_ALLREDUCE(check_success, aux_check_success, 1, MPI_LOGICAL, &
    > > >> ---------^
    > > >> check_mpi_in_place_08.f90(119): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_ALLREDUCE]
    > > >>      call MPI_ALLREDUCE(test_data(:), &
    > > >> ------------^
    > > >> check_mpi_in_place_08.f90(140): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_ALLREDUCE]
    > > >>      call MPI_ALLREDUCE(check_conventional_mpi, aux_check_success, 1, MPI_LOGICAL, &
    > > >> ------------^
    > > >> compilation aborted for check_mpi_in_place_08.f90 (code 1)
    > > >>
    > > >> This is an interesting result, however … what might I be missing? Another
    > > >> use statement?
    > > >>
    > > >> Best wishes
    > > >> Volker
    > > >>
    > > >>> On Jul 26, 2017, at 2:53 PM, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
    > > >>>
    > > >>> Volker,
    > > >>>
    > > >>> Thanks, I will have a look at it.
    > > >>>
    > > >>> Meanwhile, if you can reproduce this issue on a more mainstream
    > > >>> platform (e.g. Linux + gfortran), please let me know.
    > > >>>
    > > >>> Since you are using ifort, Open MPI was built with the Fortran 2008
    > > >>> bindings, so you can replace
    > > >>> include 'mpif.h'
    > > >>> with
    > > >>> use mpi_f08
    > > >>> and, who knows, that might solve your issue.
    > > >>>
    > > >>>
    > > >>> Cheers,
    > > >>>
    > > >>> Gilles
    > > >>>
    > > >>> On Wed, Jul 26, 2017 at 5:22 PM, Volker Blum <volker.b...@duke.edu> wrote:
    > > >>>> Dear Gilles,
    > > >>>>
    > > >>>> Thank you very much for the fast answer.
    > > >>>>
    > > >>>> Darn. I feared it might not occur on all platforms: my former MacBook
    > > >>>> (with an older Open MPI version) no longer exhibited the problem, while a
    > > >>>> different Linux/Intel machine did last December, etc.
    > > >>>>
    > > >>>> On this specific machine, the configure line is
    > > >>>>
    > > >>>> ./configure CC=gcc FC=ifort F77=ifort
    > > >>>>
    > > >>>> ifort version 17.0.4
    > > >>>>
    > > >>>> blum:/Users/blum/software/openmpi-3.0.0rc1> gcc -v
    > > >>>> Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
    > > >>>> Apple LLVM version 8.1.0 (clang-802.0.42)
    > > >>>> Target: x86_64-apple-darwin16.6.0
    > > >>>> Thread model: posix
    > > >>>> InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
    > > >>>>
    > > >>>> The full test program is appended.
    > > >>>>
    > > >>>> Compilation:
    > > >>>>
    > > >>>> mpif90 check_mpi_in_place.f90
    > > >>>>
    > > >>>> blum:/Users/blum/codes/fhi-aims/openmpi_test> which mpif90
    > > >>>> /usr/local/openmpi-3.0.0rc1/bin/mpif90
    > > >>>>
    > > >>>> blum:/Users/blum/codes/fhi-aims/openmpi_test> which mpirun
    > > >>>> /usr/local/openmpi-3.0.0rc1/bin/mpirun
    > > >>>>
    > > >>>> blum:/Users/blum/codes/fhi-aims/openmpi_test> mpirun -np 2 a.out
    > > >>>> * MPI_IN_PLACE does not appear to work as intended.
    > > >>>> * Checking whether MPI_ALLREDUCE works at all.
    > > >>>> * Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.
    > > >>>>
    > > >>>> blum:/Users/blum/codes/fhi-aims/openmpi_test> mpirun -np 1 a.out
    > > >>>> * MPI_IN_PLACE does not appear to work as intended.
    > > >>>> * Checking whether MPI_ALLREDUCE works at all.
    > > >>>> * Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.
    > > >>>>
    > > >>>> Hopefully there are no trivial mistakes in the test case. I just spent a
    > > >>>> few days tracing this issue through a fairly large code, which is where the
    > > >>>> issue originally arose (and where it leads to wrong numbers).
    > > >>>>
    > > >>>> Best wishes
    > > >>>> Volker
    > > >>>>
    > > >>>>
    > > >>>>
    > > >>>>
    > > >>>>> On Jul 26, 2017, at 9:46 AM, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
    > > >>>>>
    > > >>>>> Volker,
    > > >>>>>
    > > >>>>> I was unable to reproduce this issue on Linux.
    > > >>>>>
    > > >>>>> Can you please post your full configure command line, your GNU
    > > >>>>> compiler version, and the full test program?
    > > >>>>>
    > > >>>>> Also, how many MPI tasks are you running?
    > > >>>>>
    > > >>>>> Cheers,
    > > >>>>>
    > > >>>>> Gilles
    > > >>>>>
    > > >>>>> On Wed, Jul 26, 2017 at 4:25 PM, Volker Blum <volker.b...@duke.edu> wrote:
    > > >>>>>> Hi,
    > > >>>>>>
    > > >>>>>> I tried openmpi-3.0.0rc1.tar.gz using Intel Fortran 2017 and gcc on a
    > > >>>>>> current macOS system. For this version, it seems to me that MPI_IN_PLACE
    > > >>>>>> returns incorrect results (while other MPI implementations, including
    > > >>>>>> some past Open MPI versions, work fine).
    > > >>>>>>
    > > >>>>>> This can be seen with a simple Fortran example code, shown below. In
    > > >>>>>> the test, the values of all entries of an array “test_data” should be
    > > >>>>>> 1.0d0 if the behavior were as intended. However, the version of Open MPI
    > > >>>>>> I have returns 0.d0 instead.
    > > >>>>>>
    > > >>>>>> I’ve seen this behavior on some other compute platforms too, in the
    > > >>>>>> past, so it wasn’t new to me. Still, I thought that this time, I’d ask.
    > > >>>>>> Any thoughts?
    > > >>>>>>
    > > >>>>>> Thank you,
    > > >>>>>> Best wishes
    > > >>>>>> Volker
    > > >>>>>>
    > > >>>>>>  ! size of test data array
    > > >>>>>>  integer :: n_data
    > > >>>>>>
    > > >>>>>>  ! array that contains test data for MPI_IN_PLACE
    > > >>>>>>  real*8, allocatable :: test_data(:)
    > > >>>>>>
    > > >>>>>>  integer :: mpierr
    > > >>>>>>
    > > >>>>>>  n_data = 10
    > > >>>>>>
    > > >>>>>>  allocate(test_data(n_data),stat=mpierr)
    > > >>>>>>
    > > >>>>>>  ! seed test data array for allreduce call below
    > > >>>>>>  if (myid.eq.0) then
    > > >>>>>>     test_data(:) = 1.d0
    > > >>>>>>  else
    > > >>>>>>     test_data(:) = 0.d0
    > > >>>>>>  end if
    > > >>>>>>
    > > >>>>>>  ! Sum the test_data array over all MPI tasks
    > > >>>>>>  call MPI_ALLREDUCE(MPI_IN_PLACE, &
    > > >>>>>>       test_data(:), &
    > > >>>>>>       n_data, &
    > > >>>>>>       MPI_DOUBLE_PRECISION, &
    > > >>>>>>       MPI_SUM, &
    > > >>>>>>       mpi_comm_global, &
    > > >>>>>>       mpierr )
    > > >>>>>>
    > > >>>>>>  ! The value of all entries of test_data should now be 1.d0 on all MPI tasks.
    > > >>>>>>  ! If that is not the case, then the MPI_IN_PLACE flag may be broken.
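    > > >>>>>>
    > > >>>>>> For completeness, a self-contained sketch of the same check (this is only
    > > >>>>>> an illustration, not the appended full program; it assumes MPI_COMM_WORLD
    > > >>>>>> in place of mpi_comm_global and 'use mpi' for the bindings):
    > > >>>>>>
    > > >>>>>> program check_in_place
    > > >>>>>>   use mpi
    > > >>>>>>   implicit none
    > > >>>>>>   integer, parameter :: n_data = 10
    > > >>>>>>   real*8 :: test_data(n_data)
    > > >>>>>>   integer :: myid, n_tasks, mpierr
    > > >>>>>>   call MPI_INIT(mpierr)
    > > >>>>>>   call MPI_COMM_RANK(MPI_COMM_WORLD, myid, mpierr)
    > > >>>>>>   call MPI_COMM_SIZE(MPI_COMM_WORLD, n_tasks, mpierr)
    > > >>>>>>   ! seed the data: rank 0 holds ones, all other ranks hold zeros
    > > >>>>>>   if (myid.eq.0) then
    > > >>>>>>      test_data(:) = 1.d0
    > > >>>>>>   else
    > > >>>>>>      test_data(:) = 0.d0
    > > >>>>>>   end if
    > > >>>>>>   ! in-place sum over all ranks; every entry should end up exactly 1.d0
    > > >>>>>>   call MPI_ALLREDUCE(MPI_IN_PLACE, test_data, n_data, &
    > > >>>>>>        MPI_DOUBLE_PRECISION, MPI_SUM, MPI_COMM_WORLD, mpierr)
    > > >>>>>>   if (myid.eq.0) then
    > > >>>>>>      if (all(test_data.eq.1.d0)) then
    > > >>>>>>         print *, '* MPI_IN_PLACE appears to work as intended.'
    > > >>>>>>      else
    > > >>>>>>         print *, '* MPI_IN_PLACE does not appear to work as intended.'
    > > >>>>>>      end if
    > > >>>>>>   end if
    > > >>>>>>   call MPI_FINALIZE(mpierr)
    > > >>>>>> end program check_in_place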
    > > >>>>>>
    > > >>>>>>
    > > >>>>>>
    > > >>>>>>
    > > >>>>>>
    > > >>>>>>

    Volker Blum
    Associate Professor
    Ab Initio Materials Simulations
    Duke University, MEMS Department
    144 Hudson Hall, Box 90300, Duke University, Durham, NC 27708, USA

    volker.b...@duke.edu
    https://aims.pratt.duke.edu
    +1 (919) 660 5279
    Twitter: Aimsduke

    Office: 1111 Hudson Hall







