Intel apparently changed something in the 2016 compiler (compared to the 2015 compiler), and as a result the Open MPI configure script decided to use a different pragma. Per the issue I opened on GitHub, I need to look at the configure script and see what's going wrong.
> On Sep 24, 2015, at 4:51 PM, Fabrice Roy <fabrice....@obspm.fr> wrote:
>
> Hi,
>
> I have made some other tests. I don't know if it can help you, but here is
> what I observed.
>
> Using the array constructor [] solves the problem for a scalar, as someone
> wrote on the Intel forum. The same code with tok declared as an integer and
>
>     call mpi_bcast([tok],1,mpi_integer,0,mpi_comm_world,ierr)
>
> works fine.
>
> But I get the same compilation error (no matching specific subroutine) if
> tok is a 2-D array:
>
>     integer, dimension(2,2) :: tok
>     call mpi_bcast(tok,1,mpi_integer,0,mpi_comm_world,ierr)
>
> does not compile.
>
> In this case I can also solve the problem with the array constructor, but I
> don't understand why I have to use it. And if I try to send only a part of
> my 2-D array, it doesn't work:
>
>     call mpi_bcast([tok(1,2)],1,mpi_integer,0,mpi_comm_world,ierr)
>
> compiles, but I don't get the correct result.
>
> Thanks for your help,
>
> Fabrice
>
>
> On 24/09/2015 16:32, Jeff Squyres (jsquyres) wrote:
>> Yes -- typo -- it's not a problem with mpi_f08; it's a problem with the
>> mpi module using the "ignore TKR" implementation.
>>
>> See https://github.com/open-mpi/ompi/issues/937.
>>
>>
>>> On Sep 24, 2015, at 4:30 PM, Gilles Gouaillardet
>>> <gilles.gouaillar...@gmail.com> wrote:
>>>
>>> Jeff,
>>>
>>> I am not sure whether you made a typo or not ...
>>>
>>> The issue only occurs with the f90 bindings (aka "use mpi");
>>> the f08 bindings (aka "use mpi_f08") work fine.
>>>
>>> Cheers,
>>>
>>> Gilles
>>>
>>> On Thursday, September 24, 2015, Jeff Squyres (jsquyres)
>>> <jsquy...@cisco.com> wrote:
>>> I looked into the MPI_BCAST problem -- I think we (Open MPI) have a
>>> problem with the mpi_f08 bindings and the Intel 2016 compilers.
>>>
>>> It looks like configure is choosing to generate a different pragma for
>>> Intel 2016 vs. Intel 2015 compilers, and that's causing a problem.
>>>
>>> Let me look into this a little more...
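[A likely explanation for the last symptom Fabrice reports: an array constructor such as [tok(1,2)] is an expression, so the compiler passes MPI_BCAST a temporary copy. The broadcast writes into that temporary, which is then discarded (copy-in without copy-out), so the received value never lands in tok. Under that assumption, here is a sketch of a workaround that broadcasts one element of a 2-D array through an explicit temporary scalar; the program and variable names are hypothetical, not from the thread:]

    ! Sketch: broadcast one element of a 2-D array via an explicit
    ! temporary, so the received value is copied back into the array.
    ! (An array constructor like [tok(1,2)] is an expression: MPI_BCAST
    ! would write into a discarded temporary on the receiving ranks.)
    program bcast_element
      use mpi
      implicit none
      integer :: pid, ierr, tmp
      integer, dimension(2,2) :: tok

      call mpi_init(ierr)
      call mpi_comm_rank(mpi_comm_world, pid, ierr)
      tok = 0
      if (pid == 0) tok(1,2) = 42

      tmp = tok(1,2)                                        ! copy in
      call mpi_bcast(tmp, 1, mpi_integer, 0, mpi_comm_world, ierr)
      tok(1,2) = tmp                                        ! copy out

      call mpi_finalize(ierr)
    end program bcast_element

[Broadcasting the whole array, with a count matching its size, avoids the temporary entirely.]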
>>>> On Sep 24, 2015, at 11:09 AM, Fabrice Roy <fabrice....@obspm.fr> wrote:
>>>>
>>>> Hello,
>>>>
>>>> Thanks for the quick answer. I think I cannot use mpi_f08 in my code
>>>> because I am also using parallel HDF5, which does not seem to be
>>>> compatible with the Fortran 2008 module. I will ask Intel what they
>>>> think about this problem.
>>>>
>>>> Thanks,
>>>>
>>>> Fabrice
>>>>
>>>>
>>>> On 24/09/2015 02:18, Gilles Gouaillardet wrote:
>>>>> Fabrice,
>>>>>
>>>>> I do not fully understand the root cause of this error, and you might
>>>>> want to ask the Intel folks to comment on that.
>>>>>
>>>>> That being said, and since this compiler does support Fortran 2008, I
>>>>> strongly encourage you to use mpi_f08 instead of use mpi.
>>>>>
>>>>> A happy feature/side effect is that your program compiles and runs
>>>>> just fine if you use the mpi_f08 module (!)
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Gilles
>>>>>
>>>>> On 9/24/2015 1:00 AM, Fabrice Roy wrote:
>>>>>> program testmpi
>>>>>>   use mpi
>>>>>>   implicit none
>>>>>>
>>>>>>   integer :: pid
>>>>>>   integer :: ierr
>>>>>>   integer :: tok
>>>>>>
>>>>>>   call mpi_init(ierr)
>>>>>>   call mpi_comm_rank(mpi_comm_world, pid, ierr)
>>>>>>   if (pid == 0) then
>>>>>>     tok = 1
>>>>>>   else
>>>>>>     tok = 0
>>>>>>   end if
>>>>>>   call mpi_bcast(tok, 1, mpi_integer, 0, mpi_comm_world, ierr)
>>>>>>   call mpi_finalize(ierr)
>>>>>> end program testmpi
>>>>> _______________________________________________
>>>>> users mailing list
>>>>> us...@open-mpi.org
>>>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>> Link to this post:
>>>>> http://www.open-mpi.org/community/lists/users/2015/09/27657.php
>>>>
>>>> --
>>>> Fabrice Roy
>>>> Scientific computing engineer
>>>> LUTH - CNRS / Observatoire de Paris
>>>> 5 place Jules Janssen
>>>> 92190 Meudon
>>>> Tel.: 01 45 07 71 20

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
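[For reference, Gilles's suggestion amounts to a one-line change in Fabrice's test program. A sketch of the same test with the Fortran 2008 bindings, assuming Open MPI was built with mpi_f08 support; note that in mpi_f08 the handles are derived types and the ierror argument is optional:]

    program testmpi_f08
      use mpi_f08        ! Fortran 2008 bindings: type-checked, no ignore-TKR tricks
      implicit none
      integer :: pid
      integer :: tok

      call mpi_init()
      call mpi_comm_rank(mpi_comm_world, pid)
      if (pid == 0) then
        tok = 1
      else
        tok = 0
      end if
      call mpi_bcast(tok, 1, mpi_integer, 0, mpi_comm_world)
      call mpi_finalize()
    end program testmpi_f08

[Because mpi_f08 declares real interfaces instead of suppressing argument checking, the compiler can diagnose genuine argument mismatches without rejecting valid calls like the scalar MPI_BCAST above.]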