Hi,
I have built Open MPI from the nightly snapshot v1.10.0-73-ge27ab85 and
everything seems to work fine.
Thanks a lot!
Fabrice
On 25/09/2015 07:38, Jeff Squyres (jsquyres) wrote:
Fabrice --
I have committed a fix to our development master; it is pending moving over to
the v1.10 and v2.x release branches (see
https://github.com/open-mpi/ompi-release/pull/610 and
https://github.com/open-mpi/ompi-release/pull/611, respectively). Once the fix
is in the release branches, it will be included in the nightly snapshot tarballs.
Intel apparently changed something in their 2016 compiler (compared to the 2015
compiler); the Open MPI configure script decided to use a different pragma.
Per the issue I opened on GitHub, I need to look at the configure script and
see what's going wrong.
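For background, the pragma in question is one of the compiler-specific
"ignore TKR" directives that configure probes for. The sketch below is only
illustrative (the directive spellings are the documented vendor forms; the
interface itself is made up):

module ignore_tkr_sketch
  interface
     subroutine bcast_like(buffer, count, ierr)
        !DEC$ ATTRIBUTES NO_ARG_CHECK :: buffer   ! Intel spelling
        !GCC$ ATTRIBUTES NO_ARG_CHECK :: buffer   ! gfortran spelling
        !DIR$ IGNORE_TKR buffer                   ! Cray / PGI spelling
        !$PRAGMA IGNORE_TKR buffer                ! Oracle/Sun spelling
        ! with an active directive above, type/kind/rank checking on
        ! buffer is suppressed, so one interface matches any buffer
        integer, dimension(*) :: buffer
        integer, intent(in) :: count
        integer, intent(out) :: ierr
     end subroutine
  end interface
end module

configure probes which spelling the compiler accepts and generates the mpi
module accordingly; the report above suggests the probe picks something for
Intel 16.0 that no longer actually disables the checks.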
> On Sep 24, 2015, at 4:51 PM, Fabrice wrote:
Hi,
I have run some more tests. I don't know whether this helps, but here
is what I observed.
Using the array constructor [] solves the problem for a scalar, as
someone wrote on the Intel forum.
The same code with tok declared as an integer and the broadcast written as
call mpi_bcast([tok], 1, mpi_integer, 0, mpi_comm_world, ierr) compiles
cleanly.
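To make the observation concrete, a minimal sketch (the caveat in the second
comment is plain Fortran semantics, not something discussed in the thread):

  integer :: tok, ierr
  ! scalar actual argument: rejected by Intel 16.0 with "use mpi"
  call mpi_bcast(tok, 1, mpi_integer, 0, mpi_comm_world, ierr)
  ! [tok] is a rank-1 array expression, so this compiles -- but an array
  ! constructor is not definable, so on non-root ranks the broadcast
  ! value lands in a temporary and tok itself is never updated
  call mpi_bcast([tok], 1, mpi_integer, 0, mpi_comm_world, ierr)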
Yes -- typo -- it's not a problem with mpi_f08, it's a problem with the mpi
module using the "ignore TKR" implementation.
See https://github.com/open-mpi/ompi/issues/937.
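For readers unfamiliar with the term: "ignore TKR" means one interface
accepts a buffer of any type, kind, and rank via a vendor directive. The
standardized way to express the same thing (TS 29113, which the mpi_f08
bindings can use where the compiler supports it) is assumed type and
assumed rank, sketched here:

  subroutine any_buffer(buf)
     type(*), dimension(..) :: buf   ! assumed type, assumed rank: matches any buffer
  end subroutine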
> On Sep 24, 2015, at 4:30 PM, Gilles Gouaillardet
> wrote:
>
> Jeff,
>
> I am not sure whether you made a typo or not ...
BTW, I created this GitHub issue to track the problem:
https://github.com/open-mpi/ompi/issues/937
> On Sep 24, 2015, at 4:27 PM, Jeff Squyres (jsquyres)
> wrote:
>
> I looked into the MPI_BCAST problem -- I think we (Open MPI) have a problem
> with the mpi_f08 bindings and the Intel 2016 compilers.
Jeff,
I am not sure whether you made a typo or not ...
the issue only occurs with the f90 bindings (aka use mpi);
the f08 bindings (aka use mpi_f08) work fine
Cheers,
Gilles
On Thursday, September 24, 2015, Jeff Squyres (jsquyres)
wrote:
> I looked into the MPI_BCAST problem -- I think we (Open MPI) have a problem
> with the mpi_f08 bindings and the Intel 2016 compilers.
I looked into the MPI_BCAST problem -- I think we (Open MPI) have a problem
with the mpi_f08 bindings and the Intel 2016 compilers.
It looks like configure is choosing to generate a different pragma for Intel
2016 vs. Intel 2015 compilers, and that's causing a problem.
Let me look into this a little more.
Hello,
Thanks for the quick answer.
I think I cannot use mpi_f08 in my code because I am also using parallel
HDF5, which does not seem to be compatible with the Fortran 2008 module.
I will ask Intel what they think about this problem.
Thanks,
Fabrice
On 24/09/2015 02:18, Gilles Gouaillardet wrote:
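If mixing the two modules is acceptable, one possible middle ground
(untested here, and relying on the fact that mpi_f08 handles expose their
integer equivalent as %MPI_VAL, while HDF5's h5pset_fapl_mpio_f takes plain
integer handles) is to keep use mpi_f08 in your own code and pass the
integer components to HDF5:

program h5_with_f08
  use mpi_f08
  use hdf5
  implicit none
  integer(hid_t) :: plist_id
  integer :: hdferr, ierr
  call MPI_Init(ierr)
  call h5open_f(hdferr)
  call h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, hdferr)
  ! HDF5's Fortran API expects integer MPI handles; the f08 handle
  ! types carry the equivalent integer in their MPI_VAL component
  call h5pset_fapl_mpio_f(plist_id, MPI_COMM_WORLD%mpi_val, &
                          MPI_INFO_NULL%mpi_val, hdferr)
  call h5pclose_f(plist_id, hdferr)
  call h5close_f(hdferr)
  call MPI_Finalize(ierr)
end program h5_with_f08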
Fabrice,
I do not fully understand the root cause of this error, and you might
want to ask the Intel folks to comment on that.
That being said, and since this compiler does support Fortran 2008, I
strongly encourage you to
use mpi_f08
instead of
use mpi
A happy feature/side effect is that your MPI calls then get genuine
compile-time argument checking.
Hello,
I have built Open MPI 1.10.0 using Intel compilers 16.0.0.
When I try to compile the following test code:

program testmpi
  use mpi
  implicit none
  integer :: pid
  integer :: ierr
  integer :: tok
  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, pid, ierr)
  ! the scalar buffer in the next call is what Intel 16.0 rejects
  call mpi_bcast(tok, 1, mpi_integer, 0, mpi_comm_world, ierr)
  call mpi_finalize(ierr)
end program testmpi

the compiler reports an error on the mpi_bcast call.
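For comparison, a minimal sketch of the same program ported to the mpi_f08
bindings recommended above; per this thread, it is only the "use mpi"
version that trips Intel 16.0:

program testmpi_f08
  use mpi_f08            ! typed handles + compile-time argument checking
  implicit none
  integer :: pid, ierr, tok
  call MPI_Init(ierr)
  ! MPI_COMM_WORLD is type(MPI_Comm) here, not a bare integer
  call MPI_Comm_rank(MPI_COMM_WORLD, pid, ierr)
  call MPI_Bcast(tok, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
  call MPI_Finalize(ierr)   ! ierr is optional in mpi_f08; kept for symmetry
end program testmpi_f08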