Re: [OMPI users] Problem using mpifort(Intel)

2015-09-25 Thread Jeff Squyres (jsquyres)
This problem was literally reported just the other day; it was partially fixed earlier today, and the rest of the fix will be committed shortly. The Intel 2016 compiler suite changed something in how they handle the !GCC pragma (i.e., they didn't handle it at all before, and now they only partially ...

[OMPI users] Problem using mpifort(Intel)

2015-09-25 Thread Julien Bodart
Hi, this problem has probably been discussed already, but I could not find it: I am trying to compile Open MPI with the Intel 16 compilers. mpicc and mpicxx work, but I have trouble with mpifort. Trying to compile one of the example programs, I get the following error message: ring_usempi.f90(35): error #6285 ...

Re: [OMPI users] How does MPI_Allreduce work?

2015-09-25 Thread Rolf vandeVaart
In the case of reductions, yes, we copy into host memory so we can do the reduction. For other collectives or point-to-point communication, GPUDirect RDMA will be used (for smaller messages). Rolf
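
A minimal sketch of the point-to-point case Rolf mentions, assuming a CUDA-aware Open MPI build and the CUDA runtime; whether GPUDirect RDMA is actually used depends on the build, the interconnect, and the message size, so this only shows passing device pointers straight to MPI (example code, not from the thread):

/* Point-to-point on GPU buffers with a CUDA-aware Open MPI build.
 * Whether GPUDirect RDMA kicks in depends on hardware, build options,
 * and message size, per Rolf's description. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int rank, n = 1 << 16;
    double *d_buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&d_buf, n * sizeof(double));   /* device memory */

    if (rank == 0)
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}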

Re: [OMPI users] How does MPI_Allreduce work?

2015-09-25 Thread Yang Zhang
Hi Rolf, thanks very much for the info! So with a CUDA-aware build, Open MPI still has to copy all the data into host memory first, and then do send/recv on the host memory? I thought Open MPI would use GPUDirect and RDMA to send/recv GPU memory directly. I will try a debug build and see what it ...

Re: [OMPI users] How does MPI_Allreduce work?

2015-09-25 Thread Rolf vandeVaart
Hello Yang: It is not clear to me if you are asking about a CUDA-aware build of Open MPI where you do the MPI_Allreduce() on the GPU buffer, or if you are handling staging the GPU buffer into host memory yourself and then calling MPI_Allreduce(). Either way, they are somewhat similar. With CUDA-aware, the ...
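
A rough sketch of the two scenarios Rolf distinguishes, assuming a CUDA-aware Open MPI build for the first; buffer names and sizes are illustrative, not from the thread:

/* Two ways to reduce data that lives in GPU memory. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

void allreduce_patterns(double *d_send, double *d_recv, int count)
{
    /* (a) CUDA-aware build: hand the device pointers directly to MPI.
     *     For a reduction, Open MPI still stages the data through host
     *     memory internally so it can apply the reduction operator. */
    MPI_Allreduce(d_send, d_recv, count, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* (b) Manual staging: copy to host, reduce on host buffers, copy back. */
    double *h_send = malloc(count * sizeof(double));
    double *h_recv = malloc(count * sizeof(double));
    cudaMemcpy(h_send, d_send, count * sizeof(double), cudaMemcpyDeviceToHost);
    MPI_Allreduce(h_send, h_recv, count, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    cudaMemcpy(d_recv, h_recv, count * sizeof(double), cudaMemcpyHostToDevice);
    free(h_send);
    free(h_recv);
}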

[OMPI users] Missing pointer in MPI_Request / MPI_Ibarrier in documentation for 1.10.0

2015-09-25 Thread Harald Servat
Dear all, I'd like to point out that the manual page for the C syntax of MPI_Ibarrier in Open MPI v1.10.0 is missing the pointer in the MPI_Request argument. See: https://www.open-mpi.org/doc/v1.10/man3/MPI_Ibarrier.3.php https://www.open-mpi.org/doc/v1.10/man3/MPI_Barrier.3.php Best,
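
For reference, a minimal C usage of MPI_Ibarrier with the MPI_Request pointer the man page should show (example code, not from the thread):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Ibarrier(MPI_COMM_WORLD, &req);   /* note &req: the MPI_Request * argument */
    /* ... overlap other work here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Finalize();
    return 0;
}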

Re: [OMPI users] Problem using Open MPI 1.10.0 built with Intel compilers 16.0.0

2015-09-25 Thread Jeff Squyres (jsquyres)
Fabrice -- I have committed a fix to our development master; it is pending a move over to the v1.10 and v2.x release branches (see https://github.com/open-mpi/ompi-release/pull/610 and https://github.com/open-mpi/ompi-release/pull/611, respectively). Once the fix is in the release branches, it ...