Hi, Arturo.
Usually, for OpenMPI+UCX we use the following recipe
for UCX:
./configure --prefix=/path/to/ucx-cuda-install \
    --with-cuda=/usr/local/cuda --with-gdrcopy=/usr
make -j install
then OpenMPI:
./configure --with-cuda=/usr/local/cuda \
    --with-ucx=/path/to/ucx-cuda-install
make -j install
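A quick sanity check afterwards (assuming the ucx_info and ompi_info binaries from these installs are on your PATH; the paths are just the example prefixes above):
$ ucx_info -d | grep -i cuda
$ ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
The first should list the CUDA transports (cuda_copy, cuda_ipc), and the second should end in ":value:true" if Open MPI was really built with CUDA support.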
These two links will likely be able to help you:
http://www.mellanox.com/page/products_dyn?product_family=295&mtag=gpudirect
https://github.com/Mellanox/nv_peer_memory
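Since GPUDirect RDMA also depends on the kernel side, it is worth checking that the relevant modules are loaded on the compute nodes (module names assume the standard nv_peer_memory and gdrcopy packages):
$ lsmod | grep nv_peer_mem
$ lsmod | grep gdrdrv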
On Fri, Dec 14, 2018 at 4:50 PM Weicheng Xue wrote:
> Hi all,
>
> I am now having a GPU Direct issue. I loaded gcc/5.2.0,
don't compile with CUDA support.
>
> That's kinda the point / it's probably obvious, but I thought I would
> clarify, anyway. :-)
>
>
> > On Oct 30, 2018, at 4:29 PM, Akshay Venkatesh
> wrote:
> >
> > +1 to what Jeff said.
> >
> > So you
> There is actually an API call that will tell you if
> your Open MPI has CUDA support:
>
> https://www.open-mpi.org/doc/v3.1/man3/MPIX_Query_cuda_support.3.php
>
> > On Oct 30, 2018, at 3:14 PM, Akshay Venkatesh
> wrote:
> >
> The first one is the critical one.
> ompi_info -a | grep "xtensions" returns
> MPI extensions: affinity, cuda
>
> It seems the two outputs are in conflict, what does that mean?
>
>
> On Tue, Oct 30, 2018 at 8:50 PM Akshay Venkatesh
> wrote:
Andrei,
I generally check with one of these two:
$ ompi_info -a | grep "\-with\-cuda"
Configure command line: '--prefix=$HOME/ompi/build-cuda'
'--enable-mpirun-prefix-by-default' '--with-cuda=/usr/local/cuda'
'--with-ucx=$HOME/ucx-github/build'
'--with-ucx-libdir=$HOME/ucx-github/build/lib' '--
Hi Siegmar,
Would it be possible for you to provide the source to reproduce the issue?
Thanks
On Tue, Mar 21, 2017 at 9:52 AM, Sylvain Jeaugey
wrote:
> Hi Siegmar,
>
> I think this "NVIDIA : ..." error message comes from the fact that you add
> CUDA includes in the C*FLAGS. If you just use --with