Actually, that GPUDirect support is not yet officially released, but you may want
to contact h...@mellanox.com to get the needed info and to find out when the
drivers will be released. Thanks!
- Pak
Hi Brice,
You will need the MLNX_OFED with GPUDirect support in order for this to work. I
will check whether there's a release of it that supports SLES and let you know.
[pak@maia001 ~]$ /sbin/modinfo ib_core
filename:
/lib/modules/2.6.18-194.nvel5/updates/kernel/drivers/infiniband/core/ib_core.ko
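A rough way to double-check which stack the module comes from (a sketch; ofed_info is only present on OFED/MLNX_OFED installs, and the exact version string varies by build):

$ ofed_info -s                              # MLNX_OFED builds report something like MLNX_OFED_LINUX-...
$ /sbin/modinfo ib_core | grep ^filename    # should point into the OFED "updates" tree, as above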
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Brice Goglin
Sent: Monday, February 28, 2011 2:14 PM
To: Open MPI Users
Subject: Re: [OMPI users] anybody tried OMPI with gpudirect?
On 28/02/2011 19:49, Rolf vandeVaart wrote:
> For the GPU Direct to work with Infiniband, you need to get some updated OFED
> bits from your Infiniband vendor.
>
> In terms of checking the driver updates, you can do a grep on the string
> get_driver_pages in the file /proc/kallsyms. If it is there, then the Linux
> kernel is updated correctly.
For the GPU Direct to work with Infiniband, you need to get some updated OFED
bits from your Infiniband vendor.
In terms of checking the driver updates, you can do a grep on the string
get_driver_pages in the file /proc/kallsyms. If it is there, then the Linux
kernel is updated correctly.
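For example, a quick check along these lines should show whether the patched kernel bits are in place (a sketch based on the symbol name mentioned above):

$ grep get_driver_pages /proc/kallsyms      # a matching line means the kernel has the GPUDirect patch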
Hello,
I am running into the following issue while trying to run osu_latency:
--
-bash-3.2$ mpiexec --mca btl openib,self -mca btl_openib_warn_default_gid_prefix 0 \
  -np 2 --hostfile mpihosts \
  /home/jagga/osu-micro-benchmarks-3.3/openmpi/ofed-1.5.2/bin/osu_latency
# OSU MPI Latency Test v3.3
# Size          Latency (us)
More specifically -- ensure that LD_LIBRARY_PATH is set properly *on all nodes
where you are running Open MPI processes*.
For example, if you're using a hostfile to launch across multiple machines,
ensure that your shell startup files (e.g., .bashrc) are set up to set your
LD_LIBRARY_PATH properly.
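A minimal sketch of what that can look like, assuming hypothetical install locations (/opt/openmpi-1.5 and /usr/local/cuda are placeholders for wherever Open MPI and CUDA actually live on your nodes):

# In ~/.bashrc on every node listed in the hostfile (paths are placeholders):
export LD_LIBRARY_PATH=/opt/openmpi-1.5/lib:/usr/local/cuda/lib64:$LD_LIBRARY_PATH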
On 28/02/2011 17:30, Rolf vandeVaart wrote:
> Hi Brice:
> Yes, I have tried OMPI 1.5 with gpudirect and it worked for me. You
> definitely need the patch or you will see the behavior just as you described,
> a hang. One thing you could try is disabling the large message RDMA in OMPI
> and see if that works. That can be done by adjusting the openib BTL flags.
Hi Brice:
Yes, I have tried OMPI 1.5 with gpudirect and it worked for me. You definitely
need the patch or you will see the behavior just as you described, a hang. One
thing you could try is disabling the large message RDMA in OMPI and see if that
works. That can be done by adjusting the openib BTL flags.
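A sketch of such a run, assuming the openib BTL flags form a bitmask where 1 means send/recv only (so the large-message RDMA put/get paths are disabled); the test binary name is a placeholder:

$ mpirun --mca btl openib,self --mca btl_openib_flags 1 -np 2 --hostfile mpihosts ./gpudirect_pingpong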
Hello,
I am trying to play with nvidia's gpudirect. The test program given with
the gpudirect tarball just does a basic MPI ping-pong between two
processes that allocate their buffers with cudaMallocHost instead of
malloc. It seems to work with Intel MPI, but Open MPI 1.5 hangs in the
first MPI_Send.