Re: [OMPI users] bug in CUDA support for dual-processor systems?

2012-07-31 Thread Zbigniew Koza
Thanks for the quick reply. I do not know much about low-level CUDA and IPC, but there is no problem using the high-level CUDA API to determine whether device A can talk to device B via GPUDirect (cudaDeviceCanAccessPeer). Then, for such connections, one only needs to call cudaDeviceEnablePeerAccess and then essential
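A minimal sketch of the high-level check described above, assuming two devices with illustrative ids 0 and 1 (the ids and messages are not from the original thread):

/* Check and, if possible, enable peer (GPUDirect P2P) access
 * between two GPUs using the calls named in the message above. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int devA = 0, devB = 1;   /* illustrative device ids */
    int canAccess = 0;

    cudaDeviceCanAccessPeer(&canAccess, devA, devB);
    if (canAccess) {
        cudaSetDevice(devA);
        cudaDeviceEnablePeerAccess(devB, 0);  /* flags must currently be 0 */
        printf("device %d can access device %d directly\n", devA, devB);
    } else {
        printf("no P2P path between %d and %d (e.g. different IOHs)\n",
               devA, devB);
    }
    return 0;
}

On dual-IOH machines this check typically reports 0 for GPU pairs hanging off different IOHs, which matches the failure mode discussed in this thread.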

Re: [OMPI users] bug in CUDA support for dual-processor systems?

2012-07-31 Thread Rolf vandeVaart
The current implementation does assume that the GPUs are on the same IOH and can therefore use the IPC features of the CUDA library for communication. One of the initial motivations for this was that, in order to detect whether GPUs can talk to one another, the CUDA library has to be initialized
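For reference, the CUDA IPC feature mentioned here revolves around two runtime calls, sketched below. How the handle travels between processes (socket, file, or an MPI message) is up to the application and is omitted; the function names below are illustrative wrappers, not Open MPI internals:

#include <cuda_runtime.h>

/* exporting process: obtain a handle for an existing device allocation */
void export_buffer(void *d_ptr, cudaIpcMemHandle_t *handle_out)
{
    cudaIpcGetMemHandle(handle_out, d_ptr);
    /* ...transmit *handle_out to the peer process... */
}

/* importing process: map the peer's allocation into this process */
void *import_buffer(cudaIpcMemHandle_t handle)
{
    void *d_ptr = NULL;
    cudaIpcOpenMemHandle(&d_ptr, handle, cudaIpcMemLazyEnablePeerAccess);
    return d_ptr;  /* usable as a device pointer; release with
                      cudaIpcCloseMemHandle(d_ptr) when done */
}

cudaIpcOpenMemHandle fails when no peer path exists between the two devices, which is why the same-IOH assumption matters.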

[OMPI users] bug in CUDA support for dual-processor systems?

2012-07-31 Thread Zbigniew Koza
Hi, I wrote a simple program to see whether Open MPI can really handle CUDA pointers as promised in the FAQ, and how efficiently. The program (see below) breaks if MPI communication is to be performed between two devices that are on the same node but under different IOHs in a dual-processor Intel machine
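The kind of test the poster describes reduces to a sketch like the following, assuming a CUDA-aware Open MPI build and one GPU per rank; the buffer size and rank-to-device mapping are illustrative, not the poster's actual program:

/* Pass a device pointer straight to MPI_Send/MPI_Recv. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaSetDevice(rank);                  /* illustrative 1:1 mapping */

    const int n = 1 << 20;
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));

    if (rank == 0) {
        cudaMemset(d_buf, 0, n * sizeof(float));
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);   /* device ptr */
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received device buffer\n");
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}

With the IPC-based path described elsewhere in this thread, a transfer like this is where the different-IOH case would break.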

Re: [OMPI users] infiniband with MPI

2012-07-31 Thread Jeff Squyres
On Jul 31, 2012, at 12:14 AM, Joen Chen wrote: > After reading the FAQ about OFED, I learned that Open MPI can work with > RoCE. Correct -- Open MPI can use RoCE interfaces, if they are available. > Moreover, using RoCE adds some overhead because of the underlying network > layers. In my InfiniBand bandwidth testing, I get 5 Gbps using IPoIB and 12 Gbps using RDMA.
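For context, the Open MPI FAQ of that era directs RoCE users to the RDMA CM connection manager in the openib BTL. A hedged example invocation (the BTL list, process count, and benchmark binary are illustrative):

mpirun --mca btl openib,sm,self \
       --mca btl_openib_cpc_include rdmacm \
       -np 2 ./osu_bw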

Re: [OMPI users] sndlib problem by mpicc compiler

2012-07-31 Thread Jeff Squyres
On Jul 31, 2012, at 4:26 AM, Paweł Jaromin wrote: > Sorry, the code is a big mess, but I'm sure it does not affect my > problem. I tried other ways to solve the problem. I can pretty much guarantee you that these two issues will cause you problems. You need to fix them. Specifically: it seems like

Re: [OMPI users] sndlib problem by mpicc compiler

2012-07-31 Thread Paweł Jaromin
2012/7/30 Jeff Squyres: > On Jul 30, 2012, at 12:48 PM, Paweł Jaromin wrote: > >> make all >> Building file: ../src/snd_0.1.c >> Invoking: GCC C Compiler >> mpicc -I/usr/include/mpi -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP >> -MF"src/snd_0.1.d" -MT"src/snd_0.1.d" -o "src/snd_0.1.o" >> "../src/

Re: [OMPI users] setsockopt() fails with EINVAL on solaris

2012-07-31 Thread Daniel Junglas
Thanks, configuring with '--enable-mca-no-build=rmcast' did the trick for me. Daniel users-boun...@open-mpi.org wrote on 07/30/2012 04:21:13 PM: > FWIW: the rmcast framework shouldn't be in 1.6. Jeff and I are > testing removal and should have it out of there soon. > > Meantime, the best solut
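A sketch of the workaround Daniel describes, assuming an Open MPI 1.6 source tree (the install prefix is illustrative):

./configure --enable-mca-no-build=rmcast --prefix=/opt/openmpi-1.6
make all install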

[OMPI users] infiniband with MPI

2012-07-31 Thread Joen Chen
Hi everyone! After reading the FAQ about OFED, I learned that Open MPI can work with RoCE. Moreover, using RoCE adds some overhead because of the underlying network layers. In my InfiniBand bandwidth testing, I get 5 Gbps using IPoIB and 12 Gbps using RDMA. The performance gap is huge for me