I have the release version of CUDA 8.0 installed and am trying to build OpenMPI.
Here is my configure and build line:
./configure --prefix=$PREFIXPATH --with-cuda=$CUDA_HOME --with-tm=
--with-openib= && make && sudo make install
Where CUDA_HOME points to the CUDA install path.
When I run the a
I'd suggest updating the configure/make scripts to look for nvml there
and link in the stubs. This way the build is not dependent on the driver being
installed and only the toolkit.
Thanks,
Justin
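Until the build scripts do that, one user-side workaround is to point the link step at the
stub directory by hand. A sketch, assuming the toolkit's NVML stub lives in
$CUDA_HOME/lib64/stubs (the stub only satisfies the link; the driver's real libnvidia-ml.so
is what gets loaded at run time):

./configure --prefix=$PREFIXPATH --with-cuda=$CUDA_HOME \
    LDFLAGS="-L$CUDA_HOME/lib64/stubs" && make && sudo make install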
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Justin Luitjens
Sent: Tu
I have an application that works on other systems, but on the system I'm currently
running on I'm seeing the following crash:
[dt04:22457] *** Process received signal ***
[dt04:22457] Signal: Segmentation fault (11)
[dt04:22457] Signal code: Address not mapped (1)
[dt04:22457] Failing at address: 0x555
Hello,
I'm working on an application using OpenMPI with CUDA and GPUDirect. I would
like to get the MPI transfers to overlap with computation on the CUDA device.
To do this, I need to ensure that none of my memory transfers are issued on stream 0.
In this application I have one step that performs an
>… easy if you are using only the host routines of MPI. Since your kernel calls
>are async with respect to the host already, all you have to do is asynchronously
>copy the data between host and device.
>
>Jens
>
>On Dec 12, 2012, at 6:30 PM, Justin Luitjens wrote:
>
>> Hello,
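A minimal sketch of the pattern described in this thread: host-side MPI calls combined with
asynchronous copies on a non-default stream, so nothing is issued on stream 0. The kernel,
sizes, and buffer names below are made up for illustration, and error checking is omitted:

#include <mpi.h>
#include <cuda_runtime.h>

__global__ void scale(double *x, int n) {              // stand-in for the real computation
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = rank ^ 1;                               // assumes exactly two ranks
    const int n = 1 << 20;

    double *d_buf, *h_out, *h_in;
    cudaMalloc(&d_buf, n * sizeof(double));
    cudaMallocHost(&h_out, n * sizeof(double));        // pinned host staging buffers
    cudaMallocHost(&h_in,  n * sizeof(double));

    cudaStream_t s;
    cudaStreamCreate(&s);                              // non-default stream, not stream 0

    scale<<<(n + 255) / 256, 256, 0, s>>>(d_buf, n);   // async with respect to the host
    cudaMemcpyAsync(h_out, d_buf, n * sizeof(double), cudaMemcpyDeviceToHost, s);
    cudaStreamSynchronize(s);                          // h_out must be ready before MPI sees it

    MPI_Request req[2];
    MPI_Isend(h_out, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(h_in,  n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[1]);
    // independent kernels can be launched on `s` here to overlap with the transfer
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

    cudaMemcpyAsync(d_buf, h_in, n * sizeof(double), cudaMemcpyHostToDevice, s);
    cudaStreamSynchronize(s);

    cudaStreamDestroy(s);
    cudaFree(d_buf); cudaFreeHost(h_out); cudaFreeHost(h_in);
    MPI_Finalize();
    return 0;
}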
Hi, I am attempting to debug a memory corruption in an MPI program using
Valgrind. However, when I run with Valgrind I get semi-random segfaults and
Valgrind errors pointing into the Open MPI library. Here is an example of such a
segfault:
==6153==
==6153== Invalid read of size 8
==6153==    at 0x19102
I was able to get rid of the segfaults/invalid reads by disabling the
shared-memory path. Valgrind still reported an error about uninitialized memory
in the same spot, which I believe is due to the struct being padded for
alignment. I added a suppression and was able to get past this part just
fine.
T
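For reference, a sketch of this kind of run: the shared-memory BTL is excluded with ^sm
(the 1.x name for that path) and Valgrind is pointed at a suppressions file. The path below
assumes the openmpi-valgrind.supp that Open MPI installs under its share/openmpi directory,
and ./my_app is a placeholder:

mpirun -np 2 --mca btl ^sm \
    valgrind --suppressions=$MPI_HOME/share/openmpi/openmpi-valgrind.supp ./my_app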
Why not do something like this:
double **A = new double*[N];                     // row pointers
double *A_data = new double[N*N];                // one contiguous block for the whole matrix
for (int i = 0; i < N; i++) A[i] = &A_data[i*N]; // point each row into that block
… wrote:
> Hi,
> thanks for the quick response. Yes, that is what I meant. I thought
> there was no other way around what I am doing, but it is always good to ask an
> expert rather than assume
>> … original way to create the matrices, one can use
>> MPI_Type_create_struct to create an MPI datatype (
>> http://web.mit.edu/course/13/13.715/OldFiles/build/mpich2-1.0.6p1/www/www3/MPI_Type_create_struct.html
>> )
>> using MPI_BOTTOM as the origin for the displacements.
>>
>
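For anyone following the MPI_Type_create_struct / MPI_BOTTOM suggestion above, a rough
sketch for the row-pointer layout (double **A with N rows of N doubles that need not be
contiguous); the helper name is made up and error checks are omitted:

#include <mpi.h>
#include <vector>

// Describe the matrix with absolute addresses so it can be sent in a single
// call using MPI_BOTTOM as the buffer argument.
MPI_Datatype make_matrix_type(double **A, int N) {
    std::vector<int>          blocklens(N, N);        // each row holds N doubles
    std::vector<MPI_Datatype> types(N, MPI_DOUBLE);
    std::vector<MPI_Aint>     disps(N);
    for (int i = 0; i < N; i++)
        MPI_Get_address(A[i], &disps[i]);             // absolute address of row i

    MPI_Datatype mat_type;
    MPI_Type_create_struct(N, blocklens.data(), disps.data(), types.data(), &mat_type);
    MPI_Type_commit(&mat_type);
    return mat_type;
}

// Usage: MPI_Send(MPI_BOTTOM, 1, mat_type, dest, tag, comm);
// the receiver builds its own datatype from its own row addresses.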
Hello,
I have installed Open MPI 1.10.2 with CUDA support:
[jluitjens@dt03 repro]$ ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
mca:mpi:base:param:mpi_built_with_cuda_support:value:true
I'm trying to verify that GPUDirect is working and that messages aren't
traversing t
We have figured this out. It turns out that the first call to each
MPI_Isend/MPI_Irecv is staged through the host, but subsequent calls are not.
Thanks,
Justin
From: Justin Luitjens
Sent: Wednesday, March 30, 2016 9:37 AM
To: us...@open-mpi.org
Subject: CUDA IPC/RDMA Not Working
Hello,
I have
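Given that behavior (the first transfer of a device buffer is staged through the host while
the IPC/RDMA path is set up), benchmarks of a CUDA-aware build generally want a warm-up
exchange before timing. A rough sketch, assuming a CUDA-aware Open MPI, exactly two ranks,
and made-up buffer names:

#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = rank ^ 1;
    const int n = 1 << 20;

    double *d_send, *d_recv;                       // device buffers handed directly to MPI
    cudaMalloc(&d_send, n * sizeof(double));
    cudaMalloc(&d_recv, n * sizeof(double));
    cudaMemset(d_send, 0, n * sizeof(double));

    for (int iter = 0; iter < 10; iter++) {
        double t0 = MPI_Wtime();
        MPI_Request req[2];
        MPI_Isend(d_send, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(d_recv, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[1]);
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
        if (rank == 0)
            printf("iter %d: %.3f ms%s\n", iter, (MPI_Wtime() - t0) * 1e3,
                   iter == 0 ? "  (warm-up: staged through the host)" : "");
    }

    cudaFree(d_send); cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}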
I'm trying to build OpenMPI on Ubuntu 16.04.3 and I'm getting an error.
Here is how I configure and build:
./configure --with-cuda=$CUDA_HOME --prefix=$MPI_HOME && make clean && make -j
&& make install
Here is the error I see:
make[2]: Entering directory
'/tmpnfs/jluitjens/libs/src/openmpi
That is not guaranteed to work. There is no streaming concept in the MPI
standard. The fundamental issue here is that MPI is only asynchronous on the
completion, and not the initiation, of the send/recv.
It would be nice if the next version of MPI would look to add something like a
triggered send or