Thanks a lot, Gilles.
On Wed, May 20, 2015 at 2:47 AM, Gilles Gouaillardet wrote:
> Khalid,
>
> this is probably not the intended behavior, I will follow up on the devel
> mailing list.
>
> Thanks for reporting this.
>
> Cheers,
>
> Gilles
>
>
> On 5/20/2015 10:30 AM, Khalid Hasanov wrote:
Hi Gilles,
Thank you a lot, it works now.
Just one minor thing I have noticed: if I use a communicator size which does
not exist in the configuration file, it will still use the configuration
file. For example, if I use the previous config file with mpirun -n 4, it
will use the config for the
Hi Khalid,
I checked the source code, and it turns out the rules must be ordered:
- first by communicator size
- second by message size
Attached is an updated version of the ompi_tuned_file.conf you should use.
Cheers,
Gilles
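
[Editor's note: for illustration, here is a rough sketch of how such a rules
file is laid out, as I understand the coll/tuned dynamic-rules format, with
rules ordered first by communicator size and then by message size. The
collective ID, algorithm numbers, and sizes are placeholders, not the
contents of the attached ompi_tuned_file.conf:]

    1            # number of collectives configured in this file
    3            # collective ID (placeholder)
    2            # number of communicator-size rules that follow, smallest first
    8            # first communicator size
    2            # number of message-size rules for this size, smallest first
    0 1 0 0      # message size, algorithm, topo fan-in/out, segment size
    65536 2 0 0  # next message-size rule, larger size
    64           # second communicator size
    1            # one message-size rule
    0 3 0 0      # message size, algorithm, topo fan-in/out, segment size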
Received from Rolf vandeVaart on Tue, May 19, 2015 at 08:28:46 PM EDT:
>
> >-----Original Message-----
> >From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon
> >Sent: Tuesday, May 19, 2015 6:30 PM
> >To: us...@open-mpi.org
> >Subject: [OMPI users] cuIpcOpenMemHandle failure when
I am not sure why you are seeing this. One thing that is clear is that you
have found a bug in the error reporting: the message is a little garbled,
and I see a bug in what we are reporting. I will fix that.
If possible, could you try running with --mca btl_smcuda_use_cuda_ipc 0? My
exp
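
[Editor's note: for reference, the suggestion above would look something like
this on the command line; the process count and binary name are placeholders:]

    mpirun -n 2 --mca btl_smcuda_use_cuda_ipc 0 ./my_cuda_mpi_app

[With CUDA IPC disabled in the smcuda BTL, GPU buffers should be staged
through host memory rather than mapped with cuIpcOpenMemHandle, which helps
isolate whether the IPC path is the source of the failure.]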
Hello,
I am trying to use the coll_tuned_dynamic_rules_filename option.
I am not sure whether I am doing everything right, but my impression is that
the config file feature does not work as expected.
For example, if I specify the config file as in the attached
ompi_tuned_file.conf and execute the attached simpl
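
[Editor's note: for context, a minimal sketch of how this option is usually
passed, assuming the rules file sits in the working directory and that
coll_tuned_use_dynamic_rules is also switched on; the application name and
process count are placeholders:]

    mpirun -n 8 \
           --mca coll_tuned_use_dynamic_rules 1 \
           --mca coll_tuned_dynamic_rules_filename ./ompi_tuned_file.conf \
           ./my_app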
I'm encountering intermittent errors while trying to use the Multi-Process
Service with CUDA 7.0 for improving concurrent access to a Kepler K20Xm GPU by
multiple MPI processes that perform GPU-to-GPU communication with each other
(i.e., GPU pointers are passed to the MPI transmission primitives).
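
[Editor's note: to make the setup concrete, here is a minimal sketch of the
kind of exchange being described, with device pointers handed directly to the
MPI calls. It assumes a CUDA-aware Open MPI build; the buffer size, ranks,
and tag are arbitrary placeholders, not taken from the original report:]

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        float *d_buf;
        cudaMalloc((void **)&d_buf, n * sizeof(float));  /* device memory */

        if (rank == 0) {
            /* The GPU pointer goes straight into MPI; a CUDA-aware build
             * moves the data over CUDA IPC or stages it through the host. */
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }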
It looks like you have PSM-enabled cards on your system as well as
Ethernet, and we are picking that up. Try adding "-mca pml ob1" to your
command line and see if that helps.
On Tue, May 19, 2015 at 5:04 AM, Nilo Menezes wrote:
> Hello,
>
> I'm trying to run openmpi with multithread support enabled.
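
[Editor's note: the advice above amounts to forcing the ob1 PML so the PSM
path is bypassed; the rest of the command line below is a placeholder:]

    mpirun -n 16 --mca pml ob1 ./my_threaded_app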
Hello,
I'm trying to run openmpi with multithread support enabled.
I'm getting these error messages before init finishes:
[node011:61627] PSM returned unhandled/unknown connect error: Operation
timed out
[node011:61627] PSM EP connect error (unknown connect error):
*** An error occurred in MPI
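
[Editor's note: for context, "multithread support enabled" normally means
requesting MPI_THREAD_MULTIPLE at startup, along the lines of the sketch
below; Nilo's actual code is not shown in the thread, so this is only an
illustration:]

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Request full multithreading support from the MPI library. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not granted (got %d)\n",
                    provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ... threaded communication would go here ... */

        MPI_Finalize();
        return 0;
    }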