Hi,
Can MPI_THREAD_MULTIPLE and the openib btl work together in Open MPI 1.8.4? If
so, are there any command-line options needed at run time?
Thanks,
Subhra.
You can use MPI w/ mxm (-mca mtl mxm) and multiple-thread mode in the 1.8.x
> series, or (-mca pml yalla) in the master branch.
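For reference, the two suggested invocations would look roughly like this (a sketch; ./a.out stands in for a threaded MPI application, and -mca pml cm is added so the mtl layer is actually used, per the later discussion in this thread):

% mpirun -np 2 -mca pml cm -mca mtl mxm ./a.out   # 1.8.x series
% mpirun -np 2 -mca pml yalla ./a.out             # master branch only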
>
> M
>
> On Mon, Mar 30, 2015 at 9:09 AM, Subhra Mazumdar <
> subhramazumd...@gmail.com> wrote:
>
>> Hi,
>>
>> Can MPI_THREAD_MULTIPLE and openib btl work together in Open MPI 1.8.4?
Hi,
When I run my MPI job with Open MPI 1.8.4, it hangs with the following stack:
#0 0x7fe59e07b264 in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x7fe59e076508 in _L_lock_854 () from /lib64/libpthread.so.0
#2 0x7fe59e0763d7 in pthread_mutex_lock () from /lib64/libpthread.so.0
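As an aside, one way to capture such a stack from a hung rank (a sketch; assumes gdb is installed, with <pid> being the stuck process):

% gdb -batch -p <pid> -ex "thread apply all bt"   # dump backtraces for all threads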
mxm supports rdma/roce technologies. One can select UD/RC/DC transports to
> be used in mxm.
>
> By selecting mxm, all MPI p2p routines will be mapped to appropriate mxm
> functions.
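As an illustration of that transport selection (a sketch; the MXM_TLS value mirrors the one quoted later in this thread, and ./a.out is a placeholder):

% mpirun -np 2 -mca pml cm -mca mtl mxm -x MXM_TLS=rc,self,shm ./a.out   # RC instead of the default UD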
>
> M
>
> On Mon, Mar 30, 2015 at 7:32 PM, Subhra Mazumdar <
> subhramazumd...@gmail.com> wrote:
&
The default transport is UD for internode communication and shared-memory
> for intra-node.
>
> http://bgate.mellanox.com/products/hpcx/
>
> Also, mxm is included in the Mellanox OFED.
>
> On Fri, Apr 10, 2015 at 5:26 AM, Subhra Mazumdar <
> subhramazumd...@gmail.com> wrote:
Which MOFED version do you use?
> does it have /opt/mellanox/mxm in it?
> You could just run mpirun from the HPCX package, which looks for mxm internally,
> and recompile ompi as mentioned in the README.
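A sketch of that recompile, assuming mxm is installed under /opt/mellanox/mxm (the path asked about above); --with-mxm is ompi's configure switch for this:

% ls /opt/mellanox/mxm/lib/libmxm.so     # confirm mxm is present (path is an assumption)
% ./configure --with-mxm=/opt/mellanox/mxm && make && make install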
>
> On Mon, Apr 13, 2015 at 3:24 AM, Subhra Mazumdar <
> subhramazumd...@gmail.com> wrote:
>
% mpirun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_c
> % oshrun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_oshmem
> % module unload hpcx
>
> ...
>
> On Tue, Apr 14, 2015 at 5:42 AM, Subhra Mazumdar <
> subhramazumd...@gmail.com> wrote:
>
>> I am using 2.4-1.0.0 Mellanox OFED.
On Sat, Apr 18, 2015 at 12:28 AM, Mike Dubman wrote:
> could you please check that ofed_info -s indeed prints mofed 2.4-1.0.0?
> why is LD_PRELOAD needed in your command line? Can you try
>
> module load hpcx
> mpirun -np $np test.exe
> ?
>
> On Sat, Apr 18, 2015 at
There may be a mismatch between the kernel drivers version and the ofed userspace
> libraries version,
> or you have multiple ofed libraries installed on your node and are using the
> incorrect one.
> could you please check that ofed_info -s indeed prints mofed 2.4-1.0.0?
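A few quick checks along those lines (a sketch; paths and package names are typical, not guaranteed):

% ofed_info -s                          # userspace stack version
% /sbin/ldconfig -p | grep libibverbs   # which copies the dynamic loader sees
% rpm -qa | grep -i libibverbs          # check for multiple installed packages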
>
> On Wed, Apr 22, 2015 at 7:59 AM, Subhra Mazumdar wrote:
So, the OFED on your system is not Mellanox OFED 2.4.x but something else.
>
> try: # rpm -qi libibverbs
>
>
> On Thu, Apr 23, 2015 at 7:47 AM, Subhra Mazumdar <
> subhramazumd...@gmail.com> wrote:
>
>> Hi,
>>
>> Where is the command ofed_info located? I searched for it.
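For locating it, something like the following should work (a sketch; the ofed-scripts package name is an assumption about a typical Mellanox OFED install):

% command -v ofed_info
% rpm -ql ofed-scripts 2>/dev/null | grep ofed_info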
:57 PM, Mike Dubman wrote:
> HPCX package uses pml "yalla" by default (part of the ompi master branch, not
> in v1.8).
> So, "-mca mtl mxm" has no effect unless "-mca pml cm" is specified to
> disable "pml yalla" and let the mtl layer play.
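To confirm which pml actually gets selected at run time, the framework verbosity parameter can help (a sketch; ./a.out is a placeholder):

% mpirun -np 2 -mca pml cm -mca mtl mxm -mca pml_base_verbose 10 ./a.out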
>
use "-x MXM_TLS=rc,self,shm" for rc)
> #3 - cm as pml, mxm as mtl and mxm as a transport (default: ud, use params
> from #2 for rc)
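Spelled out as command lines, the three configurations would look roughly like this (a sketch reconstructed from the fragmentary descriptions above; #1 and the yalla form of #2 are assumptions, and ./a.out is a placeholder):

% mpirun -mca pml ob1 -mca btl openib,self,sm ./a.out              # #1: ob1 pml + openib btl
% mpirun -mca pml yalla -x MXM_TLS=rc,self,shm ./a.out             # #2: yalla pml, rc transport
% mpirun -mca pml cm -mca mtl mxm -x MXM_TLS=rc,self,shm ./a.out   # #3: cm pml + mxm mtl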
>
> On Fri, Apr 24, 2015 at 10:46 AM, Subhra Mazumdar <
> subhramazumd...@gmail.com> wrote:
>
>> I am a little confused now; I ran 3 different configurations.
Hi,
Is CUDA-aware MPI supported with pml yalla?
Thanks,
Subhra
which have CUDA-aware support built into them.
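One way to check whether a given build has it (a sketch; this is the mca parameter ompi_info exposes for CUDA support):

% ompi_info --parsable --all | grep mpi_built_with_cuda_support:value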
>
> Rolf
>
>
>
> *From:* users [mailto:users-boun...@open-mpi.org] *On Behalf Of *Subhra
> Mazumdar
> *Sent:* Friday, August 21, 2015 12:18 AM
> *To:* Open MPI Users
> *Subject:* [OMPI users] cuda aware mpi
>
>
>
> Hi,