Hi!
I am trying to build openmpi 1.8 with Open SHMEM and Mellanox MXM.
But oshmem_info does not display the information about ikrit in spml.
...
MCA scoll: mpi (MCA v2.0, API v1.0, Component v1.8)
MCA spml: yoda (MCA v2.0, API v2.0, Component v1.8)
MCA sshmem: mmap (MCA v2.0, API v2.0, Component v1.8)
Hi Timur,
What "configure" line you used? ikrit could be compile-it if no
"--with-mxm=/opt/mellanox/mxm" was provided.
Can you please attach your config.log?
Thanks
On Wed, Apr 23, 2014 at 3:10 PM, Тимур Исмагилов wrote:
> Hi!
> I am trying to build openmpi 1.8 with Open SHMEM and Mellanox MXM.
I am running IMB (Intel MPI Benchmarks), the MPI-1 benchmarks, which were built
with the Intel 12.1 compiler suite and OpenMPI 1.6.5 (and run with OMPI 1.6.5).
I decided to use the following MCA parameters:
--mca btl openib,tcp,self --mca btl_openib_receive_queues
X,9216,256,128,32:X,65536,256,128,32
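For context, the full launch command looked roughly like this (the
process count, hostfile, and IMB-MPI1 path are assumptions; the queue
settings are the ones quoted above):

  $ mpirun -np 64 --hostfile hosts \
        --mca btl openib,tcp,self \
        --mca btl_openib_receive_queues X,9216,256,128,32:X,65536,256,128,32 \
        ./IMB-MPI1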
A few suggestions:
- Try using Open MPI 1.8.1. It's the newest release, and has many improvements
since the 1.6.x series.
- Try using "--mca btl openib,sm,self" (in both v1.6.x and v1.8.x). This
allows Open MPI to use shared memory to communicate between processes on the
same server, which c
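A hypothetical launch line combining that suggestion with the run
described above (process count and binary path are illustrative):

  $ mpirun -np 64 --mca btl openib,sm,self ./IMB-MPI1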
Thank you, Jeff. I re-ran IMB (a 64-core run, distributed across a number of
nodes) under different MCA parameters. Here are the results using OpenMPI
1.6.5:
1. --mca btl openib,sm,self --mca btl_openib_receive_queues
X,9216,256,128,32:X,65536,256,128,32
IMB did not hang. Consumed 926
On Wed, 2014-04-23 at 13:05 -0400, Hao Yu wrote:
> Hi Ross,
>
> Sorry for getting back to you late on this issue. After finishing my course, I
> am working on Rmpi 0.6-4 to be released soon to CRAN.
>
> I did a few tests like yours and indeed I was able to produce some
> deadlocks whenever mpi.isend was used.