Thanks for the quick response. I'm having a lot of fun learning MPI and
this mailing list has been invaluable.
So, if I do a scatter on an intercommunicator, will this use all left
processes to scatter to all right processes?
Where the left processes define the root as MPI_ROOT (or MPI_PROC_NULL)
and the right processes define it as the root's rank in the left group?
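To show what I mean, here is a rough, untested sketch (the even split of
MPI_COMM_WORLD and all names are mine, just for illustration):

#include <mpi.h>
#include <stdio.h>

/* Sketch: split MPI_COMM_WORLD into left/right halves, build an
 * intercommunicator, and scatter from left rank 0 to every right
 * process.  Assumes an even number of ranks >= 2 and C99 (VLA). */
int main(int argc, char **argv)
{
    int wrank, wsize;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);

    int is_left = wrank < wsize / 2;
    MPI_Comm local, inter;
    MPI_Comm_split(MPI_COMM_WORLD, is_left, wrank, &local);
    /* Leaders: local rank 0 on each side; the remote leader is named
     * by its rank in the peer communicator (MPI_COMM_WORLD). */
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD,
                         is_left ? wsize / 2 : 0, 99, &inter);

    int remote_size;
    MPI_Comm_remote_size(inter, &remote_size);

    if (is_left) {
        int lrank;
        MPI_Comm_rank(local, &lrank);
        int sendbuf[remote_size];            /* one int per right rank */
        for (int i = 0; i < remote_size; i++)
            sendbuf[i] = 100 + i;
        /* Only ONE left process is the root (MPI_ROOT); the other left
         * processes must pass MPI_PROC_NULL and contribute no data. */
        MPI_Scatter(sendbuf, 1, MPI_INT, NULL, 0, MPI_INT,
                    lrank == 0 ? MPI_ROOT : MPI_PROC_NULL, inter);
    } else {
        int recv;
        /* Right processes name the root by its rank in the LEFT group. */
        MPI_Scatter(NULL, 0, MPI_INT, &recv, 1, MPI_INT, 0, inter);
        printf("world rank %d received %d\n", wrank, recv);
    }

    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}

If I read the standard right, the scatter is still rooted: the data comes
only from the single process that passes MPI_ROOT, not from all left
processes at once.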
On May 9, 2014, at 7:56 PM, Spenser Gilliland wrote:
> I'm having some trouble understanding Intercommunicators with
> Collective Communication. Is there a collective routine to express a
> transfer from all left processes to all right processes, or vice versa?
The intercomm collectives are all b
Hi,
I'm having some trouble understanding Intercommunicators with
Collective Communication. Is there a collective routine to express a
transfer from all left processes to all right processes, or vice versa?
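To make the question concrete, here is a rough, untested sketch of the
effect I'm after, written with MPI_Allgather over an intercommunicator
(the even split of MPI_COMM_WORLD and all names are just for
illustration). Is something like this the intended way, or is there a
better routine?

#include <mpi.h>
#include <stdio.h>

/* Sketch: with an intercommunicator, MPI_Allgather delivers to every
 * process the contribution of every process in the OTHER group, i.e.
 * all-left-to-all-right and vice versa in one call.  Assumes an even
 * number of ranks >= 2 and C99 (VLA). */
int main(int argc, char **argv)
{
    int wrank, wsize;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);

    int is_left = wrank < wsize / 2;
    MPI_Comm local, inter;
    MPI_Comm_split(MPI_COMM_WORLD, is_left, wrank, &local);
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD,
                         is_left ? wsize / 2 : 0, 0, &inter);

    int remote_size;
    MPI_Comm_remote_size(inter, &remote_size);

    int mine = wrank;            /* each rank contributes its world rank */
    int theirs[remote_size];     /* receives the remote group's data     */
    MPI_Allgather(&mine, 1, MPI_INT, theirs, 1, MPI_INT, inter);

    printf("world rank %d got remote ranks:", wrank);
    for (int i = 0; i < remote_size; i++)
        printf(" %d", theirs[i]);
    printf("\n");

    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}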
Thanks,
Spenser
--
Spenser Gilliland
Computer Engineer
Doctoral Candidate
Have you tried with the latest stable OMPI, 1.8.1? I'm wondering if this
is an issue with the OOB. If you have a debug build, you can run with
-mca btl_openib_verbose 10.
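For example, something like this (the application name is just a
placeholder):

mpirun -np 2 -mca btl_openib_verbose 10 ./your_app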
Josh
On Fri, May 9, 2014 at 6:26 PM, Joshua Ladd wrote:
> Hi, Tim
>
> Run "ibstat" on each host:
>
> 1. Make sure the adapters are alive and active.
I've checked the links repeatedly with "ibstatus" and they look OK. Both
nodes show a link layer of "InfiniBand".
As I stated, everything works well with MVAPICH2, so I don't suspect a
physical or link layer problem (but I could always be wrong on that).
Tim
On Fri, May 9, 2014 at 6:26 PM, Joshua Ladd wrote:
Hi, Tim
Run "ibstat" on each host:
1. Make sure the adapters are alive and active.
2. Look at the Link Layer settings for host w34. Does it match host w4's?
Josh
On Fri, May 9, 2014 at 1:18 PM, Tim Miller wrote:
> Hi All,
>
> We're using OpenMPI 1.7.3 with Mellanox ConnectX InfiniBand adapters, and
Hi All,
We're using OpenMPI 1.7.3 with Mellanox ConnectX InfiniBand adapters, and
periodically our jobs abort at start-up with the following error:
===
Open MPI detected two different OpenFabrics transport types in the same
InfiniBand network.
Such mixed network transport configuration is not supported by Open MPI.
===
There is a known bug in the 1.8.1 release whereby daemons failing to start on a
remote node will cause a silent failure. This has been fixed for the upcoming
1.8.2 release, but you might want to use one of the nightly 1.8.2 snapshots in
the interim.
Most likely causes:
* not finding the required libraries on the remote node
Hi,
I have encountered a problem with Open MPI that I can't seem to
diagnose or find precedent for in the mailing list. I have two PCs with
a fresh install of Arch Linux and Open MPI 1.8.1. One is a dedicated PC
and the other is a VirtualBox installation. The VirtualBox install is
the master node.