Also, the loopback interface is somewhat special: though all nodes have the
same IP 127.0.0.1, this interface cannot be used for inter-node communication.
On Saturday, March 12, 2016, Jeff Squyres (jsquyres) wrote:
It's set by default in btl_tcp_if_exclude (because in most cases, you *do* want
to exclude the loopback interface -- it's much slower than shared memory in
these types of scenarios). But this value can certainly be overridden:
mpirun --mca btl_tcp_if_exclude ''
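For example (a sketch only -- the btl list, interface name, and executable here are assumptions for a single-node run, not something from this thread):

mpirun --mca btl tcp,self --mca btl_tcp_if_include lo -np 2 ./a.out

Either emptying btl_tcp_if_exclude as above, or listing lo explicitly in btl_tcp_if_include, should let the TCP btl use the loopback interface.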
> On Mar 11, 2016, at 11:15
Hello all
From a user standpoint, that does not seem right to me. Why should one need
any kind of network at all if one is entirely dealing with a single node?
Is there any particular reason Open MPI does not/cannot use the lo
(loopback) interface? I'd think it is there for exactly this kind of
situation.
Spawned tasks cannot use the sm nor vader btl, so you need another one
(tcp, openib, ...).
The self btl is only for a process to send/recv with itself (i.e. it does not
work for inter-process communication, even within a node).
I am pretty sure the lo interface is always discarded by Open MPI, so I have
no solution.
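To make the spawn case concrete, here is a minimal sketch of a parent that spawns one child on the same node (the file names and the mpirun line in the comment are illustrative assumptions, not taken from this thread); since sm/vader are unavailable between the parent and the spawned job, the intercommunicator traffic has to go over a btl such as tcp:

/* parent.c -- hypothetical example; could be run e.g. with
 *   mpirun --mca btl tcp,self --mca btl_tcp_if_exclude '' -np 1 ./parent
 * so the TCP btl may fall back on the loopback interface. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm child;
    int msg = 42;

    MPI_Init(&argc, &argv);

    /* Spawn one child process; parent and child only share an
     * intercommunicator, so sm/vader cannot be used between them. */
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                   MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

    /* Talk to rank 0 of the spawned job over the intercommunicator. */
    MPI_Send(&msg, 1, MPI_INT, 0, 0, child);

    MPI_Comm_disconnect(&child);
    MPI_Finalize();
    return 0;
}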
Hello,
I'm having a communication problem between two processes (with one being
spawned by the other, on the *same* physical machine). Everything works
as expected when I have a network interface such as eth0 or wlo1 up, but
as soon as they are down, I get errors (such as « At least one pair of
MPI processes are unable to reach each other for MPI communications »).