It's set by default in btl_tcp_if_exclude (because in most cases, you *do* want
to exclude the loopback interface -- it's much slower than the shared-memory
transports normally used within a single node). But this value can certainly be
overridden:
mpirun --mca btl_tcp_if_exclude ''
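For example, on a single node you could clear the exclude list and restrict
Open MPI to the TCP and self BTLs. This is only a sketch; the process count
and ./my_app are placeholders:

mpirun -np 2 --mca btl tcp,self --mca btl_tcp_if_exclude '' ./my_app

With the exclude list empty, the TCP BTL is no longer told to skip lo (whether
it actually ends up using it depends on your Open MPI version, as discussed
elsewhere in this thread).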
> On Mar 11, 2016, at 11:15
Gary,
The current fine-tuning of our TCP layer was done on a 1Gb network, which
might explain the performance degradation you see. There is a relationship
between the depth of the pipeline and the length of the packets, together with
another set of MCA parameters, that can have a drastic impact on performance.
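Those fragment and pipeline related parameters can be listed and overridden
on the command line. A sketch (the names below come from the btl_tcp group
that ompi_info reports on 1.x-era builds, and the values are purely
illustrative, not recommendations):

ompi_info --all | grep btl_tcp

mpirun --mca btl_tcp_eager_limit 524288 \
       --mca btl_tcp_rndv_eager_limit 524288 \
       --mca btl_tcp_max_send_size 1048576 \
       -np 2 ./my_app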
Hello all
From a user standpoint, that does not seem right to me. Why should one need
any kind of network at all if one is entirely dealing with a single node?
Is there any particular reason Open MPI does not/cannot use the lo
(loopback) interface? I'd think it is there for exactly this kind of
situation.
This is a great collection! Thank you, Howard.
On Fri, Mar 11, 2016 at 1:17 AM, Howard Pritchard
wrote:
> Hello Saliya,
>
> Sorry I did not see this email earlier. There are a bunch of Java test
> codes, including performance tests like those used in the paper, at
>
> https://github.com/open-mpi/ompi-java-test
Gary,
I previously missed the fact that you are running on a 10GbE network, and I
still assume you are not running a debug build.
Maybe you need to increase the send/recv buffer sizes.
ompi_info --all | grep btl_tcp
will list the parameters that can be tuned, and then you can set them with
mpirun --mca btl_tcp_<parameter> <value>
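For instance, a sketch of raising the TCP socket buffer sizes (btl_tcp_sndbuf
and btl_tcp_rcvbuf show up in the ompi_info listing above; the 4 MB values and
./my_app are placeholders to adapt to your setup):

mpirun --mca btl_tcp_sndbuf 4194304 \
       --mca btl_tcp_rcvbuf 4194304 \
       -np 2 ./my_app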
Cheers,
Spawned tasks cannot use the sm nor the vader btl, so you need another one
(tcp, openib, ...).
The self btl is only for send/recv with oneself (i.e. it does not work for
inter-process communication, even intra-node).
I am pretty sure the lo interface is always discarded by Open MPI, so I have
no solution.
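For the spawn scenario this reply addresses, a minimal sketch of explicitly
selecting BTLs that work between separately launched processes (it assumes a
usable non-loopback interface is up, and ./parent is a placeholder program
that calls MPI_Comm_spawn):

mpirun -np 1 --mca btl tcp,self ./parent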
Hello,
I'm having a communication problem between two processes (with one being
spawned by the other, on the *same* physical machine). Everything works
as expected when I have a network interface such as eth0 or wlo1 up, but
as soon as they are down, I get errors (such as « At least one pair of
MPI processes ... »).
Hello Saliya,
Sorry I did not see this email earlier. There are a bunch of Java test
codes, including performance tests like those used in the paper, at
https://github.com/open-mpi/ompi-java-test
Howard
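If it helps, a hedged sketch of compiling and running one of those Java tests
(mpijavac is the wrapper compiler installed when Open MPI is built with
--enable-mpi-java; the class name is a placeholder, and depending on the
version you may also need mpi.jar on the classpath):

mpijavac SomeTest.java
mpirun -np 4 java SomeTest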
2016-02-27 23:01 GMT-07:00 Saliya Ekanayake:
> Hi,
>
> I see this paper from Oscar refers to a J