But in many ways, it's also not helpful to change the MTU from Open MPI. It
sounds like you made a bunch of changes all at once; I'd break them down and
build up. MTU is a very system-level configuration. Use a TCP transmission
test (iperf, etc.) to make sure TCP connections work between the nodes.
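For example, a minimal sanity check with iperf (the hostname is made up):
# on one node, start an iperf server
iperf -s
# on another node, send TCP traffic to it
iperf -c node01
# once jumbo frames are enabled, retest with a larger buffer length
iperf -c node01 -l 64K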
Sorry for not providing an update earlier. The bug has been fixed and
the messages should disappear in a future version of the driver
(hopefully the next one, if the fix was picked up in time).
On 05/04/2017 10:23 PM, Ben Menadue wrote:
Hi,
Sorry to reply to an old thread, but we're seeing this message
On 05/04/2017 09:08 PM, gil...@rist.or.jp wrote:
William,
the link error clearly shows libcaffe.so requires the MPI C++ bindings.
Did you build Caffe from a fresh tree?
What if you run
ldd libcaffe.so
nm libcaffe.so | grep -i ompi
If libcaffe.so does require the MPI C++ bindings, it should depend on libmpi_cxx.so.
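Something along these lines would confirm it (treating ompi_mpi_cxx_* symbols,
such as ompi_mpi_cxx_op_intercept, as the marker of the C++ bindings is an
assumption; the exact symbol names can vary):
# does libcaffe.so already pull in the Open MPI C++ bindings library?
ldd libcaffe.so | grep -i mpi
# list dynamic symbols; ompi_mpi_cxx_* names indicate use of the MPI C++ API
nm -D libcaffe.so | grep -i ompi_mpi_cxx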
Alberto,
Are you saying the program hangs even without jumbo frames (i.e., with the standard 1500-byte MTU)?
First, make sure there is no firewall running, and then you can try
mpirun --mca btl tcp,vader,self --mca oob_tcp_if_include eth0 --mca
btl_tcp_if_include eth0 ...
(Replace eth0 with the name of the interface you want to use.)
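A complete invocation might look like this (the hostfile and application
names are placeholders):
# run 4 ranks across the hosts in ./hosts, forcing TCP traffic onto eth0
mpirun -np 4 --hostfile hosts \
    --mca btl tcp,vader,self \
    --mca oob_tcp_if_include eth0 \
    --mca btl_tcp_if_include eth0 \
    ./my_mpi_app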
"ompi_info --param btl tcp -l 9" will give you all the TCP options.
Unfortunately, OMPI does not support programmatically changing the value of
the MTU.
George.
PS: We would be happy to receive contributions from the community.
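To expand on the ompi_info hint above (the grep filter is only an example):
# dump every TCP btl parameter, including the hidden ones (level 9)
ompi_info --param btl tcp --level 9
# narrow the list down to the buffer-related knobs
ompi_info --param btl tcp --level 9 | grep buf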
On Fri, May 5, 2017 at 10:29 AM, Alberto Ortiz wrote:
> I am using version 1.10.6 on archlinux.
This error should really be posted to the Caffe mailing list. This is an
error with Caffe. Most likely, you are not specifying the location of
your Open MPI installation properly. And Caffe definitely depends on
Open MPI, per your errors:
.build_release/lib/libcaffe.so: undefined reference to
`ompi
I am using version 1.10.6 on archlinux.
The option I should pass to mpirun would then be "-mca btl_tcp_mtu 13000"?
Just to be sure.
Thank you,
Alberto
On 5 May 2017 at 16:26, "r...@open-mpi.org" wrote:
If you are looking to use TCP packets, then you want to set the send/recv
buffer size in the TCP btl, not the openib one, yes?
Also, what version of OMPI are you using?
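If that is the goal, something along these lines would set the TCP btl
socket buffers (the 1 MiB value and application name are just examples):
# raise the TCP btl send/receive socket buffers to 1 MiB
mpirun --mca btl tcp,vader,self \
    --mca btl_tcp_sndbuf 1048576 \
    --mca btl_tcp_rcvbuf 1048576 \
    ./my_mpi_app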
On May 5, 2017, at 7:16 AM, Alberto Ortiz wrote:
Hi,
I have a program running with Open MPI over a network using a gigabit
switch. This switch supports jumbo frames up to 13,000 bytes, so, in order
to test and see if it would be faster communicating with these frame
lengths, I am trying to use them with my program. I have set the MTU on
each node to 13,000.
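For reference, the per-node setup would be something like this (eth0 is a
placeholder, and an MTU of 13,000 assumes the NIC driver accepts it):
# on every node: raise the interface MTU to match the switch
sudo ip link set dev eth0 mtu 13000
ip link show eth0                 # verify the new MTU took effect
# check that jumbo frames pass unfragmented end-to-end:
# ICMP payload = 13000 - 20 (IP header) - 8 (ICMP header) = 12972
ping -M do -s 12972 -c 3 othernode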
On May 4, 2017, at 5:36 PM, Nathan Hjelm wrote:
This behavior is clearly specified in the standard. From MPI 3.1 § 11.2.4:
Thanks - I see it now. Some of the text is so similar to the online man
pages that I must have glossed over that critical phrase. It remains
unclear.
On 05/05/17 12:10, marcin.krotkiewski wrote:
> in my case it was enough to allocate my own arrays using posix_memalign.
Be happy. This did not work for Fortran codes.
But since that worked, it means that 1.10.6 somehow deals better with unaligned
data. Does anyone know the reason for this?
In 1.
Thanks, Paul. That was useful! Although in my case it was enough to
allocate my own arrays using posix_memalign. The internals of Open MPI
did not play any role, which I guess is quite natural assuming Open MPI
doesn't reallocate.
But since that worked, it means that 1.10.6 somehow deals better with
unaligned data.
Ben,
I would regard the serialization as an implementation issue, not a
standards issue, so it would still be a valid approach to perform
the operations the way the benchmark does.
As far as I know, Nathan Hjelm did a major overhaul of the RMA
handling in Open MPI 2.x, so my first suggestion would be to try the
2.x series.