Hi,
Sorry to reply to an old thread, but we’re seeing this message with 2.1.0 built
against CUDA 8.0. We're using libcuda.so.375.39. Has anyone had any luck
suppressing these messages?
Thanks,
Ben
> On 27 Mar 2017, at 7:13 pm, Roland Fehrenbacher wrote:
>
>> "SJ" == Sylvain Jeaugey wri
William,
The link error clearly shows that libcaffe.so does require the MPI C++ bindings.
Did you build Caffe from a fresh tree?
What if you run:
ldd libcaffe.so
nm libcaffe.so | grep -i ompi
If libcaffe.so does require the MPI C++ bindings, it should depend on them
(otherwise the way it was built is questionable).
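If the goal is instead to build without the (long-deprecated) MPI C++ bindings at
all, a minimal sketch assuming Open MPI's OMPI_SKIP_MPICXX guard:

#define OMPI_SKIP_MPICXX 1  /* ask Open MPI's mpi.h not to pull in the C++ bindings */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* only the C API is visible here */
    MPI_Finalize();
    return 0;
}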
This behavior is clearly specified in the standard. From MPI 3.1 § 11.2.4:
"In the case of a window created with MPI_WIN_CREATE_DYNAMIC, the target_disp
for all RMA functions is the address at the target; i.e., the effective
window_base is MPI_BOTTOM and the disp_unit is one." For dynamic windows, the
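A minimal sketch of what that means in practice (untested; assumes exactly two
ranks, and every name here is illustrative): the target attaches memory to the
dynamic window and publishes the address it gets from MPI_Get_address, and the
origin passes that address directly as target_disp:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Win win;
    MPI_Aint my_addr, remote_addr;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    double *buf = calloc(1024, sizeof(double));
    MPI_Win_attach(win, buf, 1024 * sizeof(double));

    /* disp_unit is 1 for dynamic windows, so the displacement must be
     * the absolute address at the target; exchange addresses first. */
    MPI_Get_address(buf, &my_addr);
    MPI_Sendrecv(&my_addr, 1, MPI_AINT, 1 - rank, 0,
                 &remote_addr, 1, MPI_AINT, 1 - rank, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Win_lock(MPI_LOCK_SHARED, 1 - rank, 0, win);
    double one = 1.0;
    MPI_Put(&one, 1, MPI_DOUBLE, 1 - rank, remote_addr,
            1, MPI_DOUBLE, win);
    MPI_Win_unlock(1 - rank, win);

    MPI_Win_detach(win, buf);
    MPI_Win_free(&win);
    free(buf);
    MPI_Finalize();
    return 0;
}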
I know this could possibly be off-topic, but the errors are OpenMPI errors,
and I figured that if anyone could shed light on their nature, it would be
this group:
CXX/LD -o .build_release/tools/upgrade_solver_proto_text.bin
g++ .build_release/tools/upgrade_solver_proto_text.o -o
.build_re
I have encountered a problem that seems well suited to dynamic windows with
one-sided communication. To verify my understanding, I put together a simple
demo code (attached). My initial attempt consistently crashed until I
stumbled upon passing the base address of the attached memory on the
Is there any way to issue simultaneous MPI_Accumulate() requests to
different targets, then? I need to update a distributed array, and this
serializes all of the communication.
Ben
On Thu, May 4, 2017 at 5:53 AM, Marc-André Hermanns <
m.a.herma...@fz-juelich.de> wrote:
> Dear Benjamin,
>
> as f
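One way to avoid serializing the updates is a minimal sketch like the
following (the helper and all names are illustrative, not from the thread):
issue the accumulates to the different targets inside a single passive-target
epoch and complete them together with MPI_Win_flush_all, so nothing in MPI
forces them to serialize; whether they actually overlap is up to the
implementation.

#include <mpi.h>

void scatter_add(MPI_Win win, const int *targets, const MPI_Aint *disps,
                 const double *vals, int n)
{
    MPI_Win_lock_all(0, win);
    for (int i = 0; i < n; ++i)
        MPI_Accumulate(&vals[i], 1, MPI_DOUBLE, targets[i], disps[i],
                       1, MPI_DOUBLE, MPI_SUM, win);
    MPI_Win_flush_all(win);   /* completes all pending operations at once */
    MPI_Win_unlock_all(win);
}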
Note that 2.x lost the memory hooks, cf. the thread
https://www.mail-archive.com/devel@lists.open-mpi.org/msg00039.html
The numbers you have look like the 20% loss we have also seen with 4.x vs.
1.10.x versions. Try the dirty hook with 'memalign'; LD_PRELOAD this:
$ cat alignmalloc64.c
/* Dirk Sc
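The attachment is truncated above; a hypothetical reconstruction of such a
64-byte-alignment shim (not the original file) might look like:

#include <stddef.h>
#include <malloc.h>

/* Build and use, e.g.:
 *   gcc -shared -fPIC -o alignmalloc64.so alignmalloc64.c
 *   LD_PRELOAD=./alignmalloc64.so ./your_app
 */
void *malloc(size_t size)
{
    /* glibc's memalign does not itself call malloc, so overriding
     * malloc alone works, and free() accepts the result. */
    return memalign(64, size);
}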
Dear Benjamin,
as far as I understand the MPI standard, RMA operations are non-blocking
in the sense that you need to complete them with a separate call
(flush/unlock/...).
I cannot find the place in the standard right now, but I think an
implementation is allowed to either buffer RMA requests or block.
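A minimal sketch of that completion pattern (names illustrative): the put is
only guaranteed complete once the separate synchronization call returns.

#include <mpi.h>

void put_and_complete(MPI_Win win, int target, const double *buf,
                      int count, MPI_Aint disp)
{
    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
    MPI_Put(buf, count, MPI_DOUBLE, target, disp, count, MPI_DOUBLE, win);
    MPI_Win_flush(target, win); /* now complete at origin and target */
    MPI_Win_unlock(target, win);
}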
Hi, everyone,
I ran some bandwidth tests on two different systems with Mellanox IB
(FDR and EDR). I compiled the three supported versions of openmpi
(1.10.6, 2.0.2, 2.1.0) and measured the time it takes to send/receive
4MB arrays of doubles between two hosts connected to the same IB switch.
MP
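For context, a hedged reconstruction of the kind of test described, since the
original code is not shown (repetition count and other parameters are
illustrative): ping-pong a 4 MB array of doubles between ranks 0 and 1 and
report the achieved bandwidth.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NDOUBLES (4 * 1024 * 1024 / sizeof(double))
#define REPS 100

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = calloc(NDOUBLES, sizeof(double));
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; ++i) {
        if (rank == 0) {
            MPI_Send(buf, NDOUBLES, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, NDOUBLES, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, NDOUBLES, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, NDOUBLES, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;
    if (rank == 0)   /* bytes moved both ways, divided by elapsed time */
        printf("%.1f MB/s\n",
               2.0 * REPS * NDOUBLES * sizeof(double) / dt / 1e6);
    free(buf);
    MPI_Finalize();
    return 0;
}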