Re: [OMPI users] Suppressing Nvidia warnings

2017-05-04 Thread Ben Menadue
Hi, Sorry to reply to an old thread, but we’re seeing this message with 2.1.0 built against CUDA 8.0. We're using libcuda.so.375.39. Has anyone had any luck suppressing these messages? Thanks, Ben > On 27 Mar 2017, at 7:13 pm, Roland Fehrenbacher wrote: > >> "SJ" == Sylvain Jeaugey wri

Re: [OMPI users] Strange OpenMPI errors showing up in Caffe rc5 build

2017-05-04 Thread gilles
William, the link error clearly shows libcaffe.so requires the MPI C++ bindings. Did you build Caffe from a fresh tree? What do you get from ldd libcaffe.so and from nm libcaffe.so | grep -i ompi? If libcaffe.so does require the MPI C++ bindings, it should depend on them (otherwise the way it was built is questionable

Re: [OMPI users] How to use MPI_Win_attach() (or how to specify the 'displ' on a remote process)

2017-05-04 Thread Nathan Hjelm
This behavior is clearly specified in the standard. From MPI 3.1 § 11.2.4: In the case of a window created with MPI_WIN_CREATE_DYNAMIC, the target_disp for all RMA functions is the address at the target; i.e., the effective window_base is MPI_BOTTOM and the disp_unit is one. For dynamic windows, the
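
A minimal sketch of the pattern described above (illustrative code, not from the thread; assumes at least two ranks): the target attaches its memory, publishes the absolute address obtained with MPI_Get_address, and the origin uses that address directly as target_disp.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win win;
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    double buf = 0.0;
    MPI_Aint remote_addr = 0;

    if (rank == 1) {
        /* Target: attach local memory and take its absolute address. */
        MPI_Win_attach(win, &buf, sizeof(buf));
        MPI_Get_address(&buf, &remote_addr);
    }
    /* The address is only meaningful on the target, so it has to be
     * exchanged explicitly (here: a broadcast from rank 1). */
    MPI_Bcast(&remote_addr, 1, MPI_AINT, 1, MPI_COMM_WORLD);

    if (rank == 0) {
        double val = 42.0;
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        /* target_disp is the absolute target-side address; disp_unit is 1. */
        MPI_Put(&val, 1, MPI_DOUBLE, 1, remote_addr, 1, MPI_DOUBLE, win);
        MPI_Win_unlock(1, win);
    }

    MPI_Barrier(MPI_COMM_WORLD);   /* simple ordering for the print below
                                      (unified memory model assumed) */
    if (rank == 1) {
        printf("rank 1 sees %g\n", buf);
        MPI_Win_detach(win, &buf);
    }
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}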

[OMPI users] Strange OpenMPI errors on building Caffe 1.0

2017-05-04 Thread Lane, William
I know this could possibly be off-topic, but the errors are OpenMPI errors, and if anyone could shed light on the nature of these errors, I figure it would be this group: CXX/LD -o .build_release/tools/upgrade_solver_proto_text.bin g++ .build_release/tools/upgrade_solver_proto_text.o -o .build_re

[OMPI users] Strange OpenMPI errors showing up in Caffe rc5 build

2017-05-04 Thread Lane, William
I know this could possibly be off-topic, but the errors are OpenMPI errors, and if anyone could shed light on the nature of these errors, I figure it would be this group: CXX/LD -o .build_release/tools/upgrade_solver_proto_text.bin g++ .build_release/tools/upgrade_solver_proto_text.o -o .build_r

[OMPI users] How to use MPI_Win_attach() (or how to specify the 'displ' on a remote process)

2017-05-04 Thread Clune, Thomas L. (GSFC-6101)
I have encountered a problem that seems well suited to dynamic windows with one-sided communication. To verify my understanding, I put together a simple demo code (attached). My initial attempt consistently crashed until I stumbled upon passing the base address of the attached memory on the

Re: [OMPI users] MPI_Accumulate() Blocking?

2017-05-04 Thread Benjamin Brock
Is there any way to issue simultaneous MPI_Accumulate() requests to different targets, then? I need to update a distributed array, and this serializes all of the communication. Ben On Thu, May 4, 2017 at 5:53 AM, Marc-André Hermanns <m.a.herma...@fz-juelich.de> wrote: > Dear Benjamin, > > as f

Re: [OMPI users] Performance issues: 1.10.x vs 2.x

2017-05-04 Thread Paul Kapinos
Note that 2.x lost the memory hooks, cf. the thread https://www.mail-archive.com/devel@lists.open-mpi.org/msg00039.html The numbers you have look like the 20% loss we have also seen with 4.x vs. 1.10.x versions. Try the dirty hook with 'memalign', LD_PRELOAD this: $ cat alignmalloc64.c /* Dirk Sc
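
The alignmalloc64.c listing is cut off in the preview above; what follows is only a generic illustration of this kind of LD_PRELOAD alignment shim (not the file from the post). It overrides malloc() so every allocation comes back 64-byte aligned; calloc() and realloc() are left untouched in this minimal form.

/* Illustrative shim, not the original alignmalloc64.c.
 * Build:  gcc -shared -fPIC -O2 -o alignmalloc64.so alignmalloc64.c
 * Use:    LD_PRELOAD=$PWD/alignmalloc64.so mpirun ...            */
#include <stddef.h>
#include <malloc.h>   /* glibc memalign() */

void *malloc(size_t size)
{
    /* glibc's memalign() does not call the public malloc(), so this
     * interposed malloc() does not recurse; free() handles the
     * memalign'd pointers as usual. */
    return memalign(64, size);
}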

Re: [OMPI users] MPI_Accumulate() Blocking?

2017-05-04 Thread Marc-André Hermanns
Dear Benjamin, as far as I understand the MPI standard, RMA operations are non-blocking in the sense that you need to complete them with a separate call (flush/unlock/...). I cannot find the place in the standard right now, but I think an implementation is allowed to either buffer RMA requests or blo
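
As an illustration of that separation between initiation and completion (a sketch, not code from the thread): several MPI_Accumulate calls to different targets can be issued inside one passive-target epoch and completed together with a single flush.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = 0.0;                 /* each rank exposes one double */
    MPI_Win win;
    MPI_Win_create(&local, sizeof(local), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    double one = 1.0;
    MPI_Win_lock_all(0, win);           /* one epoch covering all targets */
    for (int t = 0; t < size; t++) {
        if (t == rank) continue;
        /* Each call only initiates the update ... */
        MPI_Accumulate(&one, 1, MPI_DOUBLE, t, 0, 1, MPI_DOUBLE, MPI_SUM, win);
    }
    MPI_Win_flush_all(win);             /* ... and this completes them all. */
    MPI_Win_unlock_all(win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Whether the individual accumulates actually overlap, get buffered, or are handled one by one inside the epoch is, as noted above, left to the implementation.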

[OMPI users] Performance issues: 1.10.x vs 2.x

2017-05-04 Thread marcin.krotkiewski
Hi, everyone, I ran some bandwidth tests on two different systems with Mellanox IB (FDR and EDR). I compiled the three supported versions of openmpi (1.10.6, 2.0.2, 2.1.0) and measured the time it takes to send/receive 4MB arrays of doubles between two hosts connected to the same IB switch. MP
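
A rough sketch of the kind of measurement described (illustrative only; the 4 MB payload of doubles matches the post, the iteration count and ping-pong structure are assumptions):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NDOUBLES (4 * 1024 * 1024 / sizeof(double))  /* 4 MB of doubles */
#define ITERS    100

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = calloc(NDOUBLES, sizeof(double));
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {                /* ping-pong between ranks 0 and 1 */
            MPI_Send(buf, NDOUBLES, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, NDOUBLES, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, NDOUBLES, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, NDOUBLES, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
    }

    double dt = MPI_Wtime() - t0;
    if (rank == 0)                      /* 2 messages per iteration */
        printf("%.1f MB/s\n",
               2.0 * ITERS * NDOUBLES * sizeof(double) / dt / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}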