Re: [OMPI users] Why do I need a C++ linker while linking in MPI C code with CUDA?

2016-03-20 Thread Erik Schnetter
> […]e mpic++ both for compiling and linking, there are no errors either.
>
> Thanks in advance
> Durga
>
> Life is complex. It has real and imaginary parts.

-- Erik Schnetter
http://www.perimeterinstitute.ca/personal/eschnetter/
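The observation above (linking succeeds when mpic++ drives the link) is consistent with CUDA's runtime library depending on the C++ standard library. A sketch of the two workable link lines — file names are hypothetical, and the `-lstdc++` alternative assumes a GNU toolchain:

```shell
# Compile the MPI C code and the CUDA kernels separately:
mpicc -c main.c    -o main.o
nvcc  -c kernel.cu -o kernel.o

# libcudart pulls in C++ runtime symbols, so let the C++ wrapper link:
mpic++ main.o kernel.o -lcudart -o app

# Alternative, staying with the C wrapper (GNU toolchain assumed):
# mpicc main.o kernel.o -lcudart -lstdc++ -o app
```

Either way, the point is that the final link step, not the compile step, must supply the C++ runtime.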

Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Erik Schnetter
> […] how to help me learn? The mpirun man page is a bit formidable in the
> pinning part, so maybe I've missed an obvious answer.
>
> Matt
> --
> Matt Thompson

-- Erik Schnetter
http://www.perimeterinstitute.ca/personal/eschnetter/
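For hybrid MPI-OpenMP pinning with Open MPI's mpirun, the usual levers are `--map-by`, `--bind-to`, and `--report-bindings`. A sketch for a hypothetical 2-socket, 16-core node (the rank and thread counts are assumptions, not from the original thread):

```shell
# One rank per socket, 8 processing elements reserved per rank for
# OpenMP threads; each rank bound to its cores.
export OMP_NUM_THREADS=8
mpirun -np 4 --map-by ppr:1:socket:pe=8 --bind-to core \
       --report-bindings ./hybrid_app
```

`--report-bindings` prints each rank's actual binding at startup, which is the quickest way to check that the mapping matches what was intended.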

[OMPI users] Buffer allocation at startup

2015-11-18 Thread Erik Schnetter
I want to set process affinity at startup. Currently I do this after calling MPI_Init, but I suspect that MPI_Init already allocates communication buffers, which may thus end up on the wrong socket. Is this indeed a problem? Is there a way to avoid it?

-erik

-- Erik Schnetter

[OMPI users] Maximum message size in OpenMPI 1.6.5?

2015-11-16 Thread Erik Schnetter
[…] should I use there? For example, would checking that the message size in bytes is less than INT_MAX be more reasonable? I realize that this is a bit of a historic question (version 1.6.5).

-erik

-- Erik Schnetter
http://www.perimeterinstitute.ca/personal/eschnetter/

Re: [OMPI users] Building OpenMPI 1.8.7 on XC30

2015-07-29 Thread Erik Schnetter
On Tue, Jul 28, 2015 at 11:52 PM, Mark Santcroos wrote:
> Hi Erik,
>
> On 29 Jul 2015, at 3:35, Erik Schnetter wrote:
> > I was able to build openmpi-v2.x-dev-96-g918650a without problems on
> > Edison, and also on other systems.
>
> And does it also work as expec[…]

Re: [OMPI users] Building OpenMPI 1.8.7 on XC30

2015-07-28 Thread Erik Schnetter
> […]ion evidenced by recent "asynchrousity".
>
> i will push a fix tomorrow.
>
> in the mean time, you can
>     mpirun --mca oob ^tcp ...
> (if you run on one node only)
> or
>     mpirun --mca oob ^usock
> (if you have an OS X cluster ...)
>
> Cheers,
> Gilles

Re: [OMPI users] Building OpenMPI 1.8.7 on XC30

2015-07-25 Thread Erik Schnetter
[…] wrote:
> Hi Erik,
>
> Do you really want 1.8.7? Otherwise you might want to give latest master
> a try. Others including myself had more luck with that on Crays,
> including Edison.
>
> Mark
>
> On 25 Jul 2015, at 1:35, Erik Schnetter wrote:
> > I w[…]

[OMPI users] Building OpenMPI 1.8.7 on XC30

2015-07-24 Thread Erik Schnetter
[…]ib' '--with-wrapper-libs=-lhwloc -lpthread'

This builds and installs fine, and works when running on a single node. However, multi-node runs are stalling: the queue starts the job, but mpirun produces no output. The "-v" option to mpirun doesn't help. When I use aprun […]