>>> I use mpic++ both for
>>> compiling and linking; there are no errors either.
>>>
>>> Thanks in advance
>>> Durga
>>>
>>> Life is complex. It has real and imaginary parts.
--
Erik Schnetter
http://www.perimeterinstitute.ca/personal/eschnetter/
> how to help me learn? The man mpirun page
> is a bit formidable in the pinning part, so maybe I've missed an obvious
> answer.
>
> Matt
> --
> Matt Thompson
>
> Man Among Men
> Fulcrum of History
--
Erik Schnetter
http://www.perimeterinstitute.ca/personal/eschnetter/
I want to set process affinity at startup. Currently I do this after
calling MPI_Init. I suspect that MPI_Init already allocates communication
buffers, which may then end up on the wrong socket. Is this indeed a
problem? Is there a way to avoid it?
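To make this concrete, what I have in mind is roughly the following
sketch. It assumes hwloc, and it uses Open MPI's
OMPI_COMM_WORLD_LOCAL_RANK environment variable to pick a socket per
rank; the binding scheme itself is only illustrative:

#include <stdlib.h>
#include <hwloc.h>
#include <mpi.h>

int main(int argc, char **argv) {
  hwloc_topology_t topo;
  hwloc_topology_init(&topo);
  hwloc_topology_load(&topo);

  /* Node-local rank, as exported by Open MPI's launcher. */
  const char *lr = getenv("OMPI_COMM_WORLD_LOCAL_RANK");
  int local_rank = lr ? atoi(lr) : 0;

  /* HWLOC_OBJ_SOCKET is the hwloc 1.x name (later hwloc versions
     call it HWLOC_OBJ_PACKAGE). */
  int nsock = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_SOCKET);
  if (nsock > 0) {
    hwloc_obj_t sock = hwloc_get_obj_by_type(topo, HWLOC_OBJ_SOCKET,
                                             local_rank % nsock);
    /* Bind execution and future memory allocations to this socket
       *before* MPI_Init, so that buffers MPI_Init allocates land on
       the right NUMA node. */
    hwloc_set_cpubind(topo, sock->cpuset, HWLOC_CPUBIND_PROCESS);
    hwloc_set_membind(topo, sock->cpuset, HWLOC_MEMBIND_BIND, 0);
  }

  MPI_Init(&argc, &argv);
  /* ... application ... */
  MPI_Finalize();
  hwloc_topology_destroy(topo);
  return 0;
}

(Compile with mpicc and link with -lhwloc.)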
-erik
--
Erik Schnetter
should I use there? For example, would checking that the message
size in bytes is less than INT_MAX be more reasonable?
I realize that this is a bit of a historical question (version 1.6.5).
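For concreteness, the guard I have in mind would look roughly like the
following; checked_send is a hypothetical wrapper, not anything provided
by MPI or Open MPI:

#include <limits.h>
#include <stddef.h>
#include <mpi.h>

/* Refuse sends whose element count or total byte size does not fit in
   an 'int': MPI's count argument is a plain int, and in old Open MPI
   versions some internal byte lengths may have been as well. */
int checked_send(void *buf, size_t count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm) {
  int type_size;
  MPI_Type_size(type, &type_size);
  if (count > (size_t)INT_MAX ||
      count * (size_t)type_size > (size_t)INT_MAX)
    return MPI_ERR_COUNT;  /* message too large for int-sized counts */
  return MPI_Send(buf, (int)count, type, dest, tag, comm);
}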
-erik
--
Erik Schnetter
http://www.perimeterinstitute.ca/personal/eschnetter/
On Tue, Jul 28, 2015 at 11:52 PM, Mark Santcroos wrote:
> Hi Erik,
>
> > On 29 Jul 2015, at 3:35, Erik Schnetter wrote:
> > I was able to build openmpi-v2.x-dev-96-g918650a without problems on
> Edison, and also on other systems.
>
> And does it also work as expected?
> ion evidenced by recent "asynchrousity".
>
> i will push a fix tomorrow.
>
> in the meantime, you can
> mpirun --mca oob ^tcp ...
> (if you run on one node only)
> or
> mpirun --mca oob ^usock
> (if you have an OS X cluster ...)
>
> Cheers,
>
> Gilles
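(A note on the syntax above: in Open MPI's MCA selection, a leading "^"
excludes the listed components instead of selecting them, so "--mca oob
^tcp" tells the out-of-band (oob) framework to use anything but its tcp
component. A full command line would look like

  mpirun --mca oob ^tcp -np 4 ./my_app

where the process count and executable are placeholders.)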
Mark Santcroos wrote:
> Hi Erik,
>
> Do you really want 1.8.7? Otherwise you might want to give the latest
> master a try. Others, including myself, have had more luck with that on
> Crays, including Edison.
>
> Mark
>
> > On 25 Jul 2015, at 1:35, Erik Schnetter wrote:
> >
> > I w
ib'
'--with-wrapper-libs=-lhwloc -lpthread'
This builds and installs fine, and works when running on a single node.
However, multi-node runs stall: the queue starts the job, but mpirun
produces no output. The "-v" option to mpirun doesn't help.
When I use aprun
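One way to get more detail than "-v" when mpirun hangs is to raise the
launcher's verbosity; plm_base_verbose and --debug-daemons are standard
Open MPI options, and the process count and executable below are
placeholders:

  # log what the process launch framework (plm) is doing,
  # and keep the remote daemons' stderr attached
  mpirun --mca plm_base_verbose 10 --debug-daemons -np 2 ./my_app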