Okay... this isn't a performance summary or anything like that. It's
just some information on what I was able to get to work, with a
couple of suggestions from Brian Barrett about building OMPI with
static libraries (possibly a problem with GNU libtool support for the
Intel compiler on OS X?).
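For anyone trying the same thing, the kind of configure invocation I mean looks roughly like this (the Intel compiler names, the install prefix, and the flag combination are illustrative placeholders, not the exact line used here):

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
                --enable-static --disable-shared --prefix=/opt/openmpi
    make all install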
Thanks.
On 7/18/06, Bert Wesarg wrote:
hi,
s anwar wrote:
> Thank you for the clarification. Why is MPI_COMM_SELF not the correct
> communicator for MPI_Comm_spawn(). My application will have a single
> master only.
yes, for a single master this should be the same, but I have never tried it.
>
> Also, can I merge the intercommunicator into an
Thank you for the clarification. Why is MPI_COMM_SELF not the correct
communicator for MPI_Comm_spawn()? My application will have a single master
only.
Also, can I merge the intercommunicator into an intracommunicator via
MPI_Intercomm_merge(intercomm, 0, &intracomm) and use MPI_Bcast(..., 0,
intracomm)?
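Spelled out, the pattern being asked about is roughly the following sketch (the worker executable name, the process count, and the payload are placeholders; MPI_Init/MPI_Finalize and error checking are omitted):

    /* master: spawn workers, merge the intercomm, then broadcast */
    MPI_Comm intercomm, intracomm;
    int buf = 42;
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
    MPI_Intercomm_merge(intercomm, 0, &intracomm);  /* master takes the low ranks */
    MPI_Bcast(&buf, 1, MPI_INT, 0, intracomm);      /* rank 0 is the master */

    /* worker: the matching calls in the spawned processes */
    MPI_Comm parent, intracomm;
    int buf;
    MPI_Comm_get_parent(&parent);
    MPI_Intercomm_merge(parent, 1, &intracomm);     /* workers take the high ranks */
    MPI_Bcast(&buf, 1, MPI_INT, 0, intracomm);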
hi,
yes, sorry for my first reply, my words were too rough.
a bcast over an intercomm works this way (in your words):
- your masters want to send a buffer to your slaves
- one of the masters must provide MPI_ROOT as the root in the MPI_BCAST call
- the other masters (if any) must provide MPI_PROC_NULL as the root
- all slaves must provide the rank of that root master (its rank within the masters' group) as the root argument
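Written out as code, that looks roughly like this (buf, count, and the test for which master sends are placeholders; with a single master the MPI_PROC_NULL branch never runs):

    /* master group: the processes that called MPI_Comm_spawn */
    if (i_am_the_sending_master)
        MPI_Bcast(buf, count, MPI_INT, MPI_ROOT, intercomm);
    else
        MPI_Bcast(buf, count, MPI_INT, MPI_PROC_NULL, intercomm);

    /* slave group: intercomm obtained from MPI_Comm_get_parent();
       root is the sending master's rank within the masters' group */
    MPI_Bcast(buf, count, MPI_INT, 0, intercomm);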
I don't think I blamed the implementation in any way in my original email.
My intent is to gain understanding of why my code does/should not work. I
assumed that I was not passing the correct intercommunicator. However, I am
at a loss on how to construct a proper intercommunicator in this case. Yo
On Tue, Jul 11, 2006 at 12:14:51PM -0400, Abhishek Agarwal wrote:
> Hello,
>
> Is there a way of providing a specific port number in MPI_Info when using the
> MPI_Open_port command, so that clients know which port number to connect to?
The other replies have covered this pretty well but if you are
dea
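For reference, the portable pattern is to treat the port string as opaque and hand it to the clients (for example via the MPI name service or a shared file) rather than pinning a particular port number; a sketch (the service name is arbitrary, error checking omitted):

    /* server */
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;
    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("my-service", MPI_INFO_NULL, port);  /* or write 'port' to a file */
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

    /* client */
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm server;
    MPI_Lookup_name("my-service", MPI_INFO_NULL, port);
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);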
> -----Original Message-----
> From: users-boun...@open-mpi.org
> [mailto:users-boun...@open-mpi.org] On Behalf Of Keith Refson
> Sent: Tuesday, July 18, 2006 6:21 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Openmpi, LSF and GM
>
> > > The arguments you want would look like:
> > >
> > >
Dear Brian,
Thanks for the help
Brian Barrett wrote:
> > The arguments you want would look like:
> >
> >mpirun -np X -mca btl gm,sm,self -mca btl_base_verbose 1 -mca
> > btl_gm_debug 1
Aha. I think I had misunderstood the syntax slightly, which explains why
I previously saw no debugging
Hi,
s anwar wrote:
> Please see attached source file.
>
> According to my understanding of MPI_Comm_spawn(), the intercommunicator
> returned is the same as the one returned by MPI_Comm_get_parent() in the
> spawned processes. I am assuming that there is one intercommunicator
> which contains all t
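One quick way to check what each side's intercommunicator contains is to compare the local and remote group sizes; a sketch (variable names are placeholders):

    /* parent, after MPI_Comm_spawn(..., &intercomm, ...) */
    int local, remote;
    MPI_Comm_size(intercomm, &local);          /* size of the spawning (local) group */
    MPI_Comm_remote_size(intercomm, &remote);  /* number of spawned processes */

    /* child, after MPI_Comm_get_parent(&parent) */
    MPI_Comm_size(parent, &local);             /* number of spawned processes */
    MPI_Comm_remote_size(parent, &remote);     /* size of the parents' group */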