Jeff:
> FWIW: I have rarely seen this to be the issue.
I've been bitten by similar situations before, but it may not have been Open MPI;
in any case it was a while back.
> In short, programs are erroneous that do not guarantee that all their
> outstanding requests have completed before calling finalize.
> Hi,
>
> Thanks for the reply. But this can not solve the problem.
Not sure whether that was your intended meaning (you wrote "can not" rather than
"did not"), but did you try it?
> The output indicates that both nodes hang at the second MPI_Wait, and
> neither one reaches MPI_Finalize.
Try putting an "MPI_Barrier()" call before your MPI_Finalize() [*]. I suspect
that one of the programs (the sending side) is calling Finalize before the
receiving side has processed the messages.
-bill
[*] pet peeve of mine : this should almost always be standard practice.
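To make that concrete, here is a minimal sketch in C; the Isend/Irecv pairing via
rank^1 is a placeholder, since the original program's communication pattern isn't
shown in this thread. The point is the ordering: complete every outstanding
request, then the barrier, then MPI_Finalize.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, token = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Placeholder traffic: pair ranks 0-1, 2-3, ... (assumes an even rank count). */
    int peer = rank ^ 1;
    if (rank % 2 == 0)
        MPI_Isend(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &req);
    else
        MPI_Irecv(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &req);

    /* 1. Make sure every outstanding request on this rank has completed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    /* 2. Bill's suggestion: synchronize so no rank enters Finalize while a
     *    peer is still working on its side of the exchange. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}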
Have you thought about trying out MPI_Scatter/Gather and at least seeing how
efficient the internal algorithms are?
If you are always going to be running on the same platform and want to
tune-n-tweak for that, then have at it. If you are going to run this code on
different platforms w/ different interconnects, the library's built-in collectives
are likely the better bet.
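A hedged sketch of the collective version in C; the chunk size, dummy input, and
doubling "computation" are placeholders, since the actual parameter-tree
evaluation isn't shown in this thread:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int chunk = 4;                  /* elements per rank: made-up value */
    double local[4], out[4];
    double *all = NULL, *results = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        all = malloc(size * chunk * sizeof(double));
        results = malloc(size * chunk * sizeof(double));
        for (int i = 0; i < size * chunk; i++)
            all[i] = i;                   /* dummy input data */
    }

    /* Let the library's tuned algorithm distribute the work ... */
    MPI_Scatter(all, chunk, MPI_DOUBLE, local, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    for (int i = 0; i < chunk; i++)
        out[i] = 2.0 * local[i];          /* placeholder computation */

    /* ... and collect the results the same way. */
    MPI_Gather(out, chunk, MPI_DOUBLE, results, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) { free(all); free(results); }
    MPI_Finalize();
    return 0;
}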
> The tree is not symmetrical in that the valid values for the 10th
> parameter depend on the values selected in the 0th to 9th parameters
> (all the ancestry in the tree); e.g., we may have many more nodes on
> the left of the tree than on the right, see attachment (I hope they're
> allowed).
Hicham:
> If I have 256 MPI processes in 1 communicator, am I able to split
> that communicator, then again split the resulting 2 subgroups, then
> again the resulting 4 subgroups and so on, until potentially having 256
> subgroups?
You can. But as the old saying goes: "just because you *can* doesn't mean you *should*."
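A hedged sketch of what that repeated split could look like in C; the
halving-by-rank color scheme and the 8 levels are assumptions for illustration,
not anything from Hicham's code:

#include <mpi.h>

int main(int argc, char **argv)
{
    const int levels = 8;                 /* 2^8 = 256 leaf subgroups: an assumption */
    MPI_Comm comm = MPI_COMM_WORLD;

    MPI_Init(&argc, &argv);

    for (int level = 0; level < levels; level++) {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        if (size == 1)
            break;                        /* nothing left to split */

        /* Halve the current communicator: lower ranks get color 0, upper get 1. */
        int color = (rank < size / 2) ? 0 : 1;
        MPI_Comm next;
        MPI_Comm_split(comm, color, rank, &next);

        if (comm != MPI_COMM_WORLD)
            MPI_Comm_free(&comm);         /* free intermediate communicators */
        comm = next;
    }

    if (comm != MPI_COMM_WORLD)
        MPI_Comm_free(&comm);
    MPI_Finalize();
    return 0;
}

Note that each MPI_Comm_split is collective over the communicator being split,
and the intermediate communicators should be freed as you go (as above) rather
than leaked.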
Depending on the datatype and its order in memory, the "Block,*" and "*,Block"
distributions (which we used to call "slabs" in 3D) may be implemented by a simple
scatter/gather call in MPI. The "Block,Block" distribution is a little more
complex, but if you take advantage of MPI's derived datatypes, you may be able to
handle it without much extra code.
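As a hedged illustration of the derived-datatype route (N, the block width, and
the point-to-point hand-off below are assumptions; the real decomposition isn't
shown here), a "*,Block" column block of a row-major matrix can be described with
MPI_Type_vector:

#include <mpi.h>
#include <stdlib.h>

#define N 8                               /* matrix dimension: made-up value */

int main(int argc, char **argv)
{
    int rank, size, cols;
    MPI_Datatype colblock;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    cols = N / size;                      /* assumes size divides N evenly */

    /* N rows, each contributing 'cols' consecutive doubles, stride N apart:
     * one "*,Block" column block of a row-major N x N matrix. */
    MPI_Type_vector(N, cols, N, MPI_DOUBLE, &colblock);
    MPI_Type_commit(&colblock);

    if (rank == 0) {
        double *A = malloc(N * N * sizeof(double));
        for (int i = 0; i < N * N; i++)
            A[i] = i;                     /* dummy matrix */
        /* Hand each other rank the block starting at column r*cols. */
        for (int r = 1; r < size; r++)
            MPI_Send(&A[r * cols], 1, colblock, r, 0, MPI_COMM_WORLD);
        free(A);
    } else {
        /* The strided block arrives as N*cols doubles; store them contiguously. */
        double *mycols = malloc(N * cols * sizeof(double));
        MPI_Recv(mycols, N * cols, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        free(mycols);
    }

    MPI_Type_free(&colblock);
    MPI_Finalize();
    return 0;
}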
On Jun 20, 2010, at 1:49 PM, Jack Bryan wrote:
Hi, all:
I need to design a task scheduler (not a PBS job scheduler) on an Open MPI cluster.
Quick question - why *not* PBS?
Using shell scripts with the Job Array and Dependent Jobs features of PBS Pro
(not sure about Maui/Torque or SGE) you can implement a lot of that scheduling
logic without rolling your own scheduler.
Actually the 'B' in MPI_Bsend() specifies that it is a blocking *buffered*
send. So if I remember my standards correctly, this call requires:
1) you will have to explicitly manage the send buffers via
MPI_Buffer_[attach|detach](), and
2) the send will block until a corresponding receive is posted.
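A minimal sketch of the buffer management in (1), in C, run with at least 2
ranks (the payload and the rank pairing are made up for illustration). Per the
MPI standard, MPI_Bsend copies the message into the buffer the user attached with
MPI_Buffer_attach and completes locally; MPI_Buffer_detach then blocks until the
buffered messages have actually been transmitted.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, payload = 42, incoming, bufsize;
    void *sendbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* (1) Explicitly attach a buffer big enough for the message plus overhead. */
    bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
    sendbuf = malloc(bufsize);
    MPI_Buffer_attach(sendbuf, bufsize);

    if (rank == 0) {
        /* Bsend copies 'payload' into the attached buffer and completes locally. */
        MPI_Bsend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&incoming, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* Detach blocks until the buffered message has been transmitted. */
    MPI_Buffer_detach(&sendbuf, &bufsize);
    free(sendbuf);

    MPI_Finalize();
    return 0;
}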
On 12/14/2009 11:11 PM, Dmitry Zaletnev wrote:
> Hi,
> is it possible to have NFS and openmpi running on different NICs?
Yes. Just make sure that the two subnets for the NICs don't overlap and
that your routing tables are correct.
As for channel bonding, I'll let someone who has actually used it comment on that.
Hi Amit,
Among these, the term that actually _fascinated_ me is a
"cluster". What I would like to know is: what can be called a cluster,
and what cannot?
I tested my parallel search program using 4 nodes running Linux
and Open MPI. So have I implemented a Linux cluster?
Yes, you have.