Hi,
I had a glance at the Open MPI source code, and there are several algorithms for
the MPI_Bcast function.
My question is: how is it decided which algorithm to use for a given MPI_Bcast call?
Is it based on the message size?
Could anyone give me a little more detailed information on this?
Thanks a lot.
Axida
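For background on the question above: the choice is made inside Open MPI's "tuned"
collective component, which applies built-in decision rules keyed on the message size
and the communicator size; the algorithm can also be forced by hand through the
coll_tuned MCA parameters (coll_tuned_use_dynamic_rules and coll_tuned_bcast_algorithm,
if I recall the names correctly). The C sketch below only illustrates the shape of such
a decision rule; the thresholds and the main() example are invented for illustration,
not Open MPI's actual cutoffs.

#include <stddef.h>
#include <stdio.h>

/* Illustration only: a message-size / communicator-size decision rule in
 * the spirit of Open MPI's coll/tuned component.  The thresholds below are
 * invented for the example; the real cutoffs live inside Open MPI and
 * differ between versions. */
enum bcast_alg { BCAST_BINOMIAL, BCAST_SPLIT_BINTREE, BCAST_PIPELINE };

static enum bcast_alg choose_bcast_alg(size_t msg_bytes, int comm_size)
{
    if (msg_bytes < 4096 || comm_size <= 4)
        return BCAST_BINOMIAL;        /* small messages or small communicators */
    else if (msg_bytes < (size_t)1 << 20)
        return BCAST_SPLIT_BINTREE;   /* medium messages */
    else
        return BCAST_PIPELINE;        /* large messages: pipelined chain/tree */
}

int main(void)
{
    /* e.g. a broadcast of 131072 doubles across 64 ranks */
    printf("chosen algorithm id: %d\n",
           choose_bcast_alg(131072 * sizeof(double), 64));
    return 0;
}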
Hi,
Does anybody know how to make use of shared memory in the Open MPI implementation?
Thanks
If you meant something else by shared memory, let us know
what you had in mind.
Elvedin Trnjanin wrote:
Shared memory is used for send-to-self scenarios such as if you're
making use of multiple slots on the same machine.
shan axida wrote:
Does anybody know how to make use of shared memory in the Open MPI
implementation?
Sent: Thursday, April 23, 2009 2:08:33 PM
Subject: Re: [OMPI users] SHARED Memory
shan axida wrote:
What I am asking is: if I use MPI_Send and MPI_Recv between processes on
the same node, does that mean shared memory is being used or not?
It (typically) does. (Some edge cases could occur.)
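To make that concrete (an illustrative example, not code from this thread): a plain
MPI_Send/MPI_Recv pair like the one below needs no source changes to use shared memory;
when both ranks are placed on the same node, Open MPI will normally carry the transfer
over its shared memory transport automatically.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* rank 0 sends to rank 1; the code is transport-agnostic */
        MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", buf);
    }

    MPI_Finalize();
    return 0;
}

Launching both ranks on one host (for example with two slots on the same machine, as
described above) is what makes the shared memory path eligible; which transport is
actually chosen depends on the BTL components that are built and enabled.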
Hi,
One more question:
I have executed MPI_Bcast() with 64 processes on a 16-node Ethernet cluster with
multiple links.
The result is shown in the file attached to this e-mail.
What is going on at a message size of 131072 doubles?
I have run it many times, but the result is still the same.
THANK YOU!
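The attachment is not reproduced here. A minimal sketch of the kind of timing loop
being described follows; the message sizes, repetition count, and the Barrier/Wtime
pattern are assumptions for the example, not the original benchmark code.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of a broadcast timing loop: measure the average time of
 * MPI_Bcast over a range of message sizes. */
int main(int argc, char **argv)
{
    const int reps = 20;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (size_t n = 1024; n <= (size_t)1 << 20; n *= 2) {  /* doubles per bcast */
        double *buf = malloc(n * sizeof(double));
        if (rank == 0)
            for (size_t i = 0; i < n; i++)
                buf[i] = (double)i;                        /* fill root's buffer */

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++)
            MPI_Bcast(buf, (int)n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        double t = (MPI_Wtime() - t0) / reps;

        if (rank == 0)
            printf("%8zu doubles (%7zu KB): %f s\n",
                   n, n * sizeof(double) / 1024, t);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}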
It may be that your network is such that you're incurring congestion and/or
retransmission at that size for some reason.
You could also be running up against memory bus congestion (I assume you mean 4
cores per node; are they NUMA or UMA?). But that wouldn't account for the huge
spike at 1MB.
131072 doubles should have been faster,
but 256 MByte... seems reasonable.
So the remaining mystery is the 6x or so spike at 128 MByte. Dunno.
How important is it to resolve that mystery?
shan axida wrote:
Sorry, I made a mistake in my calculation.
Not 131072 doubles but 131072 KB.
It means around 128 MB.
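(For reference, the arithmetic behind the correction: 131072 doubles x 8 bytes = 1 MB,
whereas 131072 KB x 1024 bytes is roughly 134 million bytes, i.e. 128 MB, hence the
much larger message size than first stated.)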
It may take a lot of effort to hunt this down, only to find that it's some oddity that has
no general relevance. I don't know if that's really the case, but I'm
just suggesting that it may make the most sense to let this one go.
shan axida wrote:
But exactly the same program gets a different result in another
Hi all,
I think there are several algorithms used in MPI_Bcast.
I am wondering how it is decided which one gets executed.
I mean, how is it decided which algorithm will be used?
Does it depend on the message size or something else?
Could somebody help me?
Thank you!
Hello all,
I want to configure NIS and MPI to use different networks.
For example, NIS uses eth0 and MPI uses eth1, something like that.
How can I do that?
Axida
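One way to handle the MPI side of this (a sketch, assuming the TCP BTL is in use;
MCA parameter names can differ slightly between Open MPI versions) is to restrict
Open MPI's TCP traffic to the MPI network, for example:

mpirun --mca btl_tcp_if_include eth1 --mca oob_tcp_if_include eth1 -np 4 ./a.out

NIS itself is configured at the operating-system level (ypbind and its configuration),
independently of Open MPI, so it can simply keep using eth0.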
Hi everyone,
I want to ask how to use multiple links (multiple NICs) with Open MPI.
For example, how can I assign a link to each process, if there are 4 links
and 4 processors on each node in our cluster?
Is the hostfile below a correct way to do it?
hostfile:
--
host1-eth0 slots=1
host1-eth1 slots=1
It's not obvious what it means if you want MPI_COMM_WORLD rank X to
only use link Y -- what does that mean to the other 4 MPI processes on the
other host (with whom you have assumedly assigned their own individual links as
well)?
You may also be limited by other factors (PCI bus speeds and
contention?), and you may run into secondary performance issues due to
contention on your hubs.
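For the record, with the TCP BTL you do not assign one link to one process: if several
interfaces are made available to Open MPI, the TCP BTL will open connections over all
of them and stripe large messages across the links. A sketch of such a setup follows
(each host listed once in the hostfile with slots=4, rather than once per interface;
the hostfile name and program name are placeholders, and option spellings can vary
between Open MPI versions):

mpirun -np 64 --hostfile myhosts \
    --mca btl tcp,sm,self \
    --mca btl_tcp_if_include eth0,eth1,eth2,eth3 ./bcast_test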
On May 28, 2009, at 11:06 PM, shan axida wrote:
> Thank you! Mr. Jeff Squyres,
> I have conducted a simple MPI_Bcast experiment in our cluster.
> The results are shown in
Thank you!
Axida.
From: Jeff Squyres
To: Open MPI Users
Sent: Friday, June 5, 2009 11:19:02 PM
Subject: Re: [OMPI users] How to use Multiple links with OpenMPI??
On Jun 4, 2009, at 3:42 AM, shan axida wrote:
> We have Dell powerconnect 2
Hi,
Would you please tell me, in a little more detail, how you did the experiment by
calling MPI_Test?
Thanks!
From: Lars Andersson
To: us...@open-mpi.org
Sent: Tuesday, June 9, 2009 6:11:11 AM
Subject: Re: [OMPI users] "Re: Best way to overlap computation