[OMPI users] How does OpenMPI decide which algorithm to use in MPI_Bcast?

2009-09-03 Thread shan axida
Hi, I had a glance at the Open MPI source code, and there are several algorithms for the MPI_Bcast function. My question is: how is the algorithm chosen for a given MPI_Bcast call? By message size? Could anyone give me a little more detail on this? Thanks a lot. Axida
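For reference, Open MPI's tuned collective component exposes MCA parameters for inspecting and overriding the broadcast algorithm. The exact parameter list varies by Open MPI version, and the benchmark binary name below is a placeholder, so treat this as a sketch:

```shell
# List the broadcast-related parameters of the tuned collective component
ompi_info --param coll tuned | grep bcast

# Bypass the built-in decision rules and force one broadcast algorithm
# (algorithm numbers map to e.g. linear, binomial, pipeline; see ompi_info)
mpirun --mca coll_tuned_use_dynamic_rules 1 \
       --mca coll_tuned_bcast_algorithm 6 \
       -np 64 ./my_bcast_benchmark
```

With dynamic rules disabled (the default), the component picks an algorithm itself from the message size and communicator size.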

[OMPI users] SHARED Memory

2009-04-22 Thread shan axida
Hi, does anybody know how to make use of shared memory in the Open MPI implementation? Thanks

Re: [OMPI users] SHARED Memory

2009-04-23 Thread shan axida
… memory, let us know what you had in mind. Elvedin Trnjanin wrote: Shared memory is used for send-to-self scenarios, such as when you're making use of multiple slots on the same machine. shan axida wrote: Does anybody know how to make use of shared memory in the Open MPI implementation?

Re: [OMPI users] SHARED Memory

2009-04-23 Thread shan axida
Sent: Thursday, April 23, 2009 2:08:33 PM Subject: Re: [OMPI users] SHARED Memory shan axida wrote: What I am asking is: if I use MPI_Send and MPI_Recv between processes on a node, does that mean shared memory is used or not? It (typically) does. (Some edge cases can occur.) Your …
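In practice, intra-node point-to-point traffic goes through Open MPI's shared-memory BTL when it is enabled. A hedged sketch of forcing and verifying this (the component is called `sm` in the 1.x series discussed here; names differ in later releases):

```shell
# Restrict Open MPI to the self, shared-memory, and TCP transports;
# ranks on the same node will then use the sm BTL for MPI_Send/MPI_Recv
mpirun --mca btl self,sm,tcp -np 4 ./a.out

# Confirm the shared-memory BTL is available in this build
ompi_info | grep "btl: sm"
```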

[OMPI users] MPI_Bcast from OpenMPI

2009-04-23 Thread shan axida
Hi, one more question: I have executed MPI_Bcast() with 64 processes on a 16-node Ethernet cluster with multiple links. The result is shown in the file attached to this e-mail. What is going on at a message size of 131072 doubles? I have executed it many times, but the result is still the same. Thank you!

[OMPI users] Fw: MPI_Bcast from OpenMPI

2009-04-23 Thread shan axida
- Forwarded Message From: shan axida To: Open MPI Users Sent: Thursday, April 23, 2009 2:32:08 PM Subject: MPI_Bcast from OpenMPI Hi, one more question: I have executed MPI_Bcast() with 64 processes on a 16-node Ethernet cluster with multiple links. The result is shown in the file …

Re: [OMPI users] MPI_Bcast from OpenMPI

2009-04-23 Thread shan axida
… such that you're incurring congestion and/or retransmission at that size for some reason. You could also be running up against memory bus congestion (I assume you mean 4 cores per node; are they NUMA or UMA?). But that wouldn't account for the huge spike at 1 MB. On Apr 23, 2009, …

Re: [OMPI users] MPI_Bcast from OpenMPI

2009-04-23 Thread shan axida
… memory bus congestion (I assume you mean 4 cores per node; are they NUMA or UMA?). But that wouldn't account for the huge spike at 1 MB. On Apr 23, 2009, at 1:32 AM, shan axida wrote: > Hi, > One more question: > I have executed MPI_Bcast() with 64 processes on a 16-node Ethernet cluster with multiple …

Re: [OMPI users] MPI_Bcast from OpenMPI

2009-04-23 Thread shan axida
… doubles should have been faster, but 256 MB... seems reasonable. So, the remaining mystery is the 6x or so spike at 128 MB. Dunno. How important is it to resolve that mystery? shan axida wrote: Sorry, I made a mistake in the calculation. Not 131072 doubles but 131072 KB. That means around 128 MB …
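The size correction is easy to verify: 131072 doubles is only 1 MiB, while 131072 KB is 128 MiB. A quick sanity check, assuming 8-byte doubles and binary (1024-based) units:

```python
# 131072 doubles at 8 bytes each: the size as originally (mis)reported
doubles_bytes = 131072 * 8
assert doubles_bytes == 1 * 1024**2      # 1 MiB

# 131072 KB: the corrected figure from the follow-up message
kb_bytes = 131072 * 1024
assert kb_bytes == 128 * 1024**2         # 128 MiB

print(doubles_bytes, kb_bytes)
```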

Re: [OMPI users] MPI_Bcast from OpenMPI

2009-04-24 Thread shan axida
… hunt this down, only to find that it's some oddity that has no general relevance. I don't know if that's really the case, but I'm just suggesting that it may make the most sense to let this one go. shan axida wrote: But exactly the same program gets a different result on another …

[OMPI users] OpenMPI MPI_Bcast Algorithms

2009-04-28 Thread shan axida
Hi all, I think there are several algorithms used in MPI_Bcast. I am wondering how it is decided which one is executed. Does it depend on the message size or something else? Would someone help me? Thank you!
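In Open MPI's tuned collective component, a fixed decision function picks a broadcast algorithm from the message size and communicator size. The thresholds below are invented purely for illustration (the real, measured cutoffs live in the coll_tuned source); this is only a sketch of the idea:

```python
def choose_bcast_algorithm(comm_size: int, message_bytes: int) -> str:
    """Toy decision function: pick a bcast algorithm by size.

    The cutoffs here are made up. Open MPI's actual decision
    function uses thresholds tuned from benchmarks per platform.
    """
    if message_bytes < 4096:
        return "binomial"        # small messages: low-latency tree
    if comm_size <= 8:
        return "split-binary"    # medium messages, small communicator
    return "pipeline"            # large messages: segmented pipeline

# A small broadcast on 64 ranks takes the latency-optimized tree
print(choose_bcast_algorithm(64, 1024))           # binomial
# A 128 MiB broadcast takes the bandwidth-optimized pipeline
print(choose_bcast_algorithm(64, 128 * 1024**2))  # pipeline
```

The shape matters more than the numbers: small messages favor low-latency trees, large messages favor segmented, bandwidth-friendly schemes.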

[OMPI users] How to configure NIS and MPI on separate NICs?

2009-05-12 Thread shan axida
Hello all, I want to configure NIS and MPI on different networks. For example, NIS uses eth0 and MPI uses eth1, something like that. How can I do that? Axida
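If it helps, Open MPI lets you pin both its out-of-band (startup) traffic and its MPI traffic to a given interface via MCA parameters, leaving the other interface free for NIS. A sketch, with interface names as assumptions and parameter names that vary slightly across versions:

```shell
# Keep MPI point-to-point traffic and Open MPI's out-of-band
# startup traffic on eth1, leaving eth0 to NIS and other services
mpirun --mca btl_tcp_if_include eth1 \
       --mca oob_tcp_if_include eth1 \
       -np 8 ./my_app
```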

[OMPI users] How to use Multiple links with OpenMPI?

2009-05-26 Thread shan axida
Hi everyone, I want to ask how to use multiple links (multiple NICs) with OpenMPI. For example, how can I assign a link to each process, if there are 4 links and 4 processors on each node in our cluster? Is this a correct way? hostfile: -- host1-eth0 slots=1 host1-eth1 slots=1 …
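For what it's worth, per-interface host aliases in a hostfile only affect where processes are launched, not which NIC the MPI traffic uses. Open MPI's TCP BTL can instead be told to use several interfaces at once, and it stripes large messages across them. A sketch, with interface names and the application name as assumptions:

```shell
# Let the TCP BTL use all four links; Open MPI stripes
# large-message traffic across every listed interface
mpirun --mca btl_tcp_if_include eth0,eth1,eth2,eth3 \
       -np 16 --hostfile hosts ./my_app
```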

Re: [OMPI users] How to use Multiple links with OpenMPI?

2009-05-28 Thread shan axida
… assumedly assigned their own individual links as well)? On May 26, 2009, at 12:24 AM, shan axida wrote: > Hi everyone, > I want to ask how to use multiple links (multiple NICs) with OpenMPI. > For example, how can I assign a link to each process, if there are 4 links > and 4 processors …

Re: [OMPI users] How to use Multiple links with OpenMPI?

2009-05-29 Thread shan axida
… if you want MPI_COMM_WORLD rank X to only use link Y -- what does that mean to the other 4 MPI processes on the other host (with whom you have assumedly assigned their own individual links as well)? On May 26, 2009, at 12:24 AM, shan axida wrote: > Hi everyone, > I want to ask how …

Re: [OMPI users] How to use Multiple links with OpenMPI?

2009-06-04 Thread shan axida
… PCI bus speeds and contention?), and you may run into secondary performance issues due to contention on your hubs. On May 28, 2009, at 11:06 PM, shan axida wrote: > Thank you, Mr. Jeff Squyres! > I have conducted a simple MPI_Bcast experiment in our cluster. > The results are shown in …

Re: [OMPI users] How to use Multiple links with OpenMPI?

2009-06-08 Thread shan axida
…, 4. Thank you! Axida. From: Jeff Squyres To: Open MPI Users Sent: Friday, June 5, 2009 11:19:02 PM Subject: Re: [OMPI users] How to use Multiple links with OpenMPI? On Jun 4, 2009, at 3:42 AM, shan axida wrote: > We have Dell powerconnect 2 …

Re: [OMPI users] "Re: Best way to overlap computation and transfer using MPI over TCP/Ethernet?"

2009-06-08 Thread shan axida
Hi, would you please tell me how you did the experiment by calling MPI_Test, in a little more detail? Thanks! From: Lars Andersson To: us...@open-mpi.org Sent: Tuesday, June 9, 2009 6:11:11 AM Subject: Re: [OMPI users] "Re: Best way to overlap computation …"
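The usual MPI shape of that experiment is a nonblocking receive (MPI_Irecv) plus periodic MPI_Test calls inside the compute loop, so the TCP stack can make progress while useful work happens. Since the MPI calls themselves need a cluster to run, here is the same poll-while-computing pattern in plain Python, with a Future standing in for the nonblocking request; this is an analogy to the structure, not Open MPI's API:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_transfer():
    """Stand-in for a nonblocking MPI receive in flight."""
    time.sleep(0.2)
    return "message"

with ThreadPoolExecutor(max_workers=1) as pool:
    request = pool.submit(slow_transfer)   # like MPI_Irecv posting a request
    chunks_done = 0
    while not request.done():              # like MPI_Test reporting flag == 0
        chunks_done += 1                   # one chunk of useful computation
        time.sleep(0.01)                   # stands in for real work
    data = request.result()                # like MPI_Wait once flag == 1

print(data, chunks_done)
```

The point of polling frequently rather than calling a single blocking wait is that each MPI_Test also gives the library a chance to move bytes, which is what makes the overlap effective over TCP.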