Thank you for your fast response.

 

I am launching 200 lightweight processes on two computers with 8 cores
each (Intel i7 processors). They are dedicated machines, interconnected
through a point-to-point Gigabit Ethernet link.

 

I read about oversubscribing nodes in the Open MPI documentation, and
for that reason I am using the option

 

--mca mpi_yield_when_idle 1
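For reference, the same MCA parameter can also be set through the environment (Open MPI reads variables of the form OMPI_MCA_<param_name>); the hostfile and program names in the commented launch line below are assumptions, not taken from this thread:

```shell
# Set the MCA parameter via Open MPI's environment-variable convention
# (OMPI_MCA_<param_name>), equivalent to passing --mca on the command line.
export OMPI_MCA_mpi_yield_when_idle=1

# Hypothetical equivalent launch line (hostfile/app names are assumptions):
# mpirun --mca mpi_yield_when_idle 1 -np 200 --hostfile hosts ./my_app

# Show the value the launcher would inherit.
echo "$OMPI_MCA_mpi_yield_when_idle"
```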

 

Regards

 

Pedro

 

 

 

>>On Feb 29, 2012, at 11:01 AM, Pinero, Pedro_jose wrote:

 

>> I am using OMPI v.1.5.5 to communicate 200 processes in a 2-computer
>> cluster connected through Ethernet, obtaining very poor performance.

 

> Let me make sure I'm parsing this statement properly: are you
> launching 200 MPI processes on 2 computers?  If so, do those computers
> each have 100 cores?

 

> I ask because oversubscribing MPI processes (i.e., putting more than 1
> process per core) will be disastrous to performance.

 

>> I have measured each operation time and I have realised that the
>> MPI_Gather operation takes about 1 second in each synchronization
>> (only an integer is sent in each case).  Is this time range normal or
>> do I have a synchronization problem?  Is there any way to improve this
>> performance?

 

> I'm afraid I can't say more without more information about your
> hardware and software setup.  Is this a dedicated HPC cluster?  Are you
> oversubscribing the cores?  What kind of Ethernet switching gear do you
> have?  ...etc.

 

>-- 

>Jeff Squyres

>jsquy...@cisco.com

> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/

 
