OMPI does use those methods, but they can't be used for something like shared 
memory. So if we want the performance benefit of shared memory, we have to 
poll.
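
To make that concrete, here is a minimal sketch (illustrative only, not OMPI's 
actual progress engine) of waiting for a flag in a shared-memory segment: the 
peer writes directly into memory, so there is no socket or file descriptor the 
kernel can watch, and the receiver has to keep checking the location itself.

    /* Illustrative sketch only; not OMPI's progress engine.
     * Wait for a peer process to set a flag in a shared-memory segment.
     * There is nothing here to select()/poll() on, so the waiter spins,
     * yielding the CPU between checks. */
    #include <sched.h>
    #include <stdatomic.h>

    static void wait_for_flag(const volatile atomic_int *flag)
    {
        while (atomic_load(flag) == 0) {
            sched_yield();   /* polite, but still shows up as ~100% CPU */
        }
    }

Blocking instead would require the sending process to make a system call to 
wake the receiver, and that extra trip through the kernel is exactly the 
latency the shared-memory path is trying to avoid.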


On Dec 13, 2010, at 9:00 AM, Hicham Mouline wrote:

> I don't understand one thing, though, and would appreciate your comments.
>  
> In various interfaces, such as network sockets or threads waiting for data, 
> there are solutions that avoid continuously checking the state of the socket 
> or of some queue; instead the waiter gets _interrupted_ when data is 
> available, as with condition variables for threads.
> I am not entirely clear on the details, but it seems that in those contexts 
> continuous polling is avoided, so actual CPU usage usually stays well below 
> 100%.
>  
> Why can't something similar be implemented for, e.g., a broadcast?
>  
> -----Original Message-----
> From: "Jeff Squyres" [jsquy...@cisco.com]
> Date: 13/12/2010 03:55 PM
> To: "Open MPI Users" 
> Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu
> 
> I think there *was* a decision, and it changed how sched_yield() effectively 
> operates; it may no longer do what we expect.
> 
> See this thread (the discussion of Linux/sched_yield() comes in the later 
> messages):
> 
> http://www.open-mpi.org/community/lists/users/2010/07/13729.php
> 
> I believe there are similar threads in the MPICH mailing list archives; that's 
> why Dave posted about it on the OMPI list.
> 
> We briefly discussed replacing OMPI's sched_yield() with a usleep(1), but it 
> was shot down.
> 
> 
> 
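
The usleep(1) alternative mentioned above would look something like this inside 
the same kind of loop; again, an illustrative sketch of the trade-off, not what 
OMPI actually does:

    /* Illustrative sketch: trade a little wakeup latency for far less CPU
     * pressure by sleeping between checks instead of calling sched_yield(). */
    #include <unistd.h>
    #include <stdatomic.h>

    static void wait_for_flag_usleep(const volatile atomic_int *flag)
    {
        while (atomic_load(flag) == 0) {
            usleep(1);   /* requests >= 1 us; the kernel may round up further */
        }
    }

Even a 1 microsecond sleep can become a noticeably longer delay once the 
process is descheduled, which is a real cost for latency-sensitive MPI codes.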
