FWIW: there is also ongoing work on direct process-to-process copies (as opposed to shared-memory bounce buffers). Various MPI implementations have had this technology for a while (e.g., QLogic's PSM-based MPI), and the Open-MX team now publishes the knem open source kernel module for this purpose (http://runtime.bordeaux.inria.fr/knem/).
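
To make the distinction concrete, here is a minimal MPI ping-pong between two on-node ranks, in C. This is only an illustrative sketch: whether a given message crosses the process boundary through a shared-memory bounce buffer (two copies) or through a single-copy kernel mechanism such as knem depends on the MPI implementation, how it was built, and the message size; none of that is visible at the MPI API level, and the 1 MiB payload below is an arbitrary choice, not a documented crossover point.

/* Minimal on-node ping-pong; run with two ranks on one node, e.g.:
 *   mpirun -np 2 ./pingpong
 * The MPI API hides the transport: small messages typically go through
 * a shared-memory bounce buffer (copy-in/copy-out), while large
 * messages may use a single-copy kernel mechanism (e.g., knem) if the
 * MPI library was built with such support. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int n = 1 << 20;              /* 1 MiB payload */
    char *buf = malloc(n);
    memset(buf, 0, n);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    if (rank == 0) {
        MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("ping-pong of %d bytes complete\n", n);
    } else {
        MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}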

On Jun 25, 2009, at 8:31 AM, Simone Pellegrini wrote:

Ralph Castain wrote:
> At the moment, I believe the answer is the main memory route. We have
> a project just starting here (LANL) to implement the cache-level
> exchange, but it won't be ready for release for a while.
Interesting! I am actually a PhD student, and my topic is the
optimization of MPI applications on multi-core architectures, so I
would be very interested in collaborating on such a project. Could you
give me more details about it (links/pointers)?

regards, Simone
>
>
> On Jun 25, 2009, at 2:39 AM, Simone Pellegrini wrote:
>
>> Hello,
>> I have a simple question for the shared memory (sm) module developers
>> of Open MPI.
>>
>> In the current implementation, is there any advantage to having a
>> shared cache among communicating processes?
>> For example, say P1 and P2 are placed on the same CPU on two
>> different physical cores with a shared cache, P1 wants to send a
>> message to P2, and the message is already in the cache.
>>
>> How is the message actually exchanged? Are the cache lines
>> invalidated, written back to main memory, and exchanged using some
>> DMA transfer... or is the in-cache copy of the message used directly
>> (avoiding main memory accesses)?
>>
>> thanks in advance, Simone P.
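
To illustrate the "main memory route" that Ralph describes above: in a copy-in/copy-out shared-memory transport, the sender copies the message into a segment that both processes have mapped, and the receiver copies it back out. The deliberately simplified sketch below shows that pattern; it is not Open MPI's actual sm BTL code (which uses lock-free FIFOs, message fragmentation, and proper memory barriers rather than a busy-wait flag), and whether those two copies are served from a shared cache or from DRAM is decided by the hardware's cache-coherence protocol, not by the MPI library.

/* Simplified copy-in/copy-out ("bounce buffer") illustration:
 * the data crosses the process boundary via memory both sides map.
 * Real transports replace the busy-wait flag below with lock-free
 * FIFOs and fragmentation; this is a conceptual sketch only. */
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define MSG_MAX 256

struct mailbox {
    atomic_int full;        /* 0 = empty, 1 = message present */
    char data[MSG_MAX];     /* the bounce buffer itself */
};

int main(void)
{
    /* Anonymous shared mapping, inherited across fork(). */
    struct mailbox *box = mmap(NULL, sizeof *box, PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (box == MAP_FAILED) return 1;
    atomic_init(&box->full, 0);

    if (fork() == 0) {                       /* child = sender (P1) */
        const char *msg = "hello from P1";
        memcpy(box->data, msg, strlen(msg) + 1);   /* copy #1: in */
        atomic_store(&box->full, 1);               /* publish */
        _exit(0);
    }

    /* parent = receiver (P2) */
    char out[MSG_MAX];
    while (atomic_load(&box->full) == 0)     /* spin until posted */
        ;
    memcpy(out, box->data, sizeof out);      /* copy #2: out */
    printf("P2 received: %s\n", out);

    wait(NULL);
    munmap(box, sizeof *box);
    return 0;
}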

--
Jeff Squyres
Cisco Systems
