It depends on what you're trying to do. Are you writing new components internal to Open MPI, or are you just trying to leverage OMPI's PML for some other project? Or are you writing MPI applications? Or ...?

On Nov 2, 2006, at 2:22 PM, Brian Budge wrote:

Thanks for the pointer; it was a very interesting read.

It seems that by default Open MPI uses the nifty pipelining trick of pinning pages while the transfer is in progress. The pinning can also be made (somewhat) permanent, and the registration state is cached so that subsequent use of the same buffer requires no re-registration. I gather it is possible to use pre-pinned memory, but do I need to do anything special to do so? I will already have some buffers pinned to allow DMAs to devices across PCI-Express, so it makes sense to use one pinned buffer for both and avoid memcpys.
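
Concretely, something like this is what I'm picturing (a minimal sketch, not anything out of the docs; the buffer size, tag, and ranks are arbitrary, and note that mlock() only keeps the pages resident -- the actual InfiniBand memory registration is done, and cached, inside Open MPI):

/* Minimal sketch: one page-aligned, mlock()ed buffer reused for both
 * device DMA staging and MPI transfers, avoiding intermediate memcpys.
 * mlock() keeps pages resident; IB registration happens inside Open MPI. */
#define _POSIX_C_SOURCE 200112L
#include <mpi.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    size_t len = 1 << 22;                       /* 4 MB staging buffer */
    void *buf = NULL;
    if (posix_memalign(&buf, (size_t)sysconf(_SC_PAGESIZE), len) != 0)
        MPI_Abort(MPI_COMM_WORLD, 1);
    mlock(buf, len);                            /* keep pages resident */

    /* ... device DMAs into buf across PCI-Express would go here ... */

    /* Repeated transfers from the same pinned region let Open MPI's
     * registration cache skip re-registering the memory each time. */
    if (rank == 0)
        MPI_Send(buf, (int)len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, (int)len, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    munlock(buf, len);
    free(buf);
    MPI_Finalize();
    return 0;
}

If I'm reading the FAQ right, running with something like "mpirun --mca mpi_leave_pinned 1 -np 2 ./a.out" tells Open MPI to leave registrations cached across calls, which sounds like the "permanent" pinning behavior described above.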

Are there any HOWTO tutorials or anything? I've searched around, but it's possible I just used the wrong search terms.

Thanks,
  Brian



On 11/2/06, Jeff Squyres <jsquy...@cisco.com> wrote:

This paper explains it pretty well:

     http://www.open-mpi.org/papers/euro-pvmmpi-2006-hpc-protocols/
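
The short version of the pipelined protocol in that paper: the message is cut into chunks, and registering chunk i+1 is overlapped with the RDMA of chunk i, so the registration cost is mostly hidden. A toy model of just that overlap pattern (register_chunk() and rdma_write() here are logging stand-ins made up for illustration, not our internal API):

/* Toy model of the register-while-transferring pipeline: while chunk i
 * is in flight, chunk i+1 is being registered. */
#include <stdio.h>
#include <stddef.h>

static void register_chunk(size_t off, size_t n)
{ printf("register [%zu, %zu)\n", off, off + n); }

static void rdma_write(size_t off, size_t n)
{ printf("rdma     [%zu, %zu)\n", off, off + n); }

static void pipelined_send(size_t total, size_t chunk)
{
    register_chunk(0, chunk < total ? chunk : total);   /* prime pipeline */
    for (size_t off = 0; off < total; ) {
        size_t n = total - off < chunk ? total - off : chunk;
        rdma_write(off, n);                    /* chunk i in flight...   */
        if (off + n < total) {                 /* ...overlapped with the */
            size_t m = total - (off + n);      /* registration of i+1    */
            register_chunk(off + n, m < chunk ? m : chunk);
        }
        off += n;
    }
}

int main(void)
{
    pipelined_send(10 * 4096, 4 * 4096);  /* 10 pages in 4-page chunks */
    return 0;
}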



On Nov 2, 2006, at 1:37 PM, Brian Budge wrote:

> Hi all -
>
> I'm wondering how DMA is handled in Open MPI when using the
> InfiniBand protocol.  In particular, will I get a speed gain if my
> read/write buffers are already pinned via mlock?
>
> Thanks,
>   Brian

--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
