Durga,
Currently there are two options for porting an interconnect to Open
MPI: one would be to use the BTL (Byte Transfer Layer) interface,
the other would be to use the MTL (Matching Transport Layer). The
difference is that the MTL is useful for APIs which expose matching
and other high-level semantics, such as Portals or MX. If your API
does not support these higher-level protocols, the BTL would be a
good choice.

There is some documentation on the BTL interface. One paper
describes the various layers at a high level and is found here:
http://www.open-mpi.org/papers/ipdps-2006. For more detail, there
was an extensive presentation at a developers conference, and the
slides are available here:
http://www.open-mpi.org/papers/workshop-2006/wed_01_pt2pt.pdf
This should give you a place to start.
Depending on the semantics of your API, writing a BTL can be done
quite quickly, although interconnect-specific optimization can take
some time. You mention that you would like to leverage your switch
architecture; I'm not sure what you mean by this. Is it for
point-to-point communication or for collective optimization? If you
need collective optimization you would need to touch a few other
components in Open MPI, but this could also be done in a modular way.
Let me know if you need any other assistance as we would be happy to
have another interconnect supported in Open MPI.
- Galen
On Aug 7, 2006, at 8:18 AM, Durga Choudhury wrote:
Hi all,
We have been using the Argonne MPICH (over TCP/IP) on our in-house
designed embedded multicomputer for the last several months with
satisfactory results. Our network technology is custom built and is
*not* InfiniBand (or any published standard, such as Myrinet)
based; this is due to the nature of our application. We are
currently running TCP/IP over our backplane network and using that
as the transport layer of MPICH.
For the next generation of our software release, we are planning to
write a low-level transport layer to leverage our switch
architecture, and we are considering changing the entire MPI
protocol stack to Open MPI. From what I have found so far, I'd have
to write routines to provide services similar to the ones found
under ompi/mca/btl/{tcp,mx,...}. I'd like to get some guidance on
how to do this. Is there a document about this? Has anybody on this
list done something similar before, and if so, what was the
difficulty level involved?
Thanks a lot in advance.
Durga
--
Devil wanted omnipresence;
He therefore created communists.
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users