George Bosilca wrote:
All in all we end up with a multi-hundred-KB library of which most
applications will use only about 10%.
Seems like it ought to be possible to do some coverage analysis for a
particular application and figure out what parts of the library (and
user code) to map into the SPEs' local store.
The main problem with MPI is the huge number of functions in the API.
Even if we implement only the 1.0 standard, we still have several
hundred functions around. Moreover, an MPI library is far from
being a simple self-sufficient library; it requires a way to start
and monitor processes.
Mike Houston wrote:
The main issue with this, and addressed at the end
of the report, is that the code size is going to be a problem as data
and code must live in the same 256KB in each SPE.
Just for reference, here are the stripped shared library sizes for
OpenMPI 1.2 as built on a Mercury Cell system.
That's pretty cool. The main issue with this, addressed at the end
of the report, is that the code size is going to be a problem, as data
and code must live in the same 256KB in each SPE. They mention dynamic
overlay loading, which is also how we deal with large code size, but
things get tricky.
Hi,
Has anyone investigated adding intra-chip Cell EIB messaging to OpenMPI?
It seems like it ought to work. This paper seems pretty convincing:
http://www.cs.fsu.edu/research/reports/TR-061215.pdf