Hi, Open MPI developers.
I'd like to report that MPI_STATUS_SIZE in mpif-config.h of openmpi-1.6.2
compiled with the -i8 Fortran option (Fort integer size: 8) is defined as 3,
which is half of a normal Open MPI build's MPI_STATUS_SIZE. I think this
should also be 6.
openmpi (Fort integer size: 8):
!
! Misce
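For context, this is the kind of code the constant affects - a minimal sketch
against the classic mpif.h interface (the program and variable names here are
just illustrative; run with at least two processes):

    program status_demo
      implicit none
      include 'mpif.h'

      ! The status argument of MPI_RECV must be an INTEGER array of
      ! MPI_STATUS_SIZE elements; with -i8 the default INTEGER is 8 bytes
      ! (Fort integer size: 8).
      integer :: status(MPI_STATUS_SIZE)
      integer :: ierr, rank, buf

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank == 0) then
         buf = 42
         call MPI_SEND(buf, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
      else if (rank == 1) then
         call MPI_RECV(buf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, status, ierr)
         print *, 'got', buf, 'from rank', status(MPI_SOURCE)
      end if
      call MPI_FINALIZE(ierr)
    end program status_demo

The concern in the report, presumably, is that an undersized MPI_STATUS_SIZE
leaves status() smaller than what the library expects to fill in.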
Is anyone aware of an MPI based HLA/RTI (DoD High Level Architecture
(HLA) / Runtime Infrastructure)?
---John
This would be a departure from the SPMD paradigm that seems central to
MPI's design. Each process would be a completely different program
(piece of code), and I'm not sure how well that would work using MPI.
BTW, MPI is commonly used in the parallel discrete event world for
communication between
FWIW: some of us are working on a variant of MPI that would indeed support what
you describe - it would support send/recv (i.e., MPI-1), but not collectives,
and so would allow communication between arbitrary programs.
Not specifically targeting HLA/RTI, though I suppose a wrapper that conformed
I just received an e-mail notifying me that MPI-2 supports MPMD. This
would seem to be just what the doctor ordered?
---John
On Mon, Apr 15, 2013 at 11:10 AM, Ralph Castain wrote:
> FWIW: some of us are working on a variant of MPI that would indeed support
> what you describe - it would support send/recv (i.e., MPI-1), but not
> collectives, and so would allow communication between arbitrary programs.
It isn't the fact that there are multiple programs being used - we support that
just fine. The problem with HLA/RTI is that it allows programs to come/go at
will - i.e., not every program has to start at the same time, nor complete at
the same time. MPI requires that all programs be executing at the same time.
That would seem to preclude its use for an RTI. Unless you have a card up
your sleeve?
---John
On Mon, Apr 15, 2013 at 11:23 AM, Ralph Castain wrote:
> It isn't the fact that there are multiple programs being used - we support
> that just fine. The problem with HLA/RTI is that it allows programs to
> come/go at will - i.e., not every program has to start at the same time,
> nor complete at the same time.
On Apr 15, 2013, at 8:29 AM, John Chludzinski
wrote:
> That would seem to preclude its use for an RTI. Unless you have a card up
> your sleeve?
One can relax those requirements while maintaining the ability to use send/recv
- you just can't use MPI collectives, and so the result doesn't conform to the
MPI standard.
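(As an aside for readers following along: the distinction is between MPI's
point-to-point calls, which involve only the two ranks concerned, and its
collectives, which every rank in the communicator must enter. A rough sketch
of the two in plain, standard MPI - not anything from the variant described
above; program name, tag, and variables are made up:

    program subset_demo
      implicit none
      include 'mpif.h'
      integer :: ierr, rank, nprocs, token, total

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

      ! Point-to-point: only the sender and the receiver take part in
      ! this exchange.
      if (rank == 0 .and. nprocs > 1) then
         token = 1
         call MPI_SEND(token, 1, MPI_INTEGER, 1, 99, MPI_COMM_WORLD, ierr)
      else if (rank == 1) then
         call MPI_RECV(token, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, &
                       MPI_STATUS_IGNORE, ierr)
      end if

      ! Collective: every rank in MPI_COMM_WORLD must make the matching
      ! call, which is the part that assumes a fixed, fully present set
      ! of processes.
      call MPI_ALLREDUCE(rank, total, 1, MPI_INTEGER, MPI_SUM, &
                         MPI_COMM_WORLD, ierr)

      call MPI_FINALIZE(ierr)
    end program subset_demo
)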
Is that "in the works"?
On Mon, Apr 15, 2013 at 11:33 AM, Ralph Castain wrote:
>
> On Apr 15, 2013, at 8:29 AM, John Chludzinski
> wrote:
>
> That would seem to preclude its use for an RTI. Unless you have a card up
> your sleeve?
>
>
> One can relax those requirements while maintaining the ability to use
> send/recv - you just can't use MPI collectives, and so the result doesn't
> conform to the MPI standard.
Yeah - but no timetable for completion.
On Apr 15, 2013, at 8:37 AM, John Chludzinski
wrote:
> Is that "in the works"?
>
>
> On Mon, Apr 15, 2013 at 11:33 AM, Ralph Castain wrote:
>
> On Apr 15, 2013, at 8:29 AM, John Chludzinski
> wrote:
>
>> That would seem to preclude its use for an RTI. Unless you have a card up
>> your sleeve?
On 4/15/13 8:22 AM, "John Chludzinski" wrote:
>Is anyone aware of an MPI based HLA/RTI (DoD High Level Architecture
>(HLA) / Runtime Infrastructure)?
Information Sciences Institute wrote an MPI transport for RTI-s 8 years or
so ago. I'm not sure what happened to that code, but it was useful for
MPI-1 even supported MPMD -- SPMD is the most common way to invoke MPI, but it
certainly is not necessary.
See Open MPI's man page for mpirun for how to start a series of different
executables that all launch in the same MPI_COMM_WORLD.
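As a concrete illustration of that man-page feature, here is a hedged sketch of
two unrelated executables that end up in one MPI_COMM_WORLD (the program and
file names, and the process counts below, are invented for the example):

    ! model_a.f90 - first executable
    program model_a
      implicit none
      include 'mpif.h'
      integer :: ierr, rank, nprocs
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      ! Ranks are numbered across all executables in the launch, so this
      ! program and model_b see the same size and disjoint rank numbers.
      print *, 'model_a is rank', rank, 'of', nprocs
      call MPI_FINALIZE(ierr)
    end program model_a

    ! model_b.f90 is the same skeleton with its own body.

The two are then launched together with mpirun's colon-separated app-context
syntax, e.g. "mpirun -np 2 ./model_a : -np 4 ./model_b", which by default gives
model_a ranks 0-1 and model_b ranks 2-5 in a single 6-process MPI_COMM_WORLD
(see the mpirun man page for details).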
On Apr 15, 2013, at 8:14 AM, John Chludzinski
wrote:
>