At 2:05 PM, James Dinan wrote:
> Hi Toon,
>
> Can you use non-blocking send/recv? It sounds like this will give you the
> completion semantics you want.
>
> Best,
> ~Jim.
>
>
> On 2/24/11 6:07 AM, Toon Knapen wrote:
>
>> In that case, I have a small question …
On Feb 18, 2011, at 8:59 AM, Toon Knapen wrote:
(Probably this issue has been discussed at length before but unfortunately I
did not find any threads (on this site or anywhere else) on this topic, if
you are able to provide me with links to earlier discussions on this topic,
please do not hesitate)
Is there an alternative to MPI_Win_complete …
>
> I agree with Bill that performance portability is an issue. That is, the
> MPI standard itself doesn't really provide any guarantees here about what is
> fastest. Perhaps polling this mailing list will be helpful, but if you are
> looking for "the fastest" solution regardless of which MPI implementation …
>
>
> So when you say you want your master to send "as fast as possible", I
> suppose you mean getting back to running your code as soon as possible. In
> that case you would want nonblocking sends. However, when you say you want
> the slaves to receive data faster, it seems you're implying the actual data …
Hi all,
If I have a master process that needs to send a chunk of (different)
data to each of my N slave processes as fast as possible, would I
receive the chunk in each of the slaves faster if the master would
launch N threads each doing a blocking send, or would it be better to
launch N nonblocking sends …
William Gropp wrote:
>
> You might also look at http://www-unix.mcs.anl.gov/mpi/tools/genericmpi/
> . The software is currently being revised but should be available
> soon. For users willing to interpose libraries, this solves many (but
> not all) of these problems, particularly for C-only applications …
Tim Prins wrote:
> I am in the process of developing MorphMPI and have designed my
> implementation a bit differently than what you propose (my apologies if I
> misunderstood what you have said). I am creating one main library, which
> users will compile and run against, and which should not need to …
Robert G. Brown wrote:
> Ashley Pittman writes:
>
>> Personally I think an MPI ABI would be a good thing; however, this is not
>> the way to do it.
>
>
> And this is exactly right. Furthermore, we all know the right way to do
> it. It is for a new governing body or consortium to be established (or …
Ashley Pittman wrote:
> The second problem is that of linking: most MPI vendors already have
> MPI_Init in their own library, and having another library with its own
> wrapper MPI_Init in it is going to lead to a whole world of pain to do
> with dynamic linking and symbol resolution. This is not something …
William Gropp wrote:
> At 08:44 AM 10/11/2005, Toon Knapen wrote:
>
>> William Gropp wrote:
>> > in the Fortran source mapped to
>> >
>> > MPI_INIT
>> > mpi_init
>> > mpi_init_
>> > mpi_init__
>> > MPI_Init_
>> >
William Gropp wrote:
>
> Fortran name mangling here means how Fortran routine names in the
> source code are mapped to names in the object library. For example, is
>
> MPI_Init
>
> in the Fortran source mapped to
>
> MPI_INIT
> mpi_init
> mpi_init_
> mpi_init__
> MPI_Init_
>
> Each of these has …
Greg Lindahl wrote:
>> Ignoring the politics for a moment, what are the technical sticking points?
>
> Fortran name-mangling
Up to MPI 1.2, f77 is used, so only the functions that are new in MPI-2
will be available in a mangled version only. All the others can be linked
using C linkage conventions …
Patrick Geoffray wrote:
> The Fortran interface is actually worse than the C interface. Instead of
> using pointers to opaque structures, the Fortran interface may use
> integers as indexes into arrays of structures, into arrays of pointers, as
> pointers cast to integers, etc.
I can imagine that …
Joachim Worringen wrote:
> Wrong - i.e., the value of MPI_COMM_WORLD is not defined in Fortran,
> either. This won't work if one MPI implementation sets
> MPI_COMM_WORLD to 35 and another expects 626.
>
> Of course, you are right for opaque datatypes like MPI_Group, but this
> is not sufficient for …
Coming back to the MPI ABI discussion (which dates back a long time,
though), just one additional question (to which MPI implementers
certainly have an interesting opinion):
Why don't we use the Fortran interface instead of the C interface?
Different C interfaces for MPI are likely incompatible …
Hi all,
You might remember the very intriguing discussions we had on an MPI ABI.
During this discussion the idea of a 'MorphMPI' was launched
(http://www.open-mpi.org/community/lists/users/2005/03/0028.php).
I for one like the idea because it allows us to launch applications
using MPI implementations …
Jeff Squyres wrote:
Greetings. I loosely watched the MPI ABI discussions on the Beowulf
list but refrained from commenting (I stopped checking -- is it still
going on?). Now that the discussion has come to my project's list, I
guess I should speak up. :)
Since I've been "saving up" for a while …
Stuart Midgley wrote:
The other issue we are concerned about is that an ABI doesn't resolve
one of the central issues. While you might have different MPIs with
the same ABI, different MPIs behave differently and can cause a code to
behave differently. An ISV would still have to verify their …
As posted on comp.parallel.mpi, I also wanted to forward this message to
us...@open-mpi.org because I think it is relevant to the (undoubtedly
upcoming) mpich2 <-> open-mpi discussion.
Greg Lindahl wrote:
The first question is: Does an ABI provide enough benefit for people
to care?
I care a