Mateus,
MPI guarantees message ordering per communicator, per peer. In other words,
any messages going from peer A to peer B in the same communicator will be
__matched__ on the receiver in the exact order in which they were sent (this
remains true even for multi-threaded libraries). MPI does not man
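A minimal sketch of that guarantee (two ranks on MPI_COMM_WORLD; the
variables a, b, x, y are purely illustrative):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int a = 1, b = 2, x = 0, y = 0;
        if (rank == 0) {
            // Two sends to the same peer, same communicator, same tag.
            MPI_Send(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Send(&b, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            // The first receive is guaranteed to match the first send,
            // and the second receive the second send.
            MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&y, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }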
Hello again,
I need my own static "mpirun" for porting (together with the static
executable) onto various (unknown) grid servers. In grid computing one cannot
expect an OpenMPI-ILP64 installation on each computing element.
Jeff: I tried LDFLAGS in configure
ilias@194.160.135.47:~/bin/ompi-ilp64_ful
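For context, a fully static ILP64 build of Open MPI is usually attempted with
a configure line along these lines (a sketch only; the prefix and the
gfortran-specific integer-width flags are assumptions, not the exact line
used here):

    ./configure --prefix=$HOME/ompi-ilp64 \
                --enable-static --disable-shared \
                FFLAGS=-fdefault-integer-8 FCFLAGS=-fdefault-integer-8 \
                LDFLAGS=-static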
Hi,
I cannot call the MPI::Datatype::Commit() and MPI::Datatype::Get_size()
functions from my program. The error that I receive is the same for both of
them:
"cannot call member function 'virtual void MPI::Datatype::Commit()' without
an object"
or
"cannot call member function 'virtual void MPI::Dataty
The tag also factors in here. What I said in the blog entry was:
"The MPI specification doesn’t define which message arrives first. It defines
which message is matched first at the receiver: the first one (which happens to
be the long one). Specifically, between a pair of peers, MPI defines t
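Concretely, matching keys on (communicator, source, tag), so a receive that
selects a specific tag can match the later of two sends. A small sketch
(hypothetical tags 1 and 2; nonblocking sends keep it portable regardless of
eager limits):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int a = 1, b = 2, x = 0, y = 0;
        if (rank == 0) {
            // Send tag 1 first, then tag 2; nonblocking so neither send
            // deadlocks waiting for its receive to be posted.
            MPI_Request reqs[2];
            MPI_Isend(&a, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(&b, 1, MPI_INT, 1, 2, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        } else if (rank == 1) {
            // This receive selects tag 2, so it matches the second send,
            // even though the tag-1 message was sent first.
            MPI_Recv(&y, 1, MPI_INT, 0, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&x, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }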
On 24-Jan-12 5:59 PM, Ronald Heerema wrote:
> I was wondering if anyone can comment on the current state of support for the
> openib btl when MPI_THREAD_MULTIPLE is enabled.
Short version - it's not supported.
Longer version - no one has really spent time testing it and fixing all
the places where
Hi,
Are you using 32-bit Windows or 64-bit? As far as I know, the build for
64-bit Windows with MinGW is not working. Which CMake generator did you use?
Did you run CMake from the MSYS command window?
Thanks,
Shiqing
On 2012-01-24 9:24 PM, Temesghen Kahsai wrote:
Hello,
I am having
On Jan 25, 2012, at 5:03 AM, Victor Pomponiu wrote:
> I cannot call MPI::Datatype::Commit() and MPI::Datatype::Get_size() functions
> from my program. The error that I receive is the same for both of them:
>
> "cannot call member function 'virtual void MPI::Datatype::Commit()' without
> an obje
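Commit() and Get_size() are non-static member functions in the MPI C++
bindings, so they must be invoked on an MPI::Datatype instance rather than on
the class itself. A minimal sketch (the contiguous type is just an
illustration):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI::Init(argc, argv);

        // Create_contiguous returns the MPI::Datatype instance that
        // Commit() and Get_size() are then called on.
        MPI::Datatype t = MPI::INT.Create_contiguous(4);
        t.Commit();               // OK: member call on the object t
        int size = t.Get_size();  // OK: likewise
        (void)size;               // silence unused-variable warnings
        t.Free();

        MPI::Finalize();
        return 0;
    }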
Hi Thatyene,
I took a look at your code and it seems to be logically correct. Maybe
there is some problem when you call the split function with one client
process having color = MPI_UNDEFINED. I understand you are trying to isolate
one of the client processes to do something applicable only to it, a
It seems the split blocks when it must return MPI_COMM_NULL, i.e. in the case
where one process has a color that does not exist in the other group or has
color = MPI_UNDEFINED.
On Wed, Jan 25, 2012 at 4:28 PM, Rodrigo Oliveira wrote:
> Hi Thatyene,
>
> I took a look at your code and it seems
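For what it's worth, MPI_Comm_split is collective over the parent
communicator: every rank must call it, including the ones passing
MPI_UNDEFINED, and those ranks simply get MPI_COMM_NULL back. If any rank
skips the call, the split blocks. A minimal intracommunicator sketch (the
coloring is illustrative, not the actual client/server code from this
thread):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        // Rank 0 opts out of the new communicator; all other ranks join
        // color 1.  Every rank must still make the call.
        int color = (rank == 0) ? MPI_UNDEFINED : 1;
        MPI_Comm newcomm;
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &newcomm);

        if (newcomm == MPI_COMM_NULL) {
            // Only rank 0 lands here: it belongs to no new communicator.
        } else {
            MPI_Comm_free(&newcomm);
        }

        MPI_Finalize();
        return 0;
    }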