There are two versions of probe (MPI_Probe and MPI_Iprobe), but I can't
tell you their details offhand. From looking at them in the past, the
basic understanding I took away was that MPI_Probe is like MPI_Test,
except it doesn't actually receive or deallocate the message.
From
http://ww
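A minimal sketch of that distinction (illustrative only, not from the
cited page): MPI_Iprobe reports whether a matching message is pending
without consuming it, so the envelope can be inspected before the
receive is posted.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload = 42;
        MPI_Send(&payload, 1, MPI_INT, 1, 7, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int flag = 0, value;
        MPI_Status status;
        /* Poll without receiving; the message stays queued. */
        do {
            MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                       &flag, &status);
        } while (!flag);
        /* The envelope (source, tag) is known now; actually receive. */
        MPI_Recv(&value, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("got %d with tag %d\n", value, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}

Run with at least two ranks; rank 1 learns the source and tag before
committing to the MPI_Recv.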
If an MPI_Irecv has already been posted, and a single message is sent
to the receiver, will MPI_Probe then report that there is no message
waiting to be received? The message has already been received by the
MPI_Irecv; it's the MPI_Request object of the MPI_Irecv call that
needs to be probed.
Have you tried MPI_Probe?
Justin
Shaun Jackman wrote:
Is there a function similar to MPI_Test that doesn't deallocate the
MPI_Request object? I would like to test if a message has been
received (MPI_Irecv), check its tag, and dispatch the MPI_Request to
another function based on that tag.
Cheers,
Shaun
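For what it's worth, MPI-2 added MPI_Request_get_status, which tests a
request for completion without deallocating it; a sketch of the
dispatch idea (the dispatch() handler here is hypothetical):

#include <mpi.h>
#include <stdio.h>

/* Hypothetical dispatcher, standing in for the per-tag handlers. */
static void dispatch(MPI_Request *req, int tag)
{
    printf("message with tag %d completed\n", tag);
    MPI_Wait(req, MPI_STATUS_IGNORE);  /* release the request here */
}

void poll_and_dispatch(MPI_Request *req)
{
    int done = 0;
    MPI_Status status;

    /* Unlike MPI_Test, this does NOT free the request or set it to
     * MPI_REQUEST_NULL, so the handle stays valid for the dispatcher. */
    MPI_Request_get_status(*req, &done, &status);
    if (done)
        dispatch(req, status.MPI_TAG);
}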
Shaun Jackman wrote:
On Tue, 2009-03-24 at 07:03 -0800, Eugene Loh wrote:
I'm not sure I understand this suggestion, so I'll say it the way I
understand it. Would it be possible for each process to send an "all
done" message to each of its neighbors? Conversely, each process would
poll its neighbors for messages,
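A sketch of that termination pattern as described (the nbr[] array,
the TAG_DONE value, and the zero-byte-message convention are
assumptions, not Eugene's exact proposal):

#include <mpi.h>

#define TAG_WORK 1
#define TAG_DONE 2

void drain_until_neighbors_done(const int *nbr, int nnbr, MPI_Comm comm)
{
    int done_seen = 0, i;

    /* Tell every neighbor we are finished. */
    for (i = 0; i < nnbr; ++i)
        MPI_Send(NULL, 0, MPI_BYTE, nbr[i], TAG_DONE, comm);

    /* Keep servicing messages until each neighbor has said "done". */
    while (done_seen < nnbr) {
        MPI_Status st;
        MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &st);
        if (st.MPI_TAG == TAG_DONE) {
            MPI_Recv(NULL, 0, MPI_BYTE, st.MPI_SOURCE, TAG_DONE,
                     comm, &st);
            ++done_seen;
        } else {
            /* Receive and handle a normal work message. */
            int buf;
            MPI_Recv(&buf, 1, MPI_INT, st.MPI_SOURCE, st.MPI_TAG,
                     comm, &st);
        }
    }
}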
On Mar 25, 2009, at 9:55 AM, Simon Köstlin wrote:
Hello,
I'm new to MPI and I've got a question about blocking routines like the
Send and Wait functions and so on. I wrote a parallel program that uses
the blocking Send and the non-blocking Isend function. Now my question:
if I'm sending something with the blocking Send function, it should
block the pr
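The usual contrast, as a minimal sketch (illustrative, not Simon's
program): MPI_Send may block until its buffer is safe to reuse, while
MPI_Isend returns immediately and the buffer must be left untouched
until MPI_Wait completes.

#include <mpi.h>

void send_both_ways(int peer, MPI_Comm comm)
{
    int a = 1, b = 2;
    MPI_Request req;

    /* Blocking: returns only once 'a' may safely be modified again. */
    MPI_Send(&a, 1, MPI_INT, peer, 0, comm);

    /* Non-blocking: returns at once; 'b' must not be touched yet. */
    MPI_Isend(&b, 1, MPI_INT, peer, 1, comm, &req);
    /* ... overlap useful computation here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* now 'b' may be reused */
}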
Just to strengthen what Brian said...
We agree that what you cite is a problem. The only issue is that we
don't know of any way to do it better -- Brian laid out the 3 possible
options below pretty well. --enable-mca-static might be a decent
solution for you; you still build libmpi.so (no
On Mon, 23 Mar 2009, Olaf Lenz wrote:
and the solution that is described there still looks as though it
should work now, or shouldn't it? Just link all the OpenMPI plugins
against the base OpenMPI libraries, and it should work. Or am I wrong?
I believe your suggestion will not work, cer
Dear list,
The bad behaviour now only occurs with version 1.2.X of openmpi (I have
tried 1.2.5, 1.2.8 and 1.2.9 with gcc, and 1.2.7 and 1.2.9 with pgi cc;
the problem is in all of those). With 1.3.1 I can find no problem at
all. So perhaps that means that the problem is solved?
mpirun -np 4 .
Dear OpenMPI experts,
Against all odds, and against the OpenMPI developers' and the FAQ's
recommendation, I've been building hybrid OpenMPI libraries using GNU
gcc/g++ and Fortran compilers from PGI and from Intel.
One reason for this is that some climate/oceans/atmosphere
code we use compiles and runs with less
Dear openmpi users and developers,
I encounter deadlock problems with spawned processes in openmpi as
soon as more than one Send/Recv operation is done.
The test case I used has been extracted from the MPICH2 examples. It is
a simple parent/child program. The original version (see attached file
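For reference, a minimal parent-side sketch of that kind of test (a
reconstruction; the "./child" executable name and the loop count are
assumptions, not the attached case):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm child;
    int i, v;

    MPI_Init(&argc, &argv);
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                   MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

    /* Several Send/Recv exchanges over the intercommunicator; the
     * reported hang shows up once more than one such exchange is done.
     * The child would call MPI_Comm_get_parent() and mirror these
     * calls with MPI_Recv/MPI_Send. */
    for (i = 0; i < 3; ++i) {
        MPI_Send(&i, 1, MPI_INT, 0, 0, child);
        MPI_Recv(&v, 1, MPI_INT, 0, 0, child, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}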
Dear list,
A colleague pointed out an error in my test code. The final loop should
not be
for (i=0; i<... (details, details...). Anyway, I still get problems
from time to time with this test code, but I have not yet had time to
figure out the circumstances under which this happens. I will report
back to
On Wednesday, 25.03.2009, at 00:38 +0800, Jerome BENOIT wrote:
> is there a way to check that SLURM and OpenMPI communicate as expected ?
You can check whether mpirun forks as many instances as you requested
via SLURM. Also, you could check whether the hostnames of the hosts
your job ran on match those alloca
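One quick way to perform that check (an illustrative sketch, not from
this thread) is a trivial MPI program that prints each rank's hostname
for comparison against the SLURM allocation:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &len);
    printf("rank %d runs on %s\n", rank, name);
    MPI_Finalize();
    return 0;
}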
On Mar 24, 2009, at 8:40 AM, Jerome BENOIT wrote:
I read what you said on the web before I sent my email.
But it does not work with my sample. It is an old LAM C source.
Can you be a little more explicit about what does not work? You didn't
give us a lot of explanation to work with. :-)
Hi,
I installed the patch provided by Ralph and everything works fine now!
Thanks a lot,
regards, Simone
Jeff Squyres wrote:
On Mar 24, 2009, at 4:24 PM, Simone Pellegrini wrote:
@eNerd:~$ mpirun --np 2 ls
mpirun: symbol lookup error: mpirun: undefined symbol: orted_cmd_line
FWIW, this sounds like a version mismatch of some kind. If you're
getting "undefined symbol" errors, then it's quite possible/probable
t
On Tue, 2009-03-24 at 07:03 -0800, Eugene Loh wrote:
> > Perhaps there is a better way of accomplishing the same thing, however.
> > MPI_Barrier synchronises all processes, so it is potentially a lot more
> > heavyweight than it needs to be; in this example you only need to
> > synchronise with your ne
Dear list,
We've found a problem with openmpi when running over IB, where a
calculation reading elements of an array overlaps communication to
other elements (which are not used in the calculation) of the same
array. I have written a small test program (below) that shows this
behaviour. Wh
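The pattern in question looks roughly like this (an illustrative
reconstruction, not the actual test program):

#include <mpi.h>

void overlapped(double *a, int n, int peer, MPI_Comm comm)
{
    MPI_Request req;
    double sum = 0.0;
    int i;

    /* Communication targets the second half of the array... */
    MPI_Irecv(a + n / 2, n / 2, MPI_DOUBLE, peer, 0, comm, &req);

    /* ...while the calculation reads only the first half, which the
     * pending receive never touches. */
    for (i = 0; i < n / 2; ++i)
        sum += a[i];

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    (void)sum;  /* keep the compiler from optimising the loop away */
}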