I have to agree with Jeff: we really need a complete program to debug this.
Note that without seeing what the structures look like, it is hard to
determine whether there is some kind of structure mismatch between
recv_packet and load_packet. Also the output you
show seems inco
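One way to rule out the layout-mismatch possibility is to build the datatype from a single struct definition shared by sender and receiver. A minimal sketch, assuming a hypothetical packet with an int rank and a double load (the real fields behind load_packet/recv_packet are not shown in the thread):

#include <mpi.h>
#include <stddef.h>

/* Hypothetical packet layout; it must be identical on the sending and
   receiving sides. */
typedef struct {
    int    rank;   /* rank of the sender */
    double load;   /* load value being reported */
} load_packet_t;

/* Build an MPI datatype that matches load_packet_t field by field. */
static MPI_Datatype make_load_datatype(void)
{
    MPI_Datatype loadDatatype;
    int          blocklens[2] = { 1, 1 };
    MPI_Aint     displs[2]    = { offsetof(load_packet_t, rank),
                                  offsetof(load_packet_t, load) };
    MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };

    MPI_Type_create_struct(2, blocklens, displs, types, &loadDatatype);
    MPI_Type_commit(&loadDatatype);
    return loadDatatype;
}

If the struct used for load_packet on the send side differs from the one used for recv_packet on the receive side, the received fields (including the rank) can look scrambled even though the receive itself matched correctly.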
All nodes run OFED 1.4 and kernel 2.6.32 (that's what I can get to today)
qib to qib:
# OSU MPI Latency Test v3.3
# Size          Latency (us)
0 0.29
1 0.32
2 0.31
4 0.32
8 0.32
16
Sadly, OS X does not provide an API for processor affinity. :-(
On Jul 15, 2011, at 5:03 PM, Karl Dockendorf wrote:
> Hi,
>
> I just upgraded from the default ompi install on OS X 10.6 to v1.5.3
> so that I can use the processor affinity options. However, there seems
> to be some trouble. My mpi application executes perfectly with the
> following CL statement:
Hi,
I just upgraded from the default ompi install on OS X 10.6 to v1.5.3
so that I can use the processor affinity options. However, there seems
to be some trouble. My mpi application executes perfectly with the
following CL statement:
/usr/local/openmpi-1.5.3/bin/mpiexec --host `hostname` --np 2
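A hedged sketch of how binding is typically requested with Open MPI 1.5.x on platforms that do support processor affinity (./my_app is a placeholder application; as noted in the reply above, OS X lacks the underlying affinity support, so these options cannot take effect there):

/usr/local/openmpi-1.5.3/bin/mpiexec --host `hostname` --np 2 --bind-to-core --report-bindings ./my_app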
Hi,
On 15.07.2011 at 21:14, Terry Dontje wrote:
> On 7/15/2011 1:46 PM, Paul Kapinos wrote:
>> Hi OpenMPI folks (and Oracle/Sun experts),
>>
>> we have a problem with Sun's MPI (Cluster Tools 8.2.x) on a part of our
>> cluster. In the part of the cluster where LDAP is activated, the mpiexec
Can you write this up in a small, complete program that shows the problem, and
that we can compile and run?
On Jul 15, 2011, at 3:36 PM, Mudassar Majeed wrote:
> *id is same as myid
>
> I am comparing the results by seeing the printed messages, given by the
> printfs
>
> the recv_packet
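A minimal, complete reproducer along the lines Jeff is asking for might look like the sketch below (the roles, the tag value 42, and the int payload are placeholder assumptions, not the actual code): odd ranks send their own rank to the even rank below them, and even ranks receive from MPI_ANY_SOURCE and print both the payload and status.MPI_SOURCE so the two can be compared.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int myid, nprocs;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (myid % 2 == 1) {
        /* Odd ranks: send our own rank to the even rank just below us. */
        int payload = myid;
        MPI_Ssend(&payload, 1, MPI_INT, myid - 1, 42, MPI_COMM_WORLD);
    } else if (myid + 1 < nprocs) {
        /* Even ranks: receive from anyone and cross-check the sender. */
        int payload = -1;
        MPI_Recv(&payload, 1, MPI_INT, MPI_ANY_SOURCE, 42,
                 MPI_COMM_WORLD, &status);
        printf("P%d: status.MPI_SOURCE=%d, payload=%d\n",
               myid, status.MPI_SOURCE, payload);
    }

    MPI_Finalize();
    return 0;
}

On a correct MPI implementation the two numbers printed by each receiver always agree; if they differ only in the real application, the problem is most likely in how load_packet/recv_packet or loadDatatype are defined rather than in the message matching.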
*id is same as myid
I am comparing the results by looking at the messages printed by the
printfs.
recv_packet.rank is the rank of the sender, which should be equal to
status.MPI_SOURCE, but it is not.
I have updated the code a little bit; here it is:
if( (is_receiver == 1) && (is_sender != 1) )
On 7/15/2011 1:46 PM, Paul Kapinos wrote:
Hi OpenMPI folks (and Oracle/Sun experts),
we have a problem with Sun's MPI (Cluster Tools 8.2.x) on a part of
our cluster. In the part of the cluster where LDAP is activated, the
mpiexec does not try to spawn tasks on remote nodes at all, but exits
On 7/15/2011 2:35 PM, Mudassar Majeed wrote:
Here is the code
if( (is_receiver == 1) && (is_sender != 1) )
{
printf("\nP%d >> Receiver only ...!!", myid);
printf("\n");
MPI_Recv(&recv_packet, 1, loadDatatype, MPI_ANY_SOURCE,
MPI_TAG_LOAD, comm, &status);
Here is the code
if( (is_receiver == 1) && (is_sender != 1) )
{
printf("\nP%d >> Receiver only ...!!", myid);
printf("\n");
MPI_Recv(&recv_packet, 1, loadDatatype, MPI_ANY_SOURCE, MPI_TAG_LOAD,
comm, &status);
printf("\nP%d >> Received from P%d", myid, sta
Hi OpenMPI folks (and Oracle/Sun experts),
we have a problem with Sun's MPI (Cluster Tools 8.2.x) on a part of our
cluster. In the part of the cluster where LDAP is activated, the mpiexec
does not try to spawn tasks on remote nodes at all, but exits with an
error message like the one below. If 'stra
I'm going to echo what you've already heard here: it is impossible for
a process to receive a message that was sent to a different process. A
sender must specify a unique destination. No process other than the
destination process will see that message.
In what you write below, why do you th
On 7/15/2011 12:49 PM, Mudassar Majeed wrote:
Yes, processes receive messages that were not sent to them. I am
receiving the message with the following call
MPI_Recv(&recv_packet, 1, loadDatatype, MPI_ANY_SOURCE, MPI_TAG_LOAD,
comm, &status);
and that was sent using the following call,
Yes, processes receive messages that were not sent to them. I am receiving the
message with the following call
MPI_Recv(&recv_packet, 1, loadDatatype, MPI_ANY_SOURCE, MPI_TAG_LOAD, comm,
&status);
and that was sent using the following call,
MPI_Ssend(&load_packet, 1, loadDatatype, rec_rank, M
I don't think too many people have done combined QLogic + Mellanox runs, so
this probably isn't a well-explored space.
Can you run some microbenchmarks to see what kind of latency / bandwidth you're
getting between nodes of the same type and nodes of different types?
On Jul 14, 2011, at 8:21 PM
It strikes me that you should be able to use the tag to identify the
message that is to be received. In other words, you receive a message
from any source but with a tag that identifies the message as containing
the load value that is expected.
- Jeff
From: Jeff Squyres
To: Open MPI
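The tag-based approach suggested above can be sketched as a small helper; TAG_LOAD stands in for whatever tag the real code uses (presumably MPI_TAG_LOAD), and the int payload is a placeholder for the load packet:

#include <mpi.h>

#define TAG_LOAD 77   /* placeholder tag value */

/* Receive one load report from any sender; only messages carrying
   TAG_LOAD can match. Returns the actual sender's rank. */
static int recv_load_report(MPI_Comm comm, int *load_out)
{
    MPI_Status status;
    MPI_Recv(load_out, 1, MPI_INT, MPI_ANY_SOURCE, TAG_LOAD, comm, &status);
    return status.MPI_SOURCE;
}

Because the tag is fixed, unrelated traffic on the same communicator cannot be picked up by this receive even though the source is left open.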
+1
I reiterate what I said before:
>> > You will always only receive messages that were sent to *you*.
>> > There's no MPI_SEND_TO_ANYONE_WHO_IS_LISTENING functionality, for
>> > example. So your last statement: "But when it captures with ..
>> > MPI_ANY_SOURCE and MPI_ANY_TAG, the receiver
Well, MPI_Recv only gives you messages that were sent specifically to
the rank calling it by any of the processes in the communicator. If you
think the message you received should have gone to another rank, then
there is a bug somewhere. I would start by either adding debugging
printf's to you
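A sketch of the kind of instrumentation being suggested, meant to be dropped right after the existing receive (the variable names follow the snippets quoted earlier in the thread and are assumptions):

MPI_Recv(&recv_packet, 1, loadDatatype, MPI_ANY_SOURCE, MPI_TAG_LOAD,
         comm, &status);

/* Print everything relevant to the mismatch on one line so the output
   of different ranks is easy to compare. */
printf("P%d: MPI_SOURCE=%d MPI_TAG=%d recv_packet.rank=%d\n",
       myid, status.MPI_SOURCE, status.MPI_TAG, recv_packet.rank);
fflush(stdout);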
Here's, hopefully, more useful info. Note that reading the job2core.pdf
presentation mentioned earlier more closely will also clarify a couple
of points (I've put those points inline below).
On 7/15/2011 12:01 AM, Ralph Castain wrote:
On Jul 14, 2011, at 5:46 PM, Jeff Squyres wrote:
Looping in the users mailing list so that Ralph and Oracle can comment...
I get the sender's rank in status.MPI_SOURCE, but it is different from what I
expect. I need to receive the message that was sent to me, not just any message.
regards,
Date: Fri, 15 Jul 2011 06:33:41 -0400
From: Terry Dontje
Subject: Re: [OMPI users] Urgent Question regarding, MPI_ANY_SOURCE.
To: us
Mudassar,
You can do what you are asking. The receiver uses MPI_ANY_SOURCE for
the source rank value, and when you receive a message,
status.MPI_SOURCE will contain the rank of the actual sender, not the
receiver's rank. If you are not seeing that, then there is a bug somewhere.
--td
On 7
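In code, what Terry describes is simply the following (the buffer, tag, and communicator here are placeholders):

int value;
MPI_Status status;

/* Any rank in the communicator may be the sender. */
MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);

/* status.MPI_SOURCE is the rank that sent this particular message,
   never the receiver's own rank. */
int sender = status.MPI_SOURCE;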
On Jul 14, 2011, at 5:46 PM, Jeff Squyres wrote:
> Looping in the users mailing list so that Ralph and Oracle can comment...
Not entirely sure what I can contribute here, but I'll try - see below for some
clarifications. I think the discussion here is based on some misunderstanding
of how OMPI