Chand,
your code is actually not correct. If you look at the MPI specification, you will see that s should also be an array of length nProcs (in your test), since you send a different element to each process. If you want to send the same s to every process, you have to use MPI_Bcast instead.
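A minimal sketch of the difference (names and values here are illustrative, not taken from the original program; it assumes one double-precision element per destination rank and rank 0 as the broadcast root):

program alltoall_sketch
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, nProcs, i
  double precision, allocatable :: s(:), su(:)
  double precision :: same

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nProcs, ierr)

  ! MPI_Alltoall needs one send element per destination rank,
  ! so both s and su must have length nProcs.
  allocate(s(nProcs), su(nProcs))
  do i = 1, nProcs
     s(i) = dble(rank*100 + i)   ! a different value for every destination
  end do

  ! Every rank contributes one element to, and receives one element from,
  ! every rank in the communicator.
  call MPI_Alltoall(s, 1, MPI_DOUBLE_PRECISION, &
                    su, 1, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)

  ! If every rank should instead end up with the same single value,
  ! a broadcast from one root is the collective to use.
  same = 0.0d0
  if (rank == 0) same = 3.14d0
  call MPI_Bcast(same, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

  write(*,*) 'rank', rank, ': su =', su, ' same =', same

  deallocate(s, su)
  call MPI_Finalize(ierr)
end program alltoall_sketch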
Thanks
Edgar
There are two answers to this question.
1. If only some of the processes in a communicator are on the same node, then the point-to-point communications generated by the collective call between those processes will always use shared memory (if shared memory is enabled).
2. If all processes in
I am trying to use MPI_Alltoall in the following program. After execution, all the nodes should show the same value for the array su. However, only the root node is showing the correct value; the other nodes are giving garbage values. Please help.
I have used Open MPI version 1.1.4. The mpif90 wrapper uses the Intel Fortran compiler.
I was asked by a user whether the MPI allreduce recognizes when process ids are situated on the same node, so that the communication can then proceed over shared memory rather than over the slower networking communication channels.
Would any of the Open MPI developers be able to comment on that question?
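For concreteness, a minimal sketch of the kind of call in question (assuming a double-precision sum over MPI_COMM_WORLD; whether the intra-node part of the communication travels over shared memory or over the network is decided inside the MPI library, not in user code):

program allreduce_sketch
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, nProcs
  double precision :: local, global

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nProcs, ierr)

  ! Each rank contributes one value; every rank receives the sum.
  ! The transport used underneath (shared memory for on-node peers when
  ! it is enabled, the network otherwise) is chosen by the library at
  ! run time.
  local = dble(rank + 1)
  call MPI_Allreduce(local, global, 1, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)

  ! Every rank should print the same result, nProcs*(nProcs+1)/2.
  write(*,*) 'rank', rank, ': sum over all ranks =', global

  call MPI_Finalize(ierr)
end program allreduce_sketch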