George,
I figured it out. The defined type was
MPI_Type_vector(N, wrows, N, MPI_FLOAT, &mpi_all_unaligned_t);
when it should have been
MPI_Type_vector(wrows, wrows, N, MPI_FLOAT, &mpi_all_unaligned_t);
This clears up all the errors.
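For the archives, here is what the corrected arguments mean (a quick sketch; I'm assuming N is the full row length and wrows is N divided by the number of ranks, as elsewhere in the thread):

MPI_Type_vector(wrows,     /* count: one block per row of the sub-block    */
                wrows,     /* blocklength: wrows consecutive floats        */
                N,         /* stride: rows of the full matrix are N apart  */
                MPI_FLOAT, &mpi_all_unaligned_t);

With count = N the type described N blocks instead of wrows, which runs past the end of a wrows-row buffer.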
Thanks,
Spenser
On Thu, May 8, 2014 at 5:43 PM, Spenser Gilliland wrote:
George,
> The alltoall exchanges data from all nodes to all nodes, including the
> local participant. So every participant will write the same amount of
> data.
Yes, I believe that is what my code is doing. However, I'm not sure
why the out-of-bounds access is occurring. Can you be more specific? I r…
The alltoall exchanges data from all nodes to all nodes, including the
local participant. So every participant will write the same amount of
data.
George.
On Thu, May 8, 2014 at 6:16 PM, Spenser Gilliland wrote:
> George,
>
>> Here is basically what is happening. On the top left, I depicted the datatype …
George,
> Here is basically what is happening. On the top left, I depicted the datatype
> resulting from the vector type. The two arrows point to the lower bound and
> upper bound (thus the extent) of the datatype. On the top right, the resized
> datatype, where the ub is now moved 2 elements after the …
Spenser,
Here is basically what is happening. On the top left, I depicted the datatype
resulting from the vector type. The two arrows point to the lower bound and
upper bound (thus the extent) of the datatype. On the top right, the resized
datatype, where the ub is now moved 2 elements after the …
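In code, the resize looks roughly like this (a sketch; the wrows/N names and the wrows-float extent come from the example being discussed):

MPI_Datatype vec, vec_resized;
MPI_Type_vector(wrows, wrows, N, MPI_FLOAT, &vec);
/* shrink the extent to wrows floats, so the next block placed by a
   collective starts wrows elements later instead of one full vector
   extent later */
MPI_Type_create_resized(vec, 0, wrows * sizeof(float), &vec_resized);
MPI_Type_commit(&vec_resized);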
Matthieu & George,
Thank you both for helping me. I really appreciate it.
> A simple test would be to run it with valgrind, so that out-of-bounds
> reads and writes will be obvious.
I ran it through valgrind (I left the command line I used in there so
you can verify the methods). I am getting er…
A simple test would be to run it with valgrind, so that out-of-bounds
reads and writes will be obvious.
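For example (the executable name is just a placeholder):

mpirun -np 4 valgrind --track-origins=yes ./a.out

Out-of-bounds writes into heap buffers then show up as "Invalid write" reports with a stack trace.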
Cheers,
Matthieu
2014-05-08 21:16 GMT+02:00 Spenser Gilliland:
> George & Matthieu,
>
>> The Alltoall should only return when all data is sent and received on
>> the current rank, so there shouldn't be any race condition.
The segfault indicates that you are writing outside of the allocated memory (and
this conflicts with the ptmalloc library). I’m quite certain that you write outside
the allocated array …
George.
On May 8, 2014, at 15:16, Spenser Gilliland wrote:
> George & Matthieu,
>
>> The Alltoall should only …
George & Matthieu,
> The Alltoall should only return when all data is sent and received on
> the current rank, so there shouldn't be any race condition.
You're right, this is MPI, not pthreads. That should never happen. Duh!
> I think the issue is with the way you define the send and receive
> buffer …
I think the issue is with the way you define the send and receive
buffer in the MPI_Alltoall. You have to keep in mind that the
all-to-all pattern will overwrite the entire data in the receive
buffer. Thus, starting from a relative displacement in the data (in
this case matrix[wrank*wrows]) begs for …
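To make that concrete (a sketch with illustrative names; matrix is the full N x N array and blk_resized is the resized block type from the earlier messages): the receive side has to provide room for one block from every rank, i.e. a full wrows x N band, no matter where inside a larger array the send side starts:

float *recvband = malloc((size_t)wrows * N * sizeof *recvband);
MPI_Alltoall(&matrix[wrank * wrows][0], 1, blk_resized,   /* my band of rows        */
             recvband,                  1, blk_resized,   /* wsize blocks land here */
             MPI_COMM_WORLD);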
The Alltoall should only return when all data is sent and received on
the current rank, so there shouldn't be any race condition.
Cheers,
Matthieu
2014-05-08 15:53 GMT+02:00 Spenser Gilliland:
> George & other list members,
>
> I think I may have a race condition in this example that is masked …
George & other list members,
I think I may have a race condition in this example that is masked by
the print_matrix statement.
For example, let's say rank one has a long sleep before reaching the
local transpose: will the other ranks have completed the Alltoall, and
when rank one reaches the local …
George,
> Do you mind posting your working example here on the mailing list?
> This might help future users understand how to correctly use the
> MPI datatype.
No problem. I wrote up this simplified example so others can learn to
use the functionality. This is a matrix transpose operation using …
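In condensed form the example does the following (a sketch rather than my exact code; the matrix size and the test data here are made up):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int wrank, wsize;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);

    const int N = 8;              /* assumes N is a multiple of wsize */
    const int wrows = N / wsize;  /* rows owned by each rank          */

    float *band = malloc((size_t)wrows * N * sizeof *band);  /* my rows of A   */
    float *recv = malloc((size_t)wrows * N * sizeof *recv);  /* my rows of A^T */
    for (int i = 0; i < wrows * N; i++)
        band[i] = (float)(wrank * wrows * N + i);            /* test data */

    /* a wrows x wrows block cut out of rows of length N ... */
    MPI_Datatype blk, blk_resized;
    MPI_Type_vector(wrows, wrows, N, MPI_FLOAT, &blk);
    /* ... with its extent shrunk to wrows floats, so block i of the
       alltoall starts at column i*wrows instead of one full extent apart */
    MPI_Type_create_resized(blk, 0, wrows * sizeof(float), &blk_resized);
    MPI_Type_commit(&blk_resized);

    /* block c of my band goes to rank c; the block from rank c lands in
       columns [c*wrows, (c+1)*wrows) of recv */
    MPI_Alltoall(band, 1, blk_resized, recv, 1, blk_resized, MPI_COMM_WORLD);

    /* finish with a local transpose of each wrows x wrows block */
    for (int c = 0; c < wsize; c++)
        for (int i = 0; i < wrows; i++)
            for (int j = i + 1; j < wrows; j++) {
                float tmp = recv[i * N + c * wrows + j];
                recv[i * N + c * wrows + j] = recv[j * N + c * wrows + i];
                recv[j * N + c * wrows + i] = tmp;
            }

    /* recv now holds rows [wrank*wrows, (wrank+1)*wrows) of the transpose */
    MPI_Type_free(&blk_resized);
    MPI_Type_free(&blk);
    free(band);
    free(recv);
    MPI_Finalize();
    return 0;
}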
Spenser,
Do you mind posting your working example here on the mailing list?
This might help future users understand how to correctly use the
MPI datatype.
Thanks,
George.
On Wed, May 7, 2014 at 3:16 PM, Spenser Gilliland wrote:
> George,
>
> Thanks for taking the time to respond to my question! …
George,
Thanks for taking the time to respond to my question! I've succeeded
in getting my program to run using the information you provided. I'm
actually doing a matrix transpose with the data distributed on contiguous
rows. However, the code I provided did not show this clearly. Thanks
for your i…
Spenser,
There are several issues with the code you provided.
1. You are using a 1D process grid to create a 2D block cyclic distribution.
That’s just not possible.
2. You forgot to take into account the extent of the datatype. By default the
extent of a vector type starts from the first byte …
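A quick way to see what the extent means in practice (a sketch; mpi_all_unaligned_t is the vector type from the code under discussion):

MPI_Aint lb, extent;
MPI_Type_get_extent(mpi_all_unaligned_t, &lb, &extent);
/* for MPI_Type_vector(count, blocklen, stride, MPI_FLOAT, ...) this
   reports ((count - 1) * stride + blocklen) * sizeof(float); block i of
   a collective is placed i extents from the start of the buffer unless
   the type is resized with MPI_Type_create_resized */
printf("lb = %ld, extent = %ld\n", (long)lb, (long)extent);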
Hi,
I've recently started working with MPI and I noticed that when an
Alltoall is used with a vector datatype, the call only uses the
extent to determine the location of the back-to-back transactions.
This makes using the vector type with collective operations
difficult.
For example, using …
Hi
We are getting the following kind of error messages when trying to run
MPI_alltoall on 170 nodes with slots=8 on each node (i.e. 170*8=1360
MPI processes in total):
$ mpiexec -n 1360 -hostfile ./mach.8 ./a.out
...
[n2154][[20427,1],1180][../../../../../ompi/mca/btl/openib/connect/btl_openib_c
Your code is actually not correct. If you look at the MPI specification
you will see that s should also be an array of length nProcs (in your
test), since you send different elements to each process. If you want to
send the same s to each process, you have to use MPI_Bcast.
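In C terms the distinction looks like this (a sketch; the Fortran code in question would be analogous, and double is just an example element type):

int nprocs;
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

/* alltoall: one element destined for each rank and one received from
   each, so both buffers need nprocs entries */
double s[nprocs], su[nprocs];
for (int i = 0; i < nprocs; i++)
    s[i] = i;                      /* value meant for rank i (example data) */
MPI_Alltoall(s, 1, MPI_DOUBLE, su, 1, MPI_DOUBLE, MPI_COMM_WORLD);

/* sending one identical value from a root to every rank is a broadcast */
double x = 42.0;
MPI_Bcast(&x, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);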
Thanks
Edgar
Chand…
I am trying to use MPI_Alltoall in the following program. After
execution all the nodes should show the same value for the array su.
However, only the root node is showing the correct value; the other nodes
are giving garbage values. Please help.
I have used openmpi version "1.1.4". The mpif90 uses the Intel Fortran …