vasilis wrote:

Thank you, Eugene, for your suggestion. I used different tags for each variable, and now I no longer get this error. The problem now is that I get a different solution when I use more than 2 CPUs. I checked the matrices and found that they differ by a very small amount, of the order of 10^(-10). In fact, I get a different solution depending on whether I use 4 CPUs or 16 CPUs!
Do you have any idea what could cause this behavior?
Sure.

Rank 0 accumulates all the res_cpu values into a single array, res. It starts with its own res_cpu and then adds in the contribution from each other process as its message arrives. When np=2 there is only one other process, so the order of additions is fixed. When np>2 the receives can complete in any order, and since floating-point addition is not associative, summing the same contributions in a different order can give results that differ in the last few bits.

If you want results to be more deterministic, you need to fix the order in which res is aggregated. E.g., instead of using MPI_ANY_SOURCE, loop over the peer processes in a specific order.
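For example, something along these lines. This is just a rough sketch; res_tmp, tag_res, nprocs and my_rank stand in for whatever names you actually use:

double precision :: res_tmp(total_unknown)
integer :: status(MPI_STATUS_SIZE), source, ierr

if (my_rank == 0) then
   res = res_cpu                         ! start with rank 0's own contribution
   do source = 1, nprocs - 1             ! fixed order: rank 1, then 2, ...
      call MPI_Recv(res_tmp, total_unknown, MPI_DOUBLE_PRECISION, &
                    source, tag_res, MPI_COMM_WORLD, status, ierr)
      res = res + res_tmp                ! additions now happen in the same order every run
   end do
else
   call MPI_Send(res_cpu, total_unknown, MPI_DOUBLE_PRECISION, &
                 0, tag_res, MPI_COMM_WORLD, ierr)
end if

With the source rank fixed inside the loop, the sum is performed in the same order for a given np, so repeated runs give bitwise-identical results.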



P.S. It seems to me that you could use MPI collective operations to implement what you're doing. E.g., something like:

call MPI_Reduce(res_cpu, res, total_unknown, MPI_DOUBLE_PRECISION, MPI_SUM, 0, MPI_COMM_WORLD, ierr)

call MPI_Gather(jacob_cpu, total_elem_cpu * unique, MPI_DOUBLE_PRECISION, &
                jacob,     total_elem_cpu * unique, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
call MPI_Gather(row_cpu,   total_elem_cpu * unique, MPI_INTEGER, &
                row,       total_elem_cpu * unique, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
call MPI_Gather(col_cpu,   total_elem_cpu * unique, MPI_INTEGER, &
                col,       total_elem_cpu * unique, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

I think the res part is right. The jacob/row/col parts are not quite right, since you don't just want to gather the elements, you want to add them into particular arrays. I'm not sure whether you'd really want to allocate a new scratch array for that purpose or not (a rough sketch of that idea follows below). Nor would this solve the nondeterministic res_cpu problem you had. I just wanted to make sure you knew about the MPI collective operations as an alternative to your point-to-point implementation.
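If you did go the scratch-array route, it might look roughly like this. Again, just a sketch: jacob_all/row_all/col_all are new scratch arrays, and jacob_global stands for however you actually store the assembled Jacobian (I pretend it is a dense matrix here purely to show the accumulation step):

double precision, allocatable :: jacob_all(:)
integer, allocatable :: row_all(:), col_all(:)
integer :: k, n_all, nprocs, ierr

call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
if (my_rank == 0) then
   n_all = total_elem_cpu * unique * nprocs
else
   n_all = 1                              ! recv buffers only matter on the root
end if
allocate(jacob_all(n_all), row_all(n_all), col_all(n_all))

call MPI_Gather(jacob_cpu, total_elem_cpu * unique, MPI_DOUBLE_PRECISION, &
                jacob_all, total_elem_cpu * unique, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
call MPI_Gather(row_cpu,   total_elem_cpu * unique, MPI_INTEGER, &
                row_all,   total_elem_cpu * unique, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
call MPI_Gather(col_cpu,   total_elem_cpu * unique, MPI_INTEGER, &
                col_all,   total_elem_cpu * unique, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

if (my_rank == 0) then
   ! add the gathered (row, col, value) triplets into the global storage
   do k = 1, total_elem_cpu * unique * nprocs
      jacob_global(row_all(k), col_all(k)) = &
         jacob_global(row_all(k), col_all(k)) + jacob_all(k)
   end do
end if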
