Hello

I have a problem with my code, which runs a kind of simulator.

I have 4 workers (i.e. 4 MPI processes) which process data.

These data aren't all available at the same time, so I have another process (the splitter) which sends chunks of data to each worker in round-robin order.
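
The splitter side looks roughly like this (simplified sketch; get_next_chunk(), worker_rank[], CHUNK_MAX and TAG_DATA are placeholders for my real code):

    /* Splitter: forward each chunk to the next worker, round robin.
     * Chunks are produced over time, not all at once. */
    int next = 0;
    double buf[CHUNK_MAX];
    int count;
    while (get_next_chunk(buf, &count)) {
        MPI_Send(buf, count, MPI_DOUBLE, worker_rank[next], TAG_DATA,
                 MPI_COMM_WORLD);
        next = (next + 1) % NUM_WORKERS;
    }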

This works well using MPI_Send and MPI_Recv, but after that I need to reduce all the data.

I was hoping to use MPI_Reduce to combine the results from all workers, but there are two problems:

1. The result data aren't all available at the same time, due to the delay in the original data.
2. I cannot wait for all the data to be computed; I need to start the reduce as soon as possible.

So when the first and second workers have finished, I can reduce their two results and keep the result on the first worker. When the third and fourth have finished, I can reduce those two as well and keep the result on the third worker.
Finally, I need to reduce the data from the first and third workers, as sketched below.
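
On the worker side, what I want is something like this (sketch; I use MPI_SUM as a stand-in for my real reduction operation, and commAlpha/commC are the communicators described below):

    /* Stage 1: reduce within each pair as soon as that pair is done.
     * commAlpha is commA on W1/W2 and commB on W3/W4. */
    int pair_rank;
    MPI_Comm_rank(commAlpha, &pair_rank);
    MPI_Reduce(local, pair_result, n, MPI_DOUBLE, MPI_SUM, 0, commAlpha);

    /* Stage 2: only W1 and W3 (rank 0 of their pair) combine the two
     * pair results over commC. */
    if (pair_rank == 0)
        MPI_Reduce(pair_result, final_result, n, MPI_DOUBLE, MPI_SUM, 0,
                   commC);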

The only way I see to do that with MPI_Reduce is to create dedicated communicators.

All I want is:

commA: contains W1 and W2
commB: contains W3 and W4
commC: contains W1 and W3


Let's say I've already created a communicator (workers_comm) containing only my workers.
I can easily add this line in all my workers:


MPI_Comm_split(workers_comm, workerId / 2, rank, &commAlpha);

*If I understand correctly, W1 and W2 will end up with a communicator containing W1 and W2, and W3 and W4 with a communicator containing W3 and W4. Am I right?*
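
Concretely, on each worker (sketch; workerId is my rank inside workers_comm, 0..3):

    /* Split workers_comm into pairs: color 0 -> {W1, W2},
     * color 1 -> {W3, W4}. */
    MPI_Comm commAlpha;              /* commA on W1/W2, commB on W3/W4 */
    int workerId;
    MPI_Comm_rank(workers_comm, &workerId);
    MPI_Comm_split(workers_comm, workerId / 2, workerId, &commAlpha);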


But then, when I try to use (only on W1 and W3):

MPI_Comm_create(workers_comm, group, &commC);

*I also have to call MPI_Comm_create on W2 and W4 or it blocks. Why?*
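
For reference, here is roughly how I build the group and create commC (sketch; W1 and W3 are ranks 0 and 2 in workers_comm):

    /* Build a group containing only W1 and W3, then create commC. */
    MPI_Group workers_group, group_c;
    MPI_Comm  commC;
    int ranks_c[2] = {0, 2};         /* ranks of W1 and W3 */
    MPI_Comm_group(workers_comm, &workers_group);
    MPI_Group_incl(workers_group, 2, ranks_c, &group_c);
    MPI_Comm_create(workers_comm, group_c, &commC);
    MPI_Group_free(&group_c);
    MPI_Group_free(&workers_group);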

After that, I get lots of intermittent errors, depending on the number of workers I want to use. *So, is it allowed to create and use overlapping communicators? And if so, how?*

Thanks

Mathieu
