Hi, thanks for the answer. You are right: it is not the rank that matters, but how the physical processes are arranged in the Cartesian topology. I don't care about the label. So, how do I achieve that?
Regards,
Claudio

2012/3/1, Ralph Castain <r...@open-mpi.org>:
> Is it really the rank that matters, or where the rank is located? For
> example, you could leave the ranks as assigned by the cartesian topology,
> but then map them so that ranks 0 and 2 share a node, 1 and 3 share a
> node, etc.
>
> Is that what you are trying to achieve?
>
> On Mar 1, 2012, at 11:57 AM, Claudio Pastorino wrote:
>
>> Dear all,
>> I apologize in advance if this is not the right list to post this. I am
>> a newcomer; please let me know if I should send this to another list.
>>
>> I write MPI programs for HPC. In particular, I wrote a parallel code for
>> molecular dynamics simulations. The program splits the work over a matrix
>> of processes, and I send messages along rows and columns on an equal
>> basis. I have learnt that the typical arrangement of the Cartesian
>> topology is usually not the best option, because in a matrix of, say,
>> 4x4 processes on quad-core processors, the processes are arranged so that
>> along columns one stays inside the same quad-core processor, while along
>> rows one always goes out to the network. This means the processes are
>> arranged as one quad per row.
>>
>> Let me explain this for a 2x2 case. The Cartesian topology typically
>> makes this assignment:
>>
>>   cartesian   mpi_comm_world
>>   0,0   -->   0
>>   0,1   -->   1
>>   1,0   -->   2
>>   1,1   -->   3
>>
>> The question is, how do I get a "user defined" assignment such as:
>>
>>   0,0   -->   0
>>   0,1   -->   2
>>   1,0   -->   1
>>   1,1   -->   3
>>
>> ?
>>
>> Thanks in advance, and I hope this is more or less understandable.
>> Claudio
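For reference, a minimal sketch (not from the thread; one possible approach under the assumptions noted below) of how such a user-defined assignment could be obtained: permute the ranks of MPI_COMM_WORLD with MPI_Comm_split, then call MPI_Cart_create on the permuted communicator with reorder = 0 so the permuted order is kept. It assumes exactly 4 processes on a non-periodic 2x2 grid.

/* Hypothetical sketch: permute MPI_COMM_WORLD so that MPI_Cart_create
 * (with reorder = 0) yields the "user defined" assignment from the thread:
 *   (0,0) -> world rank 0, (0,1) -> 2, (1,0) -> 1, (1,1) -> 3.
 * Assumes exactly 4 processes and a 2x2, non-periodic grid.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int wrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    int dims[2]    = {2, 2};
    int periods[2] = {0, 0};

    /* Column-major key: world rank r is placed at position
     * (r % dims[0]) * dims[1] + (r / dims[0]) of the new ordering. */
    int key = (wrank % dims[0]) * dims[1] + (wrank / dims[0]);

    MPI_Comm permuted, cart;
    MPI_Comm_split(MPI_COMM_WORLD, 0, key, &permuted);

    /* reorder = 0 keeps the rank order of 'permuted', so rank k of
     * 'permuted' receives the k-th coordinate in row-major order. */
    MPI_Cart_create(permuted, 2, dims, periods, 0, &cart);

    int crank, coords[2];
    MPI_Comm_rank(cart, &crank);
    MPI_Cart_coords(cart, crank, 2, coords);
    printf("world rank %d -> cart coords (%d,%d)\n",
           wrank, coords[0], coords[1]);

    MPI_Comm_free(&cart);
    MPI_Comm_free(&permuted);
    MPI_Finalize();
    return 0;
}

Note that the permutation only changes which coordinates each MPI_COMM_WORLD rank receives; whether two particular ranks actually share a physical node is still decided at launch time by the process placement, so the permutation and the placement have to be chosen together.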