Here is the idea for getting the number of tasks per node:


MPI_Comm intranode_comm;
int tasks_per_local_node;

/* split MPI_COMM_WORLD into one communicator per shared-memory node */
MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &intranode_comm);

/* the size of the intra-node communicator is the number of tasks on this node */
MPI_Comm_size(intranode_comm, &tasks_per_local_node);

MPI_Comm_free(&intranode_comm);



Then you can get the available memory per node, for example with
grep ^MemAvailable: /proc/meminfo
and divide that by the number of tasks on the local node.



Now, if the distribution should be based on CPU speed, you can simply retrieve this value on each task, MPI_Gather() it to rank 0, and do the distribution there.


In any case, if you MPI_Gather() the task parameter you are interested in, you should be able to get rid of a static config file.

Non-blocking collectives are also available (MPI_Igather[v] / MPI_Iscatter[v]); if your algorithm can exploit them, that might be helpful.

Cheers,

Gilles
You can use MPI_Comm_split_type in order to create intra-node communicators.
I don't quite understand this function. One starts with the world
communicator including all ranks 0...9 and splits it into multiple
subcommunicators? The only split_type appears to be MPI_COMM_TYPE_SHARED.


Then you can find how much memory is available per task,
How? By reading '/proc/self/statm' on Linux?

MPI_Gather that on the master task, and use MPI_Scatterv/MPI_Gatherv to
distribute/consolidate the data.

Apologies for my scattered comments, my question is not actually
totally clear in my head :-)
_______________________________________________
users mailing list
us...@open-mpi.org
Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2016/06/29465.php

