George,
IMHO, you are right!
Attached is a new version of Ghislain's program that uses
MPI_Dist_graph_neighbors_count and MPI_Dist_graph_neighbors,
as you suggested.
It produces correct results.
/* note that in this case, realDestinations is similar to targets,
so I might have left some
> On Nov 25, 2014, at 01:12 , Gilles Gouaillardet
> wrote:
>
> Bottom line, though Open MPI's implementation of MPI_Dist_graph_create is not
> deterministic, it is compliant with the MPI standard.
> /* not to mention this is not the right place to argue what the standard
> could or should have been */
Hi,
I have random segmentation violations (signal 11) in the mentioned
function when testing MPI I/O calls with 2 processes on a single
machine. Most of the time (1499/1500), it works perfectly.
Here are the call stacks (for 1.6.3) on both processes:
process 0:
==
Might be worth trying 1.8.3 to see if it works - there is an updated version of
ROMIO in it.
> On Nov 25, 2014, at 12:13 PM, Eric Chamberland
> wrote:
>
> Hi,
>
> I have random segmentation violations (signal 11) in the mentioned function
> when testing MPI I/O calls with 2 processes on a si
Hi,
I am trying to figure out a way for each local MPI rank to identify the ranks
located on physically remote nodes (just different nodes) of a cluster or an
MPP such as a Cray. I am using MPI_Get_processor_name to get the node ID, but
it requires some processing to map MPI ranks to node IDs.
Every process has a complete map of where every process in the job is located -
not sure if there is an MPI API for accessing it, though.
> On Nov 25, 2014, at 2:32 PM, Teranishi, Keita wrote:
>
> Hi,
>
> I am trying to figure out a way for each local MPI rank to identify the
> ranks located
Are you doing this just for debugging, or do you really want to do it within
the MPI program?
orte-ps gives you the pid/host for each rank, but I don't think there is any
standard way to do this via an API.
Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
bro...@umich
Hi Keita,
There is no MPI API to do this from within an MPI application. One method I
have used for this purpose is to create a function that executes an
MPI_Comm_split operation using a string as the color value. As output, it
returns a communicator containing all procs that specified the same string.
Adam,
Thanks for the suggestion. I was using my own tokenizer (or those from Boost)
to extract node ID numbers, but your approach is more generic and serves my
purpose. For the time being I will try to leverage the SCR source, and will
ask you if I need any further assistance.
Best Regards,
We should probably create an MPI extension that exposes the internal database,
so you can just get the info directly; it is already present, and it seems a
shame to execute a collective operation just to re-create it. I’ll put it on my
“to-do” list.
> On Nov 25, 2014, at 4:14 PM, Teranishi, Keita wrote: