Hi Rolf,
yes, this is exactly what I was looking for; I had just hoped there was
also a way to control this behavior manually.
But in most cases that would be the best setting.
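For the record, the kind of manual control I had in mind would be a small wrapper
script that pins each local rank to one HCA. The sketch below is untested: the
script name and the mlx4_* device names are just placeholders for my machines,
and I am assuming the openib BTL honours the per-process environment form of
btl_openib_if_include.

#!/bin/sh
# pick-hca.sh (hypothetical): map each local rank to the HCA attached to its CPU.
# OMPI_COMM_WORLD_LOCAL_RANK is set by mpirun; the mlx4_* names are placeholders.
case "$OMPI_COMM_WORLD_LOCAL_RANK" in
  0) export OMPI_MCA_btl_openib_if_include=mlx4_0 ;;
  1) export OMPI_MCA_btl_openib_if_include=mlx4_1 ;;
  2) export OMPI_MCA_btl_openib_if_include=mlx4_2 ;;
  3) export OMPI_MCA_btl_openib_if_include=mlx4_3 ;;
esac
exec "$@"

launched as, e.g., mpirun -np 8 -npernode 4 -host node1,node2 ./pick-hca.sh ./my_app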
Thanks!
Tobias
On 07/21/2014 05:01 PM, Rolf vandeVaart wrote:
With Open MPI 1.8.1, the library will use the NIC that is "closest" to the CPU. There was
a bug in earlier Open MPI 1.8 releases that prevented this from happening. You can see the
selection by running with some verbosity via the "btl_base_verbose" MCA parameter. For example,
this is what I observed on a two-node cluster with two NICs on each node.
[rvandevaart@ivy0] $ mpirun --mca btl_base_verbose 1 -host ivy0,ivy1 -np 4 --mca pml ob1 --mca btl_openib_warn_default_gid_prefix 0 MPI_Alltoall_c
[ivy0.nvidia.com:28896] [rank=0] openib: using device mlx5_0
[ivy0.nvidia.com:28896] [rank=0] openib: skipping device mlx5_1; it is too far away
[ivy0.nvidia.com:28897] [rank=1] openib: using device mlx5_1
[ivy0.nvidia.com:28897] [rank=1] openib: skipping device mlx5_0; it is too far away
[ivy1.nvidia.com:04652] [rank=2] openib: using device mlx5_0
[ivy1.nvidia.com:04652] [rank=2] openib: skipping device mlx5_1; it is too far away
[ivy1.nvidia.com:04653] [rank=3] openib: using device mlx5_1
[ivy1.nvidia.com:04653] [rank=3] openib: skipping device mlx5_0; it is too far away
So maybe the right thing is already happening by default? Or are you looking for more
fine-grained control?
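If you do want to steer the selection by hand, the closest knobs I can think of are the
openib BTL's include/exclude lists; note they apply to the whole mpirun rather than to
individual ranks. Something like:

mpirun --mca btl_openib_if_include mlx5_0 -host ivy0,ivy1 -np 4 --mca pml ob1 MPI_Alltoall_c

would restrict every rank in that job to mlx5_0 (and btl_openib_if_exclude does the inverse).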
Rolf
________________________________________
From: users [users-boun...@open-mpi.org] On Behalf Of Tobias Kloeffel
[tobias.kloef...@fau.de]
Sent: Sunday, July 20, 2014 12:33 PM
To: Open MPI Users
Subject: Re: [OMPI users] Help with multirail configuration
I found no option in 1.6.5 and 1.8.1...
On 7/20/2014 6:29 PM, Ralph Castain wrote:
What version of OMPI are you talking about?
On Jul 20, 2014, at 9:11 AM, Tobias Kloeffel <tobias.kloef...@fau.de> wrote:
Hello everyone,
I am trying to get the maximum performance out of my two-node test setup.
Each node has 4 Sandy Bridge CPUs, and each CPU has one directly attached
Mellanox QDR card. The two nodes are connected via an 8-port Mellanox switch.
So far I have found no option that allows binding MPI ranks to a specific card,
as is available in MVAPICH2. Is there a way to change the round-robin behavior
of Open MPI?
Maybe something like "btl_tcp_if_seq" that I have missed?
Kind regards,
Tobias
_______________________________________________
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post:
http://www.open-mpi.org/community/lists/users/2014/07/24836.php