Dear all,
recently I tried to switch from openMPI 2.1.x to openMPI 3.1.x.
I am trying to run an OpenMP/MPI hybrid program, and prior to openMPI 3.1 I used
--bind-to core --map-by slot:PE=4
and requested full nodes via PBS or Slurm (:ppn=16; --cpus-per-task=1,
--tasks-per-node=16)
With openMPI 3.1, ho
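A minimal sketch of how such a hybrid launch is commonly written under Slurm
with the pre-3.1 behaviour (node count, core count, and program name below are
placeholders, not taken from the message):

    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16
    #SBATCH --cpus-per-task=1
    export OMP_NUM_THREADS=4
    # 32 allocated cores -> 8 ranks, each rank bound to 4 cores
    mpirun -np 8 --bind-to core --map-by slot:PE=4 ./hybrid_app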
Hello Jingchao,
try to use -mca mpi_leave_pinned 0, also for multi-node jobs.
kind regards,
Tobias Klöffel
On 02/06/2017 09:38 PM, Jingchao Zhang wrote:
Hi,
We recently noticed openmpi is using the openib btl instead of self,sm for
single-node jobs, which has caused performance degradation for some a
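A sketch of the two knobs discussed here (command lines are illustrative;
./app stands in for the real binary):

    # disable the registration cache, as suggested above
    mpirun --mca mpi_leave_pinned 0 ./app

    # or keep single-node runs on shared memory only, so openib is never chosen
    mpirun --mca btl self,sm ./app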
Hi,
use: --map-by core
regards,
Tobias
On 09/13/2015 09:41 AM, Saliya Ekanayake wrote:
I tried,
--map-by ppr:12:node --slot-list 0,2,4,6,8,10,12,14,16,18,20,22
--bind-to core -np 12
but it complains,
"Conflicting directives for binding policy are causing the policy
to be redefined:
New
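For comparison, two forms that avoid mixing conflicting binding directives
(rank count and binary are placeholders):

    # 12 ranks per node, bound to cores, without --slot-list
    mpirun -np 12 --map-by ppr:12:node --bind-to core ./app

    # or, as suggested above, simply map by core
    mpirun -np 12 --map-by core --bind-to core ./app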
Hi all,
The configuration might be a bit exotic:
Kernel 4.1.5 vanilla, Mellanox OFED 3.0-2.0.1
ccc174 1 x dual port ConnectX-3
mini4 2 x single port ConnectX-2
mini2 8 x single port ConnectX-2
MIS20025
The following does work:
using the oob connection manager in 1.7.3:
everything works, excep
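For reference, the connection manager used by the openib BTL can be selected
explicitly; a sketch, assuming the 1.7-series parameter name:

    mpirun --mca btl_openib_cpc_include oob ./app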
I will spend some time over the next couple of weeks testing the updated code.
-Nathan
On Tue, Mar 17, 2015 at 12:02:43PM +0100, Tobias Kloeffel wrote:
Hello Nathan,
I am using:
IMB 4.0 Update 2
gcc version 4.8.1
Intel compilers 15.0.1 20141023
xpmem from your github
I a
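A build along these lines would typically point configure at the single-copy
libraries; a sketch only, with placeholder install paths:

    ./configure --with-xpmem=/opt/xpmem --with-knem=/opt/knem CC=gcc CXX=g++
    make -j && make install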
but I have not fully stress-tested my xpmem branch.
I will see if I can reproduce and fix the hang.
-Nathan
On Mon, Mar 16, 2015 at 05:32:26PM +0100, Tobias Kloeffel wrote:
Hello everyone,
currently I am benchmarking the different single copy mechanisms
knem/cma/xpmem on a Xeon E5 V3 machine.
I am
Hello everyone,
currently I am benchmarking the different single copy mechanisms
knem/cma/xpmem on a Xeon E5 V3 machine.
I am using openmpi 1.8.4 with the CMA patch for vader.
While it turns out that xpmem is the clear winner (reproducing Nathan
Hjelm's results), I always ran into a problem at
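In later Open MPI releases the mechanism can be pinned per run for exactly
this kind of comparison; a sketch (benchmark binary and mechanism value are
illustrative, and the patched 1.8.4 discussed here may expose this
differently):

    mpirun -np 2 --mca btl self,vader \
           --mca btl_vader_single_copy_mechanism cma ./IMB-MPI1 PingPong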
OK, I have to wait until tomorrow; they have some problems with the
network...
On 09/18/2014 01:27 PM, Nick Papior Andersen wrote:
I am not sure whether the test will cover this... You should check it...
Here I attach my example script, which shows two working cases and one
not working (you ca
[users-boun...@open-mpi.org] On Behalf Of Tobias Kloeffel
[tobias.kloef...@fau.de]
Sent: Sunday, July 20, 2014 12:33 PM
To: Open MPI Users
Subject: Re: [OMPI users] Help with multirail configuration
I found no option in 1.6.5 and 1.8.1...
On 7/20/2014 6:29 PM, Ralph Castain wrote:
What version of
I found no option in 1.6.5 and 1.8.1...
On 7/20/2014 6:29 PM, Ralph Castain wrote:
What version of OMPI are you talking about?
On Jul 20, 2014, at 9:11 AM, Tobias Kloeffel wrote:
Hello everyone,
I am trying to get the maximum performance out of my two node testing setup.
Each node
Hello everyone,
I am trying to get the maximum performance out of my two-node testing
setup. Each node consists of 4 Sandy Bridge CPUs, and each CPU has one
directly attached Mellanox QDR card. Both nodes are connected via an
8-port Mellanox switch.
So far I have found no option that allows binding m
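The usual starting point for steering traffic onto particular rails is to
restrict the openib BTL to chosen HCAs/ports; a sketch with placeholder device
names (check the real ones with ibstat). Note this alone does not bind each
rank to its nearest card, which is what the message above asks about:

    mpirun --mca btl openib,self,sm \
           --mca btl_openib_if_include mlx4_0:1,mlx4_1:1 ./app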