Hi,
I've installed openmpi-master-201805080348-b39bbfb on my "SUSE Linux
Enterprise Server 12.3 (x86_64)" with gcc-6.4.0. Unfortunately, I get
an error if I use process binding.
loki config_files 137 mpiexec -report-bindings -np 4 -rf rf_loki_nfs1 hostname
[loki:17301] OPAL dss:unpack: got type
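For reference, the file passed via -rf uses Open MPI's rankfile syntax. A
minimal sketch of such a file is shown below; the hostnames and slot numbers
are only illustrative and are not the actual contents of rf_loki_nfs1:

rank 0=loki slot=0:0
rank 1=loki slot=0:1
rank 2=nfs1 slot=0:0
rank 3=nfs1 slot=0:1

Each line binds one MPI rank to a socket:core location on the named host.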
Good news - thanks!
> On May 8, 2018, at 7:02 PM, Bill Broadley wrote:
Sorry all,
Chris S over on the slurm list spotted it right away. I didn't have the
MpiDefault set to pmix_v2.
I can confirm that Ubuntu 18.04, gcc-7.3, openmpi-3.1.0, pmix-2.1.1, and
slurm-17.11.5 seem to work well together.
Sorry for the bother.
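For anyone hitting the same symptom: the setting lives in slurm.conf and has
to name the PMIx plugin version that Slurm was built against. A minimal
excerpt, with everything else omitted, might look like:

MpiDefault=pmix_v2

Alternatively, the plugin can be selected per job with "srun --mpi=pmix_v2"
instead of changing the cluster-wide default.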
I have openmpi-3.0.1, pmix-1.2.4, and slurm-17.11.5 working well on a few
clusters. For things like:
bill@headnode:~/src/relay$ srun -N 2 -n 2 -t 1 ./relay 1
c7-18 c7-19
size= 1, 16384 hops, 2 nodes in 0.03 sec ( 2.00 us/hop) 1953 KB/sec
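The relay program itself isn't included in the thread; as a rough idea of
what a ring-relay latency test of this kind does, a minimal sketch in plain
MPI C might look like the following. The hop count, message-size handling,
and output format are assumptions, not Bill's actual code:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    int bytes = (argc > 1) ? atoi(argv[1]) : 1;   /* message size in bytes */
    int hops = 16384;                              /* total ring hops       */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *buf = calloc(bytes, 1);
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < hops / size; i++) {
        if (rank == 0) {   /* rank 0 starts each trip around the ring */
            MPI_Send(buf, bytes, MPI_CHAR, next, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, prev, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, bytes, MPI_CHAR, prev, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, next, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("size=%d, %d hops, %d ranks in %.2f sec (%.2f us/hop)\n",
               bytes, hops, size, elapsed, elapsed * 1e6 / hops);

    free(buf);
    MPI_Finalize();
    return 0;
}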
I've been having a tougher time trying to get
Looks like it doesn't fail with master, so at some point I fixed this bug. The
current plan is to bring all the master changes into v3.1.1. This includes a
number of bug fixes.
-Nathan
On May 08, 2018, at 08:25 AM, Joseph Schuchart wrote:
Nathan,
Thanks for looking into that. My test program is attached.
Best
Joseph
On 05/08/2018 02:56 PM, Nathan Hjelm wrote:
I will take a look today. Can you send me your test program?
-Nathan
> On May 8, 2018, at 2:49 AM, Joseph Schuchart wrote:
>
> All,
>
> I have been experimenting with using Open MPI 3.1.0 on our Cray XC40
> (Haswell-based nodes, Aries interconnect) for multi-threaded MPI RMA.
> Unfortunately
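Joseph's test program went out as an attachment and isn't reproduced in the
thread. As a rough illustration of the multi-threaded RMA pattern under
discussion (MPI_THREAD_MULTIPLE with several threads issuing atomic RMA to a
shared window), a sketch could look like the following; every detail here is
an assumption, not the attached test case:

#include <mpi.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS 4

static MPI_Win win;

/* Each thread performs atomic fetch-and-add operations on a 64-bit
 * counter exposed by rank 0 through the window. */
static void *worker(void *arg)
{
    (void)arg;
    uint64_t one = 1, old;
    for (int i = 0; i < 1000; i++) {
        MPI_Fetch_and_op(&one, &old, MPI_UINT64_T, 0 /* target rank */,
                         0 /* displacement */, MPI_SUM, win);
        MPI_Win_flush(0, win);   /* complete the operation at the target */
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        MPI_Abort(MPI_COMM_WORLD, 1);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    uint64_t *base;
    MPI_Win_allocate(sizeof(uint64_t), sizeof(uint64_t), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    *base = 0;
    MPI_Barrier(MPI_COMM_WORLD);

    /* One passive-target epoch shared by all threads of this process. */
    MPI_Win_lock_all(0, win);

    pthread_t threads[NTHREADS];
    for (int t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, worker, NULL);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(threads[t], NULL);

    MPI_Win_unlock_all(win);
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0)   /* all epochs closed, local read of the counter is fine */
        printf("final counter: %llu\n", (unsigned long long)*base);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}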