On Aug 16, 2016, at 3:07 PM, Reuti wrote:
>
> Thx a bunch - that was it. Despite searching for a solution I found only
> hints that didn't solve the issue.
FWIW, we talk about this in the HACKING file, but I admit that's not
necessarily the easiest place to find:
https://github.com/open-m
On 16.08.2016 at 13:26, Jeff Squyres (jsquyres) wrote:
> On Aug 12, 2016, at 2:15 PM, Reuti wrote:
>>
>> I updated my tools to:
>>
>> autoconf-2.69
>> automake-1.15
>> libtool-2.4.6
>>
>> but with Open MPI's ./autogen.pl I run into:
>>
>> configure.ac:152: error: possibly undefined macro: AC_PROG_LIBTOOL
Hi Josh,
Thanks for your reply. I did try setting MXM_RDMA_PORTS=mlx4_0:1 for all my MPI
processes, and it did improve performance, but the performance I obtain still
isn't completely satisfying.
When I run the IMB 4.1 PingPong and Sendrecv benchmarks between two nodes with
Open MPI 1.10.3, I get:
witho
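For reference, a minimal sketch of how such a two-node IMB run is usually launched, with the MXM_RDMA_PORTS setting exported to all ranks; the host names node01/node02 are hypothetical:

  mpirun -np 2 --host node01,node02 -x MXM_RDMA_PORTS=mlx4_0:1 ./IMB-MPI1 PingPong Sendrecv

The -x flag tells Open MPI's mpirun to export that environment variable to every launched process.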
On Aug 12, 2016, at 2:15 PM, Reuti wrote:
>
> I updated my tools to:
>
> autoconf-2.69
> automake-1.15
> libtool-2.4.6
>
> but with Open MPI's ./autogen.pl I run into:
>
> configure.ac:152: error: possibly undefined macro: AC_PROG_LIBTOOL
>
> I recall seeing it already before. How do I get rid of it
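A hedged sketch of the usual fix when autogen.pl reports AC_PROG_LIBTOOL as possibly undefined: make sure aclocal can find the m4 macros of the freshly installed libtool before rerunning autogen.pl. The prefix $HOME/local below is only a hypothetical install location:

  export PATH=$HOME/local/bin:$PATH
  export ACLOCAL_PATH=$HOME/local/share/aclocal:$ACLOCAL_PATH
  ./autogen.pl

ACLOCAL_PATH is honored by automake 1.13 and later, so it works with the automake 1.15 mentioned above.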
Assuming you have an InfiniBand network, another option is to install MXM
(a Mellanox proprietary but free library) and rebuild Open MPI.
pml/yalla will then be used instead of ob1, and you should be just fine.
Cheers,
Gilles
On Tuesday, August 16, 2016, Jeff Squyres (jsquyres) wrote:
> On Aug 16, 201
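A hedged sketch of what that rebuild could look like; /opt/mellanox/mxm is the common MXM install prefix but may differ on your system, and the remaining configure arguments are left out:

  ./configure --with-mxm=/opt/mellanox/mxm ...
  make -j 8 && make install
  mpirun --mca pml yalla ...

Forcing --mca pml yalla is optional; when MXM support is built in, pml/yalla is normally selected automatically on InfiniBand hardware, as described above.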
On Aug 16, 2016, at 6:09 AM, Debendra Das wrote:
>
> As far as I understood, I have to wait for version 2.0.1 to fix the issue. So
> can you please give any idea of when 2.0.1 will be released?
We had hoped to release it today, actually. :-\ But there are still a few
issues we're working out
As far as I understood, I have to wait for version 2.0.1 to fix the issue. So
can you please give any idea of when 2.0.1 will be released? Also, I could not
understand how to use the patch.
Thanking You,
Debendranath Das
On Mon, Aug 15, 2016 at 8:27 AM, Gilles Gouaillardet wrote:
> Thanks for bo
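For the question about using the patch, a minimal hedged sketch of how a patch is typically applied to an Open MPI source tree; the file name fix.patch and the -p1 strip level are placeholders that depend on how the actual diff was generated:

  cd openmpi-2.0.0
  patch -p1 < /path/to/fix.patch
  make -j 8 && make install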
Hi Gilles Gouaillardet,
Thank you for your kind assistance, and YES, --mca plm_rsh_no_tree_spawn 1
works fine. I think it is supposed to be slower than a normal MPI run.
As you mentioned, slave1 can't ssh to the others; only the master can ssh to all
slaves. I'll fix it and check again.
Thanking you in advance,
By default, Open MPI spawns orted via ssh in a tree fashion. That
basically requires that all nodes can ssh to each other.
This is likely not your case (for example, slave2 might not be able to
ssh to slave4).
As a workaround, can you try
mpirun --mca plm_rsh_no_tree_spawn 1 ...
and see whether i
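Combined with the command from the original post, the workaround would look roughly like this (all names are taken from the thread itself):

  mpiexec --mca plm_rsh_no_tree_spawn 1 -np 16 --hostfile mpi-hostfile namd2 apoa1.namd > apoa1.log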
I have a parallel setup of 6 identical machines with Linux Mint 18, ssh and
Open MPI.
When I execute this:
mpiexec -np 16 --hostfile mpi-hostfile namd2 apoa1.namd > apoa1.log
with the following host file:
localhost slots=4
slave1 slots=4
slave2 slots=4
slave3 slots=4
slave4 slots=4
slave5 slots=4
it gi
Hi Gilles,
Ah, of course - I forgot about that.
Thanks,
Ben
-Original Message-
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Gilles
Gouaillardet
Sent: Tuesday, 16 August 2016 4:07 PM
To: Open MPI Users
Subject: Re: [OMPI users] Mapping by hwthreads without fully po