> ... LD_LIBRARY_PATH
> to the remote nodes prior to executing the command there.
>
> -- bennet
>
>
>
> On Tue, Aug 22, 2017 at 11:55 AM, Jackson, Gary L.
> wrote:
>> I’m using a build of OpenMPI provided by a third party.
>>
... system-wide by adding the following line to
/.../etc/openmpi-mca-params.conf:
orte_launch_agent = /.../myorted
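A launch agent like the myorted above is usually just a thin wrapper that sets up the
environment and then execs the real orted; a minimal sketch (the install paths below are
placeholders, not the actual ones from this thread) would be:

#!/bin/sh
# hypothetical wrapper launch agent: export the library path the
# third-party orted needs, then hand off to the real orted
export LD_LIBRARY_PATH=/path/to/third-party/libs:$LD_LIBRARY_PATH
exec /path/to/openmpi/bin/orted "$@"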
Cheers,
Gilles
On 8/22/2017 1:06 AM, Jackson, Gary L. wrote:
I’m using a binary distribution of OpenMPI 1.10.2. As linked, it requires
certain shared libraries outside of OpenMPI for orted itself to start. So,
passing in LD_LIBRARY_PATH with the “-x” flag to mpirun doesn’t do anything:
$ mpirun --hostfile ${HOSTFILE} -N 1 -n 2 -x LD_LIBRARY_PATH hostname
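One way to confirm which libraries orted itself cannot resolve on a remote node is to run
ldd against it there; the host name and install prefix below are placeholders:

$ ssh remote-host 'ldd /path/to/openmpi/bin/orted | grep "not found"'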
... Gb network to tune. If you manage to tune it, I
would like to get the values for the different MCA parameters so that our
TCP BTL behaves optimally by default.
Thanks,
George.
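For reference, the current values of the TCP BTL parameters can be dumped with ompi_info;
the flags below match the 1.10 series and may differ slightly in other releases:

$ ompi_info --param btl tcp --level 9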
On Mar 10, 2016, at 11:45, Jackson, Gary L.
wrote:
I re-ran all experiments with 1.10.2 configured the way you specified.
On Tue, Mar 8, 2016 at 9:08 AM, Jackson, Gary L.
<gary.jack...@jhuapl.edu>
wrote:
I've built OpenMPI 1.10.1 on Amazon EC2. Using NetPIPE, I'm seeing about half
the performance for MPI over TCP as I do with raw TCP.
Jason,
how many Ethernet interfaces are there?
if there are several, can you try again with only one:
mpirun --mca btl_tcp_if_include eth0 ...
Cheers,
Gilles
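The same restriction can also be applied through the environment or the MCA parameter file
instead of on the mpirun command line; eth0 is just the interface named above:

# environment variable form
export OMPI_MCA_btl_tcp_if_include=eth0
# or a line in /.../etc/openmpi-mca-params.conf:
# btl_tcp_if_include = eth0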
On Tuesday, March 8, 2016, Jackson, Gary L.
wrote:
I've built OpenMPI 1.10.1 on Amazon EC2. Using NetPIPE, I'm seeing about half
the performance for MPI over TCP as I do with raw TCP. Before I start digging
in to this more deeply, does anyone know what might cause that?
For what it's worth, I see the same issues with MPICH, but I do not see it