Dear Jeff,

Thanks for the information and for helping me out. I delayed replying as
well; I wanted to test this, but the cluster here is down. I will check it
and let you know if it doesn't work.

Thanks
Bibrak Qamar



On Sat, May 24, 2014 at 5:23 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:

> I am sorry for the delay in replying; this week got a bit crazy on me.
>
> I'm guessing that Open MPI is striping across both your eth0 and ib0
> interfaces.
>
> You can limit which interfaces it uses with the btl_tcp_if_include MCA
> param.  For example:
>
>     # Just use eth0
>     mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include eth0 ...
>
>     # Just use ib0
>     mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include ib0 ...
>
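> You can also invert the logic and exclude interfaces instead (a sketch
> along the same lines; note that btl_tcp_if_include and btl_tcp_if_exclude
> are mutually exclusive, and if you use the exclude form you should keep
> the loopback in the list):
>
>     # Skip ib0 and the loopback, use everything else
>     mpirun --mca btl tcp,sm,self --mca btl_tcp_if_exclude lo,ib0 ...
>
>     # See which TCP BTL parameters your build supports (depending on
>     # your version you may need --level 9 to show them all)
>     ompi_info --param btl tcp --level 9
>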
> Note that IPoIB is nowhere near as efficient as native verbs, so you won't
> get nearly as good performance as you do with OMPI's openib transport.
>
> Note, too, that I specifically included "--mca btl tcp,sm,self" in the
> above examples to force the use of the TCP MPI transport.  Otherwise, OMPI
> may well automatically choose the native IB (openib) transport.  I see you
> mentioned this in your first mail, too, but I am listing it here just to be
> specific/pedantic.
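>
> If you want to confirm which BTL was actually selected at run time, one
> way (a sketch; the exact output format varies across Open MPI versions) is
> to bump the BTL verbosity:
>
>     mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include eth0 \
>         --mca btl_base_verbose 100 ./bandwidth.ompi
>
> The component selection messages go to stderr and show which BTLs each
> process opened.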
>
>
>
> On May 22, 2014, at 3:30 AM, Bibrak Qamar <bibr...@gmail.com> wrote:
>
> > Hi,
> >
> > I am facing a problem running Open MPI over TCP (on 1G Ethernet). In
> > practice the bandwidth cannot exceed 1000 Mbps (the line rate of a 1GbE
> > link), but for some data points (point-to-point ping-pong) it exceeds
> > this limit. I checked with MPICH, and it works as expected.
> >
> > Following is the command I use to run my program over TCP. Am I missing
> > something?
> >
> > -bash-3.2$ mpirun -np 2 -machinefile machines -N 1 --mca btl tcp,self ./bandwidth.ompi
> >
> > --------------------------------------------------------------------------
> > The following command line options and corresponding MCA parameter have
> > been deprecated and replaced as follows:
> >
> >   Command line options:
> >     Deprecated:  --npernode, -npernode
> >     Replacement: --map-by ppr:N:node
> >
> >   Equivalent MCA parameter:
> >     Deprecated:  rmaps_base_n_pernode, rmaps_ppr_n_pernode
> >     Replacement: rmaps_base_mapping_policy=ppr:N:node
> >
> > The deprecated forms *will* disappear in a future version of Open MPI.
> > Please update to the new syntax.
> >
> > --------------------------------------------------------------------------
> > Hello, world.  I am 1 on compute-0-16.local
> > Hello, world.  I am 0 on compute-0-15.local
> > 1    25.66    0.30
> > 2    25.54    0.60
> > 4    25.34    1.20
> > 8    25.27    2.42
> > 16    25.24    4.84
> > 32    25.49    9.58
> > 64    26.44    18.47
> > 128    26.85    36.37
> > 256    29.43    66.37
> > 512    36.02    108.44
> > 1024    42.03    185.86
> > 2048    194.30    80.42
> > 4096    255.21    122.45
> > 8192    258.85    241.45
> > 16384    307.96    405.90
> > 32768    422.78    591.32
> > 65536    790.11    632.83
> > 131072    1054.08    948.70
> > 262144    1618.20    1235.94
> > 524288    3126.65    1279.33
> >
> > -Bibrak
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
