Thanks for the quick reply.
This test is between two cores that sit on different CPUs, so the data has to
traverse the coherent fabric (e.g. QPI, UPI, cHT).
My assumption is that it has to go through main memory regardless of cache size.
Is that wrong? Can data be evicted from the cache of one core and placed into the
cache of a core on the other CPU without first being written back to main memory?
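One way to double-check the placement (and to compare against both ranks on the
same socket) is to ask mpirun for a binding report; a minimal sketch, assuming a
recent Open MPI where these options exist:

  # one rank per socket, bound to a core, with the binding printed at startup
  mpirun -n 2 --map-by socket --bind-to core --report-bindings ./osu_bw

  # for comparison: both ranks on the same socket (adjacent cores)
  mpirun -n 2 --map-by core --bind-to core --report-bindings ./osu_bw

  # node topology (sockets, caches, NUMA nodes), via hwloc
  lstopo --no-io

If the same-socket run sustains the bandwidth past 128K while the cross-socket
run does not, that would point to the fabric crossing rather than to an Open MPI
setting.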
I am rather thinking there is a parameter that splits large messages into
smaller fragments at 64K or 128K. Is there?
That seems (wrong assumption?) like the kind of parameter I would also need for
large messages on a NIC: coalescing data, large MTU, and so on.
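On the splitting question: the shared-memory BTL does fragment large messages,
and the thresholds are exposed as MCA parameters. A minimal sketch of how to
inspect and override them (parameter names as in recent releases; please verify
against ompi_info on the actual build):

  # list every vader parameter this build exposes, with current defaults
  ompi_info --param btl vader --level 9

  # example: raise the eager limit and the per-fragment send size, then re-run
  # (btl_vader_eager_limit / btl_vader_max_send_size are the usual names; if
  #  this build names them differently, the ompi_info output above will say so)
  mpirun -n 2 --mca btl self,vader \
         --mca btl_vader_eager_limit 32768 \
         --mca btl_vader_max_send_size 262144 \
         ./osu_bw

If moving those limits shifts the knee away from 64K/128K, it is a fragmentation
threshold; if the curve does not move, the cache effect George describes is the
more likely explanation.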

Joshua 

------ Original Message ------
Received: 02:15 PM CDT, 03/17/2017
From: George Bosilca <bosi...@icl.utk.edu>
To: Open MPI Users <users@lists.open-mpi.org>

Subject: Re: [OMPI users] tuning sm/vader for large messages


> Joshua,
> 
> In shared memory the bandwidth depends on many parameters, including the
> process placement and the sizes of the different cache levels. In your
> particular case, I guess that past 128k you fall outside the L2 cache (half of
> the cache, in fact) and the bandwidth drops as the data needs to be flushed
> to main memory.
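> For example, the cache sizes the OS reports can be checked with standard
> Linux tools (nothing Open MPI specific):
> 
>   lscpu | grep -i cache          # L1/L2/L3 sizes per the OS
>   getconf -a | grep -i CACHE     # the same information from libc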
> 
>   George.
> 
> 
> 
> On Fri, Mar 17, 2017 at 1:47 PM, Joshua Mora <joshua_m...@usa.net> wrote:
> 
> > Hello,
> > I am trying to get the maximum bandwidth for shared-memory communication
> > using the osu_[bw,bibw,mbw_mr] benchmarks.
> > I am observing a peak at the ~64K/128K message size, after which the
> > bandwidth drops instead of being sustained.
> > What parameters or Linux configuration do I need to add to the default
> > Open MPI settings to improve this?
> > I am already using vader and knem.
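> > i.e. something along these lines (flag names per the installed Open MPI;
> > adjust as needed):
> >
> >   mpirun -n 2 --mca btl self,vader \
> >          --mca btl_vader_single_copy_mechanism knem \
> >          ./osu_bw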
> >
> > See below the one-way bandwidth, with the peak at 64K.
> >
> > # Size      Bandwidth (MB/s)
> > 1                       1.02
> > 2                       2.13
> > 4                       4.03
> > 8                       8.48
> > 16                     11.90
> > 32                     23.29
> > 64                     47.33
> > 128                    88.08
> > 256                   136.77
> > 512                   245.06
> > 1024                  263.79
> > 2048                  405.49
> > 4096                 1040.46
> > 8192                 1964.81
> > 16384                2983.71
> > 32768                5705.11
> > 65536                7181.11
> > 131072               6490.55
> > 262144               4449.59
> > 524288               4898.14
> > 1048576              5324.45
> > 2097152              5539.79
> > 4194304              5669.76
> >
> > Thanks,
> > Joshua
> >
> 

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
