From: George Bosilca
To: Joshua Mora
Cc: Open MPI Users
Subject: Re: [OMPI users] tuning sm/vader for large messages

On Mon, Mar 20, 2017 at 12:45 PM, Joshua Mora wrote:

> If at certain x msg size you achieve X performance (MB/s) and at 2x msg
> size or higher you achieve Y performance, being Y significantly lower
> than X, are there parameters to improve the bandwidth at large message
> size? I did see some documentation for sm, but not for vader.
>
> Thanks,
> Joshua
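All of the vader knobs are regular MCA parameters, so you can list them
with their help strings and current values via ompi_info and then override
the interesting ones on the mpirun command line. A rough sketch (untested
here; exact parameter names and defaults differ between Open MPI releases,
so check the ompi_info output first):

  # show every vader BTL parameter with its description and current value
  ompi_info --param btl vader --level 9

  # illustrative override: raise the eager/rendezvous switch point and run
  # osu_bw over the self + vader BTLs only (65536 is just an example value)
  mpirun -np 2 --mca btl self,vader \
         --mca btl_vader_eager_limit 65536 ./osu_bw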
-- Original Message --
Received: 03:06 PM CDT, 03/17/2017
From: George Bosilca
To: Joshua Mora
Cc: Open MPI Users
Subject: Re: [OMPI users] tuning sm/vader for large messages
On Fri, Mar 17, 2017 at 3:33 PM, Joshua Mora wrote:

> Thanks f
-- Original Message --
Received: 02:15 PM CDT, 03/17/2017
From: George Bosilca
To: Open MPI Users
Subject: Re: [OMPI users] tuning sm/vader for large messages
Joshua,

In shared memory the bandwidth depends on many parameters, including the
process placement and the size of the different cache levels. In your
particular case I guess after 128k you are outside the L2 cache (1/2 of the
cache, in fact) and the bandwidth will drop as the data need to be flushed.
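One quick way to see the placement effect is to pin the two ranks explicitly
and compare a same-socket run with a cross-socket run, e.g. (a sketch only;
option spellings depend on your Open MPI version, and --report-bindings
shows where the ranks actually land):

  # both ranks on adjacent cores of the same socket
  mpirun -np 2 --bind-to core --map-by core --report-bindings ./osu_bw

  # one rank per socket, so the data has to cross the inter-socket link
  mpirun -np 2 --bind-to core --map-by socket --report-bindings ./osu_bw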
-- Original Message --
Hello,
I am trying to get the max bw for shared memory communications using
osu_[bw,bibw,mbw_mr] benchmarks.
I am observing a peak at ~64k/128K msg size and then it drops instead of
sustaining it.
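For reference, the osu_bw runs are plain two-rank runs on a single node,
along the lines of the command below (the explicit --mca btl selection is
only there to make sure the shared memory path is used; everything else is
left at the defaults):

  # two ranks on the local node, restricted to the self + vader BTLs
  mpirun -np 2 --mca btl self,vader ./osu_bw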
What parameters or linux config do I need to add to the default openmpi
settings to get this improved?
I am