On 2018-11-11 11:29:54 +1300, Thomas Munro wrote:
> On Sat, Nov 10, 2018 at 6:01 PM Andres Freund wrote:
> > I've replaced that with a write barrier / read barrier. I strongly
> > suspect this isn't a problem on the write side in practice (due to the
> > dependent read), but the read side looks m…
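
For readers following the barrier discussion: the fix pairs a write barrier on
the side that publishes a value with a read barrier on the side that consumes
it. A minimal sketch of that pattern using PostgreSQL's pg_write_barrier() /
pg_read_barrier() macros; DemoEntry and its fields are invented stand-ins for
illustration, not the actual PGXACT layout:

#include "postgres.h"
#include "access/transam.h"     /* InvalidTransactionId */
#include "port/atomics.h"       /* pg_read_barrier(), pg_write_barrier() */

/* Invented shared-memory entry, standing in for the real structures. */
typedef struct DemoEntry
{
    TransactionId xid;          /* payload */
    bool        valid;          /* publication flag */
} DemoEntry;

/* Writer: make the payload store visible before the flag store. */
static void
publish_entry(DemoEntry *entry, TransactionId xid)
{
    entry->xid = xid;
    pg_write_barrier();         /* order payload store before flag store */
    entry->valid = true;
}

/* Reader: observe the flag, then fence before loading the payload. */
static TransactionId
read_entry(DemoEntry *entry)
{
    if (entry->valid)
    {
        pg_read_barrier();      /* order flag load before payload load */
        return entry->xid;
    }
    return InvalidTransactionId;
}

Without the read barrier the compiler or CPU may load entry->xid before
entry->valid; a dependent read only provides ordering for free when the
second load's address actually depends on the first value, which is not the
case for a flag-then-payload sequence like this.
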
On Sat, Nov 10, 2018 at 6:01 PM Andres Freund wrote:
> On 2018-10-05 10:29:55 -0700, Andres Freund wrote:
> > - remove the volatiles from GetSnapshotData(). As we've, for quite a
> > while now, made sure both lwlocks and spinlocks are proper barriers
> > they're not needed.
>
> Attached is a patch that removes all the volatiles from procarray.c and
> varsup.c…

Hi,
On 2018-10-05 10:29:55 -0700, Andres Freund wrote:
> - remove the volatiles from GetSnapshotData(). As we've, for quite a
> while now, made sure both lwlocks and spinlocks are proper barriers
> they're not needed.
Attached is a patch that removes all the volatiles from procarray.c and
varsup.c…
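
The premise is that SpinLockAcquire()/SpinLockRelease() (and the LWLock
equivalents) now act as compiler and memory barriers, so the old idiom of
accessing shared structures through a volatile-qualified pointer is redundant.
A rough before/after sketch; DemoShared is invented for illustration and is
not actual procarray.c code:

#include "postgres.h"
#include "storage/spin.h"

typedef struct DemoShared
{
    slock_t     mutex;
    TransactionId latestXmin;
} DemoShared;

/* Old style: volatile qualification kept the compiler from caching or
 * reordering the shared-memory access across the lock calls. */
static TransactionId
get_xmin_old(DemoShared *shared)
{
    volatile DemoShared *vshared = shared;
    TransactionId result;

    SpinLockAcquire(&vshared->mutex);
    result = vshared->latestXmin;
    SpinLockRelease(&vshared->mutex);
    return result;
}

/* New style: acquire/release are barriers themselves, so plain accesses
 * inside the critical section are already safe. */
static TransactionId
get_xmin_new(DemoShared *shared)
{
    TransactionId result;

    SpinLockAcquire(&shared->mutex);
    result = shared->latestXmin;
    SpinLockRelease(&shared->mutex);
    return result;
}
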
Hi,
On 2018-10-03 17:28:59 -0700, Daniel Wood wrote:
> FYI, be careful with padding PGXACTs to a full cache line.
I'm not actually thinking of doing that, but just to round it up so we
don't have PGXACTs spanning cachelines. It's currently 12 bytes, so we
end up with one spanning 60-72, then from…
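
To make the arithmetic concrete: with sizeof(PGXACT) == 12 and 64-byte cache
lines, the sixth array element occupies bytes 60-71 and straddles a line
boundary; rounding entries up to 16 bytes puts exactly four per line, none
straddling. A standalone sketch of that rounding (the field list mirrors the
PGXACT of this era, but the padded type is purely illustrative):

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;     /* stand-in for PostgreSQL's typedef */

/* 12-byte entry, matching the era's PGXACT field list. */
typedef struct DemoPGXACT
{
    TransactionId xid;
    TransactionId xmin;
    uint8_t     vacuumFlags;
    bool        overflowed;
    bool        delayChkpt;
    uint8_t     nxids;
} DemoPGXACT;

/* Pad to 16 bytes: four entries per 64-byte cache line, none spanning
 * a boundary. */
typedef union PaddedDemoPGXACT
{
    DemoPGXACT  pgxact;
    char        pad[16];
} PaddedDemoPGXACT;

_Static_assert(sizeof(PaddedDemoPGXACT) == 16, "expected 16-byte entries");
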
On Thu, Oct 4, 2018 at 9:50 AM Adrien Nayrat wrote:
>
> On 10/3/18 11:29 PM, Daniel Wood wrote:
> > If you are running benchmarks, or are a customer currently impacted by
> > GetSnapshotData() on high-end multisocket systems, be wary of Skylake-S.
> >
> >
> > Performance differences of nearly 2X can be seen on select-only pgbench
> > due to nothing else but unlucky choices for max_connections…

On 10/3/18 11:29 PM, Daniel Wood wrote:
> If you are running benchmarks, or are a customer currently impacted by
> GetSnapshotData() on high-end multisocket systems, be wary of Skylake-S.
>
>
> Performance differences of nearly 2X can be seen on select-only pgbench due to
> nothing else but unlucky choices for max_connections…

On 2018-10-03 17:01:44 -0700, Daniel Wood wrote:
>
> > On October 3, 2018 at 3:55 PM Andres Freund wrote:
>
> > In the thread around
> > https://www.postgresql.org/message-id/20160411214029.ce3fw6zxim5k6...@alap3.anarazel.de
> > I'd found doing more aggressive padding helped a lot. Unfortunately I
> > didn't pursue this further :(

One other thought. Could we update pgxact->xmin less often? What would be the
impact of this lower bound being lower than it would normally be with the
existing scheme? Yes, it needs to be moved forward "occasionally".

FYI, be careful with padding PGXACTs to a full cache line. With 1024 clients…

> On October 3, 2018 at 3:55 PM Andres Freund wrote:
> In the thread around
> https://www.postgresql.org/message-id/20160411214029.ce3fw6zxim5k6...@alap3.anarazel.de
> I'd found doing more aggressive padding helped a lot. Unfortunately I
> didn't pursue this further :(
Interesting. Looks li…

Hi,
On 2018-10-03 14:29:39 -0700, Daniel Wood wrote:
> If you are running benchmarks, or are a customer currently impacted
> by GetSnapshotData() on high-end multisocket systems, be wary of
> Skylake-S.
> Performance differences of nearly 2X can be seen on select-only
> pgbench due to nothing else but unlucky choices for max_connections…

If you are running benchmarks, or are a customer currently impacted by
GetSnapshotData() on high-end multisocket systems, be wary of Skylake-S.

Performance differences of nearly 2X can be seen on select-only pgbench due to
nothing else but unlucky choices for max_connections. Scale 1000, 19…
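
The "unlucky max_connections" effect can be sanity-checked with
back-of-the-envelope math: count how many 12-byte entries in the PGXACT array
straddle a 64-byte line. A standalone sketch (in reality the array's base
offset within shared memory also matters; it is taken as 0 here):

#include <stdio.h>

#define ENTRY_SIZE 12           /* sizeof(PGXACT) at the time */
#define LINE_SIZE  64           /* Skylake-S cache-line size */

/* Count elements whose bytes cross a cache-line boundary, given the
 * array's starting offset within its first cache line. */
static int
count_straddlers(int nentries, int base)
{
    int         straddlers = 0;

    for (int i = 0; i < nentries; i++)
    {
        int         start = base + i * ENTRY_SIZE;
        int         end = start + ENTRY_SIZE - 1;

        if (start / LINE_SIZE != end / LINE_SIZE)
            straddlers++;
    }
    return straddlers;
}

int
main(void)
{
    /* With base 0, two of every sixteen entries straddle a boundary;
     * a different base offset shifts which entries are affected. */
    for (int conns = 100; conns <= 1000; conns += 100)
        printf("max_connections=%4d: %d straddling entries\n",
               conns, count_straddlers(conns, 0));
    return 0;
}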