On Mon, Jul 23, 2018 at 10:06 PM, Thomas Munro
wrote:
> Here's an attempt to use existing style better: a union, like
> LWLockPadded and WALInsertLockPadded. I think we should back-patch to
> 10. Thoughts?
Pushed to 10, 11, master.
It's interesting that I could see a further ~12% speedup by us
On Mon, Jul 23, 2018 at 3:40 PM, Thomas Munro
wrote:
> On Sun, Jul 22, 2018 at 8:19 AM, Tom Lane wrote:
>>                        8 clients   72 clients
>>
>> unmodified HEAD            16112        16284
>> with padding patch         16096        16283
>> with SysV semaphores
On Mon, Jul 23, 2018 at 3:40 PM, Thomas Munro
wrote:
> I did some testing on 2-node, 4-node and 8-node systems running Linux
> 3.10.something (slightly newer but still ancient). Only the 8-node
> box (= same one Mithun used) shows the large effect (the 2-node box
> may be a tiny bit faster patched)
On Sun, Jul 22, 2018 at 8:19 AM, Tom Lane wrote:
> Andres Freund writes:
>> On 2018-07-20 16:43:33 -0400, Tom Lane wrote:
>>> On my RHEL6 machine, with unmodified HEAD and 8 sessions (since I've
>>> only got 8 cores) but other parameters matching Mithun's example,
>>> I just got
>
>> It's *really* common to have more actual clients than cpus for oltp
Andres Freund writes:
> On 2018-07-20 16:43:33 -0400, Tom Lane wrote:
>> On my RHEL6 machine, with unmodified HEAD and 8 sessions (since I've
>> only got 8 cores) but other parameters matching Mithun's example,
>> I just got
> It's *really* common to have more actual clients than cpus for oltp
>
Hi,
On 2018-07-20 16:43:33 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2018-07-20 15:35:39 -0400, Tom Lane wrote:
> >> In any case, I strongly resist making performance-based changes on
> >> the basis of one test on one kernel and one hardware platform.
>
> > Sure, it'd be good to do more of that.
Andres Freund writes:
> On 2018-07-20 15:35:39 -0400, Tom Lane wrote:
>> In any case, I strongly resist making performance-based changes on
>> the basis of one test on one kernel and one hardware platform.
> Sure, it'd be good to do more of that. But from a theoretical POV it's
> quite logical th
On 2018-07-20 15:35:39 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2018-07-21 00:53:28 +0530, Mithun Cy wrote:
> >> I did a quick test applying the patch with same settings as initial mail I
> >> have reported (On postgresql 10 latest code)
> >> 72 clients
> >>
> >> CASE 1:
> >> Without Patch : TPS 29269.823540
Andres Freund writes:
> On 2018-07-21 00:53:28 +0530, Mithun Cy wrote:
>> I did a quick test applying the patch with same settings as initial mail I
>> have reported (On postgresql 10 latest code)
>> 72 clients
>>
>> CASE 1:
>> Without Patch : TPS 29269.823540
>>
>> With Patch : TPS 36005.54496
Hi,
On 2018-07-21 00:53:28 +0530, Mithun Cy wrote:
> On Fri, Jul 20, 2018 at 10:52 AM, Thomas Munro <
> thomas.mu...@enterprisedb.com> wrote:
>
> > On Fri, Jul 20, 2018 at 7:56 AM, Tom Lane wrote:
> > >
> > > It's not *that* noticeable, as I failed to demonstrate any performance
> > > difference before committing the patch.
On Fri, Jul 20, 2018 at 10:52 AM, Thomas Munro <
thomas.mu...@enterprisedb.com> wrote:
> On Fri, Jul 20, 2018 at 7:56 AM, Tom Lane wrote:
> >
> > It's not *that* noticeable, as I failed to demonstrate any performance
> > difference before committing the patch. I think some more investigation
> >
On Fri, Jul 20, 2018 at 7:56 AM, Tom Lane wrote:
> Alvaro Herrera writes:
>> On 2018-Jul-19, Amit Kapila wrote:
>>> It appears so. I think we should do something about it as the
>>> regression is quite noticeable.
>
> It's not *that* noticeable, as I failed to demonstrate any performance
> difference before committing the patch.
Andres Freund writes:
> I'm a bit hesitant to just revert without further evaluation - it's just
> about as likely we'll regress on other hardware / kernel
> versions.
I looked into the archives for the discussion that led up to ecb0d20a9,
and found it here:
https://www.postgresql.org/message-id
Hi Andres,
On Fri, Jul 20, 2018 at 1:21 AM, Andres Freund wrote:
> Hi,
>
> On 2018-01-24 00:06:44 +0530, Mithun Cy wrote:
> > Server:
> > ./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c
> > max_wal_size=20GB -c checkpoint_timeout=900 -c
> > maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &
Alvaro Herrera writes:
> On 2018-Jul-19, Amit Kapila wrote:
>> It appears so. I think we should do something about it as the
>> regression is quite noticeable.
It's not *that* noticeable, as I failed to demonstrate any performance
difference before committing the patch. I think some more investigation
Hi,
On 2018-07-19 15:39:44 -0400, Alvaro Herrera wrote:
> On 2018-Jul-19, Amit Kapila wrote:
>
> > On Thu, Feb 22, 2018 at 7:55 PM, Robert Haas wrote:
> > > On Wed, Feb 21, 2018 at 10:03 PM, Mithun Cy
> > > wrote:
> > >> seeing futex in the call stack andres suggested that following commit
> > >> could be the reason for regression
>
Hi,
On 2018-01-24 00:06:44 +0530, Mithun Cy wrote:
> Server:
> ./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c
> max_wal_size=20GB -c checkpoint_timeout=900 -c
> maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &
Which kernel & glibc version does this server have?
Greetings,

Andres Freund
On 2018-Jul-19, Amit Kapila wrote:
> On Thu, Feb 22, 2018 at 7:55 PM, Robert Haas wrote:
> > On Wed, Feb 21, 2018 at 10:03 PM, Mithun Cy
> > wrote:
> >> seeing futex in the call stack andres suggested that following commit could
> >> be the reason for regression
> >>
>> commit ecb0d20a9d2e09b7112d3b192047f711f9ff7e59
On Thu, Feb 22, 2018 at 7:55 PM, Robert Haas wrote:
> On Wed, Feb 21, 2018 at 10:03 PM, Mithun Cy
> wrote:
>> seeing futex in the call stack andres suggested that following commit could
>> be the reason for regression
>>
>> commit ecb0d20a9d2e09b7112d3b192047f711f9ff7e59
>> Author: Tom Lane
>>
On Wed, Feb 21, 2018 at 10:03 PM, Mithun Cy wrote:
> seeing futex in the call stack andres suggested that following commit could
> be the reason for regression
>
> commit ecb0d20a9d2e09b7112d3b192047f711f9ff7e59
> Author: Tom Lane
> Date: 2016-10-09 18:03:45 -0400
>
> Use unnamed POSIX semaphores, if available, on Linux and FreeBSD.
On Wed, Jan 24, 2018 at 7:36 AM, Amit Kapila wrote:
> Both the cases look identical, but from the document attached, it
> seems the case-1 is for scale factor 300.
Oops sorry it was a typo. CASE 1 is scale factor 300 which will fit in
shared buffer =8GB.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com
On Wed, Jan 24, 2018 at 12:06 AM, Mithun Cy wrote:
> Hi all,
>
> When I was trying to do read-write pgbench bench-marking of PostgreSQL
> 9.6.6 vs 10.1 I found PostgreSQL 10.1 regresses against 9.6.6 in some
> cases.
>
> Non Default settings and test
> ==
> Server:
> ./postgres