On Thu, 9 Nov 2023 at 21:48, Heikki Linnakangas wrote:
>
> On 18/09/2023 07:08, David Rowley wrote:
> > On Fri, 15 Sept 2023 at 22:37, Heikki Linnakangas wrote:
> >>> I've added a call to LockAssertNoneHeld(false) in there.
> >>
> >> I don't see it in the patch?
> >
> > hmm. I must've git format-patch before committing that part.
On 18/09/2023 07:08, David Rowley wrote:
> On Fri, 15 Sept 2023 at 22:37, Heikki Linnakangas wrote:
>>> I've added a call to LockAssertNoneHeld(false) in there.
>> I don't see it in the patch?
> hmm. I must've git format-patch before committing that part.
> I'll try that again... see attached.
This n
On Fri, 15 Sept 2023 at 22:37, Heikki Linnakangas wrote:
> > I've added a call to LockAssertNoneHeld(false) in there.
>
> I don't see it in the patch?
hmm. I must've git format-patch before committing that part.
I'll try that again... see attached.
David
v8-0001-wip-resowner-lock-release-all.
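The LockAssertNoneHeld() helper mentioned above is never quoted in full in
this listing. As a rough sketch only (the body and the role of the isCommit
parameter are guesses, not the patch's actual code), it might look like:

void
LockAssertNoneHeld(bool isCommit)
{
#ifdef USE_ASSERT_CHECKING
	HASH_SEQ_STATUS status;
	LOCALLOCK  *locallock;

	/* scan the backend-local lock table and insist nothing is still held */
	hash_seq_init(&status, LockMethodLocalHash);
	while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
		Assert(locallock->nLocks == 0);
#endif
}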
On 11/09/2023 15:00, David Rowley wrote:
> On Wed, 5 Jul 2023 at 21:44, Heikki Linnakangas wrote:
>> index 296dc82d2ee..edb8b6026e5 100644
>> --- a/src/backend/commands/discard.c
>> +++ b/src/backend/commands/discard.c
>> @@ -71,7 +71,7 @@ DiscardAll(bool isTopLevel)
>>  	Async_UnlistenAll();
>> -	LockReleaseAll(USER_LOCKMETHOD, true);
Thank you for having a look at this. Apologies for not getting back to
you sooner.
On Wed, 5 Jul 2023 at 21:44, Heikki Linnakangas wrote:
>
> On 10/02/2023 04:51, David Rowley wrote:
> > I've attached another set of patches. I do need to spend longer
> > looking at this. I'm mainly attaching these as CI seems to be
> > highlighting a problem that I'm unable to recreate locally and I
> > wanted to see if the attached fixes it.
On 10/02/2023 04:51, David Rowley wrote:
> I've attached another set of patches. I do need to spend longer
> looking at this. I'm mainly attaching these as CI seems to be
> highlighting a problem that I'm unable to recreate locally and I
> wanted to see if the attached fixes it.
I like this patch's approach
Thanks for having a look at this.
On Wed, 1 Feb 2023 at 03:07, Amit Langote wrote:
> Maybe you're planning to do it once this patch is post the PoC phase
> (isn't it?), but it would be helpful to have commentary on all the new
> dlist fields.
I've added comments on the new fields. Maybe we can
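The comments themselves aren't shown in this snippet, but the new dlist
fields evidently tie each LOCALLOCK to the ResourceOwners holding it. A
sketch assembled from identifiers quoted elsewhere in the thread
(locallockowners, owner, nLocks); the exact layout is an assumption:

typedef struct LOCALLOCKOWNER
{
	dlist_node	node;		/* position in locallock->locallockowners */
	ResourceOwner owner;	/* owning ResourceOwner, NULL for session locks */
	int64		nLocks;		/* number of times this owner holds the lock */
} LOCALLOCKOWNER;

/* ... and in LOCALLOCK itself: */
	dlist_head	locallockowners;	/* all LOCALLOCKOWNERs of this lock */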
Hi,
On 2023-01-24 16:57:37 +1300, David Rowley wrote:
> I've attached a rebased patch.
Looks like there's some issue causing tests to fail probabilistically:
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3501
Several failures are when testing a 32bit build.
> Whi
Hi David,
On Tue, Jan 24, 2023 at 12:58 PM David Rowley wrote:
> On Fri, 20 Jan 2023 at 00:26, vignesh C wrote:
> > CFBot shows some compilation errors as in [1], please post an updated
> > version for the same:
>
> I've attached a rebased patch.
Thanks for the new patch.
Maybe you're planning to do it once this patch is post the PoC phase
(isn't it?), but it would be helpful to have commentary on all the new
dlist fields.
On Fri, 20 Jan 2023 at 00:26, vignesh C wrote:
> CFBot shows some compilation errors as in [1], please post an updated
> version for the same:
I've attached a rebased patch.
While reading over this again, I wondered if instead of allocating the
memory for the LOCALLOCKOWNER in TopMemoryContext,
On Wed, 3 Aug 2022 at 09:04, David Rowley wrote:
>
> On Wed, 3 Aug 2022 at 07:04, Jacob Champion wrote:
> > This entry has been waiting on author input for a while (our current
> > threshold is roughly two weeks), so I've marked it Returned with
> > Feedback.
>
> Thanks for taking care of this. You dealt with this correctly based on
> the fact that I'd failed
Thank you for looking at the patch.
On Fri, 4 Nov 2022 at 04:43, Ankit Kumar Pandey wrote:
> I don't see any performance improvement in tests.
Are you able to share what your test was?
In order to see a performance improvement you're likely going to have
to obtain a large number of locks in the
Hi David,
This is a review of the "speed up releasing of locks" patch.
Contents & Purpose:
The subject is missing in the patch. It would have been easier to understand
its purpose had it been included.
Included in the patch is a change to the README, but no new tests are included.
Initial Run:
The patch applies cleanly
On Wed, 3 Aug 2022 at 07:04, Jacob Champion wrote:
> This entry has been waiting on author input for a while (our current
> threshold is roughly two weeks), so I've marked it Returned with
> Feedback.
Thanks for taking care of this. You dealt with this correctly based on
the fact that I'd failed
This entry has been waiting on author input for a while (our current
threshold is roughly two weeks), so I've marked it Returned with
Feedback.
Once you think the patchset is ready for review again, you (or any
interested party) can resurrect the patch entry by visiting
https://commitfest.pos
On Wed, 6 Apr 2022 at 03:40, Yura Sokolov wrote:
> I'm looking at the patch and don't understand some points.
>
> `GrantLockLocal` allocates `LOCALLOCKOWNER` and links it into
> `locallock->locallockowners`. It links it even when `owner` is
> NULL. But then `RemoveLocalLock` does `Assert(locallockowner->owner != NULL);`.
Good day, David.
I'm looking at the patch and don't understand some points.
`GrantLockLocal` allocates `LOCALLOCKOWNER` and links it into
`locallock->locallockowners`. It links it even when `owner` is
NULL. But then `RemoveLocalLock` does `Assert(locallockowner->owner != NULL);`.
Why it should not f
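A sketch of the code shape Yura describes, reconstructed from the
identifiers above rather than taken from the patch:

static void
GrantLockLocal(LOCALLOCK *locallock, ResourceOwner owner)
{
	LOCALLOCKOWNER *locallockowner;

	/* allocated in TopMemoryContext, as discussed elsewhere in the thread */
	locallockowner = (LOCALLOCKOWNER *)
		MemoryContextAlloc(TopMemoryContext, sizeof(LOCALLOCKOWNER));
	locallockowner->owner = owner;	/* may be NULL, hence Yura's question */
	locallockowner->nLocks = 1;
	dlist_push_tail(&locallock->locallockowners, &locallockowner->node);
}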
On Sat, 1 Jan 2022 at 15:40, Zhihong Yu wrote:
> + locallock->nLocks -= locallockowner->nLocks;
> + Assert(locallock->nLocks >= 0);
>
> I think the assertion is not needed since the above code is in if block :
>
> + if (locallockowner->nLocks < locallock->nLocks)
>
> the
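Reassembling the fragments quoted above, the code under discussion appears
to be shaped like this:

if (locallockowner->nLocks < locallock->nLocks)
{
	locallock->nLocks -= locallockowner->nLocks;

	/*
	 * The branch condition already guarantees the result is positive,
	 * which is Zhihong's point about the Assert being redundant.
	 */
	Assert(locallock->nLocks >= 0);
}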
On Fri, Dec 31, 2021 at 5:45 PM David Rowley wrote:
> On Fri, 3 Dec 2021 at 20:36, Michael Paquier wrote:
> > Two months later, this has been switched to RwF.
>
> I was discussing this patch with Andres. He's not very keen on my
> densehash hash table idea and suggested that instead of relying on
> trying to make the hash table iteration faster, why don't we
On Fri, 3 Dec 2021 at 20:36, Michael Paquier wrote:
> Two months later, this has been switched to RwF.
I was discussing this patch with Andres. He's not very keen on my
densehash hash table idea and suggested that instead of relying on
trying to make the hash table iteration faster, why don't we
On Fri, Oct 01, 2021 at 04:03:09PM +0900, Michael Paquier wrote:
> This last update was two months ago, and the patch has not moved
> since:
> https://commitfest.postgresql.org/34/3220/
>
> Do you have plans to work more on that or perhaps the CF entry should
> be withdrawn or RwF'd?
Two months later, this has been switched to RwF.
I've made some remarks in a related thread:
https://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1FB976EF@G01JPEXMBYT05
The new status of this patch is: Waiting on Author
On Tue, Jul 20, 2021 at 05:04:19PM +1200, David Rowley wrote:
> On Mon, 12 Jul 2021 at 19:23, David Rowley wrote:
> > I also adjusted the hash seq scan code so that it performs better when
> > faced a non-sparsely populated table. Previously my benchmark for
> > that case didn't do well [2].
>
>
On Mon, 12 Jul 2021 at 19:23, David Rowley wrote:
> I also adjusted the hash seq scan code so that it performs better when
> faced a non-sparsely populated table. Previously my benchmark for
> that case didn't do well [2].
I was running some select only pgbench tests today on an AMD 3990x
machine
On Mon, 21 Jun 2021 at 01:56, David Rowley wrote:
> # A new hashtable implementation
> Also, it would be good to hear what people think about solving the
> problem this way.
Because over on [1] I'm also trying to improve the performance of
smgropen(), I posted the patch for the new hash table ov
Thanks for having a look at this.
On Mon, 21 Jun 2021 at 05:02, Zhihong Yu wrote:
> + * GH_ELEMENT_TYPE defines the data type that the hashtable stores. Each
> + * instance of GH_ELEMENT_TYPE which is stored in the hash table is done so
> + * inside a GH_SEGMENT.
>
> I think the second sentence
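Per the doc comment under review, the proposed generic hash table stores
elements packed into segments rather than allocating them individually.
Roughly, with illustrative names and sizes (only GH_ELEMENT_TYPE and
GH_SEGMENT come from the quoted comment):

#define GH_SEGMENT_SIZE 256

typedef struct GH_SEGMENT
{
	uint32		nitems;		/* number of elements used in this segment */
	GH_ELEMENT_TYPE items[GH_SEGMENT_SIZE];
} GH_SEGMENT;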
On Sun, Jun 20, 2021 at 6:56 AM David Rowley wrote:
> On Wed, 14 Aug 2019 at 19:25, David Rowley
> wrote:
> > For now, I'm out of ideas. If anyone else feels like suggesting
> > something of picking this up, feel free.
>
> This is a pretty old thread, so we might need a recap:
>
> # Recap
>
> Basically LockReleaseAll() becomes slow after a large number of locks
On Wed, 14 Aug 2019 at 19:25, David Rowley wrote:
> For now, I'm out of ideas. If anyone else feels like suggesting
> something of picking this up, feel free.
This is a pretty old thread, so we might need a recap:
# Recap
Basically LockReleaseAll() becomes slow after a large number of locks
have
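The recap is cut short in this snippet, but the slow path it refers to is
LockReleaseAll()'s scan of the backend-local lock table: hash_seq_search()
visits every bucket, dynahash never shrinks a table, so one lock-heavy
query leaves every later call slow. A simplified sketch:

HASH_SEQ_STATUS status;
LOCALLOCK  *locallock;

/* visits every bucket, even in a table bloated by an earlier query */
hash_seq_init(&status, LockMethodLocalHash);
while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
{
	/* ... release the underlying lock, then ... */
	RemoveLocalLock(locallock);
}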
Hi, the patch was in WoA since December, waiting for a rebase. I've
marked it as returned with feedback. Feel free to re-submit an updated
version into the next CF.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Thu, Sep 26, 2019 at 07:11:53AM +0000, Tsunakawa, Takayuki wrote:
> I'm sorry to repeat what I mentioned in my previous mail, but my v2
> patch's approach is based on the database textbook and seems
> intuitive. So I attached the rebased version.
If you wish to do so, that's fine by me but I
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> On 2019-Sep-03, Tsunakawa, Takayuki wrote:
> > I don't think it's rejected. It would be a pity (mottainai) to refuse
> > this, because it provides significant speedup despite its simple
> > modification.
>
> I don't necessarily disagree with
On 2019-Sep-03, Tsunakawa, Takayuki wrote:
> From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> > Hmm ... is this patch rejected, or is somebody still trying to get it to
> > committable state? David, you're listed as committer.
>
> I don't think it's rejected. It would be a pity (mottainai) to refuse
> this, because it provides significant speedup despite its simple
> modification.
From: Alvaro Herrera [mailto:alvhe...@2ndquadrant.com]
> Hmm ... is this patch rejected, or is somebody still trying to get it to
> committable state? David, you're listed as committer.
I don't think it's rejected. It would be a pity (mottainai) to refuse this,
because it provides significant speedup despite its simple modification.
On 2019-Aug-14, David Rowley wrote:
> For now, I'm out of ideas. If anyone else feels like suggesting
> something of picking this up, feel free.
Hmm ... is this patch rejected, or is somebody still trying to get it to
committable state? David, you're listed as committer.
--
Álvaro Herrera
On Wed, Aug 14, 2019 at 07:25:10PM +1200, David Rowley wrote:
> On Thu, 25 Jul 2019 at 05:49, Tom Lane wrote:
>> On the whole, I don't especially like this approach, because of the
>> confusion between peak lock count and end-of-xact lock count. That
>> seems way too likely to cause problems.
> Thanks for having a look at this.
On Thu, 25 Jul 2019 at 05:49, Tom Lane wrote:
> On the whole, I don't especially like this approach, because of the
> confusion between peak lock count and end-of-xact lock count. That
> seems way too likely to cause problems.
Thanks for having a look at this. I've not addressed the points
you'
On Thu, Jul 25, 2019 at 5:49 AM Tom Lane wrote:
> David Rowley writes:
> > Here's a more polished version with the debug code removed, complete
> > with benchmarks.
>
> A few gripes:
>
> [gripes]
Based on the above, I've set this to "Waiting on Author" in the next CF.
--
Thomas Munro
https://
David Rowley writes:
> Here's a more polished version with the debug code removed, complete
> with benchmarks.
A few gripes:
You're measuring the number of locks held at completion of the
transaction, which fails to account for locks transiently taken and
released, so that the actual peak usage
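Tom's distinction matters because a peak can only be tracked while locks
are being acquired; it cannot be reconstructed from what remains at end of
transaction. A trivial illustration, not from any patch in the thread:

static long locksHeld = 0;
static long locksPeak = 0;

/* on each acquisition */
if (++locksHeld > locksPeak)
	locksPeak = locksHeld;

/* on each release */
locksHeld--;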
On Wed, 24 Jul 2019 at 16:16, David Rowley wrote:
>
> On Wed, 24 Jul 2019 at 15:05, David Rowley
> wrote:
> > To be able to reduce the threshold down again we'd need to make a
> > hash_get_num_entries(LockMethodLocalHash) call before performing the
> > guts of LockReleaseAll(). We could then weight that onto some running
> > average counter with a weight of, s
Hi,
On 2019-07-21 21:37:28 +1200, David Rowley wrote:
> select.sql:
> \set p 1
> select * from ht where a = :p
>
> Master:
>
> $ pgbench -n -f select.sql -T 60 -M prepared postgres
> tps = 10172.035036 (excluding connections establishing)
> tps = 10192.780529 (excluding connections establishing)
On Wed, 24 Jul 2019 at 15:05, David Rowley wrote:
> To be able to reduce the threshold down again we'd need to make a
> hash_get_num_entries(LockMethodLocalHash) call before performing the
> guts of LockReleaseAll(). We could then weight that onto some running
> average counter with a weight of, s
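The snippet is cut off before giving a weight, but the idea reads like a
standard exponentially weighted running average; a sketch with an
illustrative weight of 0.1:

static double runningAvgLocks = 0.0;

/* updated once per LockReleaseAll() call */
runningAvgLocks = 0.1 * (double) hash_get_num_entries(LockMethodLocalHash) +
				  0.9 * runningAvgLocks;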
Thanks for having a look at this.
On Wed, 24 Jul 2019 at 04:13, Tom Lane wrote:
>
> David Rowley writes:
> > I'm pretty happy with v7 now. If anyone has any objections to it,
> > please speak up very soon. Otherwise, I plan to push it about this
> > time tomorrow.
>
> I dunno, this seems close
David Rowley writes:
> I've attached v7, which really is v6 with some comments adjusted and
> the order of the hash_get_num_entries and hash_get_max_bucket function
> calls swapped. I think hash_get_num_entries() will return 0 most of
> the time where we're calling it, so it makes sense to put th
On Tue, 23 Jul 2019 at 15:47, Tsunakawa, Takayuki
wrote:
> OTOH, how about my original patch that is based on the local lock list? I
> expect that it won't cause that significant a slowdown in the same test case.
> If it's not satisfactory, then v6 is the best to commit.
I think we need to move beyond
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> Another counter-argument to this is that there's already an
> unexplainable slowdown after you run a query which obtains a large
> number of locks in a session or use prepared statements and a
> partitioned table with the default plan_cache_mode
On Mon, 22 Jul 2019 at 12:48, Tsunakawa, Takayuki
wrote:
>
> From: David Rowley [mailto:david.row...@2ndquadrant.com]
> > I went back to the drawing board on this and I've added some code that
> > counts
> > the number of times we've seen the table to be oversized and just shrinks
> > the table back down on the 1000th time. 6.93% / 1000 is not all that much.
On Mon, 22 Jul 2019 at 16:36, Tsunakawa, Takayuki
wrote:
>
> From: David Rowley [mailto:david.row...@2ndquadrant.com]
> > For the use case we've been measuring with partitioned tables and the
> > generic plan generation causing a sudden spike in the number of
> > obtained locks, then having plan_cache_mode = force_custom_plan will
> > cause the lock table not to become bloated.
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> For the use case we've been measuring with partitioned tables and the
> generic plan generation causing a sudden spike in the number of
> obtained locks, then having plan_cache_mode = force_custom_plan will
> cause the lock table not to become bloated.
On Mon, 22 Jul 2019 at 14:21, Tsunakawa, Takayuki
wrote:
>
> From: David Rowley [mailto:david.row...@2ndquadrant.com]
> > I personally don't think that's true. The only way you'll notice the
> > LockReleaseAll() overhead is to execute very fast queries with a
> > bloated lock table. It's pretty hard to notice that a single 0.1ms
> > query is slow.
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> I personally don't think that's true. The only way you'll notice the
> LockReleaseAll() overhead is to execute very fast queries with a
> bloated lock table. It's pretty hard to notice that a single 0.1ms
> query is slow. You'll need to e
On Mon, 22 Jul 2019 at 12:48, Tsunakawa, Takayuki
wrote:
>
> From: David Rowley [mailto:david.row...@2ndquadrant.com]
> > I went back to the drawing board on this and I've added some code that
> > counts
> > the number of times we've seen the table to be oversized and just shrinks
> > the table back down on the 1000th time. 6.93% / 1000 is not all that much.
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> I went back to the drawing board on this and I've added some code that counts
> the number of times we've seen the table to be oversized and just shrinks
> the table back down on the 1000th time. 6.93% / 1000 is not all that much.
I'm afraid
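The counting scheme David describes might be sketched as follows; the names
and the oversized test are hypothetical, only the shrink-on-the-1000th-time
behaviour comes from the quoted text:

#define LOCALLOCK_SHRINK_EVERY 1000

static int	timesSeenOversized = 0;

if (LocalLockTableIsOversized() &&	/* hypothetical predicate */
	++timesSeenOversized >= LOCALLOCK_SHRINK_EVERY)
{
	timesSeenOversized = 0;
	/* destroy and recreate the table at its default size */
}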
On Thu, 18 Jul 2019 at 14:53, David Rowley wrote:
> Is anyone particularly concerned about the worst-case slowdown here
> being about 1.54%? The best case, and arguably a more realistic case
> above showed a 34% speedup for the best case.
I took a bit more time to test the performance on this. I
On Thu, 27 Jun 2019 at 12:59, Tsunakawa, Takayuki
wrote:
>
> From: David Rowley [mailto:david.row...@2ndquadrant.com]
> Thank you, looks good. I find it ready for committer (I noticed the status
> is already set that way.)
Thanks for looking.
I've just been looking at this again and I thought I'd be
From: David Rowley [mailto:david.row...@2ndquadrant.com]
v5 is attached.
Thank you, looks good. I find it ready for committer (I noticed the status is
already set that way.)
Regards
Takayuki Tsunakawa
On Mon, 17 Jun 2019 at 15:05, Tsunakawa, Takayuki
wrote:
> (1)
> +#define LOCKMETHODLOCALHASH_SHRINK_SIZE 64
>
> How about LOCKMETHODLOCALHASH_SHRINK_THRESHOLD, because this determines the
> threshold value to trigger shrinkage? Code in PostgreSQL seems to use the
> term threshold.
That's probably
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> I've revised the patch to add a new constant named
> LOCKMETHODLOCALHASH_SHRINK_SIZE. I've set this to 64 for now. Once the hash
Thank you, and good performance. The patch passed make check.
I'm OK with the current patch, but I have a few
On Mon, 8 Apr 2019 at 04:09, Tom Lane wrote:
> Also, I would not define "significantly bloated" as "the table has
> grown at all". I think the threshold ought to be at least ~100
> buckets, if we're starting at 16.
I've revised the patch to add a new constant named
LOCKMETHODLOCALHASH_SHRINK_SIZE. I've set this to 64 for now.
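Putting the pieces of this snippet together, the check presumably ends up
shaped like this (a hedged reconstruction; hash_get_max_bucket and
CreateLocalLockHash are named elsewhere in the thread):

if (hash_get_num_entries(LockMethodLocalHash) == 0 &&
	hash_get_max_bucket(LockMethodLocalHash) > LOCKMETHODLOCALHASH_SHRINK_SIZE)
	CreateLocalLockHash();		/* rebuild the table at its default size */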
On 2019-04-08 05:46, Tsunakawa, Takayuki wrote:
> I'm registering you as another author and me as a reviewer, and marking this
> ready for committer.
Moved to next commit fest.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> It would be good to get your view on the
> shrink_bloated_locallocktable_v3.patch I worked on last night. I was
> unable to measure any overhead to solving the problem that way.
Thanks, it looks super simple and good. I understood the idea
On Mon, 8 Apr 2019 at 14:54, Tsunakawa, Takayuki
wrote:
>
> From: 'Andres Freund' [mailto:and...@anarazel.de]
> > Did you see that people measured slowdowns?
>
> Yeah, 0.5% decrease with pgbench -M prepared -S (select-only), which feels
> like a somewhat extreme test case. And that might be with
From: 'Andres Freund' [mailto:and...@anarazel.de]
> On 2019-04-08 02:28:12 +0000, Tsunakawa, Takayuki wrote:
> > I think the linked list of LOCALLOCK approach is natural, simple, and
> > good.
>
> Did you see that people measured slowdowns?
Yeah, 0.5% decrease with pgbench -M prepared -S (select-only), which feels
like a somewhat extreme test case. And that might be with
Hi,
On 2019-04-08 02:28:12 +0000, Tsunakawa, Takayuki wrote:
> I think the linked list of LOCALLOCK approach is natural, simple, and
> good.
Did you see that people measured slowdowns?
Greetings,
Andres Freund
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> On the whole I don't think there's an adequate case for committing
> this patch.
From: Andres Freund [mailto:and...@anarazel.de]
> On 2019-04-05 23:03:11 -0400, Tom Lane wrote:
> > If I reduce the number of partitions in Amit's example from 8192
> > to
On Mon, 8 Apr 2019 at 04:09, Tom Lane wrote:
> Um ... I don't see where you're destroying the old hash?
In CreateLocalLockHash.
> Also, I entirely dislike wiring in assumptions about hash_seq_search's
> private state structure here. I think it's worth having an explicit
> entry point in dynahash.c
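CreateLocalLockHash() itself isn't shown anywhere in this listing;
presumably it answers Tom's question by destroying the old table before
building a fresh one. A guess at its shape, using the ordinary dynahash
API:

static void
CreateLocalLockHash(void)
{
	HASHCTL		info;

	if (LockMethodLocalHash != NULL)
		hash_destroy(LockMethodLocalHash);

	info.keysize = sizeof(LOCALLOCKTAG);
	info.entrysize = sizeof(LOCALLOCK);
	LockMethodLocalHash = hash_create("LOCALLOCK hash", 16, &info,
									  HASH_ELEM | HASH_BLOBS);
}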
David Rowley writes:
> Okay. Here's another version with all the average locks code removed
> that only recreates the table when it's completely empty.
Um ... I don't see where you're destroying the old hash?
Also, I entirely dislike wiring in assumptions about hash_seq_search's
private state structure here. I think it's worth having an explicit
entry point in dynahash.c
On Mon, 8 Apr 2019 at 03:47, Andres Freund wrote:
> Could you benchmark your adversarial case?
Which case?
I imagine the worst case for v2 is a query that just constantly asks
for over 16 locks. Perhaps a prepared plan, so as not to add planner
overhead.
--
David Rowley http://
Hi,
On 2019-04-08 03:40:52 +1200, David Rowley wrote:
> On Mon, 8 Apr 2019 at 03:20, Tom Lane wrote:
> >
> > David Rowley writes:
> > > The reason I thought it was a good idea to track some history there
> > > was to stop the lock table constantly being shrunk back to the default
> > > size every time a simple single table query was executed.
On Mon, 8 Apr 2019 at 03:20, Tom Lane wrote:
>
> David Rowley writes:
> > The reason I thought it was a good idea to track some history there
> > was to stop the lock table constantly being shrunk back to the default
> > size every time a simple single table query was executed.
>
> I think that's
David Rowley writes:
> On Mon, 8 Apr 2019 at 02:59, Tom Lane wrote:
>> We *should* be using hash_get_num_entries(), but only to verify
>> that the table is empty before resetting it. The additional bit
>> that is needed is to see whether the number of buckets is large
>> enough to justify calling
On Mon, 8 Apr 2019 at 02:59, Tom Lane wrote:
>
> David Rowley writes:
> > hash_get_num_entries() looks cheap enough to me. Can you explain why
> > you think that's too expensive?
>
> What I objected to cost-wise was counting the number of lock
> acquisitions/releases, which seems entirely beside the point.
David Rowley writes:
> On Mon, 8 Apr 2019 at 02:20, Tom Lane wrote:
>> I like the concept ... but the particular implementation, not so much.
>> It seems way overcomplicated. In the first place, why should we
>> add code to copy entries? Just don't do it except when the table
>> is empty. In the second, I think we could
On Mon, 8 Apr 2019 at 02:36, David Rowley wrote:
> > LockMethodLocalHash is special in that it predictably goes to empty
> > at the end of every transaction, so that de-bloating at that point
> > is a workable strategy. I think we'd probably need something more
> > robust if we were trying to fix
On Mon, 8 Apr 2019 at 02:20, Tom Lane wrote:
> I like the concept ... but the particular implementation, not so much.
> It seems way overcomplicated. In the first place, why should we
> add code to copy entries? Just don't do it except when the table
> is empty. In the second, I think we could
David Rowley writes:
> On Sat, 6 Apr 2019 at 16:03, Tom Lane wrote:
>> My own thought about how to improve this situation was just to destroy
>> and recreate LockMethodLocalHash at transaction end (or start)
>> if its size exceeded $some-value. Leaving it permanently bloated seems
>> like possib
On Sat, 6 Apr 2019 at 16:03, Tom Lane wrote:
> I'd also point out that this is hardly the only place where we've
> seen hash_seq_search on nearly-empty hash tables become a bottleneck.
> So I'm not thrilled about attacking that with one-table-at-time patches.
> I'd rather see us do something to le
On 2019-04-06 05:03, Tom Lane wrote:
> Trying a standard pgbench test case (pgbench -M prepared -S with
> one client and an -s 10 database), it seems that the patch is about
> 0.5% slower than HEAD. Again, that's below the noise threshold,
> but it's not promising for the net effects of this patch
Andres Freund writes:
> I wonder if one approach to solve this wouldn't be to just make the
> hashtable drastically smaller. Right now we'll often have lots of
> empty entries that are 72 bytes + dynahash overhead. That's plenty of
> memory that needs to be skipped over. I wonder if we instead
Hi,
On 2019-04-05 23:03:11 -0400, Tom Lane wrote:
> Peter Eisentraut writes:
> > I can't detect any performance improvement with the patch applied to
> > current master, using the test case from Yoshikazu Imai (2019-03-19).
>
> FWIW, I tried this patch against current HEAD (959d00e9d).
> Using the test case described by Amit at
Peter Eisentraut writes:
> I can't detect any performance improvement with the patch applied to
> current master, using the test case from Yoshikazu Imai (2019-03-19).
FWIW, I tried this patch against current HEAD (959d00e9d).
Using the test case described by Amit at
I do measure an undeniable speedup
On 2019-03-19 10:21, Tsunakawa, Takayuki wrote:
> From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
>> Fixed.
>
> Rebased on HEAD.
Do you need the dlist_foreach_modify() calls? You are not actually
modifying the list structure.
--
Peter Eisentraut http://www.2ndQuadrant.com/
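Peter's point is the difference between the two ilist.h iteration macros:
dlist_foreach_modify is only needed when the loop unlinks the node it is
standing on. For a read-only walk, this is enough:

dlist_iter	iter;

dlist_foreach(iter, &locallock->locallockowners)
{
	LOCALLOCKOWNER *locallockowner =
		dlist_container(LOCALLOCKOWNER, node, iter.cur);

	/* inspect locallockowner without unlinking it */
}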
On Fri, Apr 5, 2019 at 0:05 AM, Tsunakawa, Takayuki wrote:
> From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> > I can't detect any performance improvement with the patch applied to
> > current master, using the test case from Yoshikazu Imai (2019-03-19).
>
> That's strange... Peter, Imai-san, can you compare your test procedures
Hi Amit-san, Imai-snan,
From: Amit Langote [mailto:langote_amit...@lab.ntt.co.jp]
> I was able to detect it as follows.
> plan_cache_mode = auto
>
>    HEAD: 1915 tps
> Patched: 2394 tps
>
> plan_cache_mode = custom (non-problematic: generic plan is never created)
>
>    HEAD: 2402 tps
> Patched:
On Fri, Apr 5, 2019 at 1:31 AM, Amit Langote wrote:
> On 2019/04/05 5:42, Peter Eisentraut wrote:
> > On 2019-04-04 06:58, Amit Langote wrote:
> >> Also, since the "speed up partition planning" patch went in
> >> (428b260f8), it might be possible to see the performance boost even
> >> with the partitioning example you cited upthread.
On 2019/04/05 5:42, Peter Eisentraut wrote:
> On 2019-04-04 06:58, Amit Langote wrote:
>> Also, since the "speed up partition planning" patch went in (428b260f8),
>> it might be possible to see the performance boost even with the
>> partitioning example you cited upthread.
>
> I can't detect any performance improvement with the patch applied to
> current master, using the test case from Yoshikazu Imai (2019-03-19).
Hi Peter, Imai-san,
From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> I can't detect any performance improvement with the patch applied to
> current master, using the test case from Yoshikazu Imai (2019-03-19).
That's strange... Peter, Imai-san, can you compare your test procedures
On 2019-04-04 06:58, Amit Langote wrote:
> Also, since the "speed up partition planning" patch went in (428b260f8),
> it might be possible to see the performance boost even with the
> partitioning example you cited upthread.
I can't detect any performance improvement with the patch applied to
current master, using the test case from Yoshikazu Imai (2019-03-19).
Hi,
On 2019/04/04 13:37, Tsunakawa, Takayuki wrote:
> Hi Peter,
>
> From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
>> I did a bit of performance testing, both a plain pgbench and the
>> suggested test case with 4096 partitions. I can't detect any
>> performance improvements. In fact, within the noise, it tends to be
>> just a bit on the slower side.
Hi Peter,
From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> I did a bit of performance testing, both a plain pgbench and the
> suggested test case with 4096 partitions. I can't detect any
> performance improvements. In fact, within the noise, it tends to be
> just a bit on the slower side.
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> Here's a benchmark doing that using pgbench's script weight feature.
Wow, I didn't know that pgbench has evolved to have such a convenient feature.
Thanks for telling me how to utilize it in testing. PostgreSQL is cool!
Regards
Takayuki Tsunakawa
From: Amit Langote [mailto:langote_amit...@lab.ntt.co.jp]
> My understanding of what David wrote is that the slowness of a bloated hash
> table is hard to notice, because planning itself is pretty slow. With the
> "speeding up planning with partitions" patch, planning becomes quite fast,
> so the bl
On Tue, 26 Mar 2019 at 21:55, David Rowley wrote:
>
> On Tue, 26 Mar 2019 at 21:23, Tsunakawa, Takayuki
> wrote:
> > Thank you David for explaining. Although I may not understand the effect
> > of "speeding up planning with partitions" patch, this patch takes effect
> > even without it. That is, perform the following in the same session:
On Tue, 26 Mar 2019 at 21:23, Tsunakawa, Takayuki
wrote:
> Thank you David for explaining. Although I may not understand the effect of
> "speeding up planning with partitions" patch, this patch takes effect even
> without it. That is, perform the following in the same session:
>
> 1. SELECT c
Tsunakawa-san,
On 2019/03/26 17:21, Tsunakawa, Takayuki wrote:
> From: David Rowley [mailto:david.row...@2ndquadrant.com]
>> On Mon, 25 Mar 2019 at 23:44, Peter Eisentraut
>> wrote:
>>> Perhaps "speeding up planning with partitions" needs to be accepted first?
>>
>> Yeah, I think it likely will require that patch to be able to measure
>> the gains from this patch.
From: David Rowley [mailto:david.row...@2ndquadrant.com]
> On Mon, 25 Mar 2019 at 23:44, Peter Eisentraut
> wrote:
> > Perhaps "speeding up planning with partitions" needs to be accepted first?
>
> Yeah, I think it likely will require that patch to be able to measure
> the gains from this patch.
On Mon, 25 Mar 2019 at 23:44, Peter Eisentraut
wrote:
> I did a bit of performance testing, both a plain pgbench and the
> suggested test case with 4096 partitions. I can't detect any
> performance improvements. In fact, within the noise, it tends to be
> just a bit on the slower side.
>
> So I'
On 2019-03-19 16:38, Peter Eisentraut wrote:
> On 2019-03-19 10:21, Tsunakawa, Takayuki wrote:
>> From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
>>> Fixed.
>>
>> Rebased on HEAD.
>
> I have committed the first patch that reorganizes the struct. I'll have
> to spend some time evaluating the performance impact of the second
> patch, but
On 2019-03-19 10:21, Tsunakawa, Takayuki wrote:
> From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
>> Fixed.
>
> Rebased on HEAD.
I have committed the first patch that reorganizes the struct. I'll have
to spend some time evaluating the performance impact of the second
patch, but
From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
> Fixed.
Rebased on HEAD.
Regards
Takayuki Tsunakawa
0001-reorder-LOCALLOCK-structure-members-to-compact-the-s.patch
Description: 0001-reorder-LOCALLOCK-structure-members-to-compact-the-s.patch
0002-speed-up-LOCALLOCK-scan.pa
Hi Tsunakawa-san, Peter
On Tue, Mar 19, 2019 at 7:53 AM, Tsunakawa, Takayuki wrote:
> From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
> > You posted a link to some performance numbers, but I didn't see the
> > test setup explained there. I'd like to get some more information on
>