VA_ARGS_NARGS(__VA_ARGS__), _int, __VA_ARGS__)
#define list_make_oid(...) MACRO_COMBIN_3(list_make, MACRO_ARGS VA_ARGS_NARGS(__VA_ARGS__), _oid, __VA_ARGS__)
bu...@sohu.com
From: Tom Lane
Date: 2019-02-24 10:24
To: pgsql-hackers
Subject: POC: converting Lists into arrays
For quite some years now
Alvaro Herrera writes:
> On 2019-Aug-09, Tom Lane wrote:
>> I poked at this, and attached is a patch, but again I'm not seeing
>> that there's any real performance-based argument for it.
> I'm confused. I thought that the point of doing this wasn't that we
> wanted to improve performance, but ra
David Rowley writes:
> On Fri, 9 Aug 2019 at 04:24, Tom Lane wrote:
>> Has anyone got further thoughts about naming around list_concat
>> and friends?
>> If not, I'm inclined to go ahead with the concat-improvement patch as
>> proposed in [1], modulo the one improvement David spotted.
>> [1] http
On Sat, 10 Aug 2019 at 09:03, Tom Lane wrote:
>
> David Rowley writes:
> > On Fri, 9 Aug 2019 at 09:55, Tom Lane wrote:
> >> I still have hopes for getting rid of es_range_table_array though,
> >> and will look at that tomorrow or so.
>
> > Yes, please. I've measured that to be quite an overhead
On 2019-Aug-09, Tom Lane wrote:
> I poked at this, and attached is a patch, but again I'm not seeing
> that there's any real performance-based argument for it. So far
> as I can tell, if we've got a lot of RTEs in an executable plan,
> the bulk of the startup time is going into lock (re) acquisit
David Rowley writes:
> On Fri, 9 Aug 2019 at 09:55, Tom Lane wrote:
>> I still have hopes for getting rid of es_range_table_array though,
>> and will look at that tomorrow or so.
> Yes, please. I've measured that to be quite an overhead with large
> partitioning setups. However, that was with so
On Fri, 9 Aug 2019 at 09:55, Tom Lane wrote:
> Attached is a patch that removes simple_rte_array in favor of just
> accessing the query's rtable directly. I concluded that there was
> not much point in messing with simple_rel_array or append_rel_array,
> because they are not in fact just mirrors
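For context, RTE lookups already go through the rtable List via the rt_fetch() macro (roughly as it is defined in parsetree.h), so once list_nth() is O(1) a separate mirror array adds little:

    /* fetch an RTE by its 1-based range-table index */
    #define rt_fetch(rangetable_index, rangetable) \
        ((RangeTblEntry *) list_nth(rangetable, (rangetable_index)-1))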
On Fri, 9 Aug 2019 at 04:24, Tom Lane wrote:
> I wrote:
> > Perhaps there's an argument for doing something to change the behavior
> > of list_union and list_difference and friends. Not sure --- it could
> > be a foot-gun for back-patching. I'm already worried about the risk
> > of back-patching
I wrote:
> BTW, further on the subject of performance --- I'm aware of at least
> these topics for follow-on patches:
> * Fix places that are maintaining arrays parallel to Lists for
> access-speed reasons (at least simple_rte_array, append_rel_array,
> es_range_table_array).
Attached is a patch
Andres Freund writes:
> On 2019-07-31 19:40:09 -0400, Tom Lane wrote:
>> That makes the other idea (of a foreach-ish macro declaring the
>> listcell value variable) problematic, too :-(.
> Hm. One way partially around that would be using an anonymous struct
> inside the for(). Something like
> #d
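The quoted suggestion is cut off above; purely as an illustrative guess (not the actual proposal), an anonymous-struct form could pack all the loop state into the single declaration that a C99 for(;;) header allows, at the cost of reaching the cell through a struct member:

    /* Hypothetical sketch only: "lc" is a struct here, so loop bodies
     * would refer to lc.cell rather than to a bare ListCell pointer. */
    #define foreach_struct(lc, lst) \
        for (struct { const List *l; int i; ListCell *cell; } lc = {(lst), 0, NULL}; \
             lc.i < list_length(lc.l) && \
             (lc.cell = list_nth_cell(lc.l, lc.i), true); \
             lc.i++)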
[ returning to this topic now that the CF is over ]
I wrote:
> Perhaps there's an argument for doing something to change the behavior
> of list_union and list_difference and friends. Not sure --- it could
> be a foot-gun for back-patching. I'm already worried about the risk
> of back-patching co
On Thu, 8 Aug 2019 at 12:18, Andres Freund wrote:
> Hi,
>
> On 2019-08-08 11:36:44 +0800, Craig Ringer wrote:
> > > you can only put one into the first element of a
> > > for (;;).
> > >
> >
> > Use an anonymous block outer scope? Or if not permitted even by C99 (which
> > I think it is), a do
Hi,
On 2019-08-08 11:36:44 +0800, Craig Ringer wrote:
> > you can only put one into the first element of a
> > for (;;).
> >
>
> Use an anonymous block outer scope? Or if not permitted even by C99 (which
> I think it is), a do {...} while (0); hack?
You can't easily - the problem is that there
On Thu, 1 Aug 2019 at 07:40, Tom Lane wrote:
> Andres Freund writes:
> > Unfortunately foreach(ListCell *lc, ...) doesn't work with the current
> > definition. Which I think isn't great, because the large scopes for loop
> > iteration variables imo makes the code harder to reason about.
>
>
Tota
Hi,
On 2019-07-31 19:40:09 -0400, Tom Lane wrote:
> Andres Freund writes:
> > Unfortunately foreach(ListCell *lc, ...) doesn't work with the current
> > definition. Which I think isn't great, because the large scopes for loop
> > iteration variables imo makes the code harder to reason about.
>
>
I wrote:
> BTW, I think we could make equivalent macros in the old regime,
> which would be a good thing because then it would be possible to
> back-patch code using this notation.
Oh, wait-a-second. I was envisioning that
for (ListCell *anonymous__lc = ...)
would work for that, but of
I wrote:
> One idea is that we could do something like
> foreach_variant(identifier, list_value)
> {
>     type *v = (type *) lfirst_variant(identifier);
>...
> }
> where the "identifier" isn't actually a variable name but just something
> we use to construct the ForEachState
Andres Freund writes:
> Unfortunately foreach(ListCell *lc, ...) doesn't work with the current
> definition. Which I think isn't great, because the large scopes for loop
> iteration variables imo makes the code harder to reason about.
Yeah, I tried to make that possible when I redid those macros,
On 01/08/2019 01:04, Andres Freund wrote:
Hi,
On 2019-07-31 16:00:47 -0700, Andres Freund wrote:
On 2019-07-31 15:57:56 -0700, Andres Freund wrote:
I also wonder if a foreach version that includes the typical
Type *var = (Type *) lfirst(lc);
or
Type *var = castNode(Type, lfirst(lc));
or
Hi,
On 2019-07-31 16:00:47 -0700, Andres Freund wrote:
> On 2019-07-31 15:57:56 -0700, Andres Freund wrote:
> > I also wonder if a foreach version that includes the typical
> > Type *var = (Type *) lfirst(lc);
> > or
> > Type *var = castNode(Type, lfirst(lc));
> > or
> > OpExpr *hcla
Hi,
On 2019-07-31 15:57:56 -0700, Andres Freund wrote:
> I also wonder if a foreach version that includes the typical
> Type *var = (Type *) lfirst(lc);
> or
> Type *var = castNode(Type, lfirst(lc));
> or
> OpExpr *hclause = lfirst_node(OpExpr, lc);
>
> would make it nicer to use
Hi,
I was just looking at the diff for a fix, which adds a "ListCell *lc;"
to function scope, even though it's only needed in a pretty narrow
scope.
Unfortunately foreach(ListCell *lc, ...) doesn't work with the current
definition. Which I think isn't great, because the large scopes for loop
iter
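The complaint is about the pattern sketched below (an illustrative function, not code from the patch): the ListCell variable must be declared at function scope even though it is only meaningful inside one short loop.

    static int
    count_opexprs(List *clauses)
    {
        ListCell   *lc;             /* function-scope, used only in the loop */
        int         n = 0;

        foreach(lc, clauses)
        {
            Node       *node = (Node *) lfirst(lc);

            if (IsA(node, OpExpr))
                n++;
        }
        return n;
    }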
Alvaro Herrera writes:
> So with this patch we end up with:
> list_union (copies list1, appends list2 element not already in list1)
> list_concat_unique (appends list2 elements not already in list)
> list_concat (appends all list2 elements)
> list_concat_copy (copies list1, appends all list2 elem
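In code form, the distinction is mainly whether list1 is modified in place or left alone; a sketch using the integer-list variants of the same functions (variable names are made up):

    List   *a = list_make2_int(1, 2);
    List   *b = list_make2_int(2, 3);

    List   *u  = list_union_int(a, b);     /* new list (1 2 3); a untouched */
    List   *cc = list_concat_copy(a, b);   /* new list (1 2 2 3); a untouched */

    a = list_concat_unique_int(a, b);      /* a becomes (1 2 3), in place */
    a = list_concat(a, b);                 /* a becomes (1 2 3 2 3), in place */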
On 2019-Jul-22, Tom Lane wrote:
> David Rowley writes:
> > If we do end up with another function, it might be nice to stay away
> > from using "concat" in the name. I think we might struggle if there
> > are too many variations on concat and there's a risk we'll use the
> > wrong one. If we nee
David Rowley writes:
> I looked over this and only noted down one thing:
> In estimate_path_cost_size, can you explain why list_concat_copy() is
> needed here? I don't see remote_param_join_conds being used after
> this, so might it be better to just get rid of remote_param_join_conds
> and pass
On Mon, 22 Jul 2019 at 08:01, Tom Lane wrote:
>
> I wrote:
> >> * Rationalize places that are using combinations of list_copy and
> >> list_concat, probably by inventing an additional list-concatenation
> >> primitive that modifies neither input.
>
> > I poked around to see what we have in this de
David Rowley writes:
> On Mon, 22 Jul 2019 at 16:37, Tom Lane wrote:
>> Interesting. I wonder if bms_next_member() could be made any quicker?
> I had a quick look earlier and the only thing I saw was maybe to do
> the first loop differently from subsequent ones. The "w &= mask;"
> does nothing
On Mon, 22 Jul 2019 at 16:37, Tom Lane wrote:
>
> David Rowley writes:
> > So the bms_next_member() loop is slower when the bitmapset is fully
> > populated with all subplans, but way faster when there's just 1
> > member.
>
> Interesting. I wonder if bms_next_member() could be made any quicker?
David Rowley writes:
> On Mon, 22 Jul 2019 at 02:45, Tom Lane wrote:
>> One small question is whether it loses if most of the subplans
>> are present in the bitmap. I imagine that would be close enough
>> to break-even, but it might be worth trying to test to be sure.
> ...
> -- Test 2 (all mat
On Mon, 22 Jul 2019 at 02:45, Tom Lane wrote:
> One small question is whether it loses if most of the subplans
> are present in the bitmap. I imagine that would be close enough
> to break-even, but it might be worth trying to test to be sure.
> (I'd think about breaking out just the loops in ques
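The loop shape being compared visits only the set members instead of scanning every possible index; a sketch, assuming validsubplans is a Bitmapset of positions into a subplans List (both names are placeholders):

    int         i = -1;
    int         ninit = 0;

    while ((i = bms_next_member(validsubplans, i)) >= 0)
    {
        Plan       *subplan = (Plan *) list_nth(subplans, i);

        if (subplan != NULL)
            ninit++;            /* stand-in for per-subplan initialization */
    }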
I wrote:
>> * Rationalize places that are using combinations of list_copy and
>> list_concat, probably by inventing an additional list-concatenation
>> primitive that modifies neither input.
> I poked around to see what we have in this department. There seem to
> be several identifiable use-cases
David Rowley writes:
> On Wed, 17 Jul 2019 at 11:06, Tom Lane wrote:
>> (Actually, I doubt that any of these changes will really move the
>> performance needle in the real world. It's more a case of wanting
>> the code to present good examples not bad ones.)
> In spirit with the above, I'd quit
On Wed, 17 Jul 2019 at 11:06, Tom Lane wrote:
> (Actually, I doubt that any of these changes will really move the
> performance needle in the real world. It's more a case of wanting
> the code to present good examples not bad ones.)
In spirit with the above, I'd quite like to fix a small bad exa
Daniel Gustafsson writes:
> For cases where an Oid list is copied and then head elements immediately
> removed, as in fetch_search_path, couldn’t we instead use a counter and
> list_copy_tail to avoid repeated list_delete_first calls?
Perhaps, but I'm having a hard time getting excited about it.
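The suggested transformation, as a sketch (path, baseSearchPath and nskip are stand-in names):

    /* Variant 1: copy, then repeatedly delete the head; each delete now
     * has to shift the remaining elements down. */
    path = list_copy(baseSearchPath);
    for (int i = 0; i < nskip; i++)
        path = list_delete_first(path);

    /* Variant 2: copy only the part that will be kept, in one call. */
    path = list_copy_tail(baseSearchPath, nskip);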
David Rowley writes:
> I've only looked at 0002. Here are my thoughts:
Thanks for looking!
> get_tables_to_cluster:
> Looks fine. It's a heap scan. Any previous order was accidental, so if
> it causes issues then we might need to think of using a more
> well-defined order for CLUSTER;
Check.
>
> On 17 Jul 2019, at 01:06, Tom Lane wrote:
> There are a bunch of places that are using list_delete_first to remove
> the next-to-process entry from a List used as a queue. In principle,
> we could invert the order of those queues and then use list_delete_last,
> but I thought this would probab
On Wed, 17 Jul 2019 at 11:06, Tom Lane wrote:
> 0002 changes some additional places where it's maybe a bit less safe,
> ie there's a potential for user-visible behavioral change because
> processing will occur in a different order. In particular, the proposed
> change in execExpr.c causes aggrega
I wrote:
> * Look at places using lcons/list_delete_first to maintain FIFO lists.
> The patch makes these O(N^2) for long lists. If we can reverse the list
> order and use lappend/list_truncate instead, it'd be better. Possibly in
> some places the list ordering is critical enough to make this im
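For the cases where the processing order can simply be flipped, the cheap pattern keeps the next element to consume at the tail of the list; a sketch with made-up names:

    /* enqueue: appending is O(1) amortized in the array representation */
    pending = lappend(pending, item);

    /* consume from the tail rather than the head */
    while (pending != NIL)
    {
        void       *next = llast(pending);

        pending = list_delete_last(pending);
        /* ... process "next" ... */
    }

Note that this reverses the processing order, which is exactly the user-visible behavioral change worried about above.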
I wrote:
> * Rationalize places that are using combinations of list_copy and
> list_concat, probably by inventing an additional list-concatenation
> primitive that modifies neither input.
I poked around to see what we have in this department. There seem to
be several identifiable use-cases:
* Co
Robert Haas writes:
> On Tue, Jul 16, 2019 at 10:44 AM Tom Lane wrote:
>> OK, I'm outvoted, will do it that way.
> I cast my vote in the other direction i.e. for sticking with qsort.
Didn't see this until after pushing a commit that uses "list_sort".
While composing that commit message another
On Tue, Jul 16, 2019 at 9:01 AM Robert Haas wrote:
> I cast my vote in the other direction i.e. for sticking with qsort.
I do too.
--
Peter Geoghegan
On Tue, Jul 16, 2019 at 10:44 AM Tom Lane wrote:
> David Steele writes:
> > On 7/15/19 11:07 PM, Tom Lane wrote:
> >> David Rowley writes:
> >>> The only thoughts I have so far here are that it's a shame that the
> >>> function got called list_qsort() and not just list_sort().
>
> > I agree with
David Steele writes:
> On 7/15/19 11:07 PM, Tom Lane wrote:
>> David Rowley writes:
>>> The only thoughts I have so far here are that it's a shame that the
>>> function got called list_qsort() and not just list_sort().
> I agree with David -- list_sort() is better. I don't think "sort" is
> su
On 7/15/19 11:07 PM, Tom Lane wrote:
David Rowley writes:
The only thoughts I have so far here are that it's a shame that the
function got called list_qsort() and not just list_sort(). I don't
see why callers need to know anything about the sort algorithm that's
being used.
Meh. list_qsort(
David Rowley writes:
> The only thoughts I have so far here are that it's a shame that the
> function got called list_qsort() and not just list_sort(). I don't
> see why callers need to know anything about the sort algorithm that's
> being used.
Meh. list_qsort() is quicksort only to the extent
On Tue, 16 Jul 2019 at 07:49, Tom Lane wrote:
> A possibly controversial point is that I made list_qsort() sort the
> given list in-place, rather than building a new list as it has
> historically. For every single one of the existing and new callers,
> copying the input list is wasteful, because
I wrote:
> [ list_qsort-API-change.patch ]
Also, here's a follow-on patch that cleans up some crufty code in
heap.c and relcache.c to use list_qsort, rather than manually doing
insertions into a list that's kept ordered. The existing comments
argue that that's faster than qsort for small N, but I
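With the in-place API that was ultimately committed under the name list_sort(), that style of use looks roughly like this sketch (the comparator and list names are made up):

    static int
    cmp_oids(const ListCell *a, const ListCell *b)
    {
        Oid         oa = lfirst_oid(a);
        Oid         ob = lfirst_oid(b);

        return (oa < ob) ? -1 : ((oa > ob) ? 1 : 0);
    }

    /* later: sorts the list in place, no copy is made */
    list_sort(indexoidlist, cmp_oids);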
I wrote:
> BTW, further on the subject of performance --- I'm aware of at least
> these topics for follow-on patches:
> ...
> * Adjust API for list_qsort(), as discussed, to save indirections and
> avoid constructing an intermediate pointer array. I also seem to recall
> one place in the planner t
Daniel Gustafsson writes:
>> On 13 Jul 2019, at 18:32, Tom Lane wrote:
>> I see from the cfbot that v8 is already broken (new call of lnext
>> to be fixed). Don't really want to keep chasing a moving target,
>> so unless I hear objections I'm going to adjust the additional
>> spot(s) and commit
> On 13 Jul 2019, at 18:32, Tom Lane wrote:
> I see from the cfbot that v8 is already broken (new call of lnext
> to be fixed). Don't really want to keep chasing a moving target,
> so unless I hear objections I'm going to adjust the additional
> spot(s) and commit this pretty soon, like tomorrow
David Rowley writes:
> Thanks for the speedy turnaround. I've looked at v8, as far as a diff
> between the two patches and I'm happy.
> I've marked as ready for committer.
So ... last chance for objections?
I see from the cfbot that v8 is already broken (new call of lnext
to be fixed). Don't re
On Thu, 4 Jul 2019 at 06:15, Tom Lane wrote:
>
> David Rowley writes:
> > I've now read over the entire patch and have noted down the following:
>
> Thanks for the review! Attached is a v8 responding to most of your
> comments. Anything not quoted below I just accepted.
Thanks for the speedy t
David Rowley writes:
> I also did some more benchmarking of the patch. ...
> Which makes the patched version 2.2% faster than master on that run.
BTW, further on the subject of performance --- I'm aware of at least
these topics for follow-on patches:
* Fix places that are maintaining arrays para
On 2019-Jul-03, Tom Lane wrote:
> David Rowley writes:
> > 10. I wonder if we can reduce a bit of pain for extension authors by
> > back patching a macro that wraps up a lnext() call adding a dummy
> > argument for the List.
>
> I was wondering about a variant of that yesterday; specifically,
>
David Rowley writes:
> I've now read over the entire patch and have noted down the following:
Thanks for the review! Attached is a v8 responding to most of your
comments. Anything not quoted below I just accepted.
> 1. MergeAttributes() could do with a comment mention why you're not
> using fo
On Tue, Jul 2, 2019 at 5:12 PM Tom Lane wrote:
> Oleksandr Shulgin writes:
> > Not related to the diff v6..v7, but shouldn't we throw additionally a
> > memset() with '\0' before calling pfree():
>
> I don't see the point of that. In debug builds CLOBBER_FREED_MEMORY will
> take care of it, and
On Tue, 2 Jul 2019 at 11:27, Tom Lane wrote:
> My previous patch would have had you replace this with a loop using
> an integer list-position index. You can still do that if you like,
> but it's less change to convert the loop to a foreach(), drop the
> prev/next variables, and replace the list_d
Oleksandr Shulgin writes:
> Not related to the diff v6..v7, but shouldn't we throw additionally a
> memset() with '\0' before calling pfree():
I don't see the point of that. In debug builds CLOBBER_FREED_MEMORY will
take care of it, and in non-debug builds I don't see why we'd expend
the cycles.
On Tue, Jul 2, 2019 at 1:27 AM Tom Lane wrote:
>
> So I think this is a win, and attached is v7.
>
Not related to the diff v6..v7, but shouldn't we throw additionally a
memset() with '\0' before calling pfree():
+ListCell *newelements;
+
+newelements = (ListCell *)
+
I spent some time experimenting with the idea mentioned upthread of
adding a macro to support deletion of a foreach loop's current element
(adjusting the loop's state behind the scenes). This turns out to work
really well: it reduces the complexity of fixing existing loops around
element deletions
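The macro that came out of this work (committed as foreach_delete_current()) is used from inside a foreach() loop; a usage sketch with a hypothetical predicate:

    ListCell   *lc;

    foreach(lc, restrictlist)
    {
        RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);

        if (clause_is_redundant(rinfo))     /* hypothetical test */
            restrictlist = foreach_delete_current(restrictlist, lc);
    }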
Hi,
On 7/1/19 2:44 PM, Tom Lane wrote:
Yup, here's a rebase against HEAD (and I also find that check-world shows
no problems).
Thanks - no further comments.
This is pretty much of a pain to maintain, since it changes
the API for lnext() which is, um, a bit invasive. I'd like to make a
deci
Jesper Pedersen writes:
> This needs a rebase. After that check-world passes w/ and w/o
> -DDEBUG_LIST_MEMORY_USAGE.
Yup, here's a rebase against HEAD (and I also find that check-world shows
no problems). This is pretty much of a pain to maintain, since it changes
the API for lnext() which is,
Hi,
On 5/25/19 11:48 AM, Tom Lane wrote:
The cfbot noticed a set-but-not-used variable that my compiler hadn't
whined about. Here's a v5 to pacify it. No other changes.
This needs a rebase. After that check-world passes w/ and w/o
-DDEBUG_LIST_MEMORY_USAGE.
There is some unneeded MemoryC
David Rowley writes:
> On Fri, 14 Jun 2019 at 13:54, Bruce Momjian wrote:
>> Have you tested the performance impact?
> I did some and posted earlier in the thread:
> https://postgr.es/m/cakjs1f8h2vs8m0cgfsgfivfkjvudu5-mzo1gjb2uf0m8_9v...@mail.gmail.com
> It came out only slightly slower over th
On Fri, 14 Jun 2019 at 13:54, Bruce Momjian wrote:
>
> On Sat, May 25, 2019 at 11:48:47AM -0400, Tom Lane wrote:
> > I wrote:
> > > Here's a new version of the Lists-as-arrays patch.
> >
> > The cfbot noticed a set-but-not-used variable that my compiler hadn't
> > whined about. Here's a v5 to pac
On Sat, May 25, 2019 at 11:48:47AM -0400, Tom Lane wrote:
> I wrote:
> > Here's a new version of the Lists-as-arrays patch.
>
> The cfbot noticed a set-but-not-used variable that my compiler hadn't
> whined about. Here's a v5 to pacify it. No other changes.
Have you tested the performance impac
David Rowley writes:
> If we're doing an API break for this, wouldn't it be better to come up
> with something that didn't have to shuffle list elements around every
> time one is deleted?
Agreed that as long as there's an API break anyway, we could consider
more extensive changes for this use-ca
I wrote:
> Here's a new version of the Lists-as-arrays patch.
The cfbot noticed a set-but-not-used variable that my compiler hadn't
whined about. Here's a v5 to pacify it. No other changes.
regards, tom lane
reimplement-List-as-array-5.patch.gz
Description: reimplemen
On Sat, 25 May 2019 at 12:53, Tom Lane wrote:
> Now, it turns out that the new formulation of foreach() is really
> strictly equivalent to
>
>     for (int pos = 0; pos < list_length(list); pos++)
>     {
>         whatever-type item = list_nth(list, pos);
>         ...
>
Here's a new version of the Lists-as-arrays patch. It's rebased up to
HEAD, and I also realized that I could fix the problem with multiple
evaluation of the List arguments of foreach etc. by using structure
assignment. So that gets rid of a large chunk of the semantic gotchas
that were in the pre
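Concretely, the structure-assignment idea is to capture the List pointer once in a small state struct initialized in the for() header, so the macro's List argument is evaluated only once; the loop state ended up looking roughly like this:

    typedef struct ForEachState
    {
        const List *l;          /* list we're looping through */
        int         i;          /* current element index */
    } ForEachState;

    /* foreach() initializes one of these with {(lst), 0} and then indexes
     * the list's element array through the saved pointer. */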
David Rowley writes:
> On Tue, 5 Mar 2019 at 12:54, Andres Freund wrote:
>> Yes, I think you have a point that progress here would be good and that
>> it's worth some pain. But the names will make even less sense if we just
>> shunt in an array based approach under the already obscure list
>> API
David Rowley writes:
> ... With list_concat() I find that pretty scary
> anyway. Using it means we can have a valid list that does not get its
> length updated when someone appends a new item. Most users of that do
> list_copy() to sidestep that and other issues... which likely is
> something we
On Tue, 5 Mar 2019 at 12:54, Andres Freund wrote:
> Yes, I think you have a point that progress here would be good and that
> it's worth some pain. But the names will make even less sense if we just
> shunt in an array based approach under the already obscure list
> API.
If we feel strongly about
Hi,
On 2019-03-05 12:42:47 +1300, David Rowley wrote:
> So you think targetlists are the only case to benefit from an array
> based list? (Ignoring the fact that I already showed another case)
No, that's not what I'm trying to say at all. I think there's plenty
cases where it'd be beneficial. In
On Tue, 5 Mar 2019 at 11:11, Andres Freund wrote:
> On 2019-03-04 16:28:40 -0500, Tom Lane wrote:
> > Andres Freund writes:
> > > I don't buy this. I think e.g. redesigning the way we represent
> > > targetlists would be good (it's e.g. insane that we recompute
> > > descriptors out of them all th
Hi,
On 2019-03-04 16:28:40 -0500, Tom Lane wrote:
> Andres Freund writes:
> > I don't buy this. I think e.g. redesigning the way we represent
> > targetlists would be good (it's e.g. insane that we recompute
> > descriptors out of them all the time), and would reduce their allocator
> > costs.
>
On Mon, Mar 4, 2019 at 2:04 PM Bruce Momjian wrote:
> On Mon, Mar 4, 2019 at 12:44:41PM -0500, Robert Haas wrote:
> > I think that's a separate but also promising thing to attack, and I
> > agree that it'd take a lot of work to get there. I don't think that
> > the problem with either parse-tree
On Mon, Mar 4, 2019 at 01:11:35PM -0500, Tom Lane wrote:
> Robert Haas writes:
> > I think the reason why you're not seeing a performance benefit is
> > because the problem is not that lists are generically a more expensive
> > data structure than arrays, but that there are cases when they are
>
On Mon, Mar 4, 2019 at 12:44:41PM -0500, Robert Haas wrote:
> I think that's a separate but also promising thing to attack, and I
> agree that it'd take a lot of work to get there. I don't think that
> the problem with either parse-tree-copying or list usage is that no
> performance benefits are
Andres Freund writes:
> I don't buy this. I think e.g. redesigning the way we represent
> targetlists would be good (it's e.g. insane that we recompute
> descriptors out of them all the time), and would reduce their allocator
> costs.
Maybe we're not on the same page here, but it seems to me that
Hi,
On 2019-03-04 13:11:35 -0500, Tom Lane wrote:
> The concern I have is mostly about the use of lists as core infrastructure
> in parsetree, plantree, etc data structures. I think any idea that we'd
> replace those piecemeal is borderline insane: it's simply not worth it
> from a notational and
Hi,
On 2019-03-03 13:29:04 -0500, Tom Lane wrote:
> The cases I've been looking at suggest to me that we'd make far
> more progress on the excessive-palloc'ing front if we could redesign
> things to reduce unnecessary copying of parsetrees. Right now the
> planner does an awful lot of copying bec
Hi,
On 2019-03-02 18:11:43 -0500, Tom Lane wrote:
> On test cases like "pg_bench -S" it seems to be pretty much within the
> noise level of being the same speed as HEAD.
I think that might be because it's bottleneck is just elsewhere
(e.g. very context switch heavy, very few lists of any length).
Robert Haas writes:
> I think the reason why you're not seeing a performance benefit is
> because the problem is not that lists are generically a more expensive
> data structure than arrays, but that there are cases when they are
> more expensive than arrays. If you only ever push/pop at the fron
On Sun, Mar 3, 2019 at 1:29 PM Tom Lane wrote:
> > My main observation was from when the expression evaluation was using
> > lists all over. List iteration overhead was very substantial there. But
> > that's not a problem anymore, because all of those are gone now due to
> > the expression rewrite
On Mon, 4 Mar 2019 at 07:29, Tom Lane wrote:
>
> Andres Freund writes:
> > I still regularly see list overhead matter in production workloads. A
> > lot of it being memory allocator overhead, which is why I'm concerned
> > with a rewrite that doesn't reduce the number of memory allocations.
>
> W
Andres Freund writes:
> On 2019-03-02 18:11:43 -0500, Tom Lane wrote:
>> I wonder what test cases Andres has been looking at that convince
>> him that we need a reimplementation of Lists.
> My main observation was from when the expression evaluation was using
> lists all over. List iteration over
Hi,
On 2019-03-02 18:11:43 -0500, Tom Lane wrote:
> I wonder what test cases Andres has been looking at that convince
> him that we need a reimplementation of Lists.
My main observation was from when the expression evaluation was using
lists all over. List iteration overhead was very substantial
Here's a v3 incorporating Andres' idea of trying to avoid a separate
palloc for the list cell array. In a 64-bit machine we can have up
to five ListCells in the initial allocation without actually increasing
space consumption at all compared to the old code. So only when a List
grows larger than
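The layout being described is roughly the shape that ended up in pg_list.h: a header carrying the length and allocated size plus a pointer to the cell array, with room for a few cells directly after the header in the same allocation:

    typedef struct List
    {
        NodeTag     type;       /* T_List, T_IntList, or T_OidList */
        int         length;     /* number of elements currently present */
        int         max_length; /* allocated length of elements[] */
        ListCell   *elements;   /* re-allocatable array of cells */
        /* We may allocate some cells along with the List header: */
        ListCell    initial_elements[FLEXIBLE_ARRAY_MEMBER];
    } List;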
Here's a rebased version of the main patch.
David Rowley writes:
> The only thing that I did to manage to speed the patch up was to ditch
> the additional NULL test in lnext(). I don't see why that's required
> since lnext(NULL) would have crashed with the old implementation.
I adopted this ide
David Rowley writes:
> On Thu, 28 Feb 2019 at 09:26, Tom Lane wrote:
>> 0001 below does this. I found a couple of places that could use
>> forfive(), as well. I think this is a clear legibility and
>> error-proofing win, and we should just push it.
> I've looked over this and I agree that it's
Andrew Gierth writes:
> To get a reliable measurement of timing changes less than around 3%,
> what you have to do is this: pick some irrelevant function and add
> something like an asm directive that inserts a variable number of NOPs,
> and do a series of test runs with different values.
Good po
> "David" == David Rowley writes:
David> I went and had a few adventures with this patch to see if I
David> could figure out why the small ~1% regression exists.
Just changing the number of instructions (even in a completely unrelated
place that's not called during the test) can generate p
David Rowley writes:
> I went and had a few adventures with this patch to see if I could
> figure out why the small ~1% regression exists.
Thanks for poking around!
> ... I had
> suspected it was the lcons() calls being expensive because then need
> to push the elements up one place each time, n
On Tue, 26 Feb 2019 at 18:34, Tom Lane wrote:
>
> David Rowley writes:
> > Using the attached patch (as text file so as not to upset the CFbot),
> > which basically just measures and logs the time taken to run
> > pg_plan_query. ...
> > Surprisingly it took 1.13% longer. I did these tests on an
David Rowley writes:
> On Thu, 28 Feb 2019 at 09:26, Tom Lane wrote:
>> 0002 below does this. I'm having a hard time deciding whether this
>> part is a good idea or just code churn. It might be more readable
>> (especially to newbies) but I can't evaluate that very objectively.
> I'm less deci
On Thu, 28 Feb 2019 at 09:26, Tom Lane wrote:
>
> I wrote:
> > I did find a number of places where getting rid of explicit lnext()
> > calls led to just plain cleaner code. Most of these were places that
> > could be using forboth() or forthree() and just weren't. There's
> > also several places
On 2019-Feb-27, Tom Lane wrote:
> I'm particularly unsure about whether we need two macros; though the
> way I initially tried it with just list_cell_is_last() seemed kind of
> double-negatively confusing in the places where the test needs to be
> not-last. Also, are these macro names too long, a
Robert Haas writes:
> On Wed, Feb 27, 2019 at 3:27 PM Tom Lane wrote:
>> 0001 below does this. I found a couple of places that could use
>> forfive(), as well. I think this is a clear legibility and
>> error-proofing win, and we should just push it.
> It sounds like some of these places might
On Wed, Feb 27, 2019 at 3:27 PM Tom Lane wrote:
> 0001 below does this. I found a couple of places that could use
> forfive(), as well. I think this is a clear legibility and
> error-proofing win, and we should just push it.
It sounds like some of these places might need a bigger restructuring