On Fri, Mar 20, 2020 at 1:20 PM Pengzhou Tang wrote:
Hi,
I happened to notice that "set enable_sort to false" does not guarantee
that the planner uses hashagg in test groupingsets.sql, so the following
comparison of sortagg and hashagg results has no meaning.
Thanks,
Pengzhou
On Thu, Mar 19, 2020 at 7:36 AM Jeff Davis wrote:
>
> Committed.
>
Thank you for reviewing this patch.
On Thu, Mar 19, 2020 at 10:09 AM Tomas Vondra
wrote:
> Hi,
>
> unfortunately this got a bit broken by the disk-based hash aggregation,
> committed today, and so it needs a rebase. I've started looking at the
> patch before that, and I have it rebased on e00912e11
/CAG4reAQ8rFCc%2Bi0oju3VjaW7xSOJAkvLrqa4F-NYZzAG4SW7iQ%40mail.gmail.com
Thanks,
Pengzhou
On Fri, Mar 13, 2020 at 3:16 AM Andres Freund wrote:
> Hi,
>
>
> On 2020-03-12 16:35:15 +0800, Pengzhou Tang wrote:
> > When reading the grouping sets codes, I find that the additional size of
On Fri, Mar 13, 2020 at 8:34 AM Andrew Gierth
wrote:
> > "Justin" == Justin Pryzby writes:
>
> > On Thu, Mar 12, 2020 at 12:16:26PM -0700, Andres Freund wrote:
> >> Indeed, that's incorrect. Causes the number of buckets for the
> >> hashtable to be set higher - the size is just used for t
Hi hacker,
While reading the grouping sets code, I found that the additional size of
the hash table for hash aggregates is always zero. This seems incorrect
to me; attached is a patch to fix it, please help to check.
Thanks,
Pengzhou
0001-Set-numtrans-correctly-when-building-hash-aggregate-.p
rel_size() is called by the planner; it uses rte->scanCols and
'stadiskfrac' to adjust rel->pages, please see
set_plain_rel_page_estimates().
0003-ZedStore.patch is an example of how zedstore uses the extended
ANALYZE API; I paste it here anyway, in case someone is interested.
Thanks for reviewing those patches.
> Ha, I believe you meant to say a "normal aggregate", because what is
> performed above Gather is no longer "grouping sets", right?
>
> The group key idea is clever in that it helps "discriminate" tuples by
> their grouping set id. I haven't completely thought this
> I am wondering whether a simple auto-updatable view can have a
> conditional INSTEAD OF UPDATE rule.
>
> Well, the decision reached in [1] was that we wouldn't allow that. We
> could decide to allow it now as a new feature enhancement, but it
> wouldn't be a back-patchable bug-fix, and to be honest
a99c42f291421572aef2:
- There is a catch if you try to use conditional rules for view
+ There is a catch if you try to use conditional rules for complex view
Does that mean we should support conditional rules for a simple view?
Regards,
Pengzhou Tang
fraction statistic of each column, and the planner uses it to estimate
the I/O cost based on the selected columns.
Thanks,
Pengzhou
From 8a8f6d14d1a1ddc0be35582d4a17af50ffce986a Mon Sep 17 00:00:00 2001
From: Pengzhou Tang
Date: Wed, 20 Nov 2019 06:42:37 -0500
Subject: [PATCH 1/3] ANALYZE tableam API c
Hi Hackers,
I hit an error when updating a view with conditional INSTEAD OF rules;
the steps to reproduce are listed below:
CREATE TABLE t1(a int, b int);
CREATE TABLE t2(a int, b int);
CREATE VIEW v1 AS SELECT * FROM t1 where b > 100;
INSERT INTO v1 VALUES(1, 110);
SELECT * FROM t1;
CREATE OR R
https://github.com/greenplum-db/postgres/tree/parallel_groupingsets_3
On Mon, Sep 30, 2019 at 5:41 PM Pengzhou Tang wrote:
> Hi Richard & Tomas:
>
> I followed the idea of the second approach to add a gset_id in the
> targetlist of the first stage of
> grouping se
Hi Hackers,
The table AM routine already provides two custom functions to fetch
sample blocks and sample tuples; however, the total number of blocks
ANALYZE can scan is still restricted to the number of physical blocks in
a table, and this doesn't work well for storages that organize blocks in
different ways.
Hi Richard & Tomas:
I followed the idea of the second approach: add a gset_id to the
targetlist of the first stage of grouping sets and use it to combine the
aggregates in the final stage. The gset_id stuff is still kept because
GROUPING() cannot uniquely identify a grouping set; grouping sets may co