On Tue, Mar 03, 2020 at 08:46:24AM -0800, Jeff Davis wrote:
> On Tue, 2020-03-03 at 09:49 +0800, Adam Lee wrote:
> > Master: 12729ms 12970ms 12999ms
> > With my patch(a pointer): 12965ms 13273ms 13116ms
> > With your patch(flexible array): 12906ms 12991ms 13043ms
>
>
On Fri, Feb 28, 2020 at 12:38:55PM -0800, Jeff Davis wrote:
> On Fri, 2020-02-28 at 14:16 +0800, Adam Lee wrote:
> > I noticed another difference: I was using palloc0(), which could be
> > one of the reasons, but I'm not sure.
>
> I changed the palloc0()'s in you
ynamically for the upcoming
> Hash Aggregation work[0]. HashAgg doesn't know in advance how many
> tapes it will need, though only a limited number are actually active at
> a time.
>
> This was proposed and originally written by Adam Lee[1] (extract only
> the changes to logt
when this panic
happens? If so, it's the same issue as "5) nbucket == 0", which passes a
zero size to the allocator when creating that ends-up-with-0x18 hash table.
Sorry, my testing env is acting weird right now; I haven't reproduced it yet.
--
Adam Lee
ould say just "grouping"?
As I see it, for each tuple it traverses all the hash sets, fills the
hash table, and spills if needed.
The segfault is probably related to this and MixedAggregate, I'm looking
into it.
--
Adam Lee
Hi,
I get the error `can not find file ""` when I hack the bgworkers; the
root cause is that a zero-initialized bgworker was registered. It's not
that interesting, but I'd like to refine the error message a bit.
Thanks.
--
Adam Lee
From 5f78d92b2c2b9ed0b3cbfdfe09e1461fbe05196d M
er tracking free space for HashAgg at all" you mentioned in the last
mail, I suppose these two options will lose the disk-space-saving
benefit, since some blocks are not reusable then?
--
Adam Lee
6.502 ms
(6 rows)
```
Even so, I'm not sure which API is better, because we should avoid
respilling as much as we can in the planner, and hash join uses the
bare BufFile.
Attached is my hacky and probably-not-robust diff for your reference.
--
Adam Lee
diff --git a/src/backend/exe
number of groups but a very small
number of tuples in a single group, like in the test you did. It would
be a challenge.
BTW, Jeff, Greenplum has a test for hash agg spill; I modified it a
little to check how many batches a query uses. It's attached, not sure
if it would help.
--
Adam Lee
create schem
to be sure to get the
> right attributes.
>
> Regards,
> Jeff Davis
Melanie and I tried this and have a patch that passes installcheck. The
way we verify it is by composing a wide table with long unnecessary text
columns, then checking the size it writes on every iteration.
Please check ou
On Wed, Dec 04, 2019 at 06:55:43PM -0800, Jeff Davis wrote:
>
> Thanks very much for a great review! I've attached a new patch.
Hi,
About the `TODO: project needed attributes only` in your patch, when
would the input tuple contain columns not needed? It seems like anything
you can project has to
Hi, hackers
As the $Subject says, does anyone have one? I'd like to refer to it and
write an example for people who are also looking for the documentation.
Thanks.
--
Adam Lee
> - Heikki
Yes.
My next thought is to call unlikely() here, but we don't have it...
https://www.postgresql.org/message-id/CABRT9RC-AUuQL6txxsoOkLxjK1iTpyexpbizRF4Zxny1GXASGg%40mail.gmail.com
--
Adam Lee
https://github.com/greenplum-db/gpdb/issues/8262
I filed a bug report with gcc; they think it's expected:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91395
Since newer gcc versions think they're supposed to report the warning,
do we need to make these two variables volatile? Or have I missed s
k I/O could be downsides.
Greenplum spills all the groups by writing out the partial aggregate
states, resetting the memory context, processing the incoming tuples to
build a new in-memory hash table, then reloading and combining the
spilled partial states at the end. How does this sound?
--
Adam Lee
On Tue, Dec 26, 2017 at 11:48:58AM -0800, Robert Haas wrote:
> On Thu, Dec 21, 2017 at 10:10 PM, Adam Lee wrote:
> > I have an issue that COPY from a FIFO, which has no writers, could not be
> > canceled, because COPY invokes AllocateFile() -> fopen() -> blocking open().
>
r instance, just doesn't seem
right.
My plan is to write a new function that opens FIFOs non-blocking just
for COPY and doesn't wait for writers. What do you think?
--
Adam Lee