On Mon, Dec 12, 2016 at 6:14 PM, Andres Freund wrote:
>
>
> For Q1 I think the bigger win is JITing the transition function
> invocation in advance_aggregates/transition_function - that's IIRC where
> the biggest bottleneck lies.
>
Yeah, we bundle the agg core into our expr work... no point otherwise…

Andres,
> dev (no JITing):
> Time: 30343.532 ms
>
> dev (JITing):
> SET jit_tuple_deforming = on;
> SET jit_expressions = true;
> Time: 24439.803 ms
FYI, a ~20% improvement for TPC-H Q1 is consistent with what we find when
we JIT only expressions.
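
For concreteness, here is a standalone C toy (illustrative names, not
PostgreSQL source) of what JITing the transition call buys: the generic
executor loop pays an indirect function call per input row (loosely what
advance_aggregates does), while the JIT-specialized loop has that call
inlined away.

#include <stdint.h>
#include <stdio.h>

typedef int64_t (*transfn)(int64_t state, int64_t value);

/* Stand-in for an aggregate transition function such as int8 sum. */
static int64_t int8_sum(int64_t state, int64_t value)
{
    return state + value;
}

/* Generic loop: one indirect call per input row. */
static int64_t agg_generic(transfn fn, const int64_t *vals, int n)
{
    int64_t state = 0;
    for (int i = 0; i < n; i++)
        state = fn(state, vals[i]);
    return state;
}

/* What the JIT effectively emits: the transition step is inlined. */
static int64_t agg_jitted(const int64_t *vals, int n)
{
    int64_t state = 0;
    for (int i = 0; i < n; i++)
        state += vals[i];
    return state;
}

int main(void)
{
    int64_t vals[] = {1, 2, 3, 4};
    printf("%lld %lld\n",
           (long long) agg_generic(int8_sum, vals, 4),
           (long long) agg_jitted(vals, 4));
    return 0;
}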
Cheers,
-cktan
Hi Hackers,
I am looking for some help in creating LEFT/RIGHT/FULL sort-merge-join plans.
Does anyone have a complete and reproducible script that would generate
those plans? Can I find one in the regression test suite? If not, how do you
exercise those code paths for QA purposes?
Thanks!
-cktan
On 14 June 2015 at 23:51, Tomas Vondra wrote:
>>> The current state, where HashAgg just blows up the memory, is just not
>>> reasonable, and we need to track the memory to fix that problem.
>>
>> Meh. HashAgg could track its memory usage without loading the entire
>> system with a penalty.
>>
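
A toy C sketch of the local-accounting idea in that quote (made-up names,
not the actual patch under discussion): the hash aggregate counts the
bytes it allocates itself, so a spill decision needs no system-wide
instrumentation of every allocation.

#include <stdio.h>
#include <stdlib.h>

typedef struct HashAgg {
    size_t mem_used;    /* bytes this node has allocated */
    size_t mem_limit;   /* work_mem-style budget */
} HashAgg;

/* All hash-table allocations go through here: one add, no global hooks. */
static void *agg_alloc(HashAgg *agg, size_t size)
{
    agg->mem_used += size;
    return malloc(size);
}

static int agg_over_limit(const HashAgg *agg)
{
    return agg->mem_used > agg->mem_limit;  /* time to spill to disk */
}

int main(void)
{
    HashAgg agg = {0, 64};
    void *a = agg_alloc(&agg, 48);
    void *b = agg_alloc(&agg, 48);
    printf("used=%zu over=%d\n", agg.mem_used, agg_over_limit(&agg));
    free(a);
    free(b);
    return 0;
}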
http://vldb.org/pvldb/vol5/p1790_andrewlamb_vldb2012.pdf
In sketch:
There is the concept of a Write-Optimized Store (WOS) and a
Read-Optimized Store (ROS), and a TupleMover that moves records from WOS to
ROS (somewhat like vacuum), and from ROS to WOS for updates. It seems to
me that heap is naturally…
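
A toy C model of that flow (illustrative names only, not the paper's
code): inserts append cheaply to the WOS, the TupleMover drains it into
the ROS in batches, and an update deletes from the ROS and re-inserts the
new version into the WOS.

#include <stdio.h>

#define CAP 8

typedef struct Store {
    int rows[CAP];
    int n;
} Store;

/* Cheap row-at-a-time append into the write-optimized store. */
static void insert_wos(Store *wos, int row)
{
    wos->rows[wos->n++] = row;
}

/* TupleMover: drain the WOS into the ROS in one batch (the real ROS
 * would re-encode the batch columnar and sorted). */
static void tuple_mover(Store *wos, Store *ros)
{
    for (int i = 0; i < wos->n; i++)
        ros->rows[ros->n++] = wos->rows[i];
    wos->n = 0;
}

/* Update: delete the old version from the ROS, re-insert into the WOS. */
static void update_row(Store *wos, Store *ros, int idx, int newval)
{
    ros->rows[idx] = ros->rows[--ros->n];
    insert_wos(wos, newval);
}

int main(void)
{
    Store wos = {{0}, 0}, ros = {{0}, 0};
    insert_wos(&wos, 1);
    insert_wos(&wos, 2);
    tuple_mover(&wos, &ros);
    update_row(&wos, &ros, 0, 3);
    printf("wos=%d rows, ros=%d rows\n", wos.n, ros.n);
    return 0;
}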
You're right. I misread the problem description.
On Tue, May 26, 2015 at 3:13 AM, Petr Jelinek wrote:
> On 26/05/15 11:59, CK Tan wrote:
>
>> It has to do with the implementation of slot_getattr, which tries to do
>> the deform on-demand lazily.
>>
>> if you do…
It has to do with the implementation of slot_getattr, which tries to do the
deform on-demand, lazily.
If you do SELECT a, b, c, the execution would do slot_getattr(1) and deform
a, then slot_getattr(2), which re-parses the tuple to deform b, and
finally slot_getattr(3), which parses the tuple yet again to deform c.
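
A standalone C toy of the behavior described (not the real executor,
where variable-width attributes are what force the walk): fetching a,
then b, then c costs 1 + 2 + 3 = 6 attribute-decode steps instead of 3,
which is why deforming all needed columns up front pays off.

#include <stdio.h>

#define NATTS 3

typedef struct TupleSlot {
    int raw[NATTS];     /* stand-in for the packed on-disk tuple */
} TupleSlot;

static int walk_steps;  /* total attribute-decode steps performed */

/* On-demand fetch of attribute attnum (1-based): re-parses the tuple
 * from the first attribute every time, as described above. */
static int slot_getattr(TupleSlot *slot, int attnum)
{
    int value = 0;
    for (int i = 0; i < attnum; i++) {
        value = slot->raw[i];
        walk_steps++;
    }
    return value;
}

int main(void)
{
    TupleSlot slot = {{10, 20, 30}};
    int a = slot_getattr(&slot, 1);     /* 1 step  */
    int b = slot_getattr(&slot, 2);     /* 2 steps */
    int c = slot_getattr(&slot, 3);     /* 3 steps */
    printf("a=%d b=%d c=%d walk_steps=%d\n", a, b, c, walk_steps);
    return 0;
}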
>> …cation to data types is easy to understand -- money and double
>> types are faster than Numeric (and no one on this planet has a bank
>> account that overflows the money type, not any time soon)."[1] And
>> "Replaced NUMERIC fields representing currency with MONEY"[2].
>
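
A quick sanity check of the overflow claim above, in C: money is a
64-bit integer count of the smallest currency unit, so assuming the
usual two fractional digits, the largest representable balance is about
92 quadrillion dollars.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t max_cents = INT64_MAX;      /* 9223372036854775807 */
    printf("max money value: %lld.%02lld\n",
           (long long) (max_cents / 100),
           (long long) (max_cents % 100));
    return 0;
}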
Hi Mark,
Vitesse DB won't be open-sourced; otherwise, it would have been a contrib
module for Postgres. We should take further discussion off this list.
People should contact me directly if there are any questions.
Thanks,
ck...@vitessedata.com
On Fri, Oct 17, 2014 at 10:55 PM, Mark Kirkwood
wrote:
>
…Postgres -- the implementation needs to be as
non-invasive as possible.
Regards,
-cktan
On Fri, Oct 17, 2014 at 8:40 PM, David Gould wrote:
> On Fri, 17 Oct 2014 13:12:27 -0400
> Tom Lane wrote:
>
>> CK Tan writes:
>> > The bigint sum, avg, count case in the ex…
Happy to contribute to that decision :-)
On Fri, Oct 17, 2014 at 11:35 AM, Tom Lane wrote:
> Andres Freund writes:
>> On 2014-10-17 13:12:27 -0400, Tom Lane wrote:
>>> Well, that's pretty much cheating: it's too hard to disentangle what's
>>> coming from JIT vs. what's coming from using a different…
…would be ideal.
Thanks,
-cktan
> On Oct 17, 2014, at 6:43 AM, Merlin Moncure wrote:
>
>> On Fri, Oct 17, 2014 at 8:14 AM, Merlin Moncure wrote:
>>> On Fri, Oct 17, 2014 at 7:32 AM, CK Tan wrote:
>>>> Hi everyone,
>>>>
>>>> Vitesse DB 9.3.5.S is Post…
…by email to ck...@vitessedata.com.
Thank you for your help.
--
CK Tan
Vitesse Data, Inc.
…CPU at all. If we could predetermine that there are no
triggers for a relation, inserts into that relation could then
follow a different path that inserts N tuples at a time.
Regards,
-cktan
On May 13, 2007, at 4:54 PM, Tom Lane wrote:
"CK Tan" <[EMAIL PROTECTED]> writes:
Hi All,
COPY/INSERT are also bottlenecked on record at a time insertion into
heap, and in checking for pre-insert trigger, post-insert trigger and
constraints.
To speed things up, we really need to special case insertions without
triggers and constraints, [probably allow for unique constr
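
A toy C sketch of the proposed dispatch (names are made up for
illustration): route inserts through a multi-tuple path when the
relation has nothing to fire per row.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct Relation {
    bool has_triggers;
    bool has_constraints;
} Relation;

typedef struct Tuple { int dummy; } Tuple;

/* Per-row path: fires triggers and checks constraints for each tuple. */
static void heap_insert_one(Relation *rel, Tuple *tup)
{
    (void) rel; (void) tup;
}

/* Fast path: place N tuples onto pages in one pass. */
static void heap_insert_batch(Relation *rel, Tuple *tups, size_t n)
{
    (void) rel; (void) tups; (void) n;
}

static void copy_insert(Relation *rel, Tuple *tups, size_t n)
{
    if (!rel->has_triggers && !rel->has_constraints)
        heap_insert_batch(rel, tups, n);    /* N tuples at a time */
    else
        for (size_t i = 0; i < n; i++)      /* must stay row-at-a-time */
            heap_insert_one(rel, &tups[i]);
}

int main(void)
{
    Relation plain = {false, false};
    Tuple tups[4] = {{0}};
    copy_insert(&plain, tups, 4);
    puts("dispatched to the batch path");
    return 0;
}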
Sorry, a 16 x 8K page ring is too small indeed. The reason we selected 16
is that Greenplum DB runs on a 32K page size, so we are indeed
reading 128K at a time. The number of pages in the ring should be made
relative to the page size, so you achieve 128K per read.
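
That is, 16 pages x 8K = 128K, and at a 32K page size only 4 pages are
needed. A toy C version of the sizing rule (BLCKSZ is PostgreSQL's
compile-time page size; the 128K target is from the numbers above):

#include <stdio.h>

#ifndef BLCKSZ
#define BLCKSZ 8192                     /* stock PostgreSQL; 32768 in Greenplum */
#endif

#define RING_READ_BYTES (128 * 1024)    /* desired bytes per ring read */

int main(void)
{
    int ring_pages = RING_READ_BYTES / BLCKSZ;  /* 16 at 8K, 4 at 32K */
    printf("%d pages of %d bytes per 128K read\n", ring_pages, BLCKSZ);
    return 0;
}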
Also agree that KillAndReadBuffer could…
The patch has no effect on scans that do updates. The
KillAndReadBuffer routine does not force out a buffer if the dirty
bit is set. So updated pages revert to the current performance
characteristics.
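
A toy C sketch of that rule (illustrative names, not the actual patch):
the ring slot is recycled only when clean, so update-heavy scans fall
back to the existing behavior.

#include <stdbool.h>
#include <stdio.h>

typedef struct Buffer {
    int  page;
    bool dirty;
} Buffer;

/* Evict-and-reuse the ring buffer only if its dirty bit is clear. */
static bool kill_and_read_buffer(Buffer *buf, int new_page)
{
    if (buf->dirty)
        return false;       /* leave it to the normal buffer manager */
    buf->page = new_page;   /* drop the old page, read the new one */
    return true;
}

int main(void)
{
    Buffer clean = {1, false};
    Buffer dirty = {2, true};
    printf("clean recycled: %d\n", kill_and_read_buffer(&clean, 10));
    printf("dirty recycled: %d\n", kill_and_read_buffer(&dirty, 11));
    return 0;
}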
-cktan
GreenPlum, Inc.
On May 10, 2007, at 5:22 AM, Heikki Linnakangas wrote:
Zeugswett…
Hi,
In reference to the seq scans roadmap, I have just submitted a patch
that addresses some of the concerns.
The patch does this (a sketch of the size policy follows the list):
1. for a small relation (smaller than 60% of the buffer pool), use the
current logic
2. for a big relation:
- use a ring buffer in the heap scan
- pin firs…
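
A toy C sketch of the size policy above (made-up constants, not the
actual patch):

#include <stdbool.h>
#include <stdio.h>

#define NBUFFERS       1000     /* shared buffer pool size, in pages */
#define SMALL_FRACTION 0.60     /* threshold from point 1 above */

/* Big relations get the ring buffer; small ones use the current logic. */
static bool use_ring_buffer(int rel_pages)
{
    return rel_pages > (int) (NBUFFERS * SMALL_FRACTION);
}

int main(void)
{
    printf("500-page relation:  ring=%d\n", use_ring_buffer(500));
    printf("5000-page relation: ring=%d\n", use_ring_buffer(5000));
    return 0;
}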