2009/9/2 Олег Царев <zabiva...@gmail.com>:
> After a week-long investigation I am now sure that my level of
> qualification is not enough to implement the "GROUPING SETS" task.
> I need documentation about the executor and the planner; I cannot
> understand how they work from the source code alone.
> There is a lot of code and many cases, but very little information on
> "what this is" and "how it works". Maybe I am just stupid.
GROUPING SETS is too hard a place to begin. The grouping planner is not
really readable code. That is the reason why I did GROUPING SETS over CTE,
and why I would like to share code with CTE.

regards
Pavel

> Sorry.
>
> 2009/8/14 Pavel Stehule <pavel.steh...@gmail.com>:
>> 2009/8/14 Олег Царев <zabiva...@gmail.com>:
>>> 2009/8/14 Hitoshi Harada <umi.tan...@gmail.com>:
>>>> 2009/8/14 Pavel Stehule <pavel.steh...@gmail.com>:
>>>>> 2009/8/13 Hitoshi Harada <umi.tan...@gmail.com>:
>>>>>> 2009/8/14 Pavel Stehule <pavel.steh...@gmail.com>:
>>>>>>> I preferred using CTE, because that was the shortest way to a
>>>>>>> prototype with fewer small bugs and full functionality.
>>>>>>
>>>>>> You could do it by query rewriting, but as you say the cleanest way
>>>>>> is a total refactoring of the existing nodeAgg. How easy that would
>>>>>> be to implement is not convincing.
>>>>>>
>>>>>
>>>>> I agree. I simply do not have the time and energy to do it. I would
>>>>> like to concentrate on finishing some plpgsql issues, and then I have
>>>>> to do some things other than PostgreSQL. There is a fully functional
>>>>> prototype, and everybody is welcome to continue this work.
>>>>>
>>>>
>>>> I see your situation. Actually, your prototype is in good enough shape
>>>> to be discussed either way. But since you have been focusing on this
>>>> feature, it would be better if you kept your eyes on it.
>>>>
>>>> So, Oleg, do you continue on this?
>>>>
>>>> Regards,
>>>>
>>>> --
>>>> Hitoshi Harada
>>>>
>>>
>>>> I'd imagine something like this:
>>>>
>>>> select a, b, count(*) from x group by rollup(a, b);
>>>>
>>>> PerGroup all = init_agg(), a = init_agg(), ab = init_agg();
>>>> /* rows assumed to arrive sorted by (a, b) */
>>>> while(row = fetch()){
>>>>     if(group_is_changed(ab, row)){
>>>>         result_ab = finalize_agg(ab);
>>>>         ab = init_agg();
>>>>     }
>>>>     if(group_is_changed(a, row)){
>>>>         result_a = finalize_agg(a);
>>>>         a = init_agg();
>>>>     }
>>>>     advance_agg(all, row);
>>>>     advance_agg(a, row);
>>>>     advance_agg(ab, row);
>>>> }
>>>> result_all = finalize_agg(all);
>>> Fun =) My implementation of rollup in the DBMS qd works just as you
>>> imagine it there! =)
>>> For the CUBE implementation we also take multiple sorts of the source,
>>> but that is hard to maintain (sorting inside GROUP BY is bloat).
>>> As a result we have a merge implementation of GROUP BY, ROLLUP and
>>> window functions with some common code - it is a way of grouping the
>>> source. A hash implementation that does each grouping in a separate
>>> hash table (with different keys) is very expensive (it needs a lot of
>>> memory for the keys).
>>> I hope to continue my work after the time trouble at my job is over =(
>>> (bad TPC-H performance)
>>>
>>
>> I think you are worrying too much about memory. Look at current
>> Postgres: any hash grouping is faster than sort grouping. Try it and
>> see. PostgreSQL isn't an embedded database, so using less memory is not
>> the main goal. The goal is to have features with clean, readable and
>> maintainable source code.
>>
>
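[Editor's note: for readers following the CTE approach mentioned above, the
rewrite idea can be sketched roughly as below. This is only an illustrative
sketch, not Pavel's actual patch; the table x(a, b) is taken from Hitoshi's
ROLLUP example, and the rewrite ignores GROUPING()/NULL disambiguation.]

-- Illustrative sketch only: one possible CTE-based rewrite of
--   select a, b, count(*) from x group by rollup(a, b);
-- The source is scanned once into the CTE, and each grouping level
-- is aggregated from it and combined with UNION ALL.
WITH src AS (
    SELECT a, b FROM x
)
SELECT a,    b,    count(*) FROM src GROUP BY a, b
UNION ALL
SELECT a,    NULL, count(*) FROM src GROUP BY a
UNION ALL
SELECT NULL, NULL, count(*) FROM src;

[A native executor implementation, as in the pseudocode quoted above, would
instead advance all grouping levels in a single pass over sorted input,
rather than re-aggregating the CTE once per level.]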