On 15 January 2016 at 00:19, Julien Rouhaud <julien.rouh...@dalibo.com> wrote:
>> + * Technically we could look at UNIQUE indexes too, however we'd also
>> + * have to ensure that each column of the unique index had a NOT NULL
>
> s/had/has/
>
>> + * constraint, however since NOT NULL constraints currently are don't
>
> s/are //

Both fixed. Thanks.

>> + /*
>> +  * If we found any surplus Vars in the GROUP BY clause, then we'll build
>> +  * a new GROUP BY clause without these surplus Vars.
>> +  */
>> + if (anysurplus)
>> + {
>> +     List *new_groupby = NIL;
>> +
>> +     foreach (lc, root->parse->groupClause)
>> +     {
>> +         SortGroupClause *sgc = (SortGroupClause *) lfirst(lc);
>> +         TargetEntry *tle;
>> +         Var *var;
>> +
>> +         tle = get_sortgroupclause_tle(sgc, root->parse->targetList);
>> +         var = (Var *) tle->expr;
>> +
>> +         if (!IsA(var, Var))
>> +             continue;
>> + [...]
>>
>> if var isn't a Var, it needs to be added in new_groupby.
>
>> Yes you're right, well, at least I've written enough code to ensure that
>> it's not needed.
>
> Oh yes, I realize that now.

I meant to say "I've not written enough code" ...

>> Technically we could look inside non-Vars and check if the Expr is just
>> made up of Vars from a single relation, and then we could also make that
>> surplus if we find other Vars which make up the table's primary key. I
>> didn't make these changes as I thought it was a less likely scenario. It
>> wouldn't be too much extra code to add, however. I've gone and added an
>> XXX comment to explain that there might be future optimisation
>> opportunities around this.
>
> Agreed.

>> I've attached an updated patch.
>
> + /* don't try anything unless there's two Vars */
> + if (varlist == NULL || list_length(varlist) < 2)
> +     continue;
>
> To be perfectly correct, the comment should say "at least two Vars".

Changed, per the discussion between you and Geoff.

> Except the small remaining typos, this patch looks very fine to me. I
> flag it as ready for committer.

Many thanks for your followup review. I've attached an updated patch to
address what you highlighted above.

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
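
PS, for anyone reading along without the patch handy: the upshot of the
"continue" discussion above is that the rebuilt GROUP BY must keep every
entry except the plain Vars proven redundant by the primary key; non-Var
expressions are never dropped. Roughly something along these lines -- the
function name and the surplus_vars parameter are only illustrative here and
are not lifted from the patch:

    /*
     * Sketch only: rebuild the GROUP BY clause, dropping Vars judged
     * removable but keeping everything else, non-Var expressions included.
     * "surplus_vars" (a List of removable Var nodes) and the function name
     * are hypothetical, not taken from the actual patch.
     */
    #include "postgres.h"
    #include "nodes/relation.h"     /* PlannerInfo */
    #include "optimizer/tlist.h"    /* get_sortgroupclause_tle() */

    static void
    remove_surplus_groupby_items(PlannerInfo *root, List *surplus_vars)
    {
        List       *new_groupby = NIL;
        ListCell   *lc;

        foreach(lc, root->parse->groupClause)
        {
            SortGroupClause *sgc = (SortGroupClause *) lfirst(lc);
            TargetEntry *tle = get_sortgroupclause_tle(sgc,
                                                       root->parse->targetList);

            /* only plain Vars found to be surplus may be dropped ... */
            if (IsA(tle->expr, Var) && list_member(surplus_vars, tle->expr))
                continue;

            /* ... everything else is carried over to the new clause */
            new_groupby = lappend(new_groupby, sgc);
        }

        root->parse->groupClause = new_groupby;
    }

The real patch may of course structure this differently; the sketch is just
to make the intent of the discussion above concrete.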
Attachment: prune_group_by_clause_59f15cf_2016-01-15.patch