Tom Lane wrote:
> Josh Berkus <[EMAIL PROTECTED]> writes:
> > For example, the following query is not possible to
> > "workaround" in PostgreSQL:
>
> > select teams_desc.team_id, team_name, team_code, notes,
> > min(teams_tree.treeno) as lnode, max(teams_tree.treeno) as rnode,
> > parent.team_id as parent_id, count(*)/2 as tl

Tom,
> I think you are leaping to conclusions about why there's a speed
> difference. Or maybe I'm too dumb to see how an index could be used
> to speed these min/max operations --- but I don't see that one would
> be useful. Certainly not an index on treeno alone. Would you care to
> explain e

Josh Berkus <[EMAIL PROTECTED]> writes:
> For example, the following query is not possible to
> "workaround" in PostgreSQL:
> select teams_desc.team_id, team_name, team_code, notes,
> min(teams_tree.treeno) as lnode, max(teams_tree.treeno) as rnode,
> parent.team_id as parent_id, count(*)/2 as tl

Tom,
> In the end, the only reasonable way to handle this kind of thing is
> to teach the query planner about it. Considering the small number
> of cases that are usefully optimizable (basically only MIN and MAX
> on a single table without any WHERE or GROUP clauses), and the ready
> availability
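The workarounds Tom alludes to here are, presumably, the well-known ORDER BY/LIMIT rewrite: instead of `min(col)`, ask for the first row in index order, which lets the planner stop after one index entry. A minimal sketch of the rewrite, run against sqlite3 purely so it is executable anywhere — the table and index names are invented for illustration, and the planner behavior described is old PostgreSQL's, not sqlite's:

```python
# Sketch (assumption): the classic min()/max() workaround discussed in this
# thread — rewrite the aggregate as ORDER BY ... LIMIT 1 so a btree index
# scan can stop at the first qualifying row. Names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE teams_tree (treeno INTEGER)")
conn.execute("CREATE INDEX treeno_idx ON teams_tree (treeno)")
conn.executemany("INSERT INTO teams_tree VALUES (?)",
                 [(n,) for n in (5, 1, 9, 3)])

# Aggregate form: old PostgreSQL releases answered this with a full scan.
agg_min, = conn.execute("SELECT min(treeno) FROM teams_tree").fetchone()

# Workaround form: equivalent result, answerable from one index entry.
idx_min, = conn.execute(
    "SELECT treeno FROM teams_tree WHERE treeno IS NOT NULL "
    "ORDER BY treeno ASC LIMIT 1").fetchone()

print(agg_min, idx_min)  # both 1
```

The same rewrite with `ORDER BY treeno DESC LIMIT 1` covers `max()`.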

On Sat, Feb 01, 2003 at 15:21:24 -0500,
Greg Stark <[EMAIL PROTECTED]> wrote:
> Tom Lane <[EMAIL PROTECTED]> writes:
>
> That just means you need some way for aggregates to declare which records they
> need. The only values that seem like they would be useful would be "first
> record" "last record"

Tom Lane <[EMAIL PROTECTED]> writes:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > Hmm...any chance, then, of giving aggregate functions a means of
> > asking which table(s) and column(s) the original query referred to so
> > that it could do proper optimization on its own?
>
> You can't usefully do that without altering the aggregate paradigm.

Kevin Brown <[EMAIL PROTECTED]> writes:
> Hmm...any chance, then, of giving aggregate functions a means of
> asking which table(s) and column(s) the original query referred to so
> that it could do proper optimization on its own?
You can't usefully do that without altering the aggregate paradigm.
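The paradigm Tom is referring to: an aggregate is defined by nothing more than an initial state, a per-row state-transition function, and a final function. The executor feeds it rows one at a time; it never sees the table, its indexes, or the query, which is why it has no vantage point from which to "do proper optimization on its own". A sketch of that contract in Python (PostgreSQL spells it CREATE AGGREGATE with sfunc/stype/finalfunc; this fold is only an illustration):

```python
# Sketch of the aggregate paradigm: init state + transition function +
# final function. The aggregate only ever updates its private state per
# row — it cannot ask for "the first index entry" or skip any rows.
def run_aggregate(rows, init, sfunc, finalfunc=lambda s: s):
    state = init
    for row in rows:               # executor feeds every row in turn
        state = sfunc(state, row)  # aggregate sees one value at a time
    return finalfunc(state)

# min() expressed as such a fold: correct in general only if it
# consumes every row, hence the sequential scan.
def min_sfunc(state, value):
    return value if state is None or value < state else state

print(run_aggregate([5, 1, 9, 3], None, min_sfunc))  # 1
```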

Tom Lane wrote:
> Sean Chittenden <[EMAIL PROTECTED]> writes:
> > Now, there are some obvious problems:
>
> You missed the real reason why this will never happen: it completely
> kills any prospect of concurrent updates. If transaction A has issued
> an update on some row, and gone and modified the relevant aggregate
> cache entries, what happens when trans

> > Now, there are some obvious problems:
>
> You missed the real reason why this will never happen: it completely
> kills any prospect of concurrent updates. If transaction A has
> issued an update on some row, and gone and modified the relevant
> aggregate cache entries, what happens when trans

Sean Chittenden <[EMAIL PROTECTED]> writes:
> Now, there are some obvious problems:
You missed the real reason why this will never happen: it completely
kills any prospect of concurrent updates. If transaction A has issued
an update on some row, and gone and modified the relevant aggregate
cache
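Tom's concurrency objection can be sketched mechanically: if every update to any row must also update a single shared aggregate-cache entry, then every writer contends on the lock protecting that entry, and otherwise-independent transactions serialize. A toy model in Python (the names are illustrative, not PostgreSQL internals, and a thread lock merely stands in for a row-level lock on the cache):

```python
# Sketch (assumption): a maintained aggregate cache forces all writers
# through one lock. Correctness requires the lock; the lock kills
# concurrency — the trade-off Tom describes.
import threading

table = []
agg_cache = {"count": 0}
cache_lock = threading.Lock()   # stands in for a lock on the cache row

def insert_row(value):
    table.append(value)
    with cache_lock:            # every concurrent writer funnels here
        agg_cache["count"] += 1

threads = [threading.Thread(target=insert_row, args=(n,))
           for n in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(agg_cache["count"])  # 100
```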

> > I have a table which is very large (~65K rows). I have
> > a column in it which is indexed, and I wish to use for
> > a join. I'm finding that I'm using a sequential scan
> > for this when selecting a MIN.
>
> Due to Postgres' system of extensible aggregates (i.e. you can write
> your own aggregates
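The extensibility being described — user-defined aggregates that the planner therefore cannot see inside — has a close stdlib analogue in sqlite3's `create_aggregate` hook (PostgreSQL's equivalent is CREATE AGGREGATE). Note that the custom aggregate, like any other, receives rows only through `step()`:

```python
# A user-defined min() via sqlite3's create_aggregate, analogous to the
# PostgreSQL extensible-aggregate machinery the thread mentions. The
# engine cannot know this opaque class is "just min", so it cannot
# substitute an index scan for the row-by-row step() calls.
import sqlite3

class Min:
    def __init__(self):
        self.state = None
    def step(self, value):               # called once per input row
        if value is not None and (self.state is None or value < self.state):
            self.state = value
    def finalize(self):                  # called once, after all rows
        return self.state

conn = sqlite3.connect(":memory:")
conn.create_aggregate("my_min", 1, Min)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(7,), (2,), (4,)])
print(conn.execute("SELECT my_min(x) FROM t").fetchone()[0])  # 2
```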