Re: [PERFORM] How to investigate slow insert problem

2013-08-20 Thread Rural Hunter
Hi Jeff, Thanks a lot for such a detailed guide! On 2013/8/21 8:24, Jeff Janes wrote: OK, that certainly does sound like network problems and not disk contention. But what I don't see is why it would be listed as "active" in pg_stat_activity. If it is blocked on a network connection, I would

[PERFORM] How to investigate slow insert problem

2013-08-20 Thread Jeff Janes
On Mon, Aug 19, 2013 at 10:30 PM, Rural Hunter > wrote: > On 2013/8/20 12:34, Jeff Janes wrote: > > > How long had they been hanging there? It makes a big difference whether > > there are several hanging there at one moment, but a few milliseconds later > > there are several different ones, versus th
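Jeff's question above can be checked mechanically: take two snapshots of pg_stat_activity a short interval apart and see whether the same backends are still running the same statements. A minimal sketch of that comparison, assuming each snapshot has already been fetched as a list of (pid, query) tuples (the fetching itself would need a live connection and is omitted here):

```python
# Sketch: distinguish truly stuck statements from fast turnover, per the
# thread's question. Assumes snapshots are lists of (pid, query) tuples
# sampled from pg_stat_activity where state = 'active'.
def persistent_active(snapshot_a, snapshot_b):
    """Return (pid, query) pairs that appear in BOTH snapshots --
    candidates for a genuine hang rather than normal churn."""
    return sorted(set(snapshot_a) & set(snapshot_b))

# Hypothetical sample data for illustration:
a = [(101, "INSERT INTO t ..."), (102, "SELECT 1")]
b = [(101, "INSERT INTO t ..."), (103, "SELECT 2")]
print(persistent_active(a, b))  # pid 101 persists across snapshots
```

If the intersection is empty across repeated samples, the "several active backends" are just fast turnover; if the same pairs keep reappearing, those sessions are genuinely stuck.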

Re: [PERFORM] queries with DISTINCT / GROUP BY giving different plans

2013-08-20 Thread Tom Lane
Tomas Vondra writes: > Not quite sure how to parse this (not a native speaker here, sorry). > Does that mean we want to keep it as it is now (because fixing it would > cause even worse errors with low estimates)? Or do we want to fix > hashed_distinct so that it behaves like hashed_grouping? We n

Re: [PERFORM] queries with DISTINCT / GROUP BY giving different plans

2013-08-20 Thread Tomas Vondra
On 20.8.2013 23:02, Tom Lane wrote: > Tomas Vondra writes: > >> Anyway, I still don't understand why the same logic around >> hash_agg_entry_size should not apply to choose_hashed_grouping as >> well? Well, it would make it slower in this particular corner case, >> but wouldn't it be more correc

Re: [PERFORM] queries with DISTINCT / GROUP BY giving different plans

2013-08-20 Thread Tom Lane
Tomas Vondra writes: > On 20.8.2013 18:24, Tom Lane wrote: >> No, I don't think so. I'm pretty sure the reason >> choose_hashed_distinct is like that is that I subconsciously assumed >> hash_agg_entry_size would produce zero for numAggs = 0; but in fact >> it does not and should not, because ther

Re: [PERFORM] Can query planner prefer a JOIN over a high-cost Function?

2013-08-20 Thread Tom Lane
David McNett writes: > Is there any way for me to influence the query planner so that it can > know that the JOIN is far less expensive than the function for planning? > The COST attribute on the function appears to have no effect. I think what you're missing is an index on examples.code_id, whic
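Tom's suggestion rests on a general principle: once the join side is cheap (here, via an index on examples.code_id), the planner can apply the cheap, selective condition first so the expensive function runs on far fewer rows. A language-neutral illustration of that ordering effect (pure Python, not PostgreSQL code; the threshold and function are hypothetical):

```python
# Illustration of predicate ordering: evaluating a cheap filter before an
# expensive function limits how often the expensive function is called.
calls = {"expensive": 0}

def expensive_fn(x):
    """Stand-in for a high-COST user-defined function."""
    calls["expensive"] += 1
    return x % 7 == 0

rows = list(range(1000))
# Cheap condition first; short-circuit `and` skips expensive_fn for the rest.
matched = [r for r in rows if r < 50 and expensive_fn(r)]
print(len(matched), calls["expensive"])  # expensive_fn ran on only 50 rows
```

This is the behavior David wants the planner to choose; the index makes the cheap path cheap enough that the planner picks it on its own.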

[PERFORM] Can query planner prefer a JOIN over a high-cost Function?

2013-08-20 Thread David McNett
I've got a normalized data table from which I'm trying to select a small subset of rows determined by both traditional filtering as well as the result of a cpu-expensive function that I've defined. What I'm seeing is that the query planner always attempts to defer the de-normalizing JOIN over the

Re: [PERFORM] queries with DISTINCT / GROUP BY giving different plans

2013-08-20 Thread Tomas Vondra
On 20.8.2013 18:24, Tom Lane wrote: > Tomas Vondra writes: >> I've managed to get the data to a different machine, and I've spent >> some time on debugging it. > > Great, thanks for looking into it! > >> It seems that the difference is in evaluating hashentrysize [ >> choose_hashed_distinct omit

Re: [PERFORM] queries with DISTINCT / GROUP BY giving different plans

2013-08-20 Thread Tom Lane
Tomas Vondra writes: > I've managed to get the data to a different machine, and I've spent some > time on debugging it. Great, thanks for looking into it! > It seems that the difference is in evaluating hashentrysize > [ choose_hashed_distinct omits hash_agg_entry_size() ] > but the hashentrysiz

Re: [PERFORM] queries with DISTINCT / GROUP BY giving different plans

2013-08-20 Thread Tomas Vondra
On 16.8.2013 21:36, Tomas Vondra wrote: > > Now, the difference between 24 and 56 is caused by hash_agg_entry_size. > It's called with numAggs=0 but returns 32. I'm wondering if it should > return 0 in such cases, i.e. something like this: > > Size > hash_agg_entry_size(int numAggs) >
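The arithmetic behind the discrepancy in this sub-thread can be laid out explicitly. The numbers below are taken from the post itself (the real logic lives in PostgreSQL's choose_hashed_distinct / choose_hashed_grouping and hash_agg_entry_size; this is only a sketch of the sizing difference being discussed):

```python
# Sketch of the per-entry size estimates discussed in the thread.
# choose_hashed_distinct used a base per-entry size of 24 bytes, while
# choose_hashed_grouping additionally added hash_agg_entry_size(numAggs),
# which returns 32 even when numAggs = 0 -- hence 24 vs 56.
base_entry_size = 24      # per-hash-entry estimate used by both paths
agg_overhead = 32         # hash_agg_entry_size(0), as observed in the post
grouping_estimate = base_entry_size + agg_overhead
distinct_estimate = base_entry_size
print(distinct_estimate, grouping_estimate)  # 24 56
```

The question in the thread is whether hash_agg_entry_size should return 0 for numAggs = 0, which would make the two estimates agree.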

Re: [PERFORM] DBT5 execution failed due to undefined symbol: PQescapeLiteral

2013-08-20 Thread amulsul
Hi Laurenz Albe, Thanks for the reply. Your guess was correct :). Exporting LD_LIBRARY_PATH to the postgres9.2 lib directory worked for me. Thanks again for your help. Regards, Amul Sul -- View this message in context: http://postgresql.1045698.n5.nabble.com/DBT5-execution-failed-due-to-unde
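The fix Amul describes is to prepend the PostgreSQL 9.2 lib directory (where libpq with PQescapeLiteral lives) to LD_LIBRARY_PATH before running DBT5. A minimal sketch of that environment setup, done here in Python for illustration; the install prefix below is hypothetical and must be adjusted to the actual PostgreSQL 9.2 location:

```python
# Sketch: prepend the PostgreSQL 9.2 lib directory to LD_LIBRARY_PATH so
# the dynamic loader resolves PQescapeLiteral from the 9.2 libpq.
# "/usr/local/pgsql-9.2/lib" is a hypothetical path -- adjust it.
import os

pg_lib = "/usr/local/pgsql-9.2/lib"
old = os.environ.get("LD_LIBRARY_PATH", "")
os.environ["LD_LIBRARY_PATH"] = pg_lib + (":" + old if old else "")
print(os.environ["LD_LIBRARY_PATH"])
```

In practice this is usually an `export LD_LIBRARY_PATH=...` in the shell that launches DBT5; the point is simply that the 9.2 directory must come first on the search path.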