Re: [PERFORM] slow query performance

2010-06-09 Thread Anj Adu
The plan is unaltered. There is a separate index on theDate as well as one on node_id; I have not specifically disabled sequential scans. This query performs much better on 8.1.9 on a similar-sized table (although random_page_cost=4 on 8.1.9 and 2 on 8.4.0). On Wed, Jun 9, 2010 at 7:55 PM,
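
One quick sanity check (a sketch, not from the thread; the query itself was never quoted in the archive, so the table and predicate below are hypothetical) is to re-plan with the 8.1.9 cost setting:

    -- Match the 8.1.9 planner setting for this session, then re-check the plan.
    SET random_page_cost = 4;
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM some_fact_table          -- hypothetical; the real query was not shown
    WHERE thedate >= '2010-05-01' AND node_id = 42;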

Re: [PERFORM] slow query performance

2010-06-09 Thread Tom Lane
Robert Haas writes: > On Thu, Jun 3, 2010 at 4:37 PM, Anj Adu wrote: >> Link to plan >> >> http://explain.depesz.com/s/kHa > Your problem is likely related to the line that's showing up in red: > Index Scan using dev4_act_dy_fact_2010_05_t3_thedate on > dev4_act_dy_fact_2010_05_t3 a (cost=0.00

Re: [PERFORM] slow query performance

2010-06-09 Thread Robert Haas
On Thu, Jun 3, 2010 at 4:37 PM, Anj Adu wrote: > Link to plan > > http://explain.depesz.com/s/kHa Your problem is likely related to the line that's showing up in red: Index Scan using dev4_act_dy_fact_2010_05_t3_thedate on dev4_act_dy_fact_2010_05_t3 a (cost=0.00..94041.89 rows=204276 width=60)
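
explain.depesz.com highlights the plan node consuming the largest share of runtime. A minimal sketch of timing that node in isolation (the table name is taken from the quoted plan; the date predicate is an assumption, since the full query was truncated):

    -- Time the flagged index scan by itself (predicate values are assumed):
    EXPLAIN ANALYZE
    SELECT *
    FROM dev4_act_dy_fact_2010_05_t3 a
    WHERE a.thedate >= '2010-05-01' AND a.thedate < '2010-06-01';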

Re: [PERFORM] Large (almost 50%!) performance drop after upgrading to 8.4.4?

2010-06-09 Thread Robert Haas
On Wed, Jun 9, 2010 at 6:56 AM, Max Williams wrote: > Any input? I can reproduce these numbers consistently. If you need more > information then just let me know. By the way, I am a new postgresql user so > my experience is limited. Maybe different compile options? If we'd really slowed things d
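
To compare builds, version() reports the compiler used for each server, so a quick check on both installations is:

    -- Run on both the 8.4.3 and 8.4.4 servers; compare the "compiled by" portion:
    SELECT version();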

Re: [PERFORM] requested shared memory size overflows size_t

2010-06-09 Thread Robert Haas
On Wed, Jun 2, 2010 at 9:26 PM, Bob Lunney wrote: > Your other option, of course, is a nice 64-bit linux variant, which won't > have this problem at all. Although, even there, I think I've heard that after 10GB you don't get much benefit from raising it further. Not sure if that's accurate or n
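
For reference, the setting in question is shared_buffers; a 64-bit build lifts the size_t ceiling, though the reported gains taper off. Checking the current value is straightforward:

    -- Current shared_buffers allocation (the ~10GB figure above is hearsay in
    -- the thread, not a hard limit):
    SHOW shared_buffers;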

Re: [PERFORM] No hash join across partitioned tables?

2010-06-09 Thread Tom Lane
Robert Haas writes: > In going back through emails I had marked as possibly needing another > look before 9.0 is released, I came across this issue again. As I > understand it, analyze (or analyse) now collects statistics for both > the parent individually, and for the parent and its children tog

Re: [PERFORM] No hash join across partitioned tables?

2010-06-09 Thread Robert Haas
On Tue, Mar 2, 2010 at 12:23 PM, Tom Lane wrote: > Robert Haas writes: >> On Thu, Feb 25, 2010 at 7:03 PM, Tom Lane wrote: >>> Partially.  There are stats now but autovacuum is not bright about >>> when to update them. > >> Is that something you're planning to fix for 9.0?  If not, we at least >
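
Until autovacuum learns when to refresh these statistics, a manual pass appears to be the workaround; a minimal sketch (the table name is hypothetical):

    -- ANALYZE on the parent gathers both the parent-only statistics and the
    -- inheritance-wide (parent plus children) statistics discussed above:
    ANALYZE parent_table;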

Re: [PERFORM] Large (almost 50%!) performance drop after upgrading to 8.4.4?

2010-06-09 Thread Pierre C
Can you give the config params for those: fsync, synchronous_commit, and wal_sync_method? Also, some "vmstat 1" output during the runs would be interesting.
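
A sketch of one way to report those three values in a single query (pg_settings is a standard system view):

    -- The WAL/sync-related settings asked about above:
    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('fsync', 'synchronous_commit', 'wal_sync_method');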

[PERFORM] Large (almost 50%!) performance drop after upgrading to 8.4.4?

2010-06-09 Thread Max Williams
Hi, I was doing some benchmarking while changing configuration options to try to get more performance out of our PostgreSQL servers, and noticed that when running pgbench against 8.4.3 vs 8.4.4 on identical hardware and configuration there is a large difference in performance. I know tuning is a
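
A hypothetical reconstruction of that kind of run (the actual scale factor, client count, and duration were not given in the archive; the flags are standard pgbench options):

    # Initialize a test database at scale factor 100, then run 8 clients for 60s;
    # repeat the identical run against both the 8.4.3 and 8.4.4 servers.
    pgbench -i -s 100 benchdb
    pgbench -c 8 -T 60 benchdb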

Re: [PERFORM] How to insert a bulk of data with unique-violations very fast

2010-06-09 Thread Pierre C
Within the data to import most rows have 20 to 50 duplicates, sometimes much more, sometimes less. In that case (source data has lots of redundancy), after importing the data chunks in parallel, you can run a first pass of de-duplication on the chunks, also in parallel, something like: C
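
The command itself was cut off in the archive; a minimal sketch of what such a per-chunk pass could look like (table and key names hypothetical, not necessarily Pierre's actual suggestion):

    -- De-duplicate one imported chunk, keeping one row per key value; each
    -- chunk can be processed like this in parallel before a final merge:
    CREATE TABLE chunk1_clean AS
    SELECT DISTINCT ON (id) *
    FROM chunk1_raw
    ORDER BY id;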

Re: [PERFORM] How to insert a bulk of data with unique-violations very fast

2010-06-09 Thread Torsten Zühlsdorff
Pierre C wrote: Within the data to import most rows have 20 to 50 duplicates, sometimes much more, sometimes less. In that case (source data has lots of redundancy), after importing the data chunks in parallel, you can run a first pass of de-duplication on the chunks, also in parallel, s