Re: [PERFORM] change sample size for statistics

2011-06-28 Thread Robert Haas
On Mon, Jun 13, 2011 at 6:33 PM, Willy-Bas Loos wrote: > On Fri, Jun 10, 2011 at 9:58 PM, Josh Berkus wrote: >> It's not 10%.  We use a fixed sample size, which is configurable on the system, table, or column basis. > It seems that you are referring to "alter column set statistics" and >
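
A minimal sketch of the knobs being discussed, assuming a hypothetical table "traffic" and column "sensor_id"; default_statistics_target covers the system-wide case (default 100 since 8.4):

    -- System-wide default, in postgresql.conf or per session:
    SET default_statistics_target = 500;

    -- Per-column override; takes effect at the next ANALYZE:
    ALTER TABLE traffic ALTER COLUMN sensor_id SET STATISTICS 1000;
    ANALYZE traffic;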

Re: [PERFORM] Slow performance when querying millions of rows

2011-06-28 Thread Greg Smith
On 06/28/2011 07:50 PM, Craig McIlwee wrote: I was thinking that shared buffers controlled the amount of data, primarily table and index pages, that the database could store in memory at once. Based on that assumption, I thought that a larger value would enable an entire table + index to be in
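
A quick way to check how much of a query is actually served from shared_buffers, assuming PostgreSQL 9.0+ and a hypothetical table name:

    -- "shared hit" = page already in shared_buffers,
    -- "read" = page fetched from the OS cache or disk.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM daily_traffic WHERE vds_id = 42;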

Re: [PERFORM] Slow performance when querying millions of rows

2011-06-28 Thread Greg Smith
On 06/28/2011 07:26 PM, Craig McIlwee wrote: Yes, the data import is painfully slow but I hope to make up for that with the read performance later. You can probably improve that with something like this: shared_buffers=512MB checkpoint_segments=64 Maybe bump up maintenance_work_mem too, if th
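
As a postgresql.conf sketch, the suggested values look like this (the first two are quoted above; the maintenance_work_mem figure is illustrative):

    # postgresql.conf -- shared_buffers needs a server restart
    shared_buffers = 512MB           # PostgreSQL's own page cache
    checkpoint_segments = 64         # fewer, larger checkpoints during bulk loads
    maintenance_work_mem = 256MB     # illustrative; speeds up CREATE INDEX and VACUUM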

Re: [PERFORM] Slow performance when querying millions of rows

2011-06-28 Thread Tomas Vondra
On 29.6.2011 01:50, Craig McIlwee wrote: >> > work_mem: 512MB >> > shared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query >> > plan and took the same amount of time to execute give or take a few >> > seconds >> >> shared_buffers doesn't normally impact the query plan; it impacts
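
work_mem, on the other hand, can change the plan, because it determines whether sorts and hashes fit in memory. A session-level experiment along these lines (the query itself is illustrative):

    -- A small work_mem may force an external (on-disk) merge sort;
    -- a large one may allow an in-memory sort or hash aggregate.
    SET work_mem = '4MB';
    EXPLAIN ANALYZE SELECT vds_id, sum(volume) FROM daily_traffic GROUP BY vds_id;

    SET work_mem = '512MB';
    EXPLAIN ANALYZE SELECT vds_id, sum(volume) FROM daily_traffic GROUP BY vds_id;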

Re: [PERFORM] Slow performance when querying millions of rows

2011-06-28 Thread Tomas Vondra
On 29.6.2011 01:26, Craig McIlwee wrote: >> On 28.6.2011 23:28, Craig McIlwee wrote: >> Are you sure those two queries are exactly the same? Because the daily >> case output says the width is 50B, while the half-month case says it's >> 75B. This might be why the sort/aggregate steps are s

Re: [PERFORM] Slow performance when querying millions of rows

2011-06-28 Thread Craig McIlwee
> On 06/28/2011 05:28 PM, Craig McIlwee wrote: > > Autovacuum is disabled for these tables since the data is never > > updated. The tables that we are testing with at the moment will not > > grow any larger and have been both clustered and analyzed. > > Note that any such prep to keep from ever
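
For load-once, never-updated tables, the per-table switch plus the one-time prep mentioned above looks roughly like this (table and index names are hypothetical; note autovacuum can still visit the table to prevent transaction-ID wraparound):

    -- Per-table storage parameter, available since 8.4:
    ALTER TABLE daily_traffic SET (autovacuum_enabled = false);

    -- One-time prep: physically order the table by the index the queries
    -- use, then refresh the planner statistics.
    CLUSTER daily_traffic USING daily_traffic_date_idx;
    ANALYZE daily_traffic;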

Re: [PERFORM] Slow performance when querying millions of rows

2011-06-28 Thread Craig McIlwee
> On 28.6.2011 23:28, Craig McIlwee wrote: > > Daily table explain analyze: http://explain.depesz.com/s/iLY > > Half month table explain analyze: http://explain.depesz.com/s/Unt > > Are you sure those two queries are exactly the same? Because the daily > case output says the width is 50B, whi

Re: [PERFORM] Slow performance when querying millions of rows

2011-06-28 Thread Greg Smith
On 06/28/2011 05:28 PM, Craig McIlwee wrote: Autovacuum is disabled for these tables since the data is never updated. The tables that we are testing with at the moment will not grow any larger and have been both clustered and analyzed. Note that any such prep to keep from ever needing to main
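
The advice is cut off above; one plausible piece of such prep (an assumption, not necessarily what Greg goes on to say) is freezing the tuples once, so even the anti-wraparound vacuum has no work left:

    -- Assumption: freeze all tuples in one pass so that a later
    -- wraparound-driven autovacuum finds nothing to rewrite.
    VACUUM (FREEZE, ANALYZE) daily_traffic;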

Re: [PERFORM] Slow performance when querying millions of rows

2011-06-28 Thread Tomas Vondra
On 28.6.2011 23:28, Craig McIlwee wrote: > Daily table explain analyze: http://explain.depesz.com/s/iLY > Half month table explain analyze: http://explain.depesz.com/s/Unt Are you sure those two queries are exactly the same? Because the daily case output says the width is 50B, while the half-
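
The width Tomas is comparing is the planner's estimated average row size in bytes, shown in every plan node; a minimal illustration with hypothetical columns (output is illustrative):

    EXPLAIN SELECT vds_id, volume FROM daily_traffic;
    --  Seq Scan on daily_traffic  (cost=0.00..431.00 rows=21000 width=8)
    -- Selecting more or wider columns raises "width", which inflates the
    -- memory and disk needed by sort and aggregate steps.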

[PERFORM] Slow performance when querying millions of rows

2011-06-28 Thread Craig McIlwee
Hello, I have a handful of queries that are performing very slowly.  I realize that I will be hitting hardware limits at some point, but want to make sure I've squeezed out every bit of performance I can before calling it quits. Our database is collecting traffic data at the rate of about 3 mill
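
Since the thread works with daily and half-month tables, here is a sketch of the inheritance-based partitioning PostgreSQL offered at the time (all names and columns are hypothetical):

    -- Parent table plus one daily child; the CHECK constraint lets
    -- constraint exclusion skip non-matching children.
    CREATE TABLE traffic (vds_id int, stamp timestamp, volume int);
    CREATE TABLE traffic_2011_06_28 (
        CHECK (stamp >= '2011-06-28' AND stamp < '2011-06-29')
    ) INHERITS (traffic);
    CREATE INDEX traffic_2011_06_28_stamp_idx ON traffic_2011_06_28 (stamp);

    -- With constraint_exclusion = partition (the default since 8.4),
    -- a query filtered on stamp scans only the matching children.
    SELECT sum(volume) FROM traffic
    WHERE stamp >= '2011-06-28' AND stamp < '2011-06-29';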

Re: [PERFORM] Long Running Update - My Solution

2011-06-28 Thread Harry Mantheakis
Hello Kevin > If you use EXPLAIN with both statements... Yes, the plans are indeed very different. Here is the statement, set to update up to 100,000 records, which took about 5 seconds to complete: UPDATE table_A SET field_1 = table_B.field_1 , field_2 = table_B.field_2 FROM table_B
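
The quoted statement is truncated; the general shape of such a batched UPDATE ... FROM, with the join key and batch range filled in as assumptions, is:

    -- The join column (id) and the BETWEEN range are assumptions used
    -- for illustration, not Harry's actual schema.
    UPDATE table_A
    SET    field_1 = table_B.field_1,
           field_2 = table_B.field_2
    FROM   table_B
    WHERE  table_A.id = table_B.id
    AND    table_A.id BETWEEN 1 AND 100000;   -- one 100,000-row batch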