Re: [PERFORM] REPOST: Nested loops row estimates always too high

2007-09-23 Thread Ow Mun Heng
On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote: > (SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS HERE) > > I am noticing that my queries are spending a lot of time in nested loops. > The table/index row estimates are not bad, but the nested loops can be off > b
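
For readers following along, a minimal sketch of how to compare the planner's row estimates with reality; the table and column names below are hypothetical, not taken from the original post:

    -- EXPLAIN ANALYZE prints the estimated row count next to the actual row
    -- count for every plan node; a large gap at a Nested Loop node usually
    -- means the selectivity of the join or filter condition is misjudged.
    EXPLAIN ANALYZE
    SELECT *
    FROM orders o
    JOIN order_items i ON i.order_id = o.order_id
    WHERE o.created_at > now() - interval '7 days';

    -- A common first remedy is to gather more detailed statistics on the
    -- columns involved and re-analyze:
    ALTER TABLE orders ALTER COLUMN created_at SET STATISTICS 200;
    ANALYZE orders;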

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Gregory Stark
"Carlos Moreno" <[EMAIL PROTECTED]> writes: > I'm now thinking that the problem with my logic is that the system does > not keep anything in memory (or not all tuples, in any case), since it > is only counting, so it does not *have to* keep them That's really not how it works. When Postgres talks

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Jonah H. Harris
On 9/23/07, Carlos Moreno <[EMAIL PROTECTED]> wrote: > Yes, that part I understand --- I think I now know what the error is in > my logic. I was thinking as follows: We read 2GB of which 1900MB are > dead tuples. But then, once they're read, the system will only keep > in memory the 100MB that a
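
A way to put numbers on how much of a table is dead rows, assuming the contrib/pgstattuple module has been installed (table name hypothetical):

    -- Reports live vs. dead tuple counts, the percentage of the table taken
    -- up by dead rows, and the amount of free space. Note that it scans the
    -- whole table to do so.
    SELECT * FROM pgstattuple('some_table');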

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Carlos Moreno
Alvaro Herrera wrote: Carlos Moreno wrote: That is: the first time I run the query, it has to go through the disk; in the normal case it would have to read 100MB of data, but due to bloating, it actually has to go through 2GB of data. Ok, but then, it will load only 100MB (the ones that

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Alvaro Herrera
Carlos Moreno wrote: > That is: the first time I run the query, it has to go through the > disk; in the normal case it would have to read 100MB of data, but due > to bloating, it actually has to go through 2GB of data. Ok, but > then, it will load only 100MB (the ones that are not "uncollected
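
A quick sanity check of the bloat being described, with a hypothetical table name: compare the physical size on disk against what the live rows ought to occupy.

    -- Size of the table's main heap, excluding indexes and TOAST:
    SELECT pg_size_pretty(pg_relation_size('some_table'));

    -- Size including indexes and TOAST:
    SELECT pg_size_pretty(pg_total_relation_size('some_table'));

    -- If the heap is many times larger than row count times average row
    -- width, the extra space is dead tuples and free space, i.e. bloat.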

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Carlos Moreno
Jonah H. Harris wrote: On 9/23/07, Carlos Moreno <[EMAIL PROTECTED]> wrote: Wait a second --- am I correct in understanding then that the bloating you guys are referring to occurs *in memory*?? No, bloating occurs on-disk; but this does affect memory. Bloat means that even though your table d

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Jonah H. Harris
On 9/23/07, Carlos Moreno <[EMAIL PROTECTED]> wrote: > Wait a second --- am I correct in understanding then that the bloating > you guys are referring to occurs *in memory*?? No, bloating occurs on-disk; but this does affect memory. Bloat means that even though your table data may take up 1G afte
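
To see this on-disk bloat directly, a plain VACUUM with VERBOSE is enough (table name hypothetical); it reports how many dead row versions it found and removed, but it only marks the space reusable, it does not shrink the file:

    VACUUM VERBOSE some_table;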

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Carlos Moreno
I don't understand this argument --- the newer system actually has less memory than the old one; how could it fit there and not on the old one? Plus, how could dropping and recreating the database on the same machine change whether the entire dataset fits in memory or not?? Because

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Alvaro Herrera
Carlos Moreno wrote: >> , but my guess is that the total >> data size is about enough to fit in shared_buffers or kernel cache. On >> the new system (or dropped/recreated database), it would've all or >> mostly fit in memory which would make things like count(*) work >> quickly. > > I don't underst

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Carlos Moreno
Jonah H. Harris wrote: You didn't specify the database size Oops, sorry about that one --- the full backup is a 950MB file. The entire database should fit in memory (and the effective_cache_size was set to 2GB for the machine with 4GB of memory), but my guess is that the total data size
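
For reference, a sketch of how the memory settings under discussion might look on a dedicated 4GB machine running 8.2; the values are illustrative, not the poster's actual configuration:

    # postgresql.conf (illustrative values only)
    shared_buffers = 512MB        # PostgreSQL's own buffer cache
    effective_cache_size = 2GB    # planner's assumption of total cache
                                  # available (kernel cache + shared_buffers)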

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Tom Lane
"Jonah H. Harris" <[EMAIL PROTECTED]> writes: > My guess is that a vacuum full would've brought the other database > back up to speed. Yeah, table bloat is what it sounds like to me too. > In the future, you probably want to set fillfactor > to a reasonable amount to account for updates-to-blocks

[PERFORM] zero value in statistics collector's result

2007-09-23 Thread Yinan Li
Hi, When I use the statistics collector to see the number of I/O operations, I always get zero in almost all of the columns. I really want to know the reason for that. The result of statistics view: # select * from pg_statio_user_tables; relid | schemaname | relname | heap_blks_read | heap_blks_hit | id
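
On an 8.2-era server the pg_statio_* views stay at zero unless block-level statistics collection is enabled, so that is the first thing to check; a sketch:

    -- These settings control the collector in 8.2 (stats_block_level
    -- defaults to off, which leaves all the I/O counters at zero):
    SHOW stats_start_collector;
    SHOW stats_block_level;

    -- If stats_block_level is off, set it to 'on' in postgresql.conf and
    -- reload; heap_blks_read / heap_blks_hit should then start counting.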

Re: [PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Jonah H. Harris
You didn't specify the database size, but my guess is that the total data size is about enough to fit in shared_buffers or kernel cache. On the new system (or dropped/recreated database), it would've all or mostly fit in memory which would make things like count(*) work quickly. On the old database,
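
A quick way to get the missing number, with a hypothetical database name:

    -- Total on-disk size of one database:
    SELECT pg_size_pretty(pg_database_size('mydb'));

    -- For comparison, the configured PostgreSQL buffer cache:
    SHOW shared_buffers;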

[PERFORM] Possible explanations for catastrophic performance deterioration?

2007-09-23 Thread Carlos Moreno
I recently had a puzzling experience (performance related). Had a DB running presumably smoothly, on a server with Dual-Core Opteron and 4GB of RAM (and SATA2 drives with Hardware RAID-1). (PostgreSQL 8.2.4 installed from source, on a FC4 system --- databases with no encoding --- initdb -E SQL_