On Feb 25, 2008, at 2:59 PM, Stephen Denne wrote:
Please know that I'm very new at advising PostgreSQL users how they
should tune their system...
I'd never have known it if you hadn't said anything.
My understanding of your vacuum verbose output was that it was
pointing out that max_fsm_pages was set too low to track all of the free space in your database.
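
For reference, a minimal sketch of how to check the setting that message refers to, assuming a pre-8.4 server where the manual free space map still exists:

    -- current free space map limits (PostgreSQL 8.3 and earlier)
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;

    -- a database-wide VACUUM VERBOSE reports, at the very end, how many page
    -- slots are actually needed; if that exceeds max_fsm_pages, raise it in
    -- postgresql.conf and restart the server.
    VACUUM VERBOSE;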
On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:
So should I do a vacuum full and then hope this doesn't
happen again?
Or should I run a VACUUM FULL after each aggregation run?
If your usage pattern results in generating all of that unused space
in one transaction, and no further inserts ...
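
A rough sketch of what that per-run maintenance could look like (the table name is taken from later in the thread; this is illustrative, not the thread's own recommendation). On 8.x, VACUUM FULL tends to bloat indexes further, so a REINDEX usually belongs in the same step:

    VACUUM FULL VERBOSE u_counts;   -- reclaim the space freed by the bulk delete/update
    REINDEX TABLE u_counts;         -- rebuild indexes bloated by the churn and by VACUUM FULL itself
    ANALYZE u_counts;               -- refresh planner statistics

CLUSTER on a suitable index is sometimes used instead of VACUUM FULL plus REINDEX, since it rewrites the table and all of its indexes in one pass.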
On Feb 24, 2008, at 4:27 PM, Scott Marlowe wrote:
On Sun, Feb 24, 2008 at 6:05 PM, Sean Leach <[EMAIL PROTECTED]> wrote:
On Feb 24, 2008, at 4:03 PM, Scott Marlowe wrote:
What version pgsql is this? If it's pre 8.0 it might be worth
looking into migrating for performance reasons.
On Feb 24, 2008, at 4:03 PM, Scott Marlowe wrote:
The fact that your indexes are bloated but your table is not makes me
wonder if you're not running a really old version of pgsql that had
problems with monotonically increasing indexes bloating over time and
requiring reindexing.
That problem has been fixed in more recent versions.
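
A quick way to check whether the indexes really are oversized relative to the table (pg_relation_size and pg_size_pretty assume 8.1 or later; the index name below is only a guess):

    SELECT c.relname, pg_size_pretty(pg_relation_size(c.oid)) AS size
    FROM pg_class c
    WHERE c.relname IN ('u_counts', 'u_counts_pkey');  -- 'u_counts_pkey' is hypothetical

    -- if an index is many times larger than the data it covers, rebuild it:
    REINDEX TABLE u_counts;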
On Feb 24, 2008, at 1:18 PM, Stephen Denne wrote:
If you always get around a third of the rows in your table written
in the last day, you've got to be deleting about a third of the rows
in your table every day too. You might have a huge number of dead
rows in your table, slowing down the sequential scans.
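
One cheap way to sanity-check that on any 8.x release is to compare the table's physical size with its live-row estimate:

    -- relpages is the table's size in 8 kB pages; reltuples is the estimated
    -- live row count (both refreshed by VACUUM/ANALYZE). A page count far out
    -- of proportion to ~3.3 million rows points at dead-row bloat.
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'u_counts';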
On Feb 24, 2008, at 11:10 AM, Tom Lane wrote:
Sean Leach <[EMAIL PROTECTED]> writes:
Now - here is prod:
db=> select count(1) from u_counts;
  count
---------
 3292215
(1 row)
   ->  Seq Scan on u_counts c  (cost=0.00..444744.45 rows=1106691 width=4) (actual t...
   ->  Seq Scan on res r  (cost=0.00..15.88 rows=67 width=10) (actual time=0.046..0.258 rows=60 loops=1)
         Filter: (((rtype)::text = 'udns'::text) AND (location = 1))
 Total runtime: 13201.046 ms
(9 rows)

Time: 13204.686 ms
On Feb 24, 2008, at 9:50 AM, Tom Lane wrote:
I have a table that currently has a little over 3 million records
in production. In development, the same table has about 10 million
records (we cleaned production a few weeks ago). One of my queries
joins this table with another, and in development, the particular co...
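
A hedged sketch of the usual first comparison step, using the count query that appears elsewhere in this thread (run it in both environments and diff the plans):

    VACUUM ANALYZE u_counts;                          -- make sure statistics are current
    EXPLAIN ANALYZE select count(1) from u_counts;    -- compare plan shape and timings between prod and dev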