On Feb 25, 2008, at 2:59 PM, Stephen Denne wrote:
Please know that I'm very new at advising PostgreSQL users how they
should tune their system...
I'd never have known it if you hadn't said anything.
My understanding of your vacuum verbose output was that it was
pointing out that max_fsm_pages ...
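For context, a database-wide VACUUM VERBOSE on 8.x ends with a free-space-map summary that can be compared against the configured limits. A minimal sketch, assuming the 8.3 server discussed in this thread:

db=> VACUUM VERBOSE;
-- per-table output elided; the run ends with an FSM summary along the
-- lines of "N page slots are required to track all free space", plus a
-- HINT to raise max_fsm_pages if N exceeds the current limit
db=> SHOW max_fsm_pages;
db=> SHOW max_fsm_relations;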
Sean Leach wrote:
> On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:
> >
> >
> > Have you checked Scott Marlowe's note:
> >
> >>> unless you've got a long running transaction
> >
> > How come those 2 million dead rows are not removable yet? My guess
> > (based on a quick search of the mailing list ...
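A quick way to look for such a long-running transaction, sketched against 8.3's pg_stat_activity (procpid and current_query are the 8.3 column names; later releases renamed them):

db=> SELECT procpid, usename, xact_start, current_query
db->   FROM pg_stat_activity
db->  WHERE xact_start IS NOT NULL
db->  ORDER BY xact_start;

Anything that has been idling in a transaction since before the deletes happened will keep those dead rows from being reclaimed.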
Sean Leach wrote:
> On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:
> >
> >> So should I do a vacuum full and then hope this doesn't
> >> happen again?
> >> Or should I run a VACUUM FULL after each aggregation run?
> >
> > If your usage pattern results in generating all of that
> > unused space ...
On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:
So should I do a vacuum full and then hope this doesn't
happen again?
Or should I run a VACUUM FULL after each aggregation run?
If your usage pattern results in generating all of that unused space
in one transaction, and no further inserts ...
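If a routine VACUUM FULL does turn out to be necessary, note that on releases of this vintage VACUUM FULL tends to bloat indexes as it moves tuples, so it is usually paired with a REINDEX; CLUSTER rewrites the heap and indexes together. A sketch, with u_counts_pkey standing in as a hypothetical index name:

db=> VACUUM FULL VERBOSE u_counts;
db=> REINDEX TABLE u_counts;
-- or, in one step (8.3 syntax; u_counts_pkey is hypothetical):
db=> CLUSTER u_counts USING u_counts_pkey;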
Sean Leach wrote:
> On Feb 24, 2008, at 4:27 PM, Scott Marlowe wrote:
>
> >
> > Urg. Then I wonder how your indexes are bloating but your table is
> > not... you got autovac running? No weird lock issues? It's a side
> > issue right now since the table is showing as non-bloated (unless
> > you ...
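The autovac question is straightforward to answer on 8.3, which records when each table was last vacuumed. A sketch:

db=> SHOW autovacuum;
db=> SELECT relname, last_vacuum, last_autovacuum, last_autoanalyze
db->   FROM pg_stat_user_tables
db->  WHERE relname = 'u_counts';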
On Sun, 24 Feb 2008, Tom Lane wrote:
Sean Leach <[EMAIL PROTECTED]> writes:
I have a table that currently has a little over 3 million records in
production. In development, the same table has about 10 million
records (we cleaned production a few weeks ago).
You mean the other way around, to judge by ...
On Feb 24, 2008, at 4:27 PM, Scott Marlowe wrote:
On Sun, Feb 24, 2008 at 6:05 PM, Sean Leach <[EMAIL PROTECTED]> wrote:
On Feb 24, 2008, at 4:03 PM, Scott Marlowe wrote:
What version pgsql is this? If it's pre 8.0 it might be worth looking
into migrating for performance and maintenance reasons.
On Sun, Feb 24, 2008 at 6:05 PM, Sean Leach <[EMAIL PROTECTED]> wrote:
> On Feb 24, 2008, at 4:03 PM, Scott Marlowe wrote:
>
> >
> > What version pgsql is this? If it's pre 8.0 it might be worth looking
> > into migrating for performance and maintenance reasons.
>
> It's the latest 8.3.0 release.
On Feb 24, 2008, at 4:03 PM, Scott Marlowe wrote:
The fact that your indexes are bloated but your table is not makes me
wonder if you're not running a really old version of pgsql that had
problems with monotonically increasing indexes bloating over time and
requiring reindexing.
That problem has been (for the most part) solved by some hacking Tom ...
The fact that your indexes are bloated but your table is not makes me
wonder if you're not running a really old version of pgsql that had
problems with monotonically increasing indexes bloating over time and
requiring reindexing.
That problem has been (for the most part) solved by some hacking Tom ...
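Even on a current release, the cure for an index that has already bloated is a reindex. A sketch for comparing table and index sizes first (pg_size_pretty and pg_relation_size exist from 8.1 on; the LIKE pattern assumes the indexes share the table's name prefix):

db=> SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
db->   FROM pg_class
db->  WHERE relname LIKE 'u_counts%'
db->  ORDER BY pg_relation_size(oid) DESC;
db=> REINDEX TABLE u_counts;  -- takes an exclusive lock while it runs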
On Feb 24, 2008, at 1:18 PM, Stephen Denne wrote:
If you always get around a third of the rows in your table written
in the last day, you've got to be deleting about a third of the rows
in your table every day too. You might have a huge number of dead
rows in your table, slowing down the sequential scan ...
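8.3's stats collector tracks live and dead tuples per table, which makes this easy to confirm. A sketch:

db=> SELECT relname, n_live_tup, n_dead_tup
db->   FROM pg_stat_user_tables
db->  WHERE relname = 'u_counts';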
Tom Lane wrote:
> Sean Leach <[EMAIL PROTECTED]> writes:
> > Now - here is prod:
>
> > db=> select count(1) from u_counts;
> >   count
> > ---------
> >  3292215
> > (1 row)
>
>
> > -> Seq Scan on u_counts c (cost=0.00..444744.45
> > rows=1106691 width=4) (actual time=1429.996..7893.178 rows=1036015
> > loops=1)
On Feb 24, 2008, at 11:10 AM, Tom Lane wrote:
Sean Leach <[EMAIL PROTECTED]> writes:
Now - here is prod:
db=> select count(1) from u_counts;
  count
---------
 3292215
(1 row)
-> Seq Scan on u_counts c (cost=0.00..444744.45
rows=1106691 width=4) (actual time=1429.996..7893.178 rows=1036015
loops=1)
Sean Leach <[EMAIL PROTECTED]> writes:
> Now - here is prod:
> db=> select count(1) from u_counts;
>   count
> ---------
>  3292215
> (1 row)
> -> Seq Scan on u_counts c (cost=0.00..444744.45
> rows=1106691 width=4) (actual time=1429.996..7893.178 rows=1036015
> loops=1)
>
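A seq scan taking roughly 7.9 seconds over about a million rows is the sort of thing a glance at the planner's page counts can explain: if relpages is far out of proportion to the live row count, the heap is mostly dead space. A sketch:

db=> SELECT relpages, reltuples FROM pg_class WHERE relname = 'u_counts';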
Nope, seems like that would make sense, but dev is 10 million and
prod is 3 million. Also including random_page_cost below. Thanks for
any help.
Here is dev:
db=> analyze u_counts;
ANALYZE
Time: 15775.161 ms
db=> select count(1) from u_counts;
  count
----------
 10972078
(1 row)
db=> show random_page_cost;
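For reference, random_page_cost defaults to 4.0, and a lower value can be tried per session before committing anything to postgresql.conf. A sketch:

db=> SET random_page_cost = 2.0;  -- session-local experiment
db=> EXPLAIN ANALYZE SELECT count(1) FROM u_counts;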
Sean Leach <[EMAIL PROTECTED]> writes:
> I have a table that currently has a little over 3 million records
> in production. In development, the same table has about 10 million
> records (we cleaned production a few weeks ago).
You mean the other way around, to judge by ...