Thanks for your information. I am using PostgreSQL 8.4, and this version should already support HOT. The frequently updated columns are not indexed, so the frequent updates should not create many dead records. I also did a small test: if I don't execute vacuum, the number of pages of
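For reference, whether HOT is actually kicking in can be checked from the statistics collector; this is only a sketch, and "log_table" is a placeholder for the real table name:

  -- n_tup_hot_upd counts updates that took the HOT path;
  -- n_dead_tup is the number of dead row versions accumulated so far.
  SELECT n_tup_upd, n_tup_hot_upd, n_dead_tup
    FROM pg_stat_user_tables
   WHERE relname = 'log_table';

  -- relpages is the table size in 8kB pages as of the last VACUUM/ANALYZE.
  SELECT relpages FROM pg_class WHERE relname = 'log_table';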
In my experiment, I need about 1~3 minutes to finish the ANALYZE operation on the big table (depending on the value of vacuum_cost_delay). I am not surprised, because this table is really big (it now has over 200M records).
However, most of my concern is about the behavior of ANALYZE/VACUUM.
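Since the run time tracks vacuum_cost_delay, it may be worth noting that 8.4 lets these knobs be tuned per table. A sketch with illustrative values ("log_table" and the "payload" column are placeholders):

  -- Per-table autovacuum cost delay (a storage parameter new in 8.4);
  -- a smaller delay makes the worker finish faster at the cost of
  -- more I/O pressure.
  ALTER TABLE log_table SET (autovacuum_vacuum_cost_delay = 10);

  -- ANALYZE samples rows in proportion to the statistics target, so a
  -- lower target on columns that don't need detailed stats shortens it.
  ALTER TABLE log_table ALTER COLUMN payload SET STATISTICS 10;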
Hi,
Thanks for your response. I've checked it again and found that the main cause is the execution of ANALYZE. As I have mentioned, I have two tables: table A is a big one (around 10M~100M records) for log data, and table B is a small one (around 1k records) for keeping some current status. There a
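To confirm that the slowdowns line up with (auto)ANALYZE runs on the big table, the timestamps can be read from pg_stat_user_tables; a sketch, with the table names standing in for A and B:

  SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
   WHERE relname IN ('table_a', 'table_b');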
Hi,
I have a question about the behavior of autovacuum. When a big table A is being processed by autovacuum and I manually run (full) VACUUM to clean another table B, I always get something like “found 0 removable, 14283 nonremovable row versions”. However, if I stop the au
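To see the overlap while it happens, the autovacuum worker on table A can be spotted from another session. A sketch against the 8.4 catalogs (these columns were renamed to pid/query in later releases):

  SELECT procpid, current_query, xact_start
    FROM pg_stat_activity
   WHERE current_query LIKE 'autovacuum:%';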
...performance (especially for the insert part).
kuopo.
On Mon, Jul 19, 2010 at 11:37 PM, Jorge Montero <
jorge_mont...@homedecorators.com> wrote:
> Large tables, by themselves, are not necessarily a problem. The problem is
> what you might be trying to do with them. Depending on th
? Thanks in advance.
kuopo.