[PERFORM] vacuumdb command
Hi,

I used the vacuumdb command, but in its output I can't see "VACUUM". The last part of the output is:

DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
1 pages contain useful free space.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  free space map contains 768 pages in 392 relations
DETAIL:  A total of 6720 page slots are in use (including overhead).
6720 page slots are required to track all free space.
Current limits are:  153600 page slots, 1000 relations, using 965 kB.

I think that if the process is complete, the last part of the output should be "VACUUM". Does this mean the process is not complete? Please help me clear up my doubts.

--
Regards
Soorjith P
Re: [PERFORM] vacuumdb command
soorjith p wrote:
> I used the vacuumdb command, but in its output I can't see "VACUUM".
>
> The last part of the output is:
>
> DETAIL:  0 dead row versions cannot be removed yet.
> There were 0 unused item pointers.
> 1 pages contain useful free space.
> 0 pages are entirely empty.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  free space map contains 768 pages in 392 relations
> DETAIL:  A total of 6720 page slots are in use (including overhead).
> 6720 page slots are required to track all free space.
> Current limits are:  153600 page slots, 1000 relations, using 965 kB.
>
> I think that if the process is complete, the last part of the output
> should be "VACUUM". Does this mean the process is not complete?

No. It is complete.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
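For context, a minimal sketch of how to double-check this (the database name "mydb" is an assumption, not from the thread): the trailing "VACUUM" tag is what psql prints after an interactive `VACUUM VERBOSE;`, whereas the vacuumdb wrapper only relays the server's INFO/DETAIL messages. With vacuumdb, completion is signalled by a zero exit status rather than a "VACUUM" line:

```shell
# Hypothetical database name "mydb"; requires a running PostgreSQL server.
# vacuumdb relays the server's INFO/DETAIL lines when run with --verbose,
# but does not print a trailing "VACUUM" command tag the way psql does.
# A zero exit status means the vacuum ran to completion.
if vacuumdb --verbose mydb; then
    echo "vacuumdb finished successfully"
else
    echo "vacuumdb failed" >&2
fi
```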
Re: [PERFORM] table partitioning & max_locks_per_transaction
Tom -- Thanks for the pointers and advice. We've started by doubling max_locks and halving shared_buffers; we'll see how it goes.

Brian

On Oct 10, 2009, at 7:56 PM, Tom Lane wrote:

> Brian Karlak writes:
>> "out of shared memory HINT: You might need to increase
>> max_locks_per_transaction"
>
> You want to do what it says ...
>
>> 1) We've already tuned postgres to use ~2GB of shared memory -- which
>> is SHMMAX for our kernel. If I try to increase
>> max_locks_per_transaction, postgres will not start because our shared
>> memory is exceeding SHMMAX. How can I increase
>> max_locks_per_transaction without having my shared memory
>> requirements increase?
>
> Back off shared_buffers a bit? 2GB is certainly more than enough to
> run Postgres in.
>
>> 2) Why do I need locks for all of my subtables, anyways? I have
>> constraint_exclusion on. The query planner tells me that I am only
>> using three tables for the queries that are failing. Why are all of
>> the locks getting allocated?
>
> Because the planner has to look at all the subtables and make sure
> that they in fact don't match the query. So it takes AccessShareLock
> on each one, which is the minimum strength lock needed to be sure that
> the table definition isn't changing underneath you. Without *some*
> lock it's not really safe to examine the table at all.
>
> regards, tom lane
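A back-of-envelope sketch of the sizing involved (all figures below are assumed for illustration; the thread doesn't give Brian's actual settings or partition count): the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries, and since the planner takes an AccessShareLock on every child table, a single query against a partitioned parent can need far more than max_locks_per_transaction locks on its own:

```shell
# Assumed settings, not taken from the thread.
max_locks_per_transaction=64      # PostgreSQL default
max_connections=100               # assumed
max_prepared_transactions=0       # assumed
partitions=300                    # assumed number of child tables

# Approximate size of the shared lock table: it is sized on the
# *average* locks per transaction, so one transaction may use more
# slots as long as the total pool isn't exhausted.
table_slots=$(( max_locks_per_transaction * (max_connections + max_prepared_transactions) ))

# One query on the parent locks the parent plus every child at plan time,
# even when constraint_exclusion later prunes most partitions.
locks_needed=$(( partitions + 1 ))

echo "lock table slots: $table_slots"
echo "locks needed by one query: $locks_needed"
```

This is why doubling max_locks_per_transaction (and freeing shared memory by shrinking shared_buffers) is the right knob: it grows the shared pool that heavily-partitioned queries draw from.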