On Sun, Jan 30, 2011 at 04:56:29PM +0200, Herouth Maoz wrote:

> Unless my eyes were deceiving me, this was not the case.  Sure, there
> have been heavy transactions during that time (e.g. the daily backup
> of the database, and the daily inserts into other tables, which take
> a long time, and a few selects which I haven't been able to find an
> optimal index for).  But this is the query I use to see these
> processes (run from a superuser):
>
> SELECT usename, procpid, query_start, client_addr, client_port,
>        current_query, waiting
> FROM pg_stat_activity
> WHERE query_start < now() - interval '3 seconds'
> AND xact_start is not null
> ORDER BY xact_start
>
> Any long transactions should be caught by it, but most of the time,
> all I see are vacuum workers.

Well, what's your I/O on the disk?  Have you tuned vacuum?  Maybe
you're just saturating the ability of the table to be vacuumed, or
else vacuum is being told to back off?
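By "told to back off" I mean the cost-based vacuum delay settings; on
a table this size they can make vacuum sleep so much that it never
catches up.  Something along these lines would show what they are set
to and, if need be, loosen them for just that one table.  The table
name is the one from your mail; the values are purely illustrative,
not a recommendation:

-- What the cost-based delay knobs are currently set to:
SHOW vacuum_cost_delay;
SHOW autovacuum_vacuum_cost_delay;
SHOW autovacuum_vacuum_cost_limit;
SHOW maintenance_work_mem;

-- Per-table override so autovacuum works on this table without sleeping
-- (0 = no cost-based delay; example value only, size it to your I/O budget):
ALTER TABLE billing__archive SET (autovacuum_vacuum_cost_delay = 0);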
> Yes, I do delete many tuples from that table.  My mode of usage is
> like this: I have a small table called billing which receives new
> data every night.  I want to keep that table small so that those
> nightly updates don't take an overly long time, because all data
> (several such tables) has to be ready in the database by the next
> morning.  Therefore, once a week on the weekend, I move a week's
> worth of data to billing__archive (the table we are discussing), and
> delete a week's worth from its end.  Now, the indexes on that table
> would make this impossible to do within the weekend, so what I do is
> drop all the indexes before I do the inserts, then recreate them,
> and then do the deletes.

Without looking at the details of your database, I have to say that
the above sounds to me like more work than letting the system handle
this itself.  I have a suspicion that what you really want to do is
trickle out the changes rather than trying to do things in big batches
this way.

> If a vacuum takes me several days (let alone over a week!) then a
> VACUUM FULL is out of the question.  VACUUM FULL locks the table
> completely, and that table is essential to our customer care.  If
> push comes to shove, I think I'd rather dump that table, drop it,
> and restore it over the weekend, which I believe will be faster than
> a VACUUM FULL.

Yes, I think so too, and given the current state of affairs that's a
good bet.  Whatever the situation, I suspect things are too badly off
to be worth trying to get through a vacuum with.

> One other important question: a tuple marked by VACUUM as reusable
> (not VACUUM FULL, which returns it to the operating system) -- can
> its space ever be used by another table, or can it only be used for
> new inserts into the same table?

It's managed by postgres, and it stays with the same table: a plain
VACUUM just records the freed space so that later inserts and updates
in that table can reuse it; it isn't handed back to other tables or to
the OS.  Given your churn rate on these tables, I'd be tempted to set
a fillfactor with a lot of room and let the tables be "big" (i.e. with
a lot of empty space) so that their long-term storage footprint is
stable (a rough sketch of the syntax is at the end of this mail).

A

-- 
Andrew Sullivan
a...@crankycanuck.ca
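The fillfactor sketch I mentioned above.  The number is only an
illustration, and it only governs pages written after the change;
existing pages keep their current packing until the table is
rewritten:

-- Leave headroom in each page so churn can be absorbed in place;
-- 50 is illustrative, pick a value that matches your update/delete rate.
ALTER TABLE billing__archive SET (fillfactor = 50);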