I haven't had issues with max_fsm_pages. The problem I had was caused
by not committing (or rolling back) after JDBC select calls. I didn't
think that was necessary because the selects didn't modify the
database, but once I changed my code to roll back after the data is
read from the result set, autovacuum worked fine.
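For what it's worth, a minimal sketch of the pattern that fixed it for me. This assumes autoCommit is off on the connection; the class, method, and table names are hypothetical, not from any real code. The point is that an idle open transaction holds a snapshot, which keeps vacuum from reclaiming dead rows:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class SelectAndRelease {
    // Read all rows from a table, then end the transaction.  Even a
    // read-only SELECT leaves a transaction open when autoCommit is
    // off; the rollback releases the snapshot so (auto)vacuum can
    // reclaim dead tuples.
    static int countRows(Connection conn, String table) throws SQLException {
        int n = 0;
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
            while (rs.next()) {
                n++;
            }
        } finally {
            // rollback() is only legal when autoCommit is false;
            // commit() after the select would work equally well here.
            conn.rollback();
        }
        return n;
    }
}
```

Calling conn.commit() instead would be just as effective; the key is that the transaction ends promptly once the result set has been consumed.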

I was hoping that with autovacuum working well I wouldn't need cron
tasks to run VACUUM FULL periodically. But I read a few days ago that
reindexdb is recommended when there are a lot of inserts/deletes.
I still have to experiment with that.
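In case it helps, a hedged sketch of what such a periodic task might look like; the database name "mydb" is a placeholder, and reindexdb must connect as a user with the right to rebuild the indexes:

```shell
# Rebuild all indexes in database "mydb" to reclaim index bloat
# after heavy insert/delete churn (e.g. run nightly from cron).
reindexdb mydb

# Equivalent from psql, for a single table's indexes:
#   REINDEX TABLE mytable;
```

Note that REINDEX takes exclusive locks on the indexes it rebuilds, so it should be scheduled for a quiet period.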

I'll check what I have set for max_fsm_pages tomorrow.
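For reference, the relevant postgresql.conf fragment looks something like this; the numbers below are placeholders, not recommendations, and changing max_fsm_pages requires a server restart before it takes effect:

```
# Free-space map settings (postgresql.conf).  max_fsm_pages should be
# at least the number of page slots that a database-wide
# "VACUUM VERBOSE" reports as needed at the end of its output.
max_fsm_pages = 200000
max_fsm_relations = 1000
```

Running VACUUM VERBOSE as superuser prints the current free-space-map usage at the end, which is the number to size max_fsm_pages against.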

--- Francisco Reyes <[EMAIL PROTECTED]> wrote:

> Matthew T. O'Connor writes:
> 
> > In many instances the default thresholds for autovacuum are too 
> > conservative.  You can try making it more aggressive and see what 
> > happens, you can do this generally, or on a table specific basis.
> 
> I think something is very wrong with the estimates I am getting back. After 
> 3 times increasing the max_fsm_pages, every time I get the same error.. 
> about needing to increase max_fsm_pages.. and always the error is 
> recommending exactly the same number of additional fsm_pages.
> 
> I see other threads with other people having similar problems. Going to see 
> if I can find a resolution in the archives.

