Martha Stewart called it a Good Thing when [EMAIL PROTECTED] ("Marc G. 
Fournier") wrote:
> I know one person was talking about being able to target only those
> pages that have changed, instead of the whole table ... but some
> sort of "load monitoring" that checks # of active connections and
> tries to find 'lulls'?

I have some "log table purging" processes I'd like to put in place; it
would be really slick to be able to get some statistics from the
system as to how busy the DB has been in the last little while.  

The nice, adaptive algorithm:

- Loop forever

  - Once a minute, evaluate how busy things seem, giving some metric X

   -> If X is "high" then purge 10 elderly tuples from table log_table
   -> If X is "moderate" then purge 100 elderly tuples from table
      log_table
   -> If X is "low" then purge 1000 elderly tuples from table
      log_table
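A minimal sketch of that loop, assuming Python and treating both the metric and the purge as pluggable callables; the thresholds (0.75/0.25) are arbitrary placeholders, and only the batch sizes (10/100/1000) come from the outline above:

```python
import time

def purge_batch_size(busyness, high=0.75, moderate=0.25):
    """Map a load metric X (normalized to [0, 1]) to a purge batch size.

    Busier system -> smaller batch, so purging stays out of the way.
    The threshold values are illustrative, not tuned.
    """
    if busyness >= high:
        return 10
    if busyness >= moderate:
        return 100
    return 1000

def run_purger(measure, purge, interval=60, iterations=None):
    """Loop forever (or for `iterations` rounds), purging adaptively.

    `measure()` returns the current load metric X; `purge(n)` is
    whatever deletes up to n elderly tuples from log_table.
    """
    done = 0
    while iterations is None or done < iterations:
        purge(purge_batch_size(measure()))
        done += 1
        if iterations is None or done < iterations:
            time.sleep(interval)
```

In real use you'd pass in a `measure` that queries the DB and a `purge` that issues the DELETE; the `iterations` knob just makes the loop testable.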

The trouble is in measuring some form of "X."

Some reasonable approximations might include:
 - How much disk I/O was recorded in the last 60 seconds?
 - How many application transactions (e.g. invoices or such) were
   issued in the last 60 seconds? (Monitoring a sequence could be
   good enough.)
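The sequence-monitoring idea amounts to sampling a monotonically increasing counter twice and taking the delta. A sketch, where `read_counter` is a stand-in for however you'd read the sequence's current value (the function and its parameters are my invention, not anything from the post):

```python
import time

def transactions_per_second(read_counter, interval=60.0, sleep=time.sleep):
    """Estimate throughput by sampling a monotonic counter twice.

    `read_counter()` stands in for reading a sequence's current value;
    the delta over `interval` seconds approximates how busy the DB is.
    """
    before = read_counter()
    sleep(interval)
    after = read_counter()
    return (after - before) / interval
```

The result would then be normalized against an expected peak rate to get the 0-to-1 metric X the loop above wants.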
-- 
output = reverse("gro.mca" "@" "enworbbc")
http://linuxfinances.info/info/slony.html
?OM ERROR
