On Tue, Jan 21, 2014 at 3:43 PM, Andres Freund <and...@2ndquadrant.com> wrote:
> I personally think this isn't worth complicating the code for.
You're probably right. However, I don't see why the bar has to be very high when the trade-off is between taking some emergency precaution against a PANIC shutdown and an assured PANIC shutdown. Heikki said somewhere upthread that he'd be happy with a solution that only catches 90% of cases. That is probably a conservative estimate; the schemes discussed here would likely be considerably more effective than that in practice.

Sure, you can still poke holes in them. For example, there has been some discussion of arbitrarily large commit records. But that kind of thing just isn't that relevant in the real world. I believe that in practice the majority of commit records are all about the same size.

I do not believe that the only two acceptable outcomes here are that we continue to always PANIC shutdown (i.e. do nothing), or that we promise to never PANIC shutdown. There is likely to be a third way: the probability of a PANIC shutdown is, at the macro level, reduced somewhat from the present probability of 1.0. People are not going to develop a lackadaisical attitude about running out of disk space on the pg_xlog partition if we do that. They still have plenty of incentive to make sure it doesn't happen.
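To make this concrete, here is roughly the kind of check I have in mind. This is a rough sketch only, built on plain POSIX statvfs(); none of these names or constants are actual PostgreSQL functions, and the thresholds are purely illustrative:

#include <stdbool.h>
#include <stdint.h>
#include <sys/statvfs.h>

/*
 * Sketch only.  Before entering the critical section, check whether the
 * filesystem holding pg_xlog still has a comfortable amount of free
 * space.  If it doesn't, raise an ordinary ERROR now, rather than
 * PANICking later when the WAL write actually fails.
 */
#define TYPICAL_COMMIT_RECORD_SIZE	512		/* bytes; a working assumption */
#define SAFETY_FACTOR				1024	/* leave generous headroom */

static bool
xlog_space_probably_available(const char *xlog_dir)
{
	struct statvfs	fs;
	uint64_t		free_bytes;

	if (statvfs(xlog_dir, &fs) != 0)
		return true;			/* can't tell; keep the old behavior */

	free_bytes = (uint64_t) fs.f_bavail * fs.f_frsize;

	return free_bytes >= (uint64_t) TYPICAL_COMMIT_RECORD_SIZE * SAFETY_FACTOR;
}

Obviously a check like that cannot account for a truly enormous record, which is exactly why I think catching 90% of cases is the right way to frame the requirement.

-- 
Peter Geoghegan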