On Sat, Jul 14, 2012 at 7:54 PM, Josh Berkus <j...@agliodbs.com> wrote:
> So, here's the core issue with degraded mode.  I'm not mentioning this
> to block any patch anyone has, but rather out of a desire to see someone
> address this core problem with some clever idea I've not thought of.
> The problem in a nutshell is: indeterminacy.
>
> Assume someone implements degraded mode.  Then:
>
> 1. Master has one synchronous standby, Standby1, and two asynchronous,
> Standby2 and Standby3.
>
> 2. Standby1 develops a NIC problem and is in and out of contact with
> Master.  As a result, it's flipping in and out of synchronous / degraded
> mode.
>
> 3. Master fails catastrophically due to a RAID card meltdown.  All data
> lost.
>
> At this point, the DBA is in kind of a pickle, because he doesn't know:
>
> (a) Was Standby1 in synchronous or degraded mode when Master died?  The
> only log for that was on Master, which is now gone.
>
> (b) Is Standby1 actually the most caught-up standby, and thus the
> appropriate new master for Standby2 and Standby3, or is it behind?
>
> With the current functionality of synchronous replication, you don't
> have either piece of indeterminacy, because some external management
> process (hopefully located on another server) needs to disable
> synchronous replication when Standby1 develops its problem.  That is, if
> the master is accepting synchronous transactions at all, you know that
> Standby1 is up to date, and no data is lost.
>
> While you can answer (b) by checking all servers, (a) is particularly
> pernicious, because unless you have the application log all "operating
> in degraded mode" messages, there is no way to ever determine the truth.
Good explanation.  In brief, the problem here is that you can only rely on the no-transaction-loss guarantee provided by synchronous replication if you can be certain that you'll always be aware of it when synchronous replication gets shut off.  Right now that is trivially true, because it has to be shut off manually.  If we provide a facility that logs a message and then shuts it off, we lose that certainty, because the log message could get eaten en route by the same calamity that takes down the master.  There is no way for the master to wait for the log message to be delivered and only then degrade.

However, we could craft a mechanism that has this effect.  Suppose we create a new GUC with a name like synchronous_replication_status_change_command.  Before switching between synchronous replication and degraded mode automatically, we first run this command.  If it returns 0, we're allowed to switch; if it returns anything else, we're not allowed to switch (but we can retry the command after a suitable interval).  The user is responsible for supplying a command that records the status change somewhere off-box, in a fashion durable enough that the user has confidence the notification won't subsequently be lost.  For example, the user-supplied command could SSH into three machines located in geographically disparate data centers and create a file with a certain name on each one, returning 0 only if it's able to reach at least two of them and create the file on all the ones it can reach.  Then, if the master dies but at least two of those three machines are still alive, we can determine with confidence whether the master might have been in degraded mode at the time of the crash.
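To make the quorum idea concrete, here's a minimal sketch of what such a user-supplied command might look like, assuming the proposed (and so far hypothetical) synchronous_replication_status_change_command GUC.  The witness host names and the record_status transport are placeholders; real code would use ssh/scp or similar.  The only part that matters is the exit-status contract: 0 permits the status change, nonzero forbids it.

```python
import sys

WITNESSES = ["witness1", "witness2", "witness3"]  # assumed host names

def record_status(host, status):
    """Record the new replication status on one witness host.

    Stubbed for illustration; a real implementation might run
    ssh to write a file on the remote machine.  Returns True on
    success, raises on an unreachable host.
    """
    raise NotImplementedError

def notify_quorum(status, record=record_status, witnesses=WITNESSES):
    """Try to record `status` on every witness; succeed (return 0)
    only if a majority of them acknowledge the write."""
    ok = 0
    for host in witnesses:
        try:
            if record(host, status):
                ok += 1
        except Exception:
            pass  # unreachable witness: keep trying the others
    # Exit status 0 (allow the switch) only with a majority of acks.
    return 0 if ok >= 2 else 1

if __name__ == "__main__":
    # The server would invoke this with the new status, e.g.
    # synchronous_replication_status_change_command = 'notify.py %s'
    sys.exit(notify_quorum(sys.argv[1] if len(sys.argv) > 1 else "degraded"))
```

After a master loss, the DBA checks the surviving witnesses: if a majority never recorded a switch to degraded mode, the guarantee was still in force when the master died.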
More or less paranoid versions of this scheme are possible depending on user preferences, but the key point is that for the no-transaction-loss guarantee to be of any use, there has to be a way to reliably know whether that guarantee was in effect at the time the master died in a fire.  Logging isn't enough, but I think some more sophisticated mechanism can get us there.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers