On Thu, May 23, 2013 at 11:34 AM, Fujii Masao <masao.fu...@gmail.com> wrote:
> On Thu, May 23, 2013 at 8:55 PM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Thu, May 23, 2013 at 7:10 AM, Heikki Linnakangas
>> <hlinnakan...@vmware.com> wrote:
>>> 1. Scan the WAL of the old cluster, starting from the point where the
>>> new cluster's timeline history forked off from the old cluster. For
>>> each WAL record, make a note of the data blocks that it touches. This
>>> yields a list of all the data blocks that were changed in the old
>>> cluster after the new cluster forked off.
>>
>> Suppose that a transaction is open and has written tuples at the point
>> where the WAL forks. After the WAL forks, the transaction commits. Then
>> it hints some of the tuples that it wrote. There is no record in the WAL
>> that those blocks have changed, but failing to revert them leads to data
>> corruption.
>
> Yes, in the asynchronous replication case. But in the synchronous
> replication case, hint bits would not be set after the WAL forks unless
> the corresponding commit record has been replicated to the standby: the
> transaction commit keeps waiting for a reply from the standby before
> updating the clog. So this data corruption would not happen in the sync
> case.
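To make the gap concrete, here is a minimal, self-contained sketch of the kind
of changed-block collection that step 1 describes. It is not the actual patch
and does not use the real WAL reader API; BlockRef, MockRecord, and
remember_block are made-up names, and the "records" are a mock array rather
than decoded WAL. The thing to notice is that the changed-block list is
populated only from WAL records, so a page whose only modification after the
fork point was having hint bits set never appears in the stream and never
enters the list.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Hypothetical stand-ins: a block is identified by the relation it
     * belongs to and its block number (the real code would use a
     * RelFileNode plus fork and block number).
     */
    typedef struct BlockRef
    {
        uint32_t    relid;          /* which relation (simplified) */
        uint32_t    blkno;          /* block number within the relation */
    } BlockRef;

    /* A mock "WAL record": just the blocks it touches. */
    typedef struct MockRecord
    {
        int         nblocks;
        BlockRef    blocks[4];
    } MockRecord;

    #define MAX_CHANGED 1024

    static BlockRef changed[MAX_CHANGED];
    static int      nchanged = 0;

    /* Add a block to the changed-block list, ignoring duplicates. */
    static void
    remember_block(BlockRef b)
    {
        for (int i = 0; i < nchanged; i++)
            if (changed[i].relid == b.relid && changed[i].blkno == b.blkno)
                return;
        if (nchanged < MAX_CHANGED)
            changed[nchanged++] = b;
    }

    int
    main(void)
    {
        /*
         * Pretend this is the old cluster's WAL, read forward from the
         * point where the new timeline forked off.  Each record names the
         * blocks it modified.  A page dirtied only by hint-bit setting
         * produces no record here, so it never enters the list below.
         */
        MockRecord wal_after_fork[] = {
            { 2, { {16384, 0}, {16384, 7} } },  /* insert touching two blocks */
            { 1, { {16385, 3} } },              /* index insert */
            { 1, { {16384, 7} } },              /* update on a block seen before */
        };

        for (size_t i = 0; i < sizeof(wal_after_fork) / sizeof(wal_after_fork[0]); i++)
            for (int j = 0; j < wal_after_fork[i].nblocks; j++)
                remember_block(wal_after_fork[i].blocks[j]);

        printf("blocks to revert from the new cluster:\n");
        for (int i = 0; i < nchanged; i++)
            printf("  rel %u, block %u\n",
                   (unsigned) changed[i].relid, (unsigned) changed[i].blkno);

        return 0;
    }

The real scan would of course decode actual WAL records and track
relation/fork/block identifiers, but the shape is the same: only changes that
generate WAL are visible to it, which is why the hinted-but-not-logged pages
matter.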
Not necessarily. SyncRepWaitForLSN() can be interrupted via a cancel
signal, at which point the transaction becomes visible locally even though
the commit record may never have reached the standby.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company