> If this is right, #2, #3, #4, and #6 feel similar except
> that they're protecting against failures of different (but
> still all incomplete) subsets of the hardware on the slave,
> right?
Right. Actually, the biggest difference with #6 has nothing to do with protecting against failures. It has rather to do with the ease of writing applications in the context of hot standby. You can close your connection, open a connection to a different server, and know that your transactions will be reflected there. On the other hand, I'd be surprised if it didn't come with a substantial performance penalty, so it may not be too practical in real life even if it sounds good on paper.

#1, #3, and #5 don't feel that useful to me. In the case of #1, sending your WAL over the network and then not checking that it got there is sort of silly: packet loss on the network has got to be several orders of magnitude more likely than a failure on the master. #3 and #5 just don't seem to provide any real benefits over their immediate predecessors.

Honestly, I think the most useful thing is probably going to be asynchronous replication: in other words, when a commit is requested on the master, we write WAL and return success. In the background, we stream the WAL to a secondary, which writes it and applies it. This will give us a secondary which is mostly up to date (and can run queries, with hot standby) without killing performance.

The other options are going to be for environments where losing a transaction is really, really bad, or (in the case of #6) read-mostly environments where it's useful to spread the query load out across several servers, but the overhead associated with waiting for the rare write transactions to apply everywhere is tolerable.

...Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
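The asynchronous streaming described above (commit returns after the local WAL write; shipment to the standby happens in the background) can be sketched as a toy model. All names and structure here are my own illustration, not PostgreSQL's actual implementation:

```python
# Toy sketch of asynchronous WAL shipping: commit() returns as soon as
# the record is in the local WAL; a background thread streams records
# to the standby, which therefore lags slightly behind the primary.
import queue
import threading

class Standby:
    def __init__(self):
        self.wal = []

    def apply(self, record):
        self.wal.append(record)        # standby is *eventually* up to date

class Primary:
    def __init__(self, standby):
        self.wal = []                  # local write-ahead log
        self.outbox = queue.Queue()    # records awaiting shipment
        self.standby = standby
        threading.Thread(target=self._stream, daemon=True).start()

    def commit(self, record):
        self.wal.append(record)        # write WAL locally...
        self.outbox.put(record)        # ...queue it for the standby...
        return "success"               # ...and return without waiting

    def _stream(self):
        while True:                    # ship queued records in the background
            record = self.outbox.get()
            self.standby.apply(record)
            self.outbox.task_done()

standby = Standby()
primary = Primary(standby)
print(primary.commit("INSERT ..."))    # returns immediately: success
primary.outbox.join()                  # demo only: wait for the stream
print(standby.wal == primary.wal)      # True once the standby catches up
```

The synchronous variants (#2 through #6) differ only in where `commit()` would block: after the network send, after the standby's write, or (for #6) after the standby has applied the record, which is what makes the reconnect-to-another-server scenario safe at the cost of latency.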