On 04.10.2010 10:03, Markus Wanner wrote:
> On 09/30/2010 04:54 PM, Yeb Havinga wrote:
>> Heikki Linnakangas wrote:
>>> You do realize that to be able to guarantee zero data loss, the master
>>> will have to stop committing new transactions if the streaming stops
>>> for any reason, like a network glitch. Maybe that's a tradeoff you
>>> want, but I'm asking because that point isn't clear to many people.
>> If there's a network glitch, it'd probably affect networked client
>> connections as well, so it would mean no extra degradation of service.
> Agreed.
>
> I think the network glitch example is too general: it could affect any
> part of the whole network, or just the connection between the master
> and the standby, in which case all client connections would keep up.
>
> Let's quickly think about that scenario. AFAIU in such a case, the
> standby would continue to answer read-only queries, independent of what
> the master does, right?

Right.

> Or does the standby stop processing read-only
> queries in case it loses connection to the master?

As far as the current proposals go, no.

> It seems to me the latter is required, if we let the master continue to
> commit transactions. Otherwise the standby would serve stale data to its
> clients without knowing.

Yep. If you want to guarantee that a hot standby never returns stale
data, then when the connection is lost you need to either stop processing
read-only queries in the standby, or stop processing commits in the master.
Note that this assumes that you use the 'replay' synchronization level.
In the weaker levels, read-only queries can always return stale data.
With the 'replay' and hot standby combination, you'll want to set
max_standby_archive_delay to a very low value, or a long-running read-only
query can cause the master to stop processing commits (or the standby to
stop accepting new queries, if that's preferred).
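
For reference, a minimal sketch of the standby-side settings this implies,
using the two conflict-delay GUCs that exist as of 9.0 (the 'replay'
synchronization level itself is still only a proposal, so no master-side
setting is shown):

```
# postgresql.conf on the hot standby
# A zero delay makes the standby cancel conflicting read-only queries
# immediately instead of pausing WAL replay -- which, under a 'replay'
# sync level, would in turn stall commits on the master.
max_standby_archive_delay = 0
max_standby_streaming_delay = 0
```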
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com