On Fri, Oct 8, 2010 at 12:29 PM, Josh Berkus wrote:
> On 10/07/2010 06:38 PM, Robert Haas wrote:
>>
>> Yes, let's please just implement something simple and get it
>> committed. k = 1. Two GUCs (synchronous_standbys = name, name, name
>> and synchronous_waitfor = none|recv|fsync|apply), SUSET so you can
>> change it per txn.
On 10/07/2010 06:38 PM, Robert Haas wrote:
Yes, let's please just implement something simple and get it
committed. k = 1. Two GUCs (synchronous_standbys = name, name, name
and synchronous_waitfor = none|recv|fsync|apply), SUSET so you can
change it per txn. Done. We can revise it *the day after it's
committed* if we
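Concretely, the two proposed GUCs could look like this in the master's postgresql.conf. This is a sketch of the proposal exactly as worded in the thread; these GUC names were never committed in this form:

```ini
# master's postgresql.conf -- proposed syntax from this thread, not a released feature
synchronous_standbys = 'boston1, oxford1'   # named standbys eligible to acknowledge
synchronous_waitfor  = 'fsync'              # none | recv | fsync | apply
```

Because the second GUC is SUSET, a superuser could (hypothetically) relax the guarantee for a single transaction with something like `SET LOCAL synchronous_waitfor = 'none'` before a low-value commit.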
On Fri, Oct 8, 2010 at 4:29 AM, Yeb Havinga wrote:
> Robert Haas wrote:
>>
>> Yes, let's please just implement something simple and get it
>> committed. k = 1. Two GUCs (synchronous_standbys = name, name, name
>> and synchronous_waitfor = none|recv|fsync|apply), SUSET so you can
>> change it per txn. Done.
Robert Haas wrote:
Yes, let's please just implement something simple and get it
committed. k = 1. Two GUCs (synchronous_standbys = name, name, name
and synchronous_waitfor = none|recv|fsync|apply), SUSET so you can
change it per txn. Done. We can revise it *the day after it's
committed* if we
On Fri, Oct 8, 2010 at 10:38 AM, Robert Haas wrote:
> Yes, let's please just implement something simple and get it
> committed. k = 1. Two GUCs (synchronous_standbys = name, name, name
> and synchronous_waitfor = none|recv|fsync|apply)
For my cases, I'm OK with this as the first commit, for now
On 07.10.2010 21:33, Josh Berkus wrote:
1) This version of Standby Registration seems to add One More Damn Place
You Need To Configure Standby (OMDPYNTCS) without adding any
functionality you couldn't get *without* having a list on the master.
Can someone explain to me what functionality is added
On 07.10.2010 23:56, Greg Stark wrote:
On Thu, Oct 7, 2010 at 10:27 AM, Heikki Linnakangas
wrote:
The standby name is a GUC in the standby's configuration file:
standby_name='bostonserver'
Fwiw I was hoping it would be possible to set every machine up with an
identical postgresql.conf file
On Thu, Oct 7, 2010 at 7:15 PM, Greg Smith wrote:
> Josh Berkus wrote:
>>
>> This version of Standby Registration seems to add One More Damn Place
>> You Need To Configure Standby (OMDPYNTCS) without adding any
>> functionality you couldn't get *without* having a list on the master.
>> Can someone explain to me what functionality is added by this approach
Josh Berkus wrote:
This version of Standby Registration seems to add One More Damn Place
You Need To Configure Standby (OMDPYNTCS) without adding any
functionality you couldn't get *without* having a list on the master.
Can someone explain to me what functionality is added by this approach
vs. no
On Thu, Oct 7, 2010 at 10:27 AM, Heikki Linnakangas
wrote:
> The standby name is a GUC in the standby's configuration file:
>
> standby_name='bostonserver'
>
Fwiw I was hoping it would be possible to set every machine up with an
identical postgresql.conf file. That doesn't preclude this idea since
On Thu, Oct 7, 2010 at 2:33 PM, Josh Berkus wrote:
>> I think they work together fine. Greg's idea is that you list the
>> important standbys, and a synchronization guarantee that you'd like to
>> have for at least one of them. Simon's idea - at least at 10,000 feet
>> - is that you can take a pass on that guarantee for transactions that
> I think they work together fine. Greg's idea is that you list the
> important standbys, and a synchronization guarantee that you'd like to
> have for at least one of them. Simon's idea - at least at 10,000 feet
> - is that you can take a pass on that guarantee for transactions that
> don't need
On Thu, Oct 7, 2010 at 1:27 PM, Heikki Linnakangas
wrote:
> Let me check that I got this right, and add some details to make it more
> concrete: Each standby is given a name. It can be something like "boston1"
> or "testserver". It does *not* have to be unique across all standby servers.
> In the
On Thu, Oct 7, 2010 at 1:45 PM, Josh Berkus wrote:
> On 10/7/10 10:27 AM, Heikki Linnakangas wrote:
>> The standby name is a GUC in the standby's configuration file:
>>
>> standby_name='bostonserver'
>>
>> The list of important nodes is also a GUC, in the master's configuration
>> file:
>>
>> synchronous_standbys='bostonserver, oxfordserver'
On Thu, Oct 7, 2010 at 1:39 PM, Dave Page wrote:
> On 10/7/10, Heikki Linnakangas wrote:
>> On 06.10.2010 19:26, Greg Smith wrote:
>>> Now, the more relevant question, what I actually need in order for a
>>> Sync Rep feature in 9.1 to be useful to the people who want it most I
>>> talk to. That would be a simple to configure setup where I list a subset
>>> of "important" nodes
On 10/7/10 10:27 AM, Heikki Linnakangas wrote:
> The standby name is a GUC in the standby's configuration file:
>
> standby_name='bostonserver'
>
> The list of important nodes is also a GUC, in the master's configuration
> file:
>
> synchronous_standbys='bostonserver, oxfordserver'
This seems t
On 10/7/10, Heikki Linnakangas wrote:
> On 06.10.2010 19:26, Greg Smith wrote:
>> Now, the more relevant question, what I actually need in order for a
>> Sync Rep feature in 9.1 to be useful to the people who want it most I
>> talk to. That would be a simple to configure setup where I list a subset
On 06.10.2010 19:26, Greg Smith wrote:
Now, the more relevant question, what I actually need in order for a
Sync Rep feature in 9.1 to be useful to the people who want it most I
talk to. That would be a simple to configure setup where I list a subset
of "important" nodes, and the appropriate acknowledgement
On Wed, Oct 6, 2010 at 12:26 PM, Greg Smith wrote:
> Now, the more relevant question, what I actually need in order for a Sync
> Rep feature in 9.1 to be useful to the people who want it most I talk to.
> That would be a simple to configure setup where I list a subset of
> "important" nodes, and
Josh Berkus wrote:
However, I think we're getting way the heck away from how far we
really want to go for 9.1. Can I point out to people that synch rep
is going to involve a fair bit of testing and debugging, and that
maybe we don't want to try to implement The World's Most Configurable
Standby
On Tue, Oct 5, 2010 at 2:30 PM, Simon Riggs wrote:
> On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:
>> Much of the engineering we are doing centers around use cases that are
>> considerably more complex than what most people will do in real life.
>
> Why are we doing it then?
Because some
On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:
> Much of the engineering we are doing centers around use cases that are
> considerably more complex than what most people will do in real life.
Why are we doing it then?
What I have proposed behaves identically to Oracle Maximum Availability
On Tue, Oct 5, 2010 at 12:40 PM, Simon Riggs wrote:
>> Well, you only need to have the file at all on nodes you want to fail
>> over to. And aren't you going to end up rejiggering the config when
>> you fail over anyway, based on what happened? I mean, suppose you
>> have three servers and you r
On Tue, 2010-10-05 at 11:46 -0400, Robert Haas wrote:
> On Tue, Oct 5, 2010 at 10:46 AM, Simon Riggs wrote:
> > On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:
> >> >>
> >> >> When you have one server functioning at each site you'll block until
> >> >> you get a third machine back, rather than replicating to both sites
> >> >> and remaining functional.
On Tue, Oct 5, 2010 at 10:46 AM, Simon Riggs wrote:
> On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:
>> >>
>> >> When you have one server functioning at each site you'll block until
>> >> you get a third machine back, rather than replicating to both sites
>> >> and remaining functional.
>>
On Tue, 2010-10-05 at 09:56 -0500, Kevin Grittner wrote:
> Simon Riggs wrote:
>
> > Is it a common use case that people have more than 3 separate
> > servers for one application, which is where the difference shows
> > itself.
>
> I don't know how common it is, but we replicate circuit court data
> to two machines each at two sites.
Another check: does specifying replication by server in such detail mean
we can't specify robustness at the transaction level? If we gave up that
feature, it would be a great loss for performance tuning.
It's orthogonal. The kinds of configurations we're talking about simply
define what it wi
Simon Riggs wrote:
> Is it a common use case that people have more than 3 separate
> servers for one application, which is where the difference shows
> itself.
I don't know how common it is, but we replicate circuit court data
to two machines each at two sites. That way a disaster which took
On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:
> >>
> >> When you have one server functioning at each site you'll block until
> >> you get a third machine back, rather than replicating to both sites
> >> and remaining functional.
> >
> > And that is so important a consideration that you woul
On Tue, Oct 5, 2010 at 10:33 AM, Simon Riggs wrote:
> On Tue, 2010-10-05 at 09:07 -0500, Kevin Grittner wrote:
>> Simon Riggs wrote:
>> > Robert Haas wrote:
>> >> Simon Riggs wrote:
>> >>> Josh Berkus wrote:
>> >>> Quorum commit, even with configurable vote weights, can't
>> >>> handle a
On Tue, 2010-10-05 at 09:07 -0500, Kevin Grittner wrote:
> Simon Riggs wrote:
> > Robert Haas wrote:
> >> Simon Riggs wrote:
> >>> Josh Berkus wrote:
> >>> Quorum commit, even with configurable vote weights, can't
> >>> handle a requirement that a particular commit be replicated
> >>>
On 10/05/2010 04:07 PM, Kevin Grittner wrote:
> When you have one server functioning at each site you'll block until
> you get a third machine back, rather than replicating to both sites
> and remaining functional.
That's not a very likely failure scenario, but yes.
What if the admin wants to add
Simon Riggs wrote:
> Robert Haas wrote:
>> Simon Riggs wrote:
>>> Josh Berkus wrote:
>>> Quorum commit, even with configurable vote weights, can't
>>> handle a requirement that a particular commit be replicated
>>> to (A || B) && (C || D).
>> Good point.
>>>
>>> Asking for quorum_commit = 3 would cover that requirement.
On Tue, 2010-10-05 at 08:57 -0400, Robert Haas wrote:
> On Tue, Oct 5, 2010 at 8:34 AM, Simon Riggs wrote:
> > On Mon, 2010-10-04 at 12:45 -0700, Josh Berkus wrote:
> >> >>> Quorum commit, even with configurable vote weights, can't handle a
> >> >>> requirement that a particular commit be replicated to (A || B) && (C || D).
On Tue, Oct 5, 2010 at 8:34 AM, Simon Riggs wrote:
> On Mon, 2010-10-04 at 12:45 -0700, Josh Berkus wrote:
>> >>> Quorum commit, even with configurable vote weights, can't handle a
>> >>> requirement that a particular commit be replicated to (A || B) && (C
>> >>> || D).
>> >> Good point.
>
> Asking for quorum_commit = 3 would cover that requirement.
On Mon, 2010-10-04 at 12:45 -0700, Josh Berkus wrote:
> >>> Quorum commit, even with configurable vote weights, can't handle a
> >>> requirement that a particular commit be replicated to (A || B) && (C
> >>> || D).
> >> Good point.
Asking for quorum_commit = 3 would cover that requirement.
Not ex
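The claim and the "not exactly" rejoinder can be checked mechanically. A small illustrative script (the standby names A–D come from the thread's example; the script itself is mine, not part of any patch):

```python
# Does quorum_commit = 3 over standbys {A, B, C, D} guarantee the
# requirement (A || B) && (C || D)?
from itertools import combinations

standbys = {'A', 'B', 'C', 'D'}

def rule(acked):
    # the requirement under discussion: (A || B) && (C || D)
    return bool({'A', 'B'} & acked) and bool({'C', 'D'} & acked)

# Sufficient: every possible set of 3 acknowledgers satisfies the rule.
assert all(rule(set(c)) for c in combinations(standbys, 3))

# But not necessary: {A, C} already satisfies the rule with only 2 acks,
# so quorum_commit = 3 waits for more standbys than the rule strictly demands.
assert rule({'A', 'C'})
```

So a quorum of 3 covers the requirement, but is stricter than it, which is presumably where the "not exactly" was headed.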
On Mon, 2010-10-04 at 14:25 -0500, David Christensen wrote:
> Is there any benefit to be had from having standby roles instead of
> individual names? For instance, you could integrate this into quorum
> commit to express 3 of 5 "reporting" standbys, 1 "berlin" standby and
> 1 "tokyo" standby from
Josh Berkus writes:
Quorum commit, even with configurable vote weights, can't handle a
requirement that a particular commit be replicated to (A || B) && (C
|| D).
>>> Good point.
So I've been trying to come up with something manually and failed. I
blame the fever — without it maybe
On 10/04/2010 11:32 PM, Robert Haas wrote:
> I think in the end
> this is not much different from standby registration; you still have
> registrations, they just represent groups of machines instead of
> single machines.
Such groups are often easy to represent in CIDR notation, which would
reduce
On Mon, Oct 4, 2010 at 3:25 PM, David Christensen wrote:
>
> On Oct 4, 2010, at 2:02 PM, Robert Haas wrote:
>
>> On Mon, Oct 4, 2010 at 1:57 PM, Markus Wanner wrote:
>>> On 10/04/2010 05:20 PM, Robert Haas wrote:
Quorum commit, even with configurable vote weights, can't handle a
requirement that a particular commit be replicated to (A || B) && (C || D).
>>> Quorum commit, even with configurable vote weights, can't handle a
>>> requirement that a particular commit be replicated to (A || B) && (C
>>> || D).
>> Good point.
If this is the only feature which standby registration is needed for,
has anyone written the code for it yet? Is anyone planning
On Oct 4, 2010, at 2:02 PM, Robert Haas wrote:
> On Mon, Oct 4, 2010 at 1:57 PM, Markus Wanner wrote:
>> On 10/04/2010 05:20 PM, Robert Haas wrote:
>>> Quorum commit, even with configurable vote weights, can't handle a
>>> requirement that a particular commit be replicated to (A || B) && (C
>>>
On Mon, Oct 4, 2010 at 1:57 PM, Markus Wanner wrote:
> On 10/04/2010 05:20 PM, Robert Haas wrote:
>> Quorum commit, even with configurable vote weights, can't handle a
>> requirement that a particular commit be replicated to (A || B) && (C
>> || D).
>
> Good point.
>
> Can the proposed standby registration configuration format cover such a
> requirement?
On 10/04/2010 05:20 PM, Robert Haas wrote:
> Quorum commit, even with configurable vote weights, can't handle a
> requirement that a particular commit be replicated to (A || B) && (C
> || D).
Good point.
Can the proposed standby registration configuration format cover such a
requirement?
Regards
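One way an ini-style registration file could express that requirement, sketched purely to answer the question above. The syntax is hypothetical; nothing like it was implemented or even formally proposed:

```ini
; hypothetical standby.conf sketch: commit waits until each group's quorum is met
[group "east"]
standbys = A, B
quorum   = 1

[group "west"]
standbys = C, D
quorum   = 1
```

Requiring one acknowledgement from each group encodes (A || B) && (C || D) directly, which a single flat quorum number cannot.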
On Mon, Oct 4, 2010 at 3:08 AM, Markus Wanner wrote:
> On 10/01/2010 05:06 PM, Dimitri Fontaine wrote:
>> Wait forever can be done without standby registration, with quorum commit.
>
> Yeah, I also think the only reason for standby registration is ease of
> configuration (if at all). There's no te
On Thu, Sep 30, 2010 at 11:32 PM, Heikki Linnakangas
wrote:
> The standby can already use restore_command to fetch WAL files from the
> archive. I don't see why the master should be involved in that.
To make the standby use restore_command to do that, you have to specify
something like scp in archive_command
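The mechanism being referred to, sketched for a 9.0-era setup. The archive host and paths are placeholders; %p and %f are PostgreSQL's standard substitutions for the segment's path and file name:

```ini
# master's postgresql.conf: push each finished WAL segment to an archive host
archive_mode    = on
archive_command = 'scp %p archivehost:/var/lib/pgarchive/%f'

# standby's recovery.conf: fetch segments back from the same archive
restore_command = 'scp archivehost:/var/lib/pgarchive/%f %p'
```

The point in the thread is that needing scp (or similar) in both commands is exactly the extra configuration burden under debate.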
On 29.09.2010 11:46, Fujii Masao wrote:
Aside from standby registration itself, I have another thought for C). Keeping
many WAL files in pg_xlog of the master is not good design in the first place.
I cannot believe that pg_xlog in most systems has enough capacity to store many
WAL files for the s
On Thu, Sep 23, 2010 at 6:49 PM, Dimitri Fontaine
wrote:
> Automatic registration is a good answer to both your points A)
> monitoring and C) wal_keep_segments, but needs some more thinking wrt
> security and authentication.
Aside from standby registration itself, I have another thought for C). Keeping
many WAL files in pg_xlog of the master is not good design in the first place.
Heikki Linnakangas writes:
> There's two separate concepts here:
>
> 1. Automatic registration. When a standby connects, its information gets
> permanently added to standby.conf file
>
> 2. Unregistered standbys. A standby connects, and its information is not in
> standby.conf. It's let in anyway,
On 23/09/10 12:49, Dimitri Fontaine wrote:
Heikki Linnakangas writes:
The consensus seems to be use a configuration file called
standby.conf. Let's use the "ini file format" for now [1].
What about automatic registration of standbys? That's not going to fly
with the unique global configuration
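For reference, a minimal standby.conf in the proposed "ini file format" might have looked like this. Entry names and keys are illustrative only; the format was never finalized:

```ini
; standby.conf on the master -- one section per registered standby
[bostonserver]
synchronous = yes

[testserver]
synchronous = no
```

Automatic registration, as raised above, would mean the master appends such a section itself when an unknown standby first connects.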
Heikki Linnakangas writes:
> Having mulled through all the recent discussions on synchronous replication,
> ISTM there is pretty wide consensus that having a registry of all standbys
> in the master is a good idea. Even those who don't think it's *necessary*
for synchronous replication seem to agree
On 23/09/10 12:32, Dimitri Fontaine wrote:
Heikki Linnakangas writes:
Hmm, that situation can arise if there's a network glitch which leads the
standby to disconnect, but the master still considers the connection as
alive. When the standby reconnects, the master will see two simultaneous
connections from the same standby.
Heikki Linnakangas writes:
> Hmm, that situation can arise if there's a network glitch which leads the
> standby to disconnect, but the master still considers the connection as
> alive. When the standby reconnects, the master will see two simultaneous
> connections from the same standby. In that s
Heikki Linnakangas wrote:
> (starting yet another thread to stay focused)
>
> Having mulled through all the recent discussions on synchronous
> replication, ISTM there is pretty wide consensus that having a registry
> of all standbys in the master is a good idea. Even those who don't think
> it
On Wed, Sep 22, 2010 at 10:19 AM, Heikki Linnakangas
wrote:
>>> Should we allow multiple standbys with the same name to connect to
>>> the master?
>>
>> No. The point of naming them is to uniquely identify them.
>
> Hmm, that situation can arise if there's a network glitch which leads the
> standby to disconnect, but the master still considers the connection as
> alive.
On Wed, Sep 22, 2010 at 10:19 AM, Heikki Linnakangas
wrote:
>> No. The point of naming them is to uniquely identify them.
>
> Hmm, that situation can arise if there's a network glitch which leads the
> standby to disconnect, but the master still considers the connection as
> alive. When the standby reconnects, the master will see two simultaneous
> connections from the same standby.
On 22/09/10 16:54, Robert Haas wrote:
On Wed, Sep 22, 2010 at 8:21 AM, Fujii Masao wrote:
What if the number of standby entries in standby.conf is more than
max_wal_senders? This situation is allowed if we treat standby.conf
as just access control list like pg_hba.conf. But if we have to ensure
On Wed, Sep 22, 2010 at 8:21 AM, Fujii Masao wrote:
> What if the number of standby entries in standby.conf is more than
> max_wal_senders? This situation is allowed if we treat standby.conf
> as just access control list like pg_hba.conf. But if we have to ensure
> that all the registered standbys
On Wed, Sep 22, 2010 at 5:43 PM, Heikki Linnakangas
wrote:
> So let's put synchronous replication aside for now, and focus on standby
> registration first. Once we have that, the synchronous replication patch
> will be much smaller and easier to review.
Though I agree with standby registration, I
(starting yet another thread to stay focused)
Having mulled through all the recent discussions on synchronous
replication, ISTM there is pretty wide consensus that having a registry
of all standbys in the master is a good idea. Even those who don't think
it's *necessary* for synchronous replication