Fujii,
>
> You mean that "remaster" is, after promoting one of standby servers,
> to make
> remaining standby servers reconnect to new master and resolve the
> timeline
> gap without the shared archive? Yep, that's one of my TODO items, but
> I'm not
> sure if I have enough time to implement that
> It might be easy to detect the situation where the standby has
> connected to itself,
> e.g., by assigning ID for each instance and checking whether IDs of
> two servers
> are the same. But it seems not easy to detect the
> circularly-connected
> two or more
> standbys.
Well, I think it would b
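As a rough illustration of the simpler case Fujii mentions (a standby connected directly to itself), a manual check along these lines would spot it; the data directory path is hypothetical, this is not the per-instance-ID scheme he proposes, and it cannot catch a loop of two or more standbys:

    PGDATA=/var/lib/postgresql/9.2/standby          # hypothetical data directory
    grep primary_conninfo "$PGDATA/recovery.conf"   # upstream host/port this standby streams from
    grep '^port' "$PGDATA/postgresql.conf"          # the port this same instance listens on
    # If both point at the same host and port, the standby is streaming from itself.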
On Fri, May 18, 2012 at 3:57 AM, Joshua Berkus wrote:
> Yeah, I don't know how I produced the crash in the first place, because of
> course the self-replica should block all writes, and retesting it I can't get
> it to accept a write. Not sure how I did it in the first place.
>
> So the bug is just that you can connect a server to itself as its own replica.
Yeah, I don't know how I produced the crash in the first place, because of
course the self-replica should block all writes, and retesting it I can't get
it to accept a write. Not sure how I did it in the first place.
So the bug is just that you can connect a server to itself as its own replica.
On Thu, May 17, 2012 at 10:42 PM, Ants Aasma wrote:
> On Thu, May 17, 2012 at 3:42 PM, Joshua Berkus wrote:
>> Even more fun:
>>
>> 1) Set up a server as a cascading replica (e.g. max_wal_senders = 3,
>> standby_mode = on )
>>
>> 2) Connect the server to *itself* as a replica.
>>
>> 3) This will work and report success, up until you do your first write.
On Thu, May 17, 2012 at 12:01 PM, Joshua Berkus wrote:
>
>> > And: if we still have to ship logs, what's the point in even having
>> > cascading replication?
>>
>> At least cascading replication (1) allows you to adopt more flexible
>> configuration of servers,
>
> I'm just pretty shocked. The la
On Thu, May 17, 2012 at 3:42 PM, Joshua Berkus wrote:
> Even more fun:
>
> 1) Set up a server as a cascading replica (e.g. max_wal_senders = 3,
> standby_mode = on )
>
> 2) Connect the server to *itself* as a replica.
>
> 3) This will work and report success, up until you do your first write.
>
>
Jim, Fujii,
Even more fun:
1) Set up a server as a cascading replica (e.g. max_wal_senders = 3, standby_mode = on)
2) Connect the server to *itself* as a replica.
3) This will work and report success, up until you do your first write.
4) Then ... segfault!
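For reference, a minimal single-server configuration matching steps 1 and 2 might look like the following; only max_wal_senders and standby_mode come from the report, the rest (wal_level, hot_standby, the loopback address and port) are illustrative assumptions:

    # postgresql.conf on the test server
    wal_level = hot_standby
    hot_standby = on
    max_wal_senders = 3
    port = 5432

    # recovery.conf in the same data directory -- primary_conninfo points back
    # at this very server, which is the self-connection of step 2
    standby_mode = 'on'
    primary_conninfo = 'host=127.0.0.1 port=5432'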
- Original Message -
> > And: if we still have to ship logs, what's the point in even having
> > cascading replication?
>
> At least cascading replication (1) allows you to adopt more flexible
> configuration of servers,
I'm just pretty shocked. The last time we talked about this, at the end of the
9.1 development
On Thu, May 17, 2012 at 1:07 AM, Thom Brown wrote:
> On 16 May 2012 11:36, Fujii Masao wrote:
>> On Wed, May 16, 2012 at 2:29 AM, Thom Brown wrote:
>>> On 15 May 2012 13:15, Fujii Masao wrote:
On Wed, May 16, 2012 at 1:36 AM, Thom Brown wrote:
> However, this isn't true when I restart the standby. I've been informed
> that this should work fine if a WAL archive has been configured (which
> should be used anyway).
On 5/16/12 10:53 AM, Fujii Masao wrote:
On Wed, May 16, 2012 at 3:43 AM, Joshua Berkus wrote:
Before restarting it, you need to do pg_basebackup and make a base backup
onto the standby again. Since you started the standby without recovery.conf,
a series of WAL in the standby has gotten inconsistent with that in the master.
Well, that is a form of testing. :)
My point was that we need some kind of regression tests around all the new
replication stuff, and if you had some scripts, they would be a useful starting
point. But it sounds like you haven't gotten that far with it, so...
On 5/15/12 10:12 AM, Joshua Berkus wrote:
On 16 May 2012 11:36, Fujii Masao wrote:
> On Wed, May 16, 2012 at 2:29 AM, Thom Brown wrote:
>> On 15 May 2012 13:15, Fujii Masao wrote:
>>> On Wed, May 16, 2012 at 1:36 AM, Thom Brown wrote:
However, this isn't true when I restart the standby. I've been
informed that this should work fine if a WAL archive has been configured
(which should be used anyway).
On Wed, May 16, 2012 at 3:43 AM, Joshua Berkus wrote:
>
>> Before restarting it, you need to do pg_basebackup and make a base
>> backup
>> onto the standby again. Since you started the standby without
>> recovery.conf,
>> a series of WAL in the standby has gotten inconsistent with that in
>> the master. So you need a fresh backup to restart the standby.
On Wed, May 16, 2012 at 3:42 AM, Joshua Berkus wrote:
> Fujii,
>
> Wait, are you telling me that we *still* can't remaster from streaming
> replication?
What's the "remaster"?
> And: if we still have to ship logs, what's the point in even having cascading
> replication?
At least cascading rep
On Wed, May 16, 2012 at 2:29 AM, Thom Brown wrote:
> On 15 May 2012 13:15, Fujii Masao wrote:
>> On Wed, May 16, 2012 at 1:36 AM, Thom Brown wrote:
>>> However, this isn't true when I restart the standby. I've been
>>> informed that this should work fine if a WAL archive has been
>>> configured
> Before restarting it, you need to do pg_basebackup and make a base backup
> onto the standby again. Since you started the standby without recovery.conf,
> a series of WAL in the standby has gotten inconsistent with that in the
> master. So you need a fresh backup to restart the standby.
Y
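Fujii's advice above, spelled out as a command sequence; the host name, replication user and directories are hypothetical, and the -x stream option is the one used elsewhere in this thread:

    pg_ctl -D /data/standby stop -m fast
    rm -rf /data/standby/*                          # discard the now-inconsistent cluster
    pg_basebackup -h master.example.com -U repluser -D /data/standby -x stream -P
    # recreate recovery.conf (standby_mode = 'on', primary_conninfo = '...') here
    pg_ctl -D /data/standby start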
Fujii,
Wait, are you telling me that we *still* can't remaster from streaming
replication? Why wasn't that fixed in 9.2?
And: if we still have to ship logs, what's the point in even having cascading
replication?
- Original Message -
> On Wed, May 16, 2012 at 1:36 AM, Thom Brown wrote:
On 15 May 2012 13:15, Fujii Masao wrote:
> On Wed, May 16, 2012 at 1:36 AM, Thom Brown wrote:
>> However, this isn't true when I restart the standby. I've been
>> informed that this should work fine if a WAL archive has been
>> configured (which should be used anyway).
>
> The WAL archive should
On Mon, May 14, 2012 at 4:04 AM, Josh Berkus wrote:
> Doing some beta testing, managed to produce this issue using the daily
> snapshot from Tuesday:
>
> 1. Created master server, loaded it with a couple dummy databases.
>
> 2. Created standby server.
>
> 3. Did pg_basebackup -x stream on standby
On Wed, May 16, 2012 at 1:36 AM, Thom Brown wrote:
> However, this isn't true when I restart the standby. I've been
> informed that this should work fine if a WAL archive has been
> configured (which should be used anyway).
The WAL archive should be shared by master-replica and replica-replica,
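A sketch of such a shared archive, assuming a mount like /shared/wal_archive reachable from all three nodes (the path and the cp-based commands are illustrative, not from the thread):

    # postgresql.conf on master-master (and on any standby that may later be promoted)
    archive_mode = on
    archive_command = 'cp %p /shared/wal_archive/%f'

    # recovery.conf on master-replica and on replica-replica
    standby_mode = 'on'
    restore_command = 'cp /shared/wal_archive/%f %p'
    # primary_conninfo still points at each node's own upstream for streaming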
On 13 May 2012 16:08, Josh Berkus wrote:
> More issues: promoting intermediate standby breaks replication.
>
> To be a bit blunt here, has anyone tested cascading replication *at all*
> before this?
>
> So, same setup as previous message.
>
> 1. Shut down master-master.
>
> 2. pg_ctl promote master-replica
Jim,
I didn't get as far as running any tests, actually. All I did was try to set
up 3 servers in cascading replication. Then I tried shutting down
master-master and promoting master-replica. That's it.
- Original Message -
> On May 13, 2012, at 3:08 PM, Josh Berkus wrote:
> > More issues: promoting intermediate standby breaks replication.
On May 13, 2012, at 3:08 PM, Josh Berkus wrote:
> More issues: promoting intermediate standby breaks replication.
>
> To be a bit blunt here, has anyone tested cascading replication *at all*
> before this?
Josh, do you have scripts that you're using to do this testing? If so can you
post them so
On 13 May 2012 20:23, Josh Berkus wrote:
> More issues: the pg_basebackup -x stream on the cascading replica won't
> complete until the xlog rotates on the master. (again, this is
> Tuesday's snapshot).
This is already on the open items list:
http://wiki.postgresql.org/wiki/PostgreSQL_9.2_Open_I
More issues: promoting intermediate standby breaks replication.
To be a bit blunt here, has anyone tested cascading replication *at all*
before this?
So, same setup as previous message.
1. Shut down master-master.
2. pg_ctl promote master-replica
3. replication breaks. error message on replic
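Spelled out as commands (data directories hypothetical), the failing sequence is roughly the following; on 9.2 the remaining standby can only pick up the new timeline's history file from an archive via restore_command, as in the shared-archive sketch earlier, so without one it stops here:

    pg_ctl -D /data/master-master stop -m fast     # 1. stop the writeable master
    pg_ctl -D /data/master-replica promote         # 2. promote the intermediate standby
    # 3. replica-replica now needs the new timeline history; with no shared
    #    archive configured, streaming alone cannot supply it and replication breaks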
More issues: the pg_basebackup -x stream on the cascading replica won't
complete until the xlog rotates on the master. (again, this is
Tuesday's snapshot).
Servers:
.226 == master-master, the writeable master
.227 == master-replica, a direct replica of master-master
.228 == replica-replica, a cascading replica of master-replica
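A possible way to unblock the waiting backup in the meantime is to force a WAL segment switch on the writeable master (host name hypothetical):

    psql -h master-master -c "SELECT pg_switch_xlog();"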