[GENERAL] pg_start_backup does not actually allow for consistent, file-level backup

2015-06-08 Thread otheus uibk
In the manual and on this mailing list, the claim is made that consistent, file-level backups can be made by bracketing the file-copy operation with the PostgreSQL pg_start_backup and pg_stop_backup operations. Many people, including myself, have found that in some circumstances, using "tar" to copy th
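The procedure under discussion can be sketched as follows (a sketch of the 9.1-era exclusive low-level backup; the label, paths, and user are illustrative, not from the thread):

```shell
# Exclusive low-level backup as described in the 9.1-era documentation (sketch).
# 'nightly', the data directory, and the archive path are illustrative.
psql -U postgres -c "SELECT pg_start_backup('nightly');"
tar -czf /backup/pgdata.tar.gz -C /var/lib/pgsql/9.1 data
psql -U postgres -c "SELECT pg_stop_backup();"
```

The point of contention in the thread is that files continue to change between the two calls; the documented guarantee is only that WAL replay during recovery makes the restored copy consistent.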

Re: [GENERAL] pg_start_backup does not actually allow for consistent, file-level backup

2015-06-08 Thread otheus uibk
Thank you, all. The manual for 9.4 is indeed clearer on this point than the 9.1 version.

Re: [GENERAL] pg_start_backup does not actually allow for consistent, file-level backup

2015-06-08 Thread otheus uibk
On Mon, Jun 8, 2015 at 3:13 PM, otheus uibk wrote: > Thank you, all. The manual for 9.4 is indeed clearer on this point than > the 9.1 version. > Just to nit-pick, I see nowhere in either version of the manual any indication that it is normal for postgresql to continue to update fil

[GENERAL] Unexpected trouble from pg_basebackup

2016-10-04 Thread otheus uibk
I recently updated my systems from pg 9.1.8 to 9.5.3. A pg_dumpall was used to migrate the data. Now I'm trying to re-establish replication between master and slave, but I'm getting stuck. When I run pg_basebackup (via a script which worked flawlessly on 9.1.8, AND via the command line, a la "manual mode")
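For reference, the kind of invocation being described looks roughly like this on 9.5 (a sketch; host, user, and target directory are illustrative):

```shell
# Streaming base backup to seed a 9.5 standby (illustrative values).
# -X stream pulls WAL over a second connection; -P shows progress.
pg_basebackup -h master.example.com -U replicator \
  -D /var/lib/pgsql/9.5/data -X stream -P -v
```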

Re: [GENERAL] Unexpected trouble from pg_basebackup

2016-10-04 Thread otheus uibk
After a 3-to-4-minute delay, pg_basebackup started doing its thing and finished within a few minutes. So now the question is: why the startup delay?

[GENERAL] Feature request: separate logging

2016-11-18 Thread otheus uibk
A glaring weakness in Postgresql for production systems is that the administrator has no way of controlling what types of logs go where. There are at least two types of logs: errors and statement logs. (I could also add: connection, syntax error, query duration, audit). It has become increasingly
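The commonly suggested workaround at the time was to send everything to syslog and let the syslog daemon split the streams by content; a sketch of the PostgreSQL side (facility and levels are illustrative):

```ini
# postgresql.conf: route all server logging through syslog (sketch)
log_destination = 'syslog'
syslog_facility = 'LOCAL0'
log_min_messages = warning
log_statement   = 'all'
```

The actual separation of error lines from statement lines then has to happen in rsyslog/syslog-ng filters, outside PostgreSQL, which is exactly the weakness the post is pointing at.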

[GENERAL] Replaying xlogs from beginning

2016-02-17 Thread otheus uibk
I'm looking for answers to this question, but so far haven't turned up a usable answer. Perhaps I'm asking it the wrong way. I want to replay the xlogs from the beginning of time up until a particular time. The problem is, the time is before the first base backup. But I have all the xlogs since th
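For context, replaying archived WAL to a target time normally looks like this in a 9.x recovery.conf (archive path and timestamp are illustrative); the thread's difficulty is that this presumes a base backup taken *before* the target time:

```ini
# recovery.conf (9.x): restore archived WAL up to a point in time (sketch)
restore_command = 'cp /archive/%f %p'
recovery_target_time = '2016-01-01 00:00:00'
recovery_target_inclusive = false
```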

Re: [GENERAL] Replaying xlogs from beginning

2016-02-17 Thread otheus uibk
I came up with an answer to the _second_ question (how do I do this from a new instance?). In the new instance directory: 1. Hack the system ID in the global/pg_control file to that of the original instance. 1a. Use pg_controldata to get the hex version of the control id: $ pg_controldata
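Step 1a can be sketched like this (paths are illustrative; as the follow-up in this thread warns, editing pg_control by hand is unsupported):

```shell
# Read a cluster's database system identifier (printed in decimal):
pg_controldata /var/lib/pgsql/data | grep 'Database system identifier'

# Same value as hex, for locating it in global/pg_control with a hex editor:
printf '%x\n' "$(pg_controldata /var/lib/pgsql/data \
  | awk '/Database system identifier/ {print $4}')"
```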

Re: [GENERAL] Replaying xlogs from beginning

2016-02-25 Thread otheus uibk
> You're assuming that the only significant aspect of initdb's output that can vary from run to run is the database system ID. I prefer to call it "optimistic prediction". But yes. :) > If you're lucky this technique will work, but it's not reliable and not supported. You really need to take an

[GENERAL] How to Qualifying or quantify risk of loss in asynchronous replication

2016-03-15 Thread otheus uibk
I've been working with PG 9.1.8 for two years now, mainly asynchronous replication. Recently, an IT admin of another group contended that PG's asynchronous replication can result in loss of data in a 1-node failure. After re-reading the documentation, I cannot determine to what extent this is t

Re: [GENERAL] How to Qualifying or quantify risk of loss in asynchronous replication

2016-03-16 Thread otheus uibk
Thomas, thanks for your input... But I'm not quite getting the answer I need.
> But what precisely is the algorithm and timing involved with streaming WALs?
>
> Is it:
>
> * client issues COMMIT
> * master receives commit
> * master processes transaction internally
> * maste

Re: [GENERAL] How to Qualifying or quantify risk of loss in asynchronous replication

2016-03-16 Thread otheus uibk
Apologies for the double-reply... This is to point out the ambiguity between the example you gave and the stated documentation. On Wednesday, March 16, 2016, Thomas Munro wrote: > > Waiting for the transaction to be durably stored (flushed to disk) on > two servers before COMMIT returns means that y
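The durability levels being debated map onto master-side configuration roughly as follows (a sketch; the standby name is illustrative):

```ini
# postgresql.conf on the master (9.1+): make COMMIT wait for the standby
synchronous_standby_names = 'standby1'  # name is illustrative
synchronous_commit = on      # COMMIT waits for flush on master AND standby
# synchronous_commit = local # async to the standby: this is the window in
#                            # which a 1-node failure can lose committed data
```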

Re: [GENERAL] How to Qualifying or quantify risk of loss in asynchronous replication

2016-03-19 Thread otheus uibk
hly)), the WAL may end up sleeping (between iterations of 5 and 6). On Wed, Mar 16, 2016 at 10:21 AM, otheus uibk wrote: > Section 25.2.5. "The standby connects to the primary, which streams WAL > records to the standby as they're generated, without waiting for the WAL > file to

Re: [GENERAL] How to Qualifying or quantify risk of loss in asynchronous replication

2016-03-20 Thread otheus uibk
On Wed, Mar 16, 2016 at 11:51 PM, Adrian Klaver wrote: > > I thought it was already clear: Perhaps "Clarity is in the eye of the beholder". If you are very familiar with the internals and operation of the software, the documentation is clear. It's like hindsight; it's always "20/20". > http: