In the manual and on this mailing list, the claim is made that consistent,
file-level backups may be made by bracketing the file-copy operation with
the PostgreSQL pg_start_backup and pg_stop_backup operations. Many people,
including myself, have found that in some circumstances, using "tar" to copy
th…
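For reference, the bracketing described above looks roughly like this — a minimal sketch, assuming a 9.x-era server, superuser access, and a data directory under /var/lib/pgsql (the label 'nightly' and all paths are placeholders):

```shell
# Mark the start of a file-level base backup (9.x-era function name;
# on PostgreSQL 15+ the calls are pg_backup_start/pg_backup_stop).
psql -U postgres -c "SELECT pg_start_backup('nightly', true);"

# Copy the data directory while the server keeps running. pg_xlog is
# excluded; the WAL needed for recovery must be archived separately.
tar --exclude='data/pg_xlog' -czf /backups/base.tar.gz -C /var/lib/pgsql data

# Mark the end of the backup so the server can finalize it.
psql -U postgres -c "SELECT pg_stop_backup();"
```

Note that a tar of a live data directory is only consistent when combined with archived WAL replayed at restore time — the copied files themselves continue to change during the copy, which is exactly the behavior this thread discusses.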
Thank you, all. The manual for 9.4 is indeed clearer on this point than
the 9.1 version.
On Mon, Jun 8, 2015 at 3:13 PM, otheus uibk wrote:
> Thank you, all. The manual for 9.4 is indeed clearer on this point than
> the 9.1 version.
>
Just to nit-pick: I see nowhere in either version of the manual any
indication that it is normal for PostgreSQL to continue to update fil…
I recently updated my systems from pg 9.1.8 to 9.5.3. A pg_dumpall was used
to migrate the data. Now I'm trying to re-establish replication between
master and slave. I'm getting stuck.
When I run pg_basebackup (via a script which worked flawlessly on 9.1.8,
AND via the command line, a la "manual mode")…
After a 3-to-4-minute delay, pg_basebackup started doing its thing and
finished within a few minutes. So now the question is: why the startup
delay?
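One likely explanation (not confirmed in this thread): by default pg_basebackup asks the server for a spread checkpoint, which can take minutes on a busy cluster. Requesting an immediate checkpoint avoids the wait — a sketch, with host, user, and paths as placeholders:

```shell
# --checkpoint=fast asks the master for an immediate checkpoint instead
# of the default spread one, avoiding a multi-minute startup delay.
# -X stream also streams the WAL generated during the backup.
pg_basebackup -h master.example.com -U replication \
    -D /var/lib/pgsql/standby -X stream --checkpoint=fast -P
```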
A glaring weakness in PostgreSQL for production systems is that the
administrator has no way of controlling what types of logs go where. There
are at least two types of logs: errors and statement logs. (I could also
add: connections, syntax errors, query durations, audit.) It has become
increasingly…
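The stock workaround is to hand the routing problem to syslog — a sketch, with the facility and file paths as assumptions:

```
# postgresql.conf: send all server logs to syslog
log_destination = 'syslog'
syslog_facility = 'LOCAL0'

# /etc/rsyslog.d/postgres.conf: split streams by message content
local0.*                          /var/log/postgres/all.log
:msg, contains, "duration:"       /var/log/postgres/statements.log
```

This is content-based filtering rather than true per-category routing, which is the limitation being complained about.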
I'm looking for answers to this question, but so far haven't turned up a
usable answer. Perhaps I'm asking it the wrong way.
I want to replay the xlogs from the beginning of time up until a particular
time. The problem is, the time is before the first base backup. But I have
all the xlogs since th…
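Assuming a base backup taken early enough exists, replay-to-a-point on 9.x is configured with a recovery.conf fragment like the following (paths and timestamp are placeholders). The poster's actual problem is that no base backup predates the target time, and WAL cannot be replayed into a cluster newer than the WAL itself:

```
# recovery.conf (these settings move into postgresql.conf from version 12 on)
restore_command = 'cp /archive/%f %p'
recovery_target_time = '2015-06-01 12:00:00'
recovery_target_inclusive = false
```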
I came up with an answer to the _second_ question (how do I do this from a
new instance?).
In the new instance directory:
1. Hack the system ID in the global/pg_control file to that of the original
instance.
1a. Use pg_controldata to get the hex version of the system ID:
$ pg_controldata
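The decimal-to-hex step can be sketched like this; the identifier value below is invented, and in practice it comes from the "Database system identifier" line of pg_controldata's output:

```shell
# Example output line from pg_controldata (identifier value is invented):
LINE='Database system identifier:           6153765540073964620'

# Extract the decimal identifier and print it as 16 hex digits -- the
# form you would search for when patching global/pg_control.
SYSID=$(echo "$LINE" | awk '{print $NF}')
printf '%016x\n' "$SYSID"
```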
> You're assuming that the only significant aspect of initdb's output that
> can vary from run to run is the database system ID.
I prefer to call it "optimistic prediction". But yes. :)
> If you're lucky this technique will work, but it's not reliable and not
> supported. You really need to take an…
I've been working with PG 9.1.8 for two years now, mainly with asynchronous
replication. Recently, an IT admin of another group contended that
PG's asynchronous replication can result in loss of data in a 1-node
failure. After re-reading the documentation, I cannot determine to what
extent this is t…
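For context, the claim is correct for asynchronous replication: a transaction acknowledged on the master but not yet streamed to the standby is lost if the master's storage dies. The window can be closed with synchronous replication — a config sketch, where 'standby1' is a placeholder application_name:

```
# postgresql.conf on the master: COMMIT does not return until at least
# the named standby confirms the WAL is flushed to its disk.
synchronous_standby_names = 'standby1'
synchronous_commit = on
```

The trade-off is that commits block if the standby is unreachable.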
Thomas, thanks for your input... But I'm not quite getting the answer I
need.
> But what precisely is the algorithm and timing involved with streaming
> WALs?
>
> Is it:
> * client issues COMMIT
> * master receives commit
> * master processes transaction internally
> * maste…
Apologies for the double-reply... This is to point out the ambiguity
between the example you gave and the stated documentation.
On Wednesday, March 16, 2016, Thomas Munro wrote:
>
> Waiting for the transaction to be durably stored (flushed to disk) on
> two servers before COMMIT returns means that y…
…hly)), the WAL may end up
sleeping (between iterations of 5 and 6).
On Wed, Mar 16, 2016 at 10:21 AM, otheus uibk wrote:
> Section 25.2.5. "The standby connects to the primary, which streams WAL
> records to the standby as they're generated, without waiting for the WAL
> file to
On Wed, Mar 16, 2016 at 11:51 PM, Adrian Klaver wrote:
>
> I thought it was already clear:
Perhaps "Clarity is in the eye of the beholder". If you are very familiar
with the internals and operation of the software, the documentation is
clear. It's like hindsight; it's always "20/20".
> http: