Greetings,
* Rhhh Lin (ruanline...@hotmail.com) wrote:
> I would actually be an advocate for using a proper archive_command in order
> to facilitate a proper (Per the documentation) PITR and backup strategy.
Glad to hear it.
> However, a colleague had suggested such a creative approach (Possibly […]
On Tue, Oct 31, 2017 at 9:53 AM, Rhhh Lin wrote:
> I would actually be an advocate for using a proper archive_command in order
> to facilitate a proper (Per the documentation) PITR and backup strategy.
You should avoid rolling your own fancy archive command. There are
tools like WAL-E that exist for this purpose.
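For reference, the documentation's minimal archive_command has exactly this shape. The sketch below mimics a single invocation of it with temporary stand-in paths (everything created here is illustrative; /mnt/server/archivedir in the comment is the documentation's placeholder):

```shell
#!/bin/sh
# What PostgreSQL runs per completed segment when postgresql.conf has:
#   archive_mode = on
#   archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
# %p is the segment's path, %f its file name. Temp dirs stand in for both.
ARCHIVE=$(mktemp -d)                        # stands in for the archive directory
WAL=$(mktemp -d)/000000010000000000000001   # stands in for %p
echo demo > "$WAL"
f=$(basename "$WAL")                        # %f
# The `test ! -f` guard makes the command refuse to overwrite an existing
# archived segment, so a misconfiguration fails loudly instead of silently.
test ! -f "$ARCHIVE/$f" && cp "$WAL" "$ARCHIVE/$f"
ls "$ARCHIVE"
```

The important property is the exit status: the server only considers a segment archived (and eligible for recycling) when the command returns 0.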
To: Rhhh Lin
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Backup strategy using 'wal_keep_segments'
Greetings,
* Rhhh Lin (ruanline...@hotmail.com) wrote:
> A colleague recently suggested that instead of implementing an
> 'archive_command' to push archivable WALs to a secondary location (for
> further backup to tape for example), we could instead persist the WAL files
> in their current location […]
Thanks very much for your reply Michael.
I note that it looks like pgbarman employs pg_receivexlog; I will check it out.
Regards,
Ruan
From: Michael Paquier
Sent: 22 October 2017 22:17:01
To: Rhhh Lin
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Backup strategy using 'wal_keep_segments'
On Mon, Oct 23, 2017 at 5:57 AM, Rhhh Lin wrote:
> Is this approach feasible? Assuming obviously, we have sufficient disk space
> to facilitate 1000 WAL files etc.
You expose yourself to race conditions with such methods if a
checkpoint has the bad idea to recycle past segments that your logic
is […]
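The hazard being described can be sketched as a naive copy loop. The directories below are temporary stand-ins (this illustrates the failure mode, not anyone's actual script):

```shell
#!/bin/sh
# Naive "keep lots of WAL and copy it myself" loop of the kind warned against.
# Temp directories stand in for pg_wal and the backup target.
WAL_DIR=$(mktemp -d); DEST=$(mktemp -d)
touch "$WAL_DIR/000000010000000000000001" "$WAL_DIR/000000010000000000000002"
for seg in "$WAL_DIR"/0*; do
    f=$(basename "$seg")
    [ -f "$DEST/$f" ] && continue
    # RACE: between listing the directory and this cp, a checkpoint may
    # recycle the segment (rename it for reuse), so the copy can fail or
    # capture a file whose contents are about to be overwritten.
    cp "$seg" "$DEST/$f"
done
ls "$DEST" | wc -l | tr -d ' '
```

With archive_command, by contrast, the server does not recycle a segment until the command has reported success for it, which is what closes this window.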
If you could publish a brief howto on this I would be most grateful. I bet
many others would too.
On Mon, Feb 23, 2009 at 2:56 PM, Bryan Murphy wrote:
> On Sun, Feb 22, 2009 at 7:30 PM, Tim Uckun wrote:
>> 1. It's OK if we lose a few seconds (or even minutes) of transactions
>> should one of our primary databases crash.
>> 2. It's unlikely we'll need to load a backup that's more than a few days
>> old.
>
> How do you handle failover and falling back to the primary once it's up?
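Requirement 1, tolerating a few seconds or minutes of lost transactions, maps directly onto WAL archiving with a forced segment switch. A postgresql.conf sketch; the path and values are illustrative, not a recommendation:

```
archive_mode = on
archive_command = 'test ! -f /mnt/backup/wal/%f && cp %p /mnt/backup/wal/%f'
archive_timeout = 60   # force a WAL segment switch at least every 60s, so
                       # at most ~a minute of transactions sits unarchived
```

Note archive_timeout trades bounded data loss for more archived segments, since partially filled segments are still the full segment size on disk.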
On Tue, 18 Jan 2005 22:31:43, Adam Witney <[EMAIL PROTECTED]> wrote:
> On 18/1/05 8:38 pm, "Lonni J Friedman" <[EMAIL PROTECTED]> wrote:
>> On Tue, 18 Jan 2005 18:23:23, Adam Witney <[EMAIL PROTECTED]> wrote:
>>>
>>> Hi,
>>>
>>> I am setting up the backup strategy for my database.
>>>
>>> The database contains around 25 tables containing quite a lot of data that
>>> does not change very much (and when it does it is changed by me). And around […]
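A schema like that, large mostly-static tables plus a handful that change, lends itself to dumping the two groups on different schedules, so the big dump runs only after a manual change. A sketch; the database and table names are hypothetical, and the commands are only echoed here rather than run against a server:

```shell
#!/bin/sh
# Split pg_dump strategy for a mostly-static schema.
# All names below are hypothetical placeholders.
DB=mydb
STATIC="--table=big_static_table"    # changes only when edited by hand
CHANGING="--table=daily_results"     # changes routinely
# Rare (after each manual change): dump the large static tables.
echo "pg_dump $STATIC --file=static.sql $DB"
# Nightly: dump only the small, frequently-changing tables.
echo "pg_dump $CHANGING --file=nightly.sql $DB"
```

The nightly job stays small and fast, while a restore is the static dump plus the latest nightly dump.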