Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-13 Thread CS DBA
On 04/13/2016 08:46 AM, Edson Richter wrote: On 13/04/2016 11:18, Adrian Klaver wrote: On 04/13/2016 06:58 AM, Edson Richter wrote: Another trouble I've found: I've used "pg_dump" and "pg_restore" to create the new CustomerTest database in my cluster. Immediately, replication started to…
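
The pg_dump/pg_restore route discussed in this subthread can be sketched roughly as below. The database names come from the thread; the dump path and the parallel-job count are illustrative assumptions, and the commands need a running PostgreSQL server:

```shell
# Custom-format dump of the production database; -Fc compresses
# and allows a parallel restore later. (Path is hypothetical.)
pg_dump -Fc -d Customer -f /tmp/customer.dump

# Create the empty target database, then restore with parallel jobs.
# The -j value is an assumption; tune it to the server's cores and I/O.
createdb CustomerTest
pg_restore -j 4 -d CustomerTest /tmp/customer.dump
```

Note that, as the thread goes on to discuss, a restore into the same cluster generates WAL like any other writes, so a streaming-replication standby will receive the full 60Gb again.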

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-13 Thread Adrian Klaver
On 04/13/2016 07:46 AM, Edson Richter wrote: On 13/04/2016 11:18, Adrian Klaver wrote: On 04/13/2016 06:58 AM, Edson Richter wrote: Another trouble I've found: I've used "pg_dump" and "pg_restore" to create the new CustomerTest database in my cluster. Immediately, replication started to r…

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-13 Thread Edson Richter
On 13/04/2016 11:18, Adrian Klaver wrote: On 04/13/2016 06:58 AM, Edson Richter wrote: Another trouble I've found: I've used "pg_dump" and "pg_restore" to create the new CustomerTest database in my cluster. Immediately, replication started to replicate the 60Gb of data into the slave, causing big…

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-13 Thread Adrian Klaver
On 04/13/2016 06:58 AM, Edson Richter wrote: Another trouble I've found: I've used "pg_dump" and "pg_restore" to create the new CustomerTest database in my cluster. Immediately, replication started to replicate the 60Gb of data into the slave, causing big trouble. Does marking it as "template" avoid re…
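
On the "template" question raised here: the template flag lives in the `pg_database` catalog and only affects whether the database can be cloned and connected to; streaming replication ships WAL for the entire cluster, so flagging the copy does not stop the 60Gb from flowing to the slave. A minimal sketch of setting the flag itself (database name from the thread; this is not a fix for the replication traffic):

```sql
-- Mark the copy as a template and optionally block direct connections.
-- Requires superuser; does NOT exempt the database from WAL shipping.
UPDATE pg_database
   SET datistemplate = true,
       datallowconn  = false
 WHERE datname = 'CustomerTest';
```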

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-13 Thread Edson Richter
> Subject: Re: [GENERAL] Fastest way to duplicate a quite large database On 04/12/2016 07:51 AM, Edson Richter wrote: > Same machine, same cluster - just different database name. Hmm, running tests against the same cluster you are running the production database on would seem to be a perfor…

RE: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread Edson Richter
From: Adrian Klaver Sent: Tuesday, April 12, 2016 12:04 To: Edson Richter; pgsql-general@postgresql.org Subject: Re: [GENERAL] Fastest way to duplicate a quite large database On 04/12/2016 07:51 AM, Edson Richter wrote: > Same machine, same cluster - just different database name.

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread Louis Battuello
> On Apr 12, 2016, at 11:14 AM, John R Pierce wrote: > > On 4/12/2016 7:55 AM, John McKown wrote: >> Hum, I don't know exactly how to do it, but on Linux, you could put the >> "Customer" database in a tablespace which resides on a BTRFS filesystem. >> BTRFS can do a quick "snapshot" of the fil…

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread John McKown
On Tue, Apr 12, 2016 at 10:14 AM, John R Pierce wrote: > On 4/12/2016 7:55 AM, John McKown wrote: > >> Hum, I don't know exactly how to do it, but on Linux, you could put the >> "Customer" database in a tablespace which resides on a BTRFS filesystem. >> BTRFS can do a quick "snapshot" of the file…

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread John R Pierce
On 4/12/2016 7:55 AM, John McKown wrote: Hum, I don't know exactly how to do it, but on Linux, you could put the "Customer" database in a tablespace which resides on a BTRFS filesystem. BTRFS can do a quick "snapshot" of the filesystem except, tablespaces aren't standalone, and there's no…
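
Since, as noted above, tablespaces aren't standalone, a filesystem snapshot only makes sense for the entire data directory, not for one database's tablespace. A minimal sketch, assuming the whole cluster lives on a single BTRFS subvolume (paths and service name are hypothetical):

```shell
# Assumes PGDATA is a BTRFS subvolume at /var/lib/pgsql/data (hypothetical).
# Stopping the server first gives a clean, consistent snapshot of the whole
# cluster; a snapshot of a single tablespace would be unusable on its own.
systemctl stop postgresql
btrfs subvolume snapshot /var/lib/pgsql/data /var/lib/pgsql/data-snap
systemctl start postgresql
```

This clones the entire cluster (every database in it), so the copy would have to be started as a second cluster on another port rather than appearing as "CustomerTest" alongside "Customer".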

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread Adrian Klaver
On 04/12/2016 07:51 AM, Edson Richter wrote: Same machine, same cluster - just different database name. Hmm, running tests against the same cluster you are running the production database on would seem to be a performance hit against the production database and potentially dangerous should the t…

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread Louis Battuello
> On Apr 12, 2016, at 10:51 AM, Edson Richter wrote: > > Same machine, same cluster - just different database name. > > Best regards, > > Edson Carlos Ericksson Richter > > On 12/04/2016 11:46, John R Pierce wrote: >> On 4/12/2016 7:25 AM, Edson Richter wrote: >>> >>> I have a database…

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread John McKown
On Tue, Apr 12, 2016 at 9:25 AM, Edson Richter wrote: > Hi! > > I have a database "Customer" with about 60Gb of data. > I know I can backup and restore, but this seems too slow. > > Is there any other option to duplicate this database as "CustomerTest" as > fast as possible (even faster than back…

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread Edson Richter
Same machine, same cluster - just different database name. Best regards, Edson Carlos Ericksson Richter On 12/04/2016 11:46, John R Pierce wrote: On 4/12/2016 7:25 AM, Edson Richter wrote: I have a database "Customer" with about 60Gb of data. I know I can backup and restore, but this se…

Re: [GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread John R Pierce
On 4/12/2016 7:25 AM, Edson Richter wrote: I have a database "Customer" with about 60Gb of data. I know I can backup and restore, but this seems too slow. Is there any other option to duplicate this database as "CustomerTest" as fast as possible (even faster than backup/restore) - better if in…

[GENERAL] Fastest way to duplicate a quite large database

2016-04-12 Thread Edson Richter
Hi! I have a database "Customer" with about 60Gb of data. I know I can backup and restore, but this seems too slow. Is there any other option to duplicate this database as "CustomerTest" as fast as possible (even faster than backup/restore) - better if in one operation (something like "copy da…
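
The one-operation copy asked for here maps directly onto PostgreSQL's template mechanism, which does a file-level copy and is typically much faster than dump/restore for 60Gb. The statement below uses the database names from the thread; the main caveat is that no other sessions may be connected to the source database while it runs:

```sql
-- File-level copy in a single statement. Fails with
-- "source database is being accessed by other users"
-- if any session is still connected to "Customer".
CREATE DATABASE "CustomerTest" TEMPLATE "Customer";
```

As later messages in the thread point out, this still happens inside the same cluster, so it competes with production for I/O and the copied data is replayed on any streaming-replication standby.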