On 04/13/2016 06:58 AM, Edson Richter wrote:
Another trouble I've found: I've used "pg_dump" and "pg_restore" to
create the new CustomerTest database in my cluster. Immediately,
replication started to replicate the 60Gb of data into the slave,
causing big trouble.
Does marking it as "template" avoid replication?

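Whether the flag helps is the open question here; purely for reference, a minimal sketch of setting it (superuser access and the thread's database name are assumed). Note that physical streaming replication copies the whole cluster, so the flag by itself does not keep the data off the slave:

    # Assumption: run as a superuser on the primary.
    # datistemplate only changes how CREATE DATABASE and connection
    # handling treat the database; it does not exclude it from
    # physical (streaming) replication.
    psql -U postgres -d postgres -c \
      "UPDATE pg_database SET datistemplate = true WHERE datname = 'CustomerTest';"
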
On 4/12/2016 7:55 AM, John McKown wrote:
> Hum, I don't know exactly how to do it, but on Linux, you could put
> the "Customer" database in a tablespace which resides on a BTRFS
> filesystem. BTRFS can do a quick "snapshot" of the filesystem
except, tablespaces aren't standalone, and there's no ...

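A rough sketch of the snapshot idea under discussion (the mount point /srv/pg_btrfs and the tablespace name are hypothetical; as the reply above points out, a tablespace is not standalone, so the snapshot alone is not a usable copy of the database):

    # Assumption: /srv/pg_btrfs is a btrfs filesystem on the database host.
    btrfs subvolume create /srv/pg_btrfs/customer_ts
    chown postgres:postgres /srv/pg_btrfs/customer_ts
    psql -U postgres -c "CREATE TABLESPACE customer_ts LOCATION '/srv/pg_btrfs/customer_ts';"
    # ...after moving the Customer database onto that tablespace, a
    # near-instant copy-on-write snapshot of its files would be:
    btrfs subvolume snapshot /srv/pg_btrfs/customer_ts /srv/pg_btrfs/customer_ts_snap
    # Caveat: the system catalogs and WAL live in the main data directory,
    # so this snapshot is not, by itself, an attachable database copy.
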
On 04/12/2016 07:51 AM, Edson Richter wrote:
> Same machine, same cluster - just different database name.
Hmm, running tests against the same cluster you are running the
production database on would seem to be a performance hit against the
production database, and potentially dangerous should the test ...

Same machine, same cluster - just different database name.

Best regards,

Edson Carlos Ericksson Richter

On 12/04/2016 11:46, John R Pierce wrote:
> On 4/12/2016 7:25 AM, Edson Richter wrote:
>> I have a database "Customer" with about 60Gb of data.
>> I know I can backup and restore, but this seems too slow.

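For reference, the backup-and-restore route being compared against, within a single cluster as described here, might look like this minimal sketch ("Customer" and "CustomerTest" are the names from the thread; the connection options are assumptions):

    # Assumption: run as a user allowed to create databases.
    createdb -U postgres CustomerTest
    # Dump "Customer" in custom format and restore it into "CustomerTest";
    # -j 4 restores with four parallel jobs, which needs a custom- or
    # directory-format dump rather than a plain-SQL pipe.
    pg_dump -U postgres -Fc -f customer.dump Customer
    pg_restore -U postgres -d CustomerTest -j 4 customer.dump
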
Hi!

I have a database "Customer" with about 60Gb of data.
I know I can backup and restore, but this seems too slow.

Is there any other option to duplicate this database as "CustomerTest"
as fast as possible (even faster than backup/restore) - better if in one
operation (something like "copy database")?

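A minimal sketch of PostgreSQL's built-in single-statement copy, CREATE DATABASE ... TEMPLATE (not visible in the truncated replies above, so treat it as context rather than the thread's conclusion): it copies the source database at the file level within the same cluster and is typically much faster than dump/restore, but no other sessions may be connected to the source while it runs. The connection options are assumptions:

    # Assumption: run as a superuser (or the owner of "Customer");
    # all other connections to "Customer" must be closed first.
    psql -U postgres -d postgres -c \
      'CREATE DATABASE "CustomerTest" TEMPLATE "Customer";'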