From: Adrian Klaver
Sent: Tuesday, April 12, 2016 12:04
To: Edson Richter; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Fastest way to duplicate a quite large database

On 04/12/2016 07:51 AM, Edson Richter wrote:
> Same machine, same cluster - just different database name.

Hmm, running tests against the same cluster that hosts the production 
database would seem to be a performance hit against the production 
database, and potentially dangerous should the tests trip a bug that 
crashes the server.

>
> Regards,
>
> Edson Carlos Ericksson Richter
>
> On 04/12/2016 11:46 AM, John R Pierce wrote:
>> On 4/12/2016 7:25 AM, Edson Richter wrote:
>>>
>>> I have a database "Customer" with about 60Gb of data.
>>> I know I can backup and restore, but this seems too slow.
>>>
>>> Is there any other option to duplicate this database as
>>> "CustomerTest" as fast as possible (even fastar than backup/restore)
>>> - better if in one operation (something like "copy database A to B")?
>>> I would like to run this every day, overnight, with minimal impact, to
>>> prepare a test environment based on production data.
>>
>>
>> copy to the same machine, or copy to a different test server?
>> different answers.
>>
>>
>>
>
>
>


-- 
Adrian Klaver
adrian.kla...@aklaver.com


Hi Adrian,

Thanks for your insight. This is not a “test system” in the sense that I’m 
testing the database server code.
This is more of a “pre-production evaluation”: the stage where the customer 
will say “yes” or “no” to publishing a new version of our system into production.

Also, the server has plenty of RAM and processor cores, so I don’t foresee any 
kind of trouble here.

The risk is lower than running a heavy reporting system against the database 
server.

The point is that customers want to test the new version of our system as close 
as possible to the production environment.

Thanks,

Edson
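
For the same-machine, same-cluster case discussed above, one commonly used 
fast path (a sketch, not something proposed in the thread itself) is 
PostgreSQL's template copy, which clones the database at the file level and 
is usually much faster than a dump/restore. It does require that nobody is 
connected to the source database while it runs:

    # Sketch: recreate "CustomerTest" as a template copy of "Customer".
    # CREATE DATABASE ... TEMPLATE fails if anyone is connected to "Customer".
    psql -d postgres -c 'DROP DATABASE IF EXISTS "CustomerTest"'
    psql -d postgres -c 'CREATE DATABASE "CustomerTest" TEMPLATE "Customer"'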

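For the separate-test-server case John asks about, a minimal sketch, assuming 
a hypothetical host "testhost" on which an empty "CustomerTest" database 
already exists, is to stream a custom-format dump directly into pg_restore so 
that no intermediate dump file is written:

    # Sketch: pipe a custom-format dump of "Customer" to the test server.
    # "testhost" is a hypothetical name; pg_restore reads the archive on stdin.
    pg_dump -Fc Customer | pg_restore -h testhost -d CustomerTest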