On Wed, Aug 17, 2016 at 1:31 PM, Sameer Kumar <sameer.ku...@ashnik.com>
wrote:

>
>
> On Wed, Aug 17, 2016 at 10:34 AM Patrick B <patrickbake...@gmail.com>
> wrote:
>
>> Hi guys,
>>
>> I'm using PostgreSQL 9.2 and I have one master and one slave with
>> streaming replication.
>>
>> Currently, I have a backup script that runs daily on the master; it
>> generates a dump file with 30GB of data.
>>
>> I changed the script to run from the slave instead of the master,
>> and I'm getting these errors now:
>>
>> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
>>> failed.
>>> pg_dump: Error message from server: ERROR:  canceling statement due to
>>> conflict with recovery
>>> DETAIL:  User was holding a relation lock for too long.
>>
>>
> Looks like while your pg_dump session was trying to fetch the data,
> someone ran a DDL statement, a REINDEX, or a VACUUM FULL on the master
> database.
>
>>
>> Isn't that possible? I can't run pg_dump from a slave?
>>
>
> Well, you can do that, but it has some limitations. If you do this quite
> often, it would be better to have a dedicated standby for taking
> backups/pg_dumps. Then you can set max_standby_streaming_delay and
> max_standby_archive_delay to -1. But I would not recommend doing this if
> you use your standby for other read queries or for high availability.
>
> Another option would be to avoid the queries which cause an Exclusive
> Lock on the master database while pg_dump is running.
>
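
For reference, a minimal sketch of what Sameer's dedicated-standby
suggestion might look like in postgresql.conf (PostgreSQL 9.2 setting
names; the trade-off is that WAL replay simply waits, so the standby
falls behind while pg_dump runs):

    # postgresql.conf on the dedicated backup standby
    hot_standby = on                    # allow read-only queries during recovery
    # -1 = never cancel standby queries because of conflicting WAL;
    # replay waits instead, so replication lag grows during the dump.
    max_standby_streaming_delay = -1    # WAL received via streaming replication
    max_standby_archive_delay = -1      # WAL restored from the archive

Both delay settings take effect on a configuration reload; hot_standby
itself requires a server restart.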

Another workaround could be to pause recovery, execute the pg_dump, and
then resume the recovery process. Not sure if this workaround has been
considered.

You can consider executing "pg_xlog_replay_pause()" before starting
pg_dump and then executing "pg_xlog_replay_resume()" after the pg_dump
process completes.
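
A rough sketch of that sequence, assuming the script runs against the
standby as a superuser role (the replay-control functions require one);
the database name "mydb" and the output path are placeholders:

    #!/bin/sh
    # Pause WAL replay so the dump sees a stable snapshot and cannot
    # be cancelled by a recovery conflict.
    psql -d mydb -c "SELECT pg_xlog_replay_pause();"

    # Run the dump against the (now frozen) standby.
    pg_dump -Fc -f /backup/mydb.dump mydb

    # Resume replay; the standby then catches up on the WAL that
    # accumulated while the dump was running.
    psql -d mydb -c "SELECT pg_xlog_replay_resume();"

One caveat: if pg_dump fails before the resume call, recovery stays
paused, so some error handling (e.g. a shell trap) around the dump step
would be sensible.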

Regards,
Venkata B N

Fujitsu Australia
