2016-08-17 15:31 GMT+12:00 Sameer Kumar <sameer.ku...@ashnik.com>:
>
> On Wed, Aug 17, 2016 at 10:34 AM Patrick B <patrickbake...@gmail.com> wrote:
>
>> Hi guys,
>>
>> I'm using PostgreSQL 9.2 and I've got one master and one slave with
>> streaming replication.
>>
>> Currently, I've got a backup script that runs daily from the master; it
>> generates a dump file with 30GB of data.
>>
>> I changed the script to run from the slave instead of the master, and
>> I'm getting these errors now:
>>
>>> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
>>> failed.
>>> pg_dump: Error message from server: ERROR: canceling statement due to
>>> conflict with recovery
>>> DETAIL: User was holding a relation lock for too long.
>>
> Looks like while your pg_dump session was trying to fetch the data,
> someone fired a DDL, REINDEX or VACUUM FULL on the master database.
>
>> Isn't that possible? Can't I run pg_dump from a slave?
>>
> Well, you can do that, but it has some limitations. If you do this quite
> often, it would be better to have a dedicated standby for taking
> backups/pg_dumps. Then you can set max_standby_streaming_delay and
> max_standby_archive_delay to -1. But I would not recommend doing this if
> you use your standby for other read queries or for high availability.
>
> Another option would be to avoid queries which take an exclusive lock on
> the master database while pg_dump is running.
Sameer, yeah, I was just reading this thread:
https://www.postgresql.org/message-id/AANLkTinLg%2BbpzcjzdndsnGGNFC%3DD1OsVh%2BhKb85A-s%3Dn%40mail.gmail.com

Well... I thought it was possible, but as the DB is big, the dump takes a
long time and it won't work. I could also increase those parameters you
showed (rough sketch below), but I won't do that as I only have one slave.

cheers
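PS, for the archives: here's roughly what I understand the dedicated backup
standby setup to look like. This is just a sketch I haven't tested, and the
hostname, database name and paths below are placeholders:

    # postgresql.conf on a standby used only for backups
    # (recovery conflicts will no longer cancel queries there, at the cost
    #  of replication falling behind while the dump runs)
    max_standby_streaming_delay = -1
    max_standby_archive_delay = -1

    # both settings only need a reload, not a restart:
    pg_ctl reload -D /path/to/standby/data

    # then point the nightly backup script at that standby:
    pg_dump -h standby.example.com -U postgres -Fc -f /backups/mydb.dump mydb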