On Tue, Jan 14, 2020 at 5:22 PM Luis Carril <luis.car...@swarm64.com> wrote:
> Can you have a look at dump with parallel option. Parallel option will
> take a lock on table while invoking lockTableForWorker. May be this is
> not required for foreign tables.
> Thoughts?
>
> I tried with -j and found no issue. I guess that the foreign table needs
> locking anyway to prevent anyone to modify it while is being dumped.

I'm able to get the problem with the following steps:

Bring up a PostgreSQL setup with two servers running on ports 5432 and 5433.

Execute the following commands in Server1, configured on port 5432:
- CREATE EXTENSION postgres_fdw;
- CREATE SERVER foreign_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '127.0.0.1', port '5433', dbname 'postgres');
- create user user1 password '123';
- alter user user1 with superuser;
- CREATE USER MAPPING FOR user1 SERVER foreign_server OPTIONS (user 'user1', password '123');

Execute the following commands in Server2, configured on port 5433:
- create user user1 password '123';
- alter user user1 with superuser;

Execute the following commands in Server2, configured on port 5433, as user1:
- create schema test;
- create table test.test1(id int);
- insert into test.test1 values(10);

Execute the following command in Server1, configured on port 5432, as user1:
- CREATE FOREIGN TABLE foreign_table1 (id integer NOT NULL) SERVER foreign_server OPTIONS (schema_name 'test', table_name 'test1');

Without the parallel option, the operation is successful:
- ./pg_dump -d postgres -f dumpdir -U user1 -F d --include-foreign-data foreign_server

With the parallel option it fails:
- ./pg_dump -d postgres -f dumpdir1 -U user1 -F d -j 5 --include-foreign-data foreign_server

pg_dump: error: could not obtain lock on relation "public.foreign_table1"
This usually means that someone requested an ACCESS EXCLUSIVE lock on the
table after the pg_dump parent process had gotten the initial ACCESS SHARE
lock on the table.
pg_dump: error: a worker process died unexpectedly

There may be simpler steps than these to reproduce the issue; I have not
tried to minimize them. (A sketch of the statement the worker fails on is
at the end of this mail.)

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
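
P.S. A minimal way to see the failing step in isolation, assuming the setup
above: each parallel pg_dump worker takes a per-table lock in
lockTableForWorker (src/bin/pg_dump/parallel.c) before dumping the table's
data. The statement below is my paraphrase of what the worker issues, not
the exact source:

-- Run on Server1 (port 5432) as user1.
-- Roughly the per-table lock each pg_dump worker takes (paraphrased):
LOCK TABLE public.foreign_table1 IN ACCESS SHARE MODE NOWAIT;
-- My reading of the failure: LOCK TABLE rejects foreign tables on this
-- version, so the worker's lock attempt errors out and pg_dump surfaces
-- the misleading "could not obtain lock" message quoted above.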