If I exclude the large tables (>30GB) from the parallel dump, it succeeds, and 
a normal (non-parallel) dump also succeeds, so I am not sure the network is at 
fault. Is there any other option that might help make parallel dump usable for 
large tables?
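
In case it is relevant, these are the kinds of workarounds I could try next 
based on the SSL renegotiation theory. These are only sketches: the host name, 
output directory, and job count are placeholders, and I have not yet confirmed 
that the dump connections actually use SSL.

  # Per-session: turn off SSL renegotiation just for the dump connections
  # (ssl_renegotiation_limit still exists in 9.4; it was removed in 9.5).
  PGOPTIONS="-c ssl_renegotiation_limit=0" \
      pg_dump -h dbhost -U pdroot -Fd -j 4 -f /backup/iii.dir iii

  # Server-wide alternative: disable renegotiation for all new sessions.
  psql -h dbhost -U postgres -c "ALTER SYSTEM SET ssl_renegotiation_limit = 0;"
  psql -h dbhost -U postgres -c "SELECT pg_reload_conf();"

  # Or rule SSL out entirely by forcing a non-SSL connection for the dump
  # (assumes pg_hba.conf allows a non-SSL connection from this host).
  PGSSLMODE=disable pg_dump -h dbhost -U pdroot -Fd -j 4 -f /backup/iii.dir iii

Would one of those be the right direction, or is there some other option I am 
missing?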

thanks
shanker

-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Saturday, February 14, 2015 9:00 AM
To: r...@iol.ie
Cc: Shanker Singh; pgsql-general@postgresql.org
Subject: Re: [GENERAL] parallel dump fails to dump large tables

"Raymond O'Donnell" <r...@iol.ie> writes:
> On 14/02/2015 15:42, Shanker Singh wrote:
>> Hi,
>> I am having a problem using the parallel pg_dump feature in postgres 
>> release 9.4. The size of the table is large (54GB). The dump fails 
>> with the error: "pg_dump: [parallel archiver] a worker process died 
>> unexpectedly". After this error the pg_dump aborts. The error log 
>> file gets the following message:
>> 
>> 2015-02-09 15:22:04 PST [8636]: [2-1] 
>> user=pdroot,db=iii,appname=pg_dump
>> STATEMENT:  COPY iiirecord.varfield (id, field_type_tag, marc_tag, 
>> marc_ind1, marc_ind2, field_content, field_group_id, occ_num, 
>> record_id) TO stdout;
>> 2015-02-09 15:22:04 PST [8636]: [3-1] 
>> user=pdroot,db=iii,appname=pg_dump
>> FATAL:  connection to client lost

> There's your problem - something went wrong with the network.

I'm wondering about SSL renegotiation failures as a possible cause of the 
disconnect --- that would explain why it only happens on large tables.

                        regards, tom lane


