I tried dumping the largest table that is having problems, using the -j1 flag in 
parallel dump. This time I got the error "File size limit exceeded" on the 
console, but the system allows unlimited file size (see the limit output below). 
Also, pg_dump without the -j flag goes through fine. Do you guys know what's 
going on with parallel dump? The system is 64-bit CentOS 
(2.6.32-504.8.1.el6.x86_64 #1 SMP Wed Jan 28 21:11:36 UTC 2015 x86_64 x86_64 
x86_64 GNU/Linux) with an ext4 file system.
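
For reference, the kind of invocation being described might look roughly like 
this (a sketch only; the output directory is a placeholder, the database name 
"iii" is taken from the log excerpt further down, and parallel dumps require 
the directory archive format):

    # parallel dump needs the directory (-Fd) output format
    pg_dump -Fd -j 4 -f /backup/iii_dump iii

    # the failing single-worker run described above
    pg_dump -Fd -j 1 -f /backup/iii_dump iii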

limit
cputime      unlimited
filesize     unlimited
datasize     unlimited
stacksize    10240 kbytes
coredumpsize 0 kbytes
memoryuse    unlimited
vmemoryuse   unlimited
descriptors  25000
memorylocked 64 kbytes
maxproc      1024
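
Since "File size limit exceeded" is what gets printed when a process hits its 
RLIMIT_FSIZE, it may be worth confirming that the pg_dump worker processes 
actually run with the limits shown above rather than the interactive shell's. 
A quick check on Linux, assuming you can grab a worker PID while the dump is 
running (<pid> is a placeholder):

    # list pg_dump processes, then inspect the effective limits of one of them
    pgrep -f pg_dump
    grep "Max file size" /proc/<pid>/limits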

From: Sterfield [mailto:sterfi...@gmail.com]
Sent: Sunday, February 22, 2015 8:50 AM
To: Shanker Singh
Cc: Tom Lane; r...@iol.ie; pgsql-general@postgresql.org
Subject: Re: [GENERAL] parallel dump fails to dump large tables



2015-02-20 14:26 GMT-08:00 Shanker Singh <ssi...@iii.com>:
I tried turning off ssl renegotiation by setting "ssl_renegotiation_limit = 0" 
in postgresql.conf but it had no effect. The parallel dump still fails on large 
tables consistently.

Thanks
Shanker
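
In case it helps others following along, the change being tested above amounts 
to something like this (a sketch; it assumes postgresql.conf is edited directly 
and the setting is picked up on reload):

    -- after setting ssl_renegotiation_limit = 0 in postgresql.conf,
    -- reload and verify from psql:
    SELECT pg_reload_conf();
    SHOW ssl_renegotiation_limit;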

Hi,
Maybe you could try to set up an SSH connection between the two servers, with a 
keepalive option, and leave it open for a long time (at least the duration of 
your backup), just to test whether your connection is still being cut after 
some time.
That way, you will know whether the problem is on the network side or on the 
PostgreSQL side.
Thanks,
Guillaume
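
A sketch of the kind of long-lived test connection Guillaume suggests, using 
standard OpenSSH client keepalive options (the user and host are placeholders):

    # keep an idle SSH session to the database host open for the whole dump window,
    # sending a keepalive probe every 30 seconds
    ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=5 user@db-host

If this idle session also drops while the dump is running, the problem is more 
likely in the network path than in PostgreSQL.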

-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Saturday, February 14, 2015 9:00 AM
To: r...@iol.ie
Cc: Shanker Singh; pgsql-general@postgresql.org
Subject: Re: [GENERAL] parallel dump fails to dump large tables
"Raymond O'Donnell" <r...@iol.ie<mailto:r...@iol.ie>> writes:
> On 14/02/2015 15:42, Shanker Singh wrote:
>> Hi,
>> I am having a problem using the parallel pg_dump feature in postgres
>> release 9.4. The size of the table is large (54GB). The dump fails
>> with the
>> error: "pg_dump: [parallel archiver] a worker process died
>> unexpectedly". After this error the pg_dump aborts. The error log
>> file gets the following message:
>>
>> 2015-02-09 15:22:04 PST [8636]: [2-1]
>> user=pdroot,db=iii,appname=pg_dump
>> STATEMENT:  COPY iiirecord.varfield (id, field_type_tag, marc_tag,
>> marc_ind1, marc_ind2, field_content, field_group_id, occ_num,
>> record_id) TO stdout;
>> 2015-02-09 15:22:04 PST [8636]: [3-1]
>> user=pdroot,db=iii,appname=pg_dump
>> FATAL:  connection to client lost

> There's your problem - something went wrong with the network.

I'm wondering about SSL renegotiation failures as a possible cause of the 
disconnect --- that would explain why it only happens on large tables.

                        regards, tom lane
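
One way to test this theory directly would be to take SSL out of the picture 
for a single run, e.g. via libpq's PGSSLMODE environment variable (a sketch; 
the output directory is a placeholder, and it assumes pg_hba.conf also allows 
non-SSL connections from the client host):

    # run one parallel dump with SSL disabled and see whether the disconnects stop
    PGSSLMODE=disable pg_dump -Fd -j 4 -f /backup/iii_dump iii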

