Aggarwal, Ajay wrote:
Our replication timeout is at the default of 60 seconds. If we increase the
replication timeout to, say, 180 seconds, we see better results, but backups
still fail occasionally.
So increase it to 300 seconds, or whatever. That's an upper limit; it just
needs to be big enough that you DON'T run into it.
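For what it's worth, a minimal sketch of that change on the primary (the data
directory path is an assumption; on 9.2 the parameter is replication_timeout,
renamed to wal_sender_timeout in 9.3):

    # postgresql.conf on the primary (PostgreSQL 9.2)
    replication_timeout = 300s    # default 60s; 0 disables the timeout

    # pick up the change without a restart
    pg_ctl reload -D /var/lib/pgsql/9.2/data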
If I don't include WAL files as part of my backup, I do not run into this
issue. But a backup without WAL files is not what I want.
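For reference, a sketch of the two ways pg_basebackup on 9.2 can include WAL
(the target directory is a placeholder). Streaming opens a second replication
connection, so the server needs max_wal_senders of at least 2:

    # fetch WAL at the end of the backup, over the same connection
    pg_basebackup -D /backups/base -x

    # stream WAL concurrently over a second connection
    pg_basebackup -D /backups/base -X stream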
From: Aggarwal, Ajay
Sent: Monday, March 10, 2014 9:46 PM
To: Haribabu Kommi
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] replication timeout in pg_basebackup
… force it to use direct I/O.
From: Haribabu Kommi [kommi.harib...@gmail.com]
Sent: Monday, March 10, 2014 8:31 PM
To: Aggarwal, Ajay
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] replication timeout in pg_basebackup
On Tue, Mar 11, 2014 at 7:07 AM, Aggarwal, Ajay wrote:
pg_test_fsync output (truncated):

                                 207.248 ops/sec
    Non-Sync'ed 8kB writes:
            write              202216.900 ops/sec
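To reproduce numbers like these, pg_test_fsync (shipped in contrib for 9.2)
can be pointed at the filesystem that holds pg_xlog; the path below is just
an assumption:

    # write the test file on the same filesystem as pg_xlog
    pg_test_fsync -f /var/lib/pgsql/9.2/data/pg_xlog/test.out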
From: Haribabu Kommi [kommi.harib...@gmail.com]
Sent: Monday, March 10, 2014 1:42 AM
To: Aggarwal, Ajay
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] replication timeout in pg_basebackup
Our environment: Postgres version 9.2.2 running on CentOS 6.4.

Our backups using pg_basebackup are frequently failing with the following error:

    pg_basebackup: could not send feedback packet: server closed the connection unexpectedly
            This probably means the server terminated abnormally
            before or while processing the request.
What happens when the PQexec(...) result is too big to return in one single
response message? Say you are reading a table with millions of entries. Is
there a way to issue one single PQexec() request but read the result in
chunks: "give me the first 500 tuples", "give me the next 500", and so on?
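One common way to do this (a sketch, not from this thread; big_table and the
connection string are placeholders) is to wrap the query in a server-side
cursor and FETCH in batches. PostgreSQL 9.2 also offers PQsetSingleRowMode()
as an alternative.

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* connection string is a placeholder */
        PGconn *conn = PQconnectdb("dbname=mydb");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* cursors only live inside a transaction */
        PQclear(PQexec(conn, "BEGIN"));

        /* big_table is a hypothetical large table */
        PQclear(PQexec(conn, "DECLARE c NO SCROLL CURSOR FOR SELECT id FROM big_table"));

        for (;;) {
            /* pull the next 500 rows; the server keeps the cursor position */
            PGresult *res = PQexec(conn, "FETCH 500 FROM c");
            if (PQresultStatus(res) != PGRES_TUPLES_OK) {
                fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
                PQclear(res);
                break;
            }
            int n = PQntuples(res);
            if (n == 0) {            /* cursor exhausted */
                PQclear(res);
                break;
            }
            for (int i = 0; i < n; i++)
                printf("%s\n", PQgetvalue(res, i, 0));
            PQclear(res);
        }

        PQclear(PQexec(conn, "CLOSE c"));
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }

With single-row mode you would instead call PQsendQuery() once, then
PQsetSingleRowMode() before the first PQgetResult(), and receive one
PGRES_SINGLE_TUPLE result per row.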