Syncing just the WAL archive directory every minute should not be a problem at
all (running rsync every minute against the data directory itself is not recommended).
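A cron-driven sync of just the archive directory can be sketched like this (the /archive/wal path and the `standby` host name are assumptions for illustration, not from the thread):

```shell
# crontab entry on the primary -- hypothetical paths and host name.
# Every minute, push only new WAL segments to the standby's archive
# directory; --ignore-existing skips segments already transferred.
* * * * * rsync -a --ignore-existing /archive/wal/ standby:/archive/wal/
```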
As said earlier, we had configured a warm standby for a DB of 2 TB in size, and
WAL archive generation was in the hundreds.
We did not encounter any issues in running it.
Just another update, since the system is up and running, and one more question
:p
The secondary server is able to restore the WAL archives practically
immediately after they arrive. I have set an rsync cron job to send the new
WALs every 5 minutes. The procedure to transfer the files and to restore
them…
Just an update from my tests:
I restored from the backup. The DB is about 2.5 TB and the WAL archives were
about 300 GB. The recovery of the DB completed after 3 hours. Thx to all
for your help.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/warm-standby-apply-wal-archi
The network transfer does not bother me for now. I will first try to do the
whole procedure without compression, so as not to waste any CPU utilization and
time on compressing and decompressing. Over the 4 Gbps Ethernet link, the
day's 200 GB can be transferred in a matter of minutes. So I will try it
a…
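The arithmetic behind "a matter of minutes" checks out as a best case (ignoring protocol overhead and disk speed):

```shell
# 200 GB * 8 bits/byte = 1600 Gbit; at 4 Gbit/s that is 400 s, about 6-7 min.
gigabytes=200
seconds=$(( gigabytes * 8 / 4 ))
minutes=$(( seconds / 60 ))
echo "${seconds} seconds (~${minutes} minutes)"
```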
Considering the size of the WAL archives = 200 GB:
compressing them using gzip (you can use this command in a shell script and
place it in archive_command as well) would possibly reduce the size to as
low as 10 - 20 GB.
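A minimal sketch of the compression step such an archive_command would perform, using throwaway paths under /tmp instead of a real archive directory (WAL is often highly compressible, so the 10-20 GB estimate is plausible, though the actual ratio depends on the data):

```shell
# Simulate archive_command = 'gzip < %p > /archive/wal/%f.gz' for one segment.
# All paths here are stand-ins, not real cluster paths.
mkdir -p /tmp/wal_demo/pg_xlog /tmp/wal_demo/archive
f=000000010000000000000001                      # a WAL segment name (%f)
printf 'pretend this is a 16MB WAL segment' > "/tmp/wal_demo/pg_xlog/$f"
gzip < "/tmp/wal_demo/pg_xlog/$f" > "/tmp/wal_demo/archive/$f.gz"
# The standby decompresses before replay; verify the round trip is lossless:
gunzip -c "/tmp/wal_demo/archive/$f.gz" | cmp - "/tmp/wal_demo/pg_xlog/$f"
```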
Please let us know the results.
Thanks
Venkat
On Tue, Sep 6, 2011 at 1:03 PM, MirrorX wrote:
The network bandwidth between the servers is definitely not an issue. What is
bothering me is the big size of the WAL archives, which goes up to 200 GB per
day, and whether the standby server will be able to replay all these files. The
argument that, since the master can do it while also doing various other tasks…
In my experience, I had configured a warm standby for a 2 TB PostgreSQL cluster
(PostgreSQL 8.4).
Note: I do not know your database size and WAL archive generation rate.
The important considerations I made were as follows -
1. WAL archive transfer from production to standby depends on the network
bandwidth…
The nodes communicate over 4 Gbps Ethernet, so I don't think there is an
issue there. Probably some kind of DRBD misconfiguration has occurred. I
will check on that tomorrow. Thx a lot :)
On September 5, 2011, MirrorX wrote:
> Thx a lot for your answer.
>
> Actually DRBD is the solution I am trying to avoid, since I think the
> performance degrades a lot (I've used it in the past). And I also have
> serious doubts about whether the data is corrupted in case of the master's
> failure, if…
Thx a lot for your answer.
Actually DRBD is the solution I am trying to avoid, since I think the
performance degrades a lot (I've used it in the past). And I also have
serious doubts about whether the data is corrupted in case of the master's
failure, if not all blocks have been replicated to the secondary…
MirrorX wrote:
> My bad...
> I read in the manual that the recovery process is constant and runs all the
> time. So the question now is:
> how many WALs can this procedure handle? For example, can it handle 100-200 GB…
Sure, if the master can handle that, it's no problem for the client (same
hardware…)
My bad...
I read in the manual that the recovery process is constant and runs all the
time. So the question now is:
how many WALs can this procedure handle? For example, can it handle 100-200 GB
every day? If it cannot, any other suggestions for HA? Thx in advance.
Hello all,
I would like your advice on the following matter. If I am not wrong, by
implementing a warm standby (pg 8.4) the WAL archives are sent to the
failover server, and when the time comes, the failover server, which already
has a copy of the primary's /data directory and all the WAL archives, starts…
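On 8.4 the replay loop described here is usually wired up with pg_standby in the standby's recovery.conf; a minimal sketch, assuming the archives land in a hypothetical /archive/wal directory:

```
# recovery.conf on the failover server (PostgreSQL 8.4 warm standby).
# pg_standby waits for each new segment to appear, so recovery runs
# continuously; creating the trigger file ends recovery and brings the
# server online.
restore_command = 'pg_standby -t /tmp/pgsql.trigger /archive/wal %f %p %r'
```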