Well, I am mainly concerned with catastrophic failure. If the first (main) 
datacenter fails badly (say fire, earthquake, the DB server dies, etc.), I need 
to be able to restore the websites and data quickly in another location. If I 
lose, say, 6-12 hours of data during such a major failure (which should never 
happen in the first place), I am OK with that.

Ben <[EMAIL PROTECTED]> wrote:

On Sat, 30 Dec 2006, Dennis wrote:

> I was thinking of maybe just having 2nd location receive a PG dump (full 
> or incremental) every so often (an hour to 6 hours) and if the main 
> location fails majorly, restore the PG cluster from the dump and switch 
> DNS settings on the actual sites. I can make sure all website files are 
> always in sync on both locations.
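For concreteness, that kind of scheduled dump could be a single cron entry 
along these lines (the user, paths, and standby hostname are only placeholders):

    # full dump of the cluster every 6 hours, compressed and copied to the standby
    0 */6 * * * pg_dumpall -U postgres | gzip > /backups/cluster.sql.gz && rsync -a /backups/cluster.sql.gz standby.example.com:/backups/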

Well, first off, you can just rsync your archived WAL files. That may be 
easier than playing with pg_dump:

http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html
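
A minimal sketch of that approach, with hostname and paths as placeholders, is 
to take one base backup and then ship each completed WAL segment as it is 
archived:

    # postgresql.conf on the primary (8.2): setting archive_command turns on
    # archiving; each finished segment is copied to the standby
    archive_command = 'rsync -a %p standby.example.com:/wal_archive/%f'

    # one-time base backup of the data directory, taken while archiving is running
    psql -c "SELECT pg_start_backup('base');"
    rsync -a --exclude pg_xlog/ /var/lib/pgsql/data/ standby.example.com:/var/lib/pgsql/data/
    psql -c "SELECT pg_stop_backup();"

On failover, the standby replays the archived segments (via restore_command in 
recovery.conf) and comes up with everything through the last shipped segment.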

But second, and more important given your data-loss desires, if you do it 
this way you have a window where you can experience data loss. 
Specifically, after a transaction is committed, that commit will be at 
risk until the next transfer has completed.
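
For what it's worth, 8.2's archive_timeout setting puts an upper bound on how 
long a committed transaction can sit in an unarchived segment; the value below 
is only an example:

    # postgresql.conf -- force a WAL segment switch (and hence an archive)
    # at least every 5 minutes, capping the worst-case loss window
    archive_timeout = 300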

