Hi Hackers,

In my project I have to handle a database of 600 GB of text-only data, distributed across 4 tablespaces on multiple hard disks and remote SANs, connected to the remote SAN storage via Gigabit Ethernet.

I need more flexibility when backing up my big database, but the built-in online backup system does not work well enough for my setup. I cannot accept 16 MB WAL files as the unit for archiving to tape: losing up to 16 MB of data in a crash situation is fatal and no help at all (even 1 MB would be). I wish to have a continuous backup without any data loss.
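
For reference, the built-in system I mean is the pg_start_backup()/pg_stop_backup() cycle around a file-level copy, with WAL archiving on top. A minimal sketch, assuming psql is in PATH; the data directory and the tape staging path are example assumptions, not fixed values:

    # base_backup.py -- minimal sketch of the built-in online base backup.
    import subprocess

    DATADIR = "/var/lib/pgsql/data"        # assumed cluster location
    TARGET  = "/backup/base-backup.tar"    # assumed staging file for tape

    def run_sql(sql):
        # Run one statement through psql as the database superuser.
        subprocess.check_call(["psql", "-U", "postgres", "-c", sql])

    # Tell the server a base backup starts; it checkpoints first.
    run_sql("SELECT pg_start_backup('nightly');")
    try:
        # Copy the whole cluster while it stays online. Tablespaces
        # living outside DATADIR (as in my setup) need their own tar calls.
        subprocess.check_call(["tar", "-cf", TARGET, DATADIR])
    finally:
        # Always end backup mode, even if the copy fails.
        run_sql("SELECT pg_stop_backup();")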

My idea:
- 1 A complete database backup from scratch (it is implemented right now)
- 2 An online streaming backup that updates my base backup continuously every time changes are made (half of the functionality is already implemented)
- 3 The ability to redirect the online streaming backup files to a remote server machine (FTP); the archive_command parameter in postgresql.conf can do that already, but the method is not 100% continuous, and big holes of data loss can occur (see the sketch after this list)
- 4 Version management of multiple backup lines by timestamp (not yet implemented)
- 5 A recovery option inside the psql client, for doing the disaster recovery (not yet implemented)
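
For item 3, a minimal sketch of what the shipping side could look like, assuming it is wired into postgresql.conf as archive_command = 'python /usr/local/bin/ship_wal.py %p %f'; the host name, credentials, and remote directory below are placeholders, not a proposal for fixed names:

    # ship_wal.py -- minimal sketch: push one finished WAL segment via FTP.
    # Invoked by the server as: ship_wal.py <full-path> <file-name>
    import sys
    import ftplib

    wal_path, wal_name = sys.argv[1], sys.argv[2]

    ftp = ftplib.FTP("backup.example.com")   # assumed remote server
    ftp.login("waluser", "secret")           # assumed credentials
    ftp.cwd("/wal_archive")                  # assumed target directory
    with open(wal_path, "rb") as f:
        # Any FTP error raises, Python exits non-zero, and the server
        # retries the segment later instead of deleting it.
        ftp.storbinary("STOR " + wal_name, f)
    ftp.quit()

Because the server only hands a segment to archive_command once it is full, this alone still leaves the up-to-16-MB window; the streaming idea in item 2 is about closing exactly that gap.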

Benefits:

All users of huge databases (mission-critical and always online) could bring their databases back up with the same information as 1-5 seconds before the crash occurred!
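
To sketch the recovery side (items 4 and 5), here is the counterpart script a point-in-time recovery could call, assuming recovery.conf contains restore_command = 'python /usr/local/bin/fetch_wal.py %f %p' plus a recovery_target_time picked from the timestamped backup lines; again, server and paths are placeholders:

    # fetch_wal.py -- minimal sketch: fetch one archived WAL segment via FTP.
    # Invoked by the server as: fetch_wal.py <file-name> <full-path>
    import sys
    import ftplib

    wal_name, wal_path = sys.argv[1], sys.argv[2]

    ftp = ftplib.FTP("backup.example.com")   # same assumed server as above
    ftp.login("waluser", "secret")
    ftp.cwd("/wal_archive")
    with open(wal_path, "wb") as f:
        # Write the requested segment where the server expects it; a
        # missing file raises, and the non-zero exit tells recovery
        # there are no more segments to replay.
        ftp.retrbinary("RETR " + wal_name, f.write)
    ftp.quit()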

PS:
EMC Software has a tool that can do that for Oracle and MSSQL, but there is no such option available for Postgres :(

Sorry for my bad English; I hope there is someone who can understand the problem.

Apoc

