Arno Lehmann wrote:
> Hi,

Hi :) Thanks for answering!

> Ok, just to make sure I understand correctly: You back up to file
> volumes?

Exactly. Disk space is getting way too cheap ;-)

>> Now my question is, how would you accomplish this?
>
> rsync.

Feels good to hear rsync twice, because this is what I suggested in the
first place; it just felt "too simple".

> Not necessarily. You can implement something where you dump the
> catalog for the catalog backup (which you probably already do) but,
> instead of deleting that database dump, you keep it.
>
> Now, in case of a serious problem in Bacula or your catalog database,
> after a day's backups you have the complete set of data you need to
> create a new Bacula instance, with all the existing data.
>
> If you rsync that whole file set (which would best include your
> configuration files, bootstrap files, and source code of the Bacula
> you're running, along with the ./configure line) to your other two
> sites, you should have what you want:
>
> The ability to quickly set up a new Bacula instance with all the data
> from the original one.

Yes, it sounds like this is the easiest way.
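For the archives, here is roughly the kind of post-backup script I have
in mind. A minimal sketch only; the paths, host names, and the mysqldump
invocation are made-up examples, not my actual setup:

    #!/bin/sh
    # Runs after the nightly jobs, e.g. from cron. Example paths only.

    # Keep a fresh catalog dump next to the volumes.
    # (MySQL shown here; pg_dump would do the same for PostgreSQL.)
    mysqldump bacula > /srv/bacula/catalog/bacula.sql

    # Mirror volumes, catalog dump, bootstrap files, and the Bacula
    # configuration to the two remote sites. rsync only transfers
    # changed data, so unmodified volumes cost nothing to re-sync.
    for site in backup2.example.com backup3.example.com; do
        rsync -a /srv/bacula/volumes/   $site:/srv/bacula/volumes/
        rsync -a /srv/bacula/catalog/   $site:/srv/bacula/catalog/
        rsync -a /srv/bacula/bootstrap/ $site:/srv/bacula/bootstrap/
        rsync -a /etc/bacula/           $site:/srv/bacula/etc/
    done

I deliberately left out --delete: it would keep the mirrors tidy when
volumes get recycled, but it would also propagate an accidental deletion
on the primary to all copies, which rather defeats the paranoia.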
> In case you only lose your volumes and need to access them, it should
> be easily possible to either copy the remote volumes to the local
> site, or even access them remotely.

Right, although it should be hard enough to lose a RAID 6. But as I
said, we're talking paranoia here.

>> Plus I don't feel comfortable encapsulating the File-Tapes.
>
> No need to do that, as you can recreate the existing file structure
> easily.
>
>> Would that become a problem? As far as I see it, I wouldn't be able
>> to restore client-based. I'd have to restore a whole tape, and this
>> way, restore all clients' backups on ServerA.
>
> Erm... no. Unless you're talking about bextract without a bootstrap
> file. Bootstrap files are important :-)

I was talking about overwriting the tape file, which includes multiple
clients' backups.

>> Not great, but hey, if my whole live datacenter gets destroyed, this
>> sounds like a reasonable kind of work.
>>
>> I see another option, which would be to just copy the files via NFS
>> or iSCSI, or something, which I actually would prefer, because it
>> would work without encapsulation. I'd end up with a "dumb" backup,
>> with which I would be able to restore my original director by hand.
>
> Yup, that's close to what I suggested, but rsync is a useful tool here
> as it will not blindly copy unmodified files each time.
>
>> Remarks? What do you think? Did I miss something? It doesn't sound
>> great to me, but would work, as far as my (limited) knowledge goes.
>
> Well, the source code, configuration files, and everything else you
> need to set up a new Bacula instance.

A little embarrassing for me, but I am using Ubuntu's 2.0.3 packages,
so no need for that :-( I was close to switching to source code, but in
April I'll have 2.2.6 with Ubuntu Hardy. So laziness wins; the packages
are actually compiled very well.

> If you want to set up a second, emergency DIR, things are only a tiny
> bit more complicated: I'd do that by setting up a slave catalog
> database remotely, install and test a Bacula DIR and SD there, sync
> the volumes and configuration, and only start the DIR and SD when
> necessary, i.e. when your primary site fails and when you do your
> routine test (which you run, hopefully :-) The catalog, in such a
> scenario, should always be up to date without the need to transfer
> whole dumps continuously.

Another quite appealing option. I'll have to consider this in a quiet
moment.
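If I go that route, the slave catalog could be plain MySQL replication.
A minimal sketch, assuming a MySQL catalog and made-up server IDs (a
PostgreSQL catalog would need one of its own replication tools instead):

    # my.cnf on the master (primary DIR site), example values only:
    [mysqld]
    server-id    = 1
    log-bin      = mysql-bin
    binlog-do-db = bacula

    # my.cnf on the slave (standby DIR site):
    [mysqld]
    server-id       = 2
    replicate-do-db = bacula
    read-only       = 1

The slave is then pointed at the master with CHANGE MASTER TO and
started with START SLAVE, so the standby catalog stays current without
shipping full dumps around.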
>> 2.) Backup using a second director, so "split" the backup.
>> Each of the above mentioned backup servers is going to be a full
>> Bacula install anyway (because it will act as the backup server for
>> the site it is located at), so I could use it.
>
> Well, that makes things more interesting... basically, I'd recommend
> running only one DIR, as it keeps the setup much easier to manage.
>
> If, for example due to strict separation between the facilities, you
> have to run several DIRs, I'd simply multiply my primary layout, i.e.
> treat all the DIRs as independent instances.
>
> You can, though, have SDs local at the facilities and would need only
> a bit of thought and planning to be able to move the volumes around,
> in case of an emergency.
>
>> I could, let's say, do a weekly full backup directly to the other
>> server. This way it wouldn't be any kind of "hand work", but I'd
>> split up my backup, and double my configuration (actually it would
>> be times three, for 3 servers), because all servers need the client
>> configuration, and all clients need to know the 3 servers.
>>
>> 3.) Clone the backup servers
>> This is an option I pretty much dismissed already. While it would
>> make sure that I have a fully working backup system, it conflicts
>> with the idea of each backup server being master for its site, and
>> only copying *some* but not *all* clients' backups.
>> A slightly modified version of this would be to use 3 directors and
>> storage daemons on each box, having a clone of each DIR and SD on
>> each machine. Still not very appealing.
>
> I'd do it with one DIR, three SDs, and two standby/emergency DIRs with
> slave catalog DBs.

Yes, I'm really tempted to follow that advice. I'll have to look into
some issues regarding the networking (SD + DIR will be using RFC 1918
addresses, but the servers will be spread over different AS), but it
still looks like the best solution to my problem.

>> I'm not sure if I have to be able to restore easily from all servers
>> but the original box. I mean, the backup is really just for bad bad
>> baaaaad situations, not for everyday use. But if there is an option
>> that would include an easy restore, hey, you have my attention!
>
> Ok... just make sure that your client addresses work across all three
> sites, and you have the SD setups you'll need. In case one SD is lost,
> and you need to restore its data, just change the address it is known
> under in the DIR configuration and you're ready. Sounds good, right?
>
> ;-)
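It does :-) If I read the manual right, that is essentially a one-line
change in the DIR's Storage resource; a sketch with made-up names,
addresses, and password:

    # bacula-dir.conf, example only
    Storage {
      Name       = site-a-sd
    # Address    = sd.site-a.example.com  # site A's SD, now lost
      Address    = sd.site-b.example.com  # SD that holds the copies
      SD Port    = 9103
      Password   = "sd-password"
      Device     = FileStorage
      Media Type = File
    }

After a reload, restores for site A's clients would simply pull their
data from the other SD.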
>> So, bottom line is: I can't figure out how I can manage this. I need
>> some ideas, so please, dear list, fill my brain with ideas that will
>> give me a murderous headache for a bunch of days!
>
> Actually, I hope this was clear enough to not push you into a
> headache :-)

Yes, thank you! :-) I really have to thank you and Dan again! I was
talking to a colleague of mine this afternoon, telling him I was going
to ask the mailing list about this issue. And I told him about these
two Bacula experts who are extremely active on the list, from whom I
hoped to receive an answer. You may guess who I was talking about ;-)

Thanks a bunch! You guys rock!

Cheers,
Philipp