nathan r. hruby wrote:
> Hi,
>
> I have a small HA cluster using shared storage. The shared storage
> mounting is handled by the cluster software and follows the services
> that require the shared storage points. Obviously I'd like to back up
> this shared storage.
>
> Since these disks move around, I'd rather not point Bacula at each box
> and back up everything including the shared storage, since restores
> then become a problem (the same files end up spread across multiple
> host jobs).
>
> My solution, I think, is to simply run a separate director configured
> for the hostname of the HA service it is running, and back up just the
> HA data.
I've been meaning to do this too; it should be quite easy for me. Our
cluster is a failover one, with shared storage that's only mounted on one
machine at any one time. It has an IP address which gets failed over along
with the storage, so it should just be a case of splitting the jobs (which
is actually already done) and pointing them at the clustered address
instead of the machines' own IP addresses. If that makes sense...

I'd split your jobs into three:

Job {
  FileSet = "Server"
  Client = "server-a-fd"
  ...
}

Job {
  FileSet = "Server"
  Client = "server-b-fd"
  ...
}

Job {
  FileSet = "SharedStorage"
  Client = "cluster-fd"
  ...
}

Doing a restore shouldn't really be tricky: you can restore any job to any
client (well, unless you want to restore Windows files to a non-Windows
box and you didn't use the "Portable" directive).

-- 
Russell Howe
[EMAIL PROTECTED]
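For reference, the director-side Client resource for the clustered address
might look something like this. This is only a sketch: the floating
hostname (cluster.example.com), the password, and the catalog name are
placeholders, not anything taken from the posts above.

Client {
  Name = "cluster-fd"
  Address = cluster.example.com  # virtual hostname that fails over with the storage
  FDPort = 9102                  # default file daemon port
  Catalog = "MyCatalog"
  Password = "changeme"          # must match the Director password in each node's bacula-fd.conf
  AutoPrune = yes
}

Whichever node currently holds the service answers at that address, so the
file daemons on both nodes need matching Director resources (same
password) in their bacula-fd.conf for this to work.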
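As for restoring a job to a different client, recent bconsole versions
accept a restoreclient keyword, so a cross-client restore might look
something like the following (client names as in the jobs above; if your
version lacks the keyword, the interactive restore menu lets you change
the restore client before running the job):

* restore client=cluster-fd restoreclient=server-a-fd where=/tmp/restore select current

Then mark the files you want in the tree, type "done", and run the job.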