Robert LeBlanc wrote:
> I don't think you want to cluster the bacula FD. Each node of the cluster is
> very unique and if you try to fail over the back-up program, you will get
> some really strange results. All of a sudden a computer that you think has
> one name and identification and config in the cluster now has another. This
> is assuming that you are backing up the entire machine. If you are only
> backing up the data disk, this might be ok. In such a situation you will
> probably want a client on each node and then one that "fails over".
Yes, this is the configuration I'm looking for.

Let's take a clustered file server as a simple example, with two cluster groups:

Server #1, called 'green'
Server #2, called 'blue'

Cluster group 'User & Dept filestores', with a disk or two, a network name
('usrdeptsrv'), an IP address (192.168.0.100) and file shares ('homedrives',
'deptstore').

Cluster group 'Archive storage', with another disk or two, and again a network
name ('archivesrv'), an IP address (192.168.0.200) and a file share ('archive').

Plus a group for the quorum, which isn't really relevant to the backup scenario.

Now, what I'm aiming for is to be able to back up green and blue's system state
and configuration, so I can install a file daemon on each of them, calling them
something like green-fd and blue-fd. However, I can't really use those to back
up the clustered file stores, as I don't know whether green or blue will have
the drives. I could just run the backups for the clustered drives on both
servers, and the one which has the drives will succeed while the one without
them fails, but that's not an ideal scenario.

The other thing you can do is to set green-fd and blue-fd up with the same
password, then create another Client resource in the director's config called
(say) 'archivesrv-fd', using the IP address which is linked to the archive
storage group. That way you can be sure the connection is being made to the
node which has the files, and you'll be able to authenticate because the
credentials are the same on both nodes. Still, that's not ideal, and it isn't
really obvious from the config what's going on. (I've put a rough sketch of
what I mean below my sig.)

> What I've done for clusters is just put the client on each node and which ever
> one has the data disk backs it up. All of our clusters have fail back so the
> data usually resides on the primary node, but if it doesn't then I can
> restore it from the standby node job if the job ran when the standby node
> owned the disk. You can easily restore to the standby node if the primary
> node is dead. Honestly, I think you are go through a lot more work than it
> is worth. If you have a really good reason, I'd like to know.

Well, my reason is that it really shouldn't be hard, and it should be the
Right Way(tm) to do it. Install green-fd to listen on green's IP address and
blue-fd to listen on blue's (instead of having them listen on 0.0.0.0:9102!),
and then run one fd per cluster group, failing over with the cluster (second
sketch below). That keeps a clean separation between the director and the
configuration of the servers. It looks like it should be relatively easy, but
I think that with my messing around trying to get it to work, I may have left
trails of destruction in the parts of the registry to do with service
registrations and got Windows all confuddled.

--
Russell Howe
[EMAIL PROTECTED]
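P.S. To make the 'same password' workaround a bit more concrete, here's roughly
what the director side could look like. The passwords, the green/blue addresses
and the job/fileset names are invented for the example, and I haven't tested
this exact config, so treat it as a sketch rather than something known to work:

  # bacula-dir.conf (sketch)
  Client {
    Name = green-fd
    Address = 192.168.0.1       # green's own address (made up for the example)
    FDPort = 9102
    Catalog = MyCatalog
    Password = "shared-secret"  # same password configured on both nodes
  }
  Client {
    Name = blue-fd
    Address = 192.168.0.2       # blue's own address (made up for the example)
    FDPort = 9102
    Catalog = MyCatalog
    Password = "shared-secret"
  }
  Client {
    Name = archivesrv-fd
    Address = 192.168.0.200     # the clustered IP of the 'Archive storage' group
    FDPort = 9102
    Catalog = MyCatalog
    Password = "shared-secret"  # authenticates against whichever node owns the IP
  }

  Job {
    Name = "ArchiveStore"
    Client = archivesrv-fd      # always reaches the node that currently has the disks
    FileSet = "Archive Set"
    JobDefs = "DefaultJob"
  }

The job always connects to whichever node currently owns 192.168.0.200, which
is the point, but nothing in the config tells you that archivesrv-fd is really
just green or blue wearing a different hat.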
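And this is the sort of thing I mean by the Right Way(tm): each physical node
runs an FD bound to its own address, and each cluster group gets its own FD
bound to the group's address, registered with the cluster (e.g. as a generic
service in the 'Archive storage' group) so it moves with the disks. Again, the
per-node addresses, paths, director name and passwords below are placeholders,
and I've trimmed the Messages resource:

  # bacula-fd.conf on green (sketch)
  FileDaemon {
    Name = green-fd
    FDAddress = 192.168.0.1     # bind to green's own IP, not 0.0.0.0
    FDport = 9102
    WorkingDirectory = "C:/bacula/working"
    Pid Directory = "C:/bacula/working"
  }
  Director {
    Name = backup-dir           # whatever your director is called
    Password = "shared-secret"
  }

  # bacula-fd.conf for the 'Archive storage' group (runs on whichever node owns it)
  FileDaemon {
    Name = archivesrv-fd
    FDAddress = 192.168.0.200   # the group's clustered IP
    FDport = 9102
    WorkingDirectory = "Q:/bacula/working"  # somewhere on the clustered disk
    Pid Directory = "Q:/bacula/working"
  }
  Director {
    Name = backup-dir
    Password = "shared-secret"
  }

Because each FileDaemon binds to a specific FDAddress, both instances can
listen on port 9102 on the same box without clashing, and the director's
Client entries never need to know which physical node is answering.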