On 10/24/2016 04:15 PM, Josh Fisher wrote: ... snipped ...
Yes, this is more or less what I've been doing up until now. The good
news is, it seems I don't have to anymore. Here's what I have working
now: a corosync/pacemaker cluster with node A @ 1.2.3.4, node B @
1.2.3.5, and the cluster IP @ 1.2.3.1, with shared storage mounted at
/raid on the active node.

node A bacula-fd.conf:

  FileDaemon {
    name = nodea-fd
    ...
  }

node B bacula-fd.conf:

  FileDaemon {
    name = nodeb-fd
    ...
  }

bacula-dir config:

  Client {
    name = nodea-fd
    address = 1.2.3.4
    ...
  }
  Client {
    name = nodeb-fd
    address = 1.2.3.5
    ...
  }
  Client {
    name = cluster-fd
    address = 1.2.3.1
    ...
  }

  Job {
    name = nodea-etc
    client = nodea-fd
    fileset = etc
  }
  Job {
    name = nodeb-etc
    client = nodeb-fd
    fileset = etc
  }
  Job {
    name = cluster-raid
    client = cluster-fd
    fileset = raid
  }

-- and it's happily spooling the 21GB /raid right now.

What seems to be happening is that Bacula connects to the cluster
address (checked with lsof -i), completely ignores the FD name
"cluster-fd", and backs up "fileset = raid" from "nodea-fd". Which is
great; if not checking the FD name is a bug, *please* don't fix it. :)

So all you need to do is start the FDs at boot listening on *, and the
director will automagically get the shared filesystem off of the node
that happens to have it mounted. (Of course the backup will fail if the
cluster fails over or the connection is otherwise disrupted.)

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
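
P.S. For the record, here's roughly what "listening on *" looks like on
the FD side. This is a sketch rather than my exact config: if memory
serves, leaving FDAddress unset (or setting it to 0.0.0.0) makes
bacula-fd bind to all interfaces, so it also answers on the floating
cluster IP whenever that lands on the node. The directory paths are
placeholders; use whatever your distro ships.

  FileDaemon {
    name = nodea-fd
    FDport = 9102                       # default FD port
    FDAddress = 0.0.0.0                 # bind to all interfaces, incl. the cluster IP
    WorkingDirectory = /var/lib/bacula  # path is distro-dependent
    Pid Directory = /var/run
  }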
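
P.P.S. The "raid" fileset referenced above is nothing special; something
along these lines would do (the MD5 signature option is just what I'd
normally use, adjust to taste):

  FileSet {
    Name = raid
    Include {
      Options {
        signature = MD5   # checksum each file for verify/restore
      }
      File = /raid        # the shared storage mountpoint
    }
  }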