On Wednesday 12 September 2007 18:03, David Boyes wrote:
> > On Wednesday 12 September 2007 17:03, David Boyes wrote:
> > > I'm not sure I understand you completely. If you have a storage pool
> > > defined in one SD and another storage pool defined in a separate SD,
> > > you should be able to migrate from one SD to the other, as long as
> > > the director in a particular instance knows about both SDs. We do
> > > that now, and it seems to work OK (particularly if you migrate from
> > > a disk pool in one location to a disk pool in the central location
> > > and then migrate to tape at the central location). We run a separate
> > > SD on a different port for each satellite location.
> >
> > I guess I don't understand what you are saying above, because it
> > sounds like you can write a backup to SD1, then do a migration job
> > that reads the volume on SD1 and writes it to SD2. Is that what you
> > are doing? And if so, can you explain how?
The only thing that makes sense is that you are describing a hypothetical
situation rather than a real working case, but when I read your text, you
seem to be saying that it is a real working case, which has me perplexed --
see below.

> The scenario is: one director (dir1) on host1, one SD (sd1) on host2,
> and a second SD (sd2) on host3. Both SDs contain specific storage pools
> (disk-based pool1 on host2, tape-based pool2 on host3), and both are
> defined to the director in the normal way, specifying the TCP hostname
> and port in the SD resource definition. The NextPool value for pool1
> points to pool2.

OK, so far.

> Backups controlled by dir1 go from the client FD to pool1 as normal. A
> migration job on dir1 is scheduled as normal. Dir1 reads volumes in
> pool1 and migrates them to pool2.

OK, so far.

> Dir1 knows that pool1 is managed by sd1 and pool2 is managed by sd2,
> and how to communicate with the appropriate SD.

OK, so far.

> So far, it seems to just work. Dir1 schedules mounts from pool1 and
> pool2, copies data, and updates the catalog just like it's supposed to.

At this point, I don't get it, unless you added new code or I don't
understand my own code (it happens sometimes). In the current Migration
code, you *could* specify reading in one SD and writing to another as you
indicate above, and the Dir would *probably* try to obey. The problem is
that for a migration job, the SD expects both the read device and the
write device to be defined in the same SD. It has no way to read data and
send it to another SD. In other words, using your example above, if Dir1
sends a migration request to SD1, SD1 expects to read from a device it
has locally and to write to a local device. It knows nothing about SD2.

> If we have a large remote location, we can add a local SD (sd3) to the
> picture. Client backups at the remote site go to a disk pool controlled
> by sd3, then to a pool on sd1, and then to a pool on sd2 via normal
> migration.
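For concreteness, the scenario described above would correspond to
something like the following fragment of dir1's bacula-dir.conf. This is
only a hedged sketch: the passwords, device names, media types, and job
details are placeholders, and the exact set of directives available (e.g.
Storage inside a Pool resource) depends on the Bacula version in use.

```
# Sketch of dir1's bacula-dir.conf for the two-SD scenario (illustrative only)

Storage {
  Name = sd1
  Address = host2           # disk-based SD
  SDPort = 9103
  Password = "sd1-secret"   # placeholder
  Device = FileStorage      # placeholder device name
  Media Type = File
}

Storage {
  Name = sd2
  Address = host3           # tape-based SD
  SDPort = 9103
  Password = "sd2-secret"   # placeholder
  Device = TapeDrive        # placeholder device name
  Media Type = LTO-3
}

Pool {
  Name = pool1
  Pool Type = Backup
  Storage = sd1             # pool1 lives on sd1
  Next Pool = pool2         # migration target
}

Pool {
  Name = pool2
  Pool Type = Backup
  Storage = sd2             # pool2 lives on sd2
}

# A migration job reading pool1; Next Pool selects the destination pool.
Job {
  Name = migrate-pool1
  Type = Migrate
  Pool = pool1
  Selection Type = Volume
  Selection Pattern = ".*"
  Client = some-fd          # required by the Job syntax; placeholder
  FileSet = "Full Set"      # placeholder
  Messages = Standard
}
```

Kern's point in this thread is that even if the Director accepts such a
configuration, the SD-side migration code expects the read and write
devices to belong to one SD, so the cross-SD case sketched here would not
actually work in the code as it stood.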
> The director in control of the process could run on a local machine at
> the remote location and control the local and remote SDs, or the
> director could run remotely on host1 and control the SDs (the amount of
> network traffic generated by a director is minuscule compared to the
> traffic from FD to SD, and migration can be scheduled in a way that is
> friendlier to network utilization).
>
> For us, the whole idea relies on the ability to easily separate Bacula
> functions onto separate hosts, and on the fact that pools (and thus
> pool volumes) are associated with an SD (this is one reason why I
> argued about this strongly a while back -- that jobs should always and
> only deal with pools, and let the pool selection determine which SD and
> device are needed).
>
> What we can't do yet is share the SDs between director instances (e.g.
> have dir2 talk to sd1 and sd2; the SDs get confused about whose
> database to update).

Can you explain this in more detail? I'm probably forgetting some detail,
but the only requirement that I can think of at the moment is that having
two directors talk to the same SD requires each of the directors to have
a separate catalog. The problem then is how to manage the Volumes, since
a Volume cannot (well, should not) appear in two databases. As long as
the two directors always use different Volumes, things should work fine.

Regards,

Kern

> Each director we create needs a separate set of SDs to talk to, either
> by sitting on a different TCP port or on a different machine at the
> usual port.
>
> This works well for us because in our case host1, host2 and host3 are
> Linux virtual machines on the mainframe, and we can create new machines
> easily and attach virtual tape drives from the VTS to any virtual
> machine in a controlled manner via the VM tape daemon in contrib/vm.
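The "separate SD on a different port" arrangement mentioned above can be
sketched as follows. Again, this is only an illustration: the names,
ports, paths, and passwords are placeholders. A second SD instance gets
its own configuration with a distinct SDPort, and each director addresses
it by that port.

```
# Second SD instance on the same host, bacula-sd.conf (sketch only)
Storage {
  Name = sd-satellite2
  SDPort = 9104                        # non-default port; the standard
                                       # SD instance keeps 9103
  WorkingDirectory = /var/bacula/sd2   # separate working and pid dirs so
  Pid Directory = /var/run/bacula-sd2  # the two instances don't collide
}

# Matching resource in that director's bacula-dir.conf:
Storage {
  Name = sd-satellite2
  Address = host2
  SDPort = 9104             # must match the SDPort above
  Password = "sd2-secret"   # placeholder
  Device = FileStorage2
  Media Type = File2        # distinct Media Type keeps these volumes
                            # from being mounted by the wrong device
}
```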
> The creation of a new "Bacula server image" is a matter of cloning a
> template of a virtual machine (about 17 seconds) and doing a small
> amount of customization for the appropriate Bacula component (e.g.
> director) role it will serve. We also have memory-speed virtual
> networks between the SD images and the director, so network data
> transfer isn't really an issue for us (the traffic stays inside the
> mainframe).
>
> I'm not sure how well our setup would work with physical machines.
> Running the SDs on different ports would get to be a pain to manage,
> and you'd need an external arbiter tool controlling access to physical
> drives that most Unix systems don't have.

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users