Hallo Arno!

<[EMAIL PROTECTED]> aka Arno Lehmann wrote on Thu, 28 Feb 2008 17:38:55 +0100 in m2n.bacula.users:
|> The only thing that has changed is two values in the catalog
|> which mark the old job as migrated and the new job as the target
|> of that migration. Now if one would somehow change these two values
|> back to default, then one would have two jobs with different
|> job-ids but the same content. Certainly, some piece of the Bacula
|> code might be incompatible with such an action - as I said, it
|> currently seems unsupported. But not much work would be needed to
|> implement it.
|
| Restores - currently, Bacula doesn't know how to handle restores where
| the needed data is available in more than one place.

Yes... could be. Have you tried it?

As I said concerning the concept: the copy instance of the backup will
not have file records in the database - it will only have the job
record, with a different jobid. There will be no choice - restores will
always be done from the primary instance. The copy instance only comes
into play if we lose the primary instance, and then it will only be
migrated back to the replacement media of the primary instance.

Such a setup should be suitable if you want a second copy for
emergencies (broken cartridge etc.) - but if you want a second copy for
load-sharing etc., well, that's a different matter.

|> So, if I were in need of such functionality, and as the Bacula
|> license allows us to make modifications to it as we deem
|> appropriate, I would just make it work - at my own risk, on my own
|> responsibility. (Alternatively, one could hire somebody who is
|> willing and able to take the responsibility.)
|
| I hope you would submit it as a patch, as that would save Kern and
| others from doing things again in the near future :-)

Aaaahhhh!!! :-)))

Now, the point is: I do not need it currently. I am perfectly happy
with just running two backups and having the "since" values tuned in a
way that they always overlap and never leave a gap. And, to be clear:
for me this here is pure hobby; it is sport.
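Just to make the catalog point above concrete: if one really wanted to
turn a migrated job back into an ordinary one, it would presumably come
down to something like the following. This is a hedged sketch only - I
am assuming the standard Bacula catalog schema, where the old job gets
marked with Type = 'M' ("migrated") and the new job carries the link in
PriorJobId; the JobIds here are invented, and I have not tried it:

```sql
-- Hedged sketch against the Bacula catalog (column semantics assumed,
-- JobIds invented). Suppose old job 1234 was migrated into new job 5678.
UPDATE Job SET Type = 'B'     WHERE JobId = 1234;  -- old job: plain Backup again
UPDATE Job SET PriorJobId = 0 WHERE JobId = 5678;  -- new job: drop the migration link
```

After that, both jobs should look like independent backups with the
same content - with all the caveats about unsupported territory
mentioned above.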
Or maybe it is even worse: maybe I am contractually prohibited from
doing development work on a piece of software like Bacula. So I will do
exactly what is mandatory within my Code of Honour: I will share those
patches and fixes that I have to implement in order to get Bacula to
back up my own private home computers the way I like it. And that's
all.

|> No - I'm speaking about the problem of getting the daily amount of
|> data through some drives' R/W heads within 24 hours. There are sites
|> ...
|
| I would even recommend keeping Bacula in mind if you need to handle
| such a scenario... I don't think we're far from being reliable enough
| for such a beast.

You would? Well, then maybe I see problems where there are none - or
maybe I have just seen way too many sig-11s...

| Just out of curiosity - which backup solutions do you know that
| handle this with reasonable performance? TSM?
| I mean, the disk throughput problem is a universal one, and my (or
| rather, my customer's) experience is that you can build really fast
| disk arrays, given enough resources.

Maybe I didn't make myself very clear - I am not concerned about the
physical throughput; that can be considered later. I am concerned about
the logic. But maybe I am just missing the right idea, so please give
me a clue:

When I create a disk storage object, I define a "Storage Device" as the
name of a directory on disk. Within this directory, "Volumes" will be
created. But it is only ever possible to access one Volume at a time -
and *either* for reading *or* for writing. So it is never possible to
read from a disk storage pool while it is being written to.

To make it more clear: I do *not* want to read and write the *same
Volume*! I want to fill one Volume, then close it and write to the next
Volume, and at that point read the previous Volume.

Now imagine we have a tape library, with an autochanger and two drives.
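Such a two-drive library would typically be defined in the Storage
Daemon along these lines (a sketch only; the resource names, device
paths, changer script path, and media type are illustrative, not taken
from any real setup):

```
# bacula-sd.conf fragment - hedged sketch, names and paths illustrative
Autochanger {
  Name = "LTO-Changer"
  Device = Drive-0, Drive-1
  Changer Device = /dev/sg2
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
}
Device {
  Name = Drive-0
  Media Type = LTO
  Archive Device = /dev/nst0
  Autochanger = yes
  Drive Index = 0
}
Device {
  Name = Drive-1
  Media Type = LTO
  Archive Device = /dev/nst1
  Autochanger = yes
  Drive Index = 1
}
```

Note that the Director still sees this whole library as a single
Storage resource - which is exactly where the trouble starts.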
So it should be perfectly legal to write to one Volume in one drive and
at the same time read another Volume in the other drive. But Bacula
will not let me do that: in the Director, the autochanger is defined as
only one storage device, and as soon as I try to read and write at the
same time on this device (even to different storage pools within the
library), I get this:

> Fatal error: Job canceled. Attempt to read and write same device.

And that is why I am asking: how can a disk storage object be designed
to overcome this bottleneck?

rgds,
PMc

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users