On Mon, 30 Jan 2006, Arno Lehmann wrote:
Sh*t. Probably too late now, but taking the job status mails and extracting
some sort of volume use log - recording at least which job was written to
which tape - and printing that might be interesting... but of course
everybody relies on the catalog and a correct setup :-|
Exactly.
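For anyone who still has the old job report mails archived, even something as
crude as the following would recover a rough job-to-volume log. The mail path
is just an example - point it at wherever the reports are stored:

  # Pull the JobId / Job / Volume lines out of archived Bacula job reports
  # to get a rough record of which job went to which volume.
  grep -hE 'JobId:|Job:|Volume name' /var/mail/bacula-reports* > volume-use.log

Not pretty, but it beats having nothing when the catalog is suspect.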
1: The program will be able to take a "Pool" argument and check every
single tape in that pool, preferably checking an autochanger's index
and loading/unloading tapes unsupervised as needed, requesting
changeouts if the necessary tapes are not in the changer.
Basic volume management, in my opinion.
The pool argument should not be required, though - there might be cases where
you need to rescan your whole tape collection without knowing which tapes
belong to which pool.
That's exactly my situation, but it's as easy to say pool=all.
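Just to sketch the interface I have in mind - none of this exists yet, so the
program name, options and device paths below are all made up:

  # Hypothetical: resync a single pool, driving the changer as needed
  bsync -c bacula-sd.conf -p Monthly -a /dev/sg1 /dev/nst0

  # Hypothetical: rescan everything, pool by pool
  bsync -c bacula-sd.conf -p all -a /dev/sg1 /dev/nst0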
2: The program must be able to run while Bacula is running, using tape
drives when Bacula is not using them. This is necessary because a large
pool or group of pools may take weeks to fully resync, and tying up a
drive for that period of time will (of course) interfere with backups.
Definitely. Using Bacula's current autochanger support, you can set up drives
not to be selected automatically. Those would then be the ones you use for
restores and resync, right?
Under 1.38, yes. I'm still on 1.36 (holding off updating until I get the
sync + restore done.)
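For the archives, the 1.38 directive Arno is referring to is set per Device in
bacula-sd.conf. Something along these lines (drive name is just an example,
the other Device directives are omitted):

  Device {
    Name = Drive-1
    # Archive Device, Media Type, Changer Device etc. as usual...
    AutoSelect = no   # never picked automatically; kept free for restores/resync
  }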
This will require some sort of cooperation with Bacula, perhaps
checking the Bacula execute queue at the end of each tape scanned
and standing aside until the queue is empty, or the drive is free again.
I would suggest implementing that sort of job in the normal Bacula operations
- done by the SD, controlled by the DIR.
That'd add complexity.
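A standalone program could still stay out of the Director's way without any
new hooks, though - e.g. after each tape, ask bconsole whether anything is
running before taking the drive again. A rough sketch; the paths and the grep
pattern would need checking against your Director's actual status output:

  # Wait until the Director reports no running jobs before grabbing the drive
  while echo "status dir" | bconsole -c /etc/bacula/bconsole.conf \
        | grep -q "is running"
  do
      sleep 300
  done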
The current volume management tools
are mainly disaster recovery tools, but the scenario you describe is not
exactly what I'd consider disaster recovery. Rather, it's more or less a
day-to-day volume / catalog management task. Well, in your case it's
month-to-month.
Exactly - consistency checking, etc.
3a: Alternatively, the program will "quickly" verify that a file block on
tape is correct by reading in the first "N" records and checking that
they tally with the database records of their positions.
Sounds like a command option to me.
That's what I had in mind
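Essentially, pull the recorded positions out of the catalog and spot-check
them against the tape. Roughly like this (MySQL syntax, standard catalog
column names; the database name, credentials and volume name are examples):

  # Expected start file/block of the first few jobs on a volume
  mysql bacula -e "
    SELECT jm.JobId, jm.StartFile, jm.StartBlock, jm.FirstIndex, jm.LastIndex
      FROM JobMedia jm, Media m
     WHERE m.VolumeName = 'Monthly-0042' AND m.MediaId = jm.MediaId
     ORDER BY jm.StartFile, jm.StartBlock LIMIT 5;"

Then positioning the tape to those file/block numbers and checking that the
expected session records are actually there should be enough for a quick pass.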
Moving on to "unknown" areas of tape
4: The program will "hunt" bootstrap records, if there is enough data in
these to be able to rebuild database entries
That's the part I don't understand - where do you hunt bootstrap records?
Everything known in the catalog is already handled in steps 3 and 3a, right?
So, would you feed it all the .bsr files you have stored on disk or somewhere
else, or are the bootstrap records also written to tape, where they could be
read back?
Bootstraps are written to tape - this would at least build an index of the
jobs so that Bsync then knows how to sequentially read in files.
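In the worst case that's pretty much what bls and bscan can already do today,
except bscan walks the entire volume. For example (volume and device names are
examples; add the usual -n/-u/-P database options to bscan if your catalog
isn't at the defaults):

  # List just the job (session) records on a volume, without touching the catalog
  bls -j -c /etc/bacula/bacula-sd.conf -V Monthly-0042 /dev/nst0

  # Rebuild catalog entries from the tape itself (-s stores records, -m updates
  # the Media record) - slow, since it has to read the whole volume
  bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Monthly-0042 /dev/nst0

The difference with Bsync would mainly be driving this across a whole pool and
the changer unattended.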
4a: Having ingested the bootstrap records, the program will either fully
verify the files' existence on tape, or "quickly" do it, using the
behaviour described in 3a
Having looked at the Bootstraps, I realise they only carry job information
and not file information, so this behaviour is effectively not available.
Well, even if I might never need your suggested enhancement, I can understand
the need for it.
The critical thing is being able to do the verification/resync without
interfering with backups. Two weeks of downtime is a "really bad thing".
AB