On Thu, 14 Dec 2006, Kern Sibbald wrote:

>> That isn't a good idea:
>>    A 1Tb full backup may take several days to run, in LTO2 spool/despool
>>    time alone. Incrementals tend to run daily...
>
> I just described how it is implemented.

Something else I've spotted....

If Maximum Concurrent Jobs is set to 1 and a job is already running:

1: A second job, when started, decides which level to run (rerunning a 
failed backup, etc.) and only then waits on the concurrency limit 
before actually starting.

2: It also sets the "files changed since" cutoff from the start time of 
the last successfully completed job, and only then waits on the 
concurrency limit.


This is counterintuitive - the job should queue itself and only examine 
the last job's status (to rerun failed levels) and compute the "files 
changed since" cutoff when it actually starts running, not when it is 
placed in the run queue.
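To make the suggestion concrete, here is a minimal sketch (invented names, not Bacula source) of deferring the level and cutoff decision until the job is dequeued, so a job that waited behind another sees the newer successful end time:

```python
import queue
import time

run_queue = queue.Queue()

def enqueue(job_name):
    # Queue only the job identity; defer all level/cutoff decisions.
    run_queue.put(job_name)

def run_next(state):
    job = run_queue.get()
    # Decisions happen here, at actual start time:
    since = state["last_ok_end"]          # freshest successful end time
    level = "Full" if state["last_failed"] else "Incremental"
    # ... perform the backup using `since` as the changed-since cutoff ...
    state["last_ok_end"] = time.time()    # record successful completion
    state["last_failed"] = False
    return job, level, since

# Usage: two jobs queued while only one may run at a time.
state = {"last_ok_end": 0.0, "last_failed": False}
enqueue("job-A")
enqueue("job-B")
_, _, since_a = run_next(state)   # job-A runs, records its end time
_, _, since_b = run_next(state)   # job-B's cutoff reflects job-A's run
assert since_b > since_a
```

With the decision made at dequeue time, job-B's incremental covers only the files changed since job-A finished, rather than duplicating everything job-A already backed up.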


While it won't result in any data loss, it _will_ result in unnecessary 
extra backup space being used.


AB


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
