I looked into the duplicate-job options, and according to the
documentation the default is to not allow duplicate jobs. So why am I
seeing duplicate jobs queued? In this case I have a copy-to-tape job:
Job {
    Name = "CopyToTape"
    Type = Copy
    #Schedule = "WeeklyCycleAfterBackup"
    Priority = 40 # after catalog
    Pool = Disk-Pool
    Maximum Concurrent Jobs = 1
    Selection Type = PoolUncopiedJobs
    Messages = Standard

    Level = Full # ignored
    Client = mn-server-fd # ignored
    FileSet = "Standard Full Set" # ignored
}

If one of the copy jobs takes over 24 hours (because of something like a
stuck autoloader), the same jobs are queued up again. Why are these
jobs queued twice? Is it because the jobs are spawned by the copy job, and
the spawning doesn't take into account jobs that are already in the queue?
Could the copy job somehow detect that previously spawned jobs are still
queued? Or is the documentation inaccurate, and the default is actually to
allow duplicate jobs?
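
In the meantime, would explicitly setting the duplicate-control
directives from the Job resource documentation work around this?
Something like the sketch below (untested; I'm assuming Cancel Queued
Duplicates also applies to the jobs a Copy job spawns, which is exactly
the part I'm unsure about):

Job {
    Name = "CopyToTape"
    Type = Copy
    #Schedule = "WeeklyCycleAfterBackup"
    Priority = 40 # after catalog
    Pool = Disk-Pool
    Maximum Concurrent Jobs = 1
    Selection Type = PoolUncopiedJobs
    Messages = Standard

    # Explicit duplicate control instead of relying on the defaults.
    # Assumption on my part: with duplicates disallowed, the newly
    # queued duplicate is the one that gets cancelled, not the
    # long-running copy job that is already past its 24 hours.
    Allow Duplicate Jobs = no
    Cancel Queued Duplicates = yes

    Level = Full # ignored
    Client = mn-server-fd # ignored
    FileSet = "Standard Full Set" # ignored
}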


-- 
Jon Schewe | http://mtu.net/~jpschewe
If you see an attachment named signature.asc, this is my digital
signature. See http://www.gnupg.org for more information.

