In message <[EMAIL PROTECTED]> you wrote:
> C) You run multiple jobs in parallel and want to keep jobs together on
> tape (to allow much faster restores, usually).
How exactly does this work, especially when the spool size is smaller than each of the backups? Let's say I have three jobs "A", "B" and "C" which all use the same volume and run in parallel; each of them writes more data to tape than the spool device can hold. I understand that all three will start simultaneously, filling their respective spool files. Assume "A" is the fastest and fills its spool file up to the maximum size.

* I guess the SD now starts despooling the spooled data of "A", while jobs "B" and "C" continue to run? The FD for job "A" is blocked now? Is this interpretation correct?
* Assume now that "B" fills its spool file.
* I guess that once despooling for "A" has completed, two things will happen: (1) the FD for "A" continues to spool new data, and (2) the SD starts despooling the data of "B". Is this assumption correct?

In the end, we will have a tape (or several tapes) where the data of the three jobs are interleaved in big blocks, each up to the size of the respective spool file. Is this correct? If yes, how will a restore be much faster compared to a tape where all "A" data are consecutive, followed by all "B" and then all "C" data?

Best regards,

Wolfgang Denk

--
Software Engineering: Embedded and Realtime Systems, Embedded Linux
Phone: (+49)-8142-66989-10  Fax: (+49)-8142-66989-80
Email: [EMAIL PROTECTED]
Genitiv ins Wasser, weil's Dativ ist!

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
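P.S. To make the question concrete, here is a toy simulation of the interleaving I have in mind. All names, sizes and rates are invented, and this is only my mental model (one despool at a time, a job blocks while its full spool is being despooled), not Bacula's actual scheduling code:

```python
# Toy model of spooling/despooling with one shared tape drive.
# NOT Bacula's implementation -- job names, sizes and rates are hypothetical.
SPOOL_MAX = 10                        # spool capacity, in arbitrary "chunks"
totals = {"A": 25, "B": 25, "C": 25}  # total chunks each job must write
rates = {"A": 3, "B": 2, "C": 1}      # chunks spooled per tick ("A" is fastest)

spool = {j: 0 for j in totals}        # current spool-file fill level per job
remaining = dict(totals)              # chunks each FD still has to send
tape = []                             # (job, block_size) blocks, in tape order

while any(remaining[j] or spool[j] for j in totals):
    # Each still-running FD spools data; a job with a full spool is blocked.
    for j in totals:
        if remaining[j]:
            n = min(rates[j], SPOOL_MAX - spool[j], remaining[j])
            spool[j] += n
            remaining[j] -= n
    # A full spool (or a finished job's leftover) is despooled to tape as one
    # contiguous block; the drive handles only one despool per tick.
    for j in totals:
        if spool[j] == SPOOL_MAX or (remaining[j] == 0 and spool[j]):
            tape.append((j, spool[j]))
            spool[j] = 0
            break                     # drive busy; the other jobs keep spooling

print(tape)   # blocks from A, B and C end up interleaved on tape
```

Under these assumptions the tape ends up holding spool-sized blocks from the three jobs in mixed order, which is exactly the layout my question is about: each job's data is contiguous only within one block, not across the whole job.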