Quoting ALyarskiy <bacula-fo...@backupcentral.com>:

> Quoting ALyarskiy <bacula-forum < at > backupcentral.com>:
>
> > Quoting ALyarskiy <bacula-forum < at > backupcentral.com>:
> >
> > How can I limit writing jobs to 1 without limiting spooling jobs?
> > I want to have only one job writing to the tape but spool other jobs
> > in parallel.
>
> This is the default as far as I understand. The tape is locked
> exclusively by one writer, while other jobs running concurrently will
> spool data as long as the maximum spool size is not reached. Works
> fine here, but you need a really fast spool area if you want to feed
> something like LTO-4/5/6 at full speed.
>
> Regards
>
> Andreas
>
> It is true for jobs with SpoolData=yes, but I have a few jobs that
> have SpoolData=no, so they write data directly to the tape.
> I want to limit the jobs with SpoolData=no to 1 concurrent job, so
> that only one job can write data to the tape, and allow the remaining
> jobs with SpoolData=yes to spool data to the local drive concurrently
> (5-7 streams).
>
> You might try the "Maximum Concurrent Jobs" directive for the tape
> device, but I guess with this you will block the despooling until all
> non-spooling jobs have passed. Why do you want to have spooling and
> non-spooling jobs for the same device anyway?
>
> Regards
>
> Andreas
>
> Yes, I now use the "Maximum Concurrent Jobs" limitation at the device
> level. I want to write large jobs directly to tape (usually > 50 GB);
> there are no network bottlenecks for these clients. The rest (where
> there may sometimes be network bottlenecks) spool to the local drive.
>
> When local spooling is used, the backup process has to read from the
> client, write to the local drive, then read from it again and write
> the backup to the tape. For large backups this is a HUGE overhead.
> There is also a problem when you have 4-5 large backups at the same
> time: the local drive becomes the bottleneck (I have a 2 Gb network).
> So it would be reasonable to have the possibility of limiting
> concurrent jobs at different levels. For example: 6 jobs running at
> the same time, 2 of them large and writing directly to the tape one
> after the other, while the remaining 4 spool to the local drive and
> wait for those 2 to complete.
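If I read that correctly, your setup boils down to something like the
following (the resource names are made up, and only the directives
relevant here are shown):

  # bacula-sd.conf -- the shared tape device
  Device {
    Name = "LTO5-Drive"              # placeholder name
    Media Type = LTO-5
    Archive Device = /dev/nst0
    Maximum Concurrent Jobs = 1      # applies to every job using this device
  }

  # bacula-dir.conf -- one job writing directly, one spooling
  Job {
    Name = "BigLocalServer"          # large job, fast link, writes directly
    JobDefs = "DefaultJob"
    SpoolData = no
  }
  Job {
    Name = "SlowRemoteClient"        # spools to the local disk first
    JobDefs = "DefaultJob"
    SpoolData = yes
  }

As you noticed, the device-level limit makes no distinction between
spooling and non-spooling jobs; there is currently no separate knob for
the two kinds.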
It all depends. Spooling data adds nearly no overhead if you use some
cheap local storage, as you should. Plug in some SATA disks or, if you
can afford it, SSDs, set up a RAID 0, and spooling will give you the
following benefits:

- Data on tape is written in chunks of the spool size per job, so your
  restores will be a lot faster.
- You will be able to drive your tape at maximum speed regardless of
  network congestion or clients searching for files, and therefore
  prevent shoe-shining.

Of course it would be nice to have some additional controls in some
situations, but it looks like no one has written code for this so far.

Regards

Andreas
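P.S. If you do end up spooling the big jobs as well, the spool behaviour
is controlled on the same Device resource. A rough sketch (the path and
sizes are only examples, size them for your spool disk):

  Device {
    Name = "LTO5-Drive"
    # ... Media Type, Archive Device etc. as above ...
    Spool Directory = /bacula/spool    # the fast local SATA/SSD RAID 0 area
    Maximum Spool Size = 500G          # total spool space for this device
    Maximum Job Spool Size = 100G      # despool to tape in chunks of this size
  }

As far as I understand, each time a job fills its job spool size that
chunk is despooled to tape in one sequential write, which is what keeps
the drive streaming and the job's data reasonably contiguous on tape.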