Hello Adrian, and thanks for the fast reply.

24.03.2012 20:30, Adrian Reyer writes:
> On Sat, Mar 24, 2012 at 12:08:36PM +0400, Anton Nikiforov wrote:
>> (all 4 storages have different addresses and run on different machines)
>> Each client should be backed up on each storage. So I have jobs for that
>> like this:
>
> May I ask why? Just for distributed backup data?

It is a matter of security, meaning data availability in case of:
- power problems in a DC (the 4 hosts are located in different places, 2 in each);
- hardware problems with the hosts (they store backups on SATA HDDs, without RAID or any other redundancy);
- VPC migration issues (when a VPC needs to move from one DC to another, restoring the data from local storage is faster than moving the whole image).

>> When I decrease the number of concurrent jobs on the client to 1 or 2, I
>> reach a situation where all jobs are "is waiting on max Client jobs" and
>> some of them are "is waiting on Storage storage1" (or storage2, or
>> storage3, or storage4). And the server hangs forever waiting for jobs to
>> finish.
>
> What about reducing the number of jobs on the client, raising concurrent
> jobs on the storage and enabling spooling?

I have 50 concurrent jobs configured on the director (only 5 are needed, but I raised this parameter for the tests). I have 50 concurrent jobs on each storage and 1 on each storage's device.

I have tested a configuration with 1 concurrent job on the client, but it got stuck waiting, because (IMHO) the director itself takes that one job slot. I have tested a configuration with 2 concurrent jobs on the client. It works, but without concurrency: the director does not start jobs that could run at the same time. Why? Each storage has 1 job slot available on its device, and each client has a free connection. So when the director starts, for example, backing up client1 on storage1, it could also start backing up client2 on storage2, client3 on storage3 and client4 on storage4, or something like that. Instead it waits for each single job to finish.
And only one job runs at a time.
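For reference, the setup I described above boils down to roughly the following directives (a sketch only; the resource names are placeholders, and each resource of course carries more directives than shown):

```
# bacula-dir.conf (sketch):
Director {
  Name = backup-dir
  Maximum Concurrent Jobs = 50   # director-wide ceiling (only 5 needed)
}
Client {
  Name = client1-fd
  Maximum Concurrent Jobs = 2    # the per-client value I am testing
}
Storage {
  Name = storage1
  Maximum Concurrent Jobs = 50
}

# bacula-sd.conf on each storage host (sketch):
Device {
  Name = FileStorage
  Maximum Concurrent Jobs = 1    # one job per device at a time
}
```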
BUT! When I increase... NOW MAGIC... the client's max concurrent jobs to 3, then, MIRACLE, I have 4 jobs running concurrently! But... sometimes several from the same client :) And then the client gets stuck dumping databases or whatever.

I will try spooling, thanks for the suggestion, but I cannot understand why spooling would help with my problem. The storage is not tape or DVD; it is HDD, so there should be no problems with or without spooling. My problem is the maximum number of jobs on the client.

> Another possibility is to add 'locking' to the client with before and
> after jobs. Something like (typed by heart, not tested):
> before-job:
> #!/bin/bash
> MYPID=$$
> LOCKFILE=/var/lib/bacula/already-running
> while [ -e ${LOCKFILE} ] || [ $MYPID -ne $(cat ${LOCKFILE}) ]; do
>     while [ -e ${LOCKFILE} ]; do
>         sleep 10
>     done
>     echo $MYPID > ${LOCKFILE}
> done
>
> after-job:
> #!/bin/bash
> LOCKFILE=/var/lib/bacula/already-running
> rm -f ${LOCKFILE}

Yes, I have something similar in my before- and after-job scripts to make jobs more reliable even when two jobs are started at the same time on one client. But I want bacula to manage that :)

> However, this only prevents more than one job running on the client, it
> won't prevent the storage waiting for some time unless you activate
> spooling.
>
> Regards,
> Adrian

Best regards,
Anton

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
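P.S. Regarding the quoted lock loop: since it tests for the file and then creates it in two separate steps, two jobs can both see the file missing and both claim the lock. A minimal sketch of an atomic alternative, using the fact that mkdir either creates the directory or fails (the path and helper names are my own, and this is untested against a real bacula-fd):

```shell
#!/bin/bash
# Hypothetical helpers for a before-job / after-job pair.
# mkdir is atomic, so unlike test-then-touch there is no window
# in which two concurrent jobs can both believe they took the lock.

acquire_lock() {
    local lockdir=$1
    while ! mkdir "${lockdir}" 2>/dev/null; do
        sleep 10                  # another job holds the lock; poll
    done
    echo $$ > "${lockdir}/pid"    # record the holder, for debugging
}

release_lock() {
    rm -rf "$1"
}

# before-job would run:  acquire_lock /var/lib/bacula/already-running
# after-job would run:   release_lock /var/lib/bacula/already-running
```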