On Sat, 2005-11-26 at 10:05, Kern Sibbald wrote:
> On Saturday 26 November 2005 09:51, Ove Risberg wrote:
> > Hi,
> >
> > On Fri, 2005-11-25 at 20:53, Kern Sibbald wrote:
> > > Hello,
> > >
> > > I don't have a problem with the basic feature (in fact it was planned
> > > from the very beginning).  However, this is really not a task for the
> > > Director, nor is there any need for multiple simultaneous jobs.  Rather,
> > > this would be an enhancement to the File daemon that permits it to
> > > partition the work into multiple threads in the same job.
> >
> > The problem with writing multiple filesystems to one data stream at the
> > same time is that restoring one filesystem will be slow, because you have
> > to read the data for all filesystems backed up at the same time.
> 
> Generally one does several hundred backups or more for each restore, so it 
> seems to me that the tradeoff is very much positive.  In addition, were one 
> to stream the different data into separate spool files, the difference in 
> restore time would be completely negligible.

I have not played much with spooling, but if the file daemon could send
the data from each filesystem to a separate spool file, the problem with
slower restores would be solved and everyone would be happy.
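
Just to make that concrete, here is a rough standalone sketch of what I mean
(plain C++ with std::thread, not real Bacula code; the paths and the
spool_filesystem() helper are made up for illustration): one thread per
filesystem writes into its own spool file, and the spool files are despooled
one after another, so the data for each filesystem stays contiguous on the
volume.

  // Illustrative sketch only -- not Bacula code.  One thread per filesystem
  // writes into its own spool file; despooling is sequential, so a single
  // filesystem's data stays contiguous on the volume.
  #include <cstdio>
  #include <string>
  #include <thread>
  #include <vector>

  // Hypothetical helper: back up one filesystem into its own spool file.
  static void spool_filesystem(const std::string &fs, const std::string &spool_path)
  {
      std::FILE *spool = std::fopen(spool_path.c_str(), "wb");
      if (!spool)
          return;
      // ... walk 'fs' and write its backup stream into 'spool' ...
      std::fclose(spool);
  }

  int main()
  {
      const std::vector<std::string> filesystems = { "/", "/usr", "/home" };

      // Phase 1: spool every filesystem in parallel, one thread each.
      std::vector<std::thread> workers;
      for (std::size_t i = 0; i < filesystems.size(); i++)
          workers.emplace_back(spool_filesystem, filesystems[i],
                               "/var/spool/fs" + std::to_string(i) + ".spool");
      for (auto &w : workers)
          w.join();

      // Phase 2: despool the files one at a time, so a restore of a single
      // filesystem never has to skip over interleaved data from the others.
      for (std::size_t i = 0; i < filesystems.size(); i++) {
          // ... copy spool file i to the storage daemon / volume ...
      }
      return 0;
  }

The backup itself still runs in parallel, but the layout on the volume (and
therefore the restore behaviour) ends up the same as for separate jobs.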

> 
> >
> > If you place each filesystem in a separate job you can fix this by
> > migrating one job at a time to another tape.
> >
> > Do you have a better solution to avoid long restore times?
> 
> Placing different filesystems in separate jobs (IMO) is an administration 
> nightmare.  How could someone new on the job know that he has restored all 
> the filesets?  In addition, a restore of more than one fileset requires two 
> separate administrative actions.

We all want to avoid administration nightmares.

> 
> >
> > > It seems to me that one just needs one new directive that defines the
> > > parallelism in the FD, and this should probably be defined in the
> > > Director on a job level.  Then in the File daemon, it would simply take
> > > the "File" commands and pass them off to the threads.  For example, if
> > > there were two threads, with your example, "/" would be passed to the
> > > first, and "/usr" would be passed to the second, and subsequent File
> > > statements would be passed to the first idle thread.
> >
> > The only problem I have with this solution is that the restore times
> > will be longer.
> 
> I don't see this as a major problem, and if it really is, it could be 
> mitigated as I mentioned above to give *exactly* the same result as running 
> two simultaneous jobs.

I am sure you are right...

I will think about this a bit more and rewrite the Feature Request on
Sunday.
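
For the record, the dispatch you describe is roughly what I would try first.
A minimal standalone sketch (plain C++, not Bacula's actual FD code;
max_threads stands in for the new directive you mention): the File entries
go into a queue and every worker thread pulls the next entry as soon as it
is idle.

  // Illustrative sketch only -- not Bacula's FD code.  The FileSet's File=
  // entries go into a shared queue; max_threads workers pull entries off it,
  // so each new entry is handled by the first idle thread.
  #include <mutex>
  #include <queue>
  #include <string>
  #include <thread>
  #include <vector>

  static std::mutex queue_lock;
  static std::queue<std::string> file_entries;   // the File= lines from the FileSet

  static void worker()
  {
      for (;;) {
          std::string fs;
          {
              std::lock_guard<std::mutex> hold(queue_lock);
              if (file_entries.empty())
                  return;                        // nothing left to back up
              fs = file_entries.front();
              file_entries.pop();
          }
          // ... back up 'fs': walk the tree and send (or spool) its stream ...
      }
  }

  int main()
  {
      const int max_threads = 2;                 // stand-in for the proposed directive
      for (const char *f : { "/", "/usr", "/home", "/home/biguser" })
          file_entries.push(f);

      std::vector<std::thread> pool;
      for (int i = 0; i < max_threads; i++)
          pool.emplace_back(worker);
      for (auto &t : pool)
          t.join();
      return 0;
  }

With two threads and my example FileSet, "/" goes to the first thread, "/usr"
to the second, and "/home" and "/home/biguser" to whichever of them becomes
idle first, which is exactly the behaviour you describe.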

> 
> >
> > > If you want to rewrite the Feature Request along the above lines, I'd be
> > > happy to include it ...
> >
> > I will think about this and rewrite the Feature Request on Sunday.
> >
> > > Please note, you have a minor error in your FileSet in that the
> > > directory /home/biguser will be backed up twice -- once with /home, and
> > > once with /home/biguser, *unless* /home/biguser is a different filesystem
> > > from /home.
> >
> > I know, and in job 3 in the example the modified director sees this
> > problem and adds an exclude for /home/biguser.
> >
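To show the director-side logic I had in mind there: a small standalone
sketch (plain C++, not the real director code) that, for every File entry,
lists any other File entry nested below it as an exclude.  This naive
version also excludes /home/biguser under "/", which is redundant (excluding
/home already covers it) but harmless.

  // Illustrative sketch only -- not the real director code.  For each File=
  // entry, any other File= entry nested below it is added as an exclude, so
  // no directory ends up in more than one of the generated jobs.
  #include <cstdio>
  #include <string>
  #include <vector>

  // True if 'path' lies below 'parent'.  Assumes normalized paths without a
  // trailing slash (except "/" itself).
  static bool is_nested(const std::string &path, const std::string &parent)
  {
      if (path == parent)
          return false;
      if (parent == "/")
          return true;
      return path.size() > parent.size() &&
             path.compare(0, parent.size(), parent) == 0 &&
             path[parent.size()] == '/';
  }

  int main()
  {
      const std::vector<std::string> files = { "/", "/usr", "/home", "/home/biguser" };

      int job = 1;
      for (const auto &inc : files) {
          std::printf("Job %d: Include %s", job++, inc.c_str());
          for (const auto &other : files)
              if (is_nested(other, inc))
                  std::printf(", Exclude %s", other.c_str());
          std::printf("\n");
      }
      return 0;
  }
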
> > > On Friday 25 November 2005 20:16, Ove Risberg wrote:
> > > > Item n:   Multiple concurrent jobs from one fileset and job definition
> > > >   Date:   25 November 2005
> > > >   Origin: Ove Risberg (Ove.Risberg at octocode dot com)
> > > >   Status:
> > > >
> > > >   What:   I want to start multiple backup jobs on one client to get the
> > > >           fastest possible backup of my server, but I do not want a
> > > >           complicated configuration or restore process.
> > > >
> > > >           The director could parse the FileSet and start one job for
> > > >           each File entry and send a modified FileSet definition to the
> > > >           file daemon for each job.
> > > >
> > > >           A configuration option should be used to enable or disable
> > > >           this feature, and MaximumConcurrentJobs should be used to
> > > >           limit the number of jobs running at the same time.
> > > >
> > > >           Bacula must be modified to handle full backups with the same
> > > >           ClientId, PoolId, FileSetId and SchedTime as one full backup
> > > >           and include all of them when making a restore or verify.
> > > >
> > > >           No modifications have to be made to the file and storage
> > > >           daemons.
> > > >
> > > >           This is an example FileSet in bacula-dir.conf:
> > > >           FileSet {
> > > >             Name = "Full Set"
> > > >             Include {
> > > >               Options {
> > > >                 onefs=no
> > > >                 Start_One_Job_For_Each_File_Entry = Yes
> > > >               }
> > > >               File = /
> > > >               File = /usr
> > > >               File = /home
> > > >               File = /home/biguser
> > > >             }
> > > >             Exclude { }
> > > >           }
> > > >
> > > >           These are the FileSet configurations sent to the file daemon
> > > >           for this example:
> > > >            Job 1: Include /, Exclude /usr, /home
> > > >            Job 2: Include /usr
> > > >            Job 3: Include /home, Exclude /home/biguser
> > > >            Job 4: Include /home/biguser
> > > >
> > > >   Why:    Multiple concurrent backups of a large fileserver with many
> > > >           disks and controllers will be much faster.
> > > >
> > > >   Notes:  I am willing to try to implement this but I will probably
> > > >           need some help and advice.
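
One more concrete note on the catalog handling mentioned in the request
above, before I rewrite it: grouping the split jobs could simply key on
(ClientId, PoolId, FileSetId, SchedTime).  A standalone sketch (plain C++
with made-up JobIds, nothing to do with the real catalog code):

  // Illustrative sketch only.  Full backups that share ClientId, PoolId,
  // FileSetId and SchedTime are grouped together, so a restore or verify
  // can treat the whole group as one full backup and pull files from all
  // of its jobs.
  #include <cstdio>
  #include <map>
  #include <tuple>
  #include <vector>

  struct JobRecord {
      int jobid;
      int clientid;
      int poolid;
      int filesetid;
      long schedtime;    // for example, seconds since the epoch
  };

  int main()
  {
      // Pretend these rows came out of the catalog: the four split jobs
      // that together make up one full backup.
      const std::vector<JobRecord> jobs = {
          { 101, 1, 1, 5, 1132902000 },
          { 102, 1, 1, 5, 1132902000 },
          { 103, 1, 1, 5, 1132902000 },
          { 104, 1, 1, 5, 1132902000 },
      };

      // Group the JobIds by (ClientId, PoolId, FileSetId, SchedTime).
      std::map<std::tuple<int, int, int, long>, std::vector<int>> groups;
      for (const auto &j : jobs)
          groups[std::make_tuple(j.clientid, j.poolid, j.filesetid, j.schedtime)]
              .push_back(j.jobid);

      // A restore or verify would then include every JobId in the chosen group.
      for (const auto &g : groups) {
          std::printf("one full backup made of jobs:");
          for (int id : g.second)
              std::printf(" %d", id);
          std::printf("\n");
      }
      return 0;
  }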