Another thing that tends to slow down backups is a large number of hard 
links.  In that case, you will see the CPU usage of the FD go above 90%.
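
If you want a rough idea of how many hard-linked files a FileSet contains, 
an untested little sketch along these lines will count them -- the /home 
path below is only an example, point it at whatever you actually back up:

import os

linked = total = 0
for root, dirs, files in os.walk("/home"):           # example path
    for name in files:
        try:
            st = os.lstat(os.path.join(root, name))
        except OSError:
            continue
        total += 1
        if st.st_nlink > 1:       # more than one link to this inode
            linked += 1

print("%d of %d files have more than one hard link" % (linked, total))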

Also, I don't know whether Tracy is talking about Full backups or some other 
level.  It makes no sense to judge the backup rates Bacula reports for 
anything other than Full backups.

On Wednesday 02 November 2005 18:06, Karl Cunningham wrote:
> Tracy R Reed wrote:
> > Karl Cunningham wrote:
> >>How many files are you backing up?  There is a database insert for every
> >>file that gets backed up.  Are you sure there isn't a lot of disk
> >>thrashing going on for database IO?  What happens if you temporarily put
> >>the database on your USB disk to see whether that makes a difference?
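
(Just to illustrate why the catalog side can matter: the toy sqlite3 script 
below has nothing to do with Bacula's real catalog schema, it only times one 
commit per inserted row against a single transaction.  A couple of thousand 
rows is enough to see the difference, and the per-row run can take a minute 
or two on an ordinary disk.)

import os, sqlite3, tempfile, time

def run(batch, rows=2000):
    path = tempfile.mktemp(suffix=".db")       # throwaway database file
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE file (id INTEGER PRIMARY KEY, name TEXT)")
    start = time.time()
    for i in range(rows):
        conn.execute("INSERT INTO file (name) VALUES (?)", ("file%d" % i,))
        if not batch:
            conn.commit()         # commit (and sync to disk) after every row
    if batch:
        conn.commit()             # one commit for the whole run
    elapsed = time.time() - start
    conn.close()
    os.unlink(path)
    return elapsed

print("commit per row:     %.1fs" % run(batch=False))
print("single transaction: %.1fs" % run(batch=True))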
> >
> > My main backup run consists of 358,000 files. Mostly OS, email,
> > webpages, digital photos, pretty standard stuff. My bacula.db is 277M in
> > size right now. Is that considered large? I have run some basic disk IO
> > benchmarks while the backup is running and get some pretty impressive
> > numbers. So there would seem to be plenty of available disk IO capacity.
> > I can try putting the db on the USB disk.
> >
> >>While doing the slow backup, do you see lots of CPU idle time?  Is there
> >>significant iowait time?
> >
> > Yes, lots of CPU idle time. Sometimes there is as much as 50% iowait
> > time. I am going to try moving bacula.db off onto the other disk and see
> > if that improves things. I don't think my situation is all that unusual,
> > so I would be surprised if that were really the problem; otherwise it
> > seems everyone would have slow backups.
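
(If you want to sample it yourself rather than eyeballing top, vmstat or 
iostat will show iowait directly; the little sketch below just reads 
/proc/stat twice and reports idle and iowait over a five-second window.  It 
assumes a 2.6 kernel, which is where the iowait column comes from.)

import time

def cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq ..."
    f = open("/proc/stat")
    fields = f.readline().split()[1:]
    f.close()
    return [int(x) for x in fields]

before = cpu_times()
time.sleep(5)
after = cpu_times()
delta = [a - b for a, b in zip(after, before)]
total = float(sum(delta)) or 1.0
print("idle %.1f%%  iowait %.1f%%" % (100 * delta[3] / total,
                                      100 * delta[4] / total))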
>
> I agree.  350K files isn't that much and 277M db size is certainly not
> large.  Here is the performance of my two Bacula servers backing up
> themselves.  This is after software compression. In both cases the
> database is on its own spindle, which I think makes quite a difference.
>
> dual PII-450   - 335K files totalling 3.6GB -- 875KB/s
> single P4-3GHz - 450K files totalling 4.5GB -- 2160KB/s
>
> Something to note is that the second server backed up its 950MB catalog
> (one file) at 3.5MB/s, and that computation includes the time taken to
> dump the database to a file. So in this case saving one big file is
> considerably faster than saving lots of small files.
>
> Something you could look at is comparing the speed of backing up a single
> large file vs. lots of very small ones.  If both were, say, 1GB of data,
> you could see what difference the database inserts and directory hashing
> make.
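
(If it helps, here is an untested sketch for generating the two test sets.  
The /tmp paths, file counts and sizes are only examples: one 1GB file, and 
roughly the same amount of data as 100,000 files of 10KB spread over 
subdirectories.  Random data is used so software compression doesn't skew 
the comparison.)

import os

ONE_MB = 1024 * 1024

def make_file(path, size):
    f = open(path, "wb")
    remaining = size
    while remaining > 0:
        chunk = min(ONE_MB, remaining)
        f.write(os.urandom(chunk))    # incompressible random data
        remaining -= chunk
    f.close()

# One large 1GB file.
os.makedirs("/tmp/test-bigfile")
make_file("/tmp/test-bigfile/one_gig.dat", 1024 * ONE_MB)

# ~1GB as 100,000 small files of 10KB, 1000 per subdirectory.
for d in range(100):
    sub = "/tmp/test-smallfiles/%03d" % d
    os.makedirs(sub)
    for i in range(1000):
        make_file("%s/f%04d" % (sub, i), 10 * 1024)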
>
> Karl

-- 
Best regards,

Kern

  (">
  /\
  V_V


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
