On Fri, 1 Sep 2006, Arno Lehmann wrote:

>> I'm backing up several servers but one specifically has about 86 GB of data
>> and 1.6 million files. The data backup finishes after a few hours (<6), but
>> then it (I assume) updates the database with all the attributes, and that
>> takes >18h!

>> I looked a little at the variables etc. and it seems like the pounding is
>> SELECT and INSERT statements.
>
> Apart from what Kern said about the technical side of modifying Bacula I
> do think there must be different approaches. For example, I have a
> backup system that stores 84 GB in about 42000 files with one job.

One of my full backup sets is approx 500 GB and 4.5 million files.

Bacula spends about 4 hours doing the inserts after the tape run has
finished. I have tuned MySQL as much as I can and bumped the MySQL server
to 4 GB of RAM, but unless the inserts are made transaction-oriented I
can't see any way of speeding things up, as the INSERT/SELECT cycle is
mostly I/O bound.

> (by the way, does anyone know how to count the number of files on a
> ReiserFS filesystem, short of doing something like "ls -R /home | egrep
> -v '^(\.+|)$' | wc -l"? It has no inode count for df -i)

Reiserfsck will give some quick answers, but "find /home -type f | wc -l"
should be faster than your current solution anyway.
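As a quick sanity check (the paths here are just a throwaway example, not /home), find stats each entry once and skips the regex and pipe overhead of the ls approach:

```shell
# Build a small throwaway tree and count the regular files in it.
mkdir -p /tmp/fcount-demo/sub
touch /tmp/fcount-demo/a /tmp/fcount-demo/b /tmp/fcount-demo/sub/c

# -type f counts only regular files, so directories don't inflate the total.
find /tmp/fcount-demo -type f | wc -l    # prints 3

rm -rf /tmp/fcount-demo
```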

AB


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
