Hi,

I also think you should change the database backend and run some bonnie++ 
benchmarks on your storage subsystem. It could also be an Ethernet 
autonegotiation problem (switch at half duplex, server at full duplex). Take 
a look at ethtool.
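To make those two checks concrete, here is roughly what I would run. The interface name `eth0` and the spool path are just placeholders for your setup; the bonnie++ flags shown are the usual ones (-d test directory, -s file size, -u user to run as):

```shell
# Check negotiated speed/duplex on the backup server's NIC
# ("eth0" is a placeholder -- use your actual interface)
ethtool eth0 | egrep 'Speed|Duplex|Auto-negotiation'

# Rough sequential-throughput test of the job spool area.
# Use a size around twice your RAM so the page cache doesn't
# inflate the numbers.
bonnie++ -d /var/spool/bacula -s 4g -u bacula
```

If ethtool reports half duplex on one side only, that alone can explain throughput in the single-digit MBytes/s range.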

Some performance data from our Bacula installation, for comparison: a 
Supermicro server with two Pentium Xeon 2.8 GHz CPUs (single core), 2 GB RAM, 
SuSE 10.1 (64-bit), Bacula 1.38.9 with the MySQL database backend, currently 
with two 320 GB Western Digital hard disks and a 500 GB job spool area (an 
LVM 2-way stripe set).

We have two VMware GSX servers (with about 10 VMs), where daily snapshots are 
made. These snapshots are backed up daily to Bacula at about 45-60 MBytes/s. 
Bacula uses one CPU at about 25%, so I think our 2 GBit/s trunk to the backup 
server is well matched to the CPU performance. Bonnie++ says the maximum 
speed of the job spool area is about 110 MBytes/s, so we only have to invest 
in additional hard disks if we grow beyond 1 GBit/s.
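A quick sanity check of that "beyond 1 GBit/s" claim, comparing raw link bandwidth to the measured spool throughput (wire-speed upper bounds; real TCP throughput is a bit lower):

```python
# Compare network link capacity against the spool area's measured speed
# to see which side saturates first.

def gbit_to_mbytes(gbit):
    """Convert a link speed in Gbit/s to MBytes/s (1 Gbit/s = 125 MBytes/s)."""
    return gbit * 1000 / 8

spool_max = 110  # MBytes/s, bonnie++ result for the 2-way LVM stripe

print(gbit_to_mbytes(1))  # 125.0 -> already above the 110 MBytes/s spool
print(gbit_to_mbytes(2))  # 250.0 -> spool disks are clearly the bottleneck
```

So a single saturated gigabit link (125 MBytes/s) already slightly exceeds the spool area, which is why more disks are the next investment past 1 GBit/s.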

We also make filesystem backups of each of these VM servers and of several 
other servers in our datacenter, and get about 30-40 MBytes/s (a little below 
the snapshot backups, because of VMware and filesystem overhead). But this 
speed depends on the speed of each server's storage backend (we don't have 
EMC Clariions/Symmetrix or Hitachi HDS :-))

A small comment on the backup speed report: Kern seems to measure the total 
backup time and amount. If you are using job spooling, the job data is first 
spooled to hard disk (an individual snapshot job can reach about 30 MBytes/s 
here, if no other jobs are working on the same server storage), and then 
transferred to tape (we are using two VXA-3 10-slot libraries at about 
15 MBytes/s).

For a 200 GB snapshot (as an example), the backup to the job spool takes 
about 110 min, so the server (client) should be free (in terms of server I/O 
and network traffic) after that time, but the transfer to tape takes an 
additional 220 min. The 200 GB snapshot therefore takes 330 min in total, 
which is about 10 MBytes/s. So the effective backup speed between Bacula and 
the client is about 3 times higher than the speed that is reported. You 
should also take a look at your network speed (perhaps with iptraf or 
ethereal).
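The arithmetic behind that example, spelled out (200 GB job, 30 MBytes/s into the spool, 15 MBytes/s to tape, using 1 GB = 1000 MB for a rough estimate):

```python
# Effective vs. per-phase backup speed when job spooling is used.

job_gb = 200
spool_mb_s = 30   # client -> spool disk
tape_mb_s = 15    # spool disk -> tape

job_mb = job_gb * 1000            # rough estimate: 1 GB = 1000 MB

spool_min = job_mb / spool_mb_s / 60   # ~111 min, client busy during this
tape_min = job_mb / tape_mb_s / 60     # ~222 min, client already free
total_min = spool_min + tape_min       # ~333 min wall clock for the job

effective_mb_s = job_mb / (total_min * 60)  # what the job report shows

print(round(spool_min), round(tape_min), round(total_min),
      round(effective_mb_s))  # -> 111 222 333 10
```

The reported ~10 MBytes/s is the whole-job average; the client itself was only loaded for the first third of that time, at 3x the reported rate.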

The big advantage of job spooling is that several data streams can run in 
parallel, so the entire backup process can be faster. You can also speed up 
restores, because you can configure Bacula so that the data on tape is not 
intermixed/interleaved.
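For reference, a minimal sketch of the relevant directives, assuming a setup roughly like ours (resource names, paths, and sizes here are made up; check your Bacula version's manual for the exact directive spelling):

```
# bacula-sd.conf -- Device resource: where and how much to spool
Device {
  Name = "VXA-3"                       # placeholder name
  # ... other device settings omitted ...
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 400g
}

# bacula-dir.conf -- Job resource: enable spooling for the job
Job {
  Name = "vm-snapshot-backup"          # placeholder name
  # ... other job settings omitted ...
  SpoolData = yes
}
```

With SpoolData enabled, concurrent jobs each fill their own spool file and are despooled to tape one at a time, which is what keeps the tape data from being interleaved.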

So long
Michael

On Sunday 18 June 2006 18:14, Sebastian Stark wrote:
> Am 18.06.2006 um 13:04 schrieb Tracy R Reed:
> > Tracy R Reed wrote:
> >> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> >> hda               0.00         0.00         0.00          0          0
> >> sda             969.39       146.94     13069.39        144      12808
> >>
> >> How on earth could it be writing so much more than it is reading?
> >> That
> >> is quite puzzling. If I strace the bacula-fd or bacula-sd
> >> processes they
> >> are just sitting in a select. I never see them doing anything
> >> else. But
> >> the spool file is growing so I know it is making progress.
> >
> > A little more info: I noticed further down in the iostat output
> > that it
> > says all of the write IO is going to dm-10 which means the 10th device
> > manager device which in my lvm setup is  /var. This happens to be
> > where
> > the bacula db is located. If I do an strace on bacula-dir when I first
> > start up bacula I see:
>
> As far as I understand the SQLite interface shouldn't be used in
> production environments. You should get better performance by
> switching to MySQL or PostgreSQL.
>
>
>
> _______________________________________________
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

