On 12/27/2013 5:14 PM, Levie, Jim wrote:

On Dec 27, 2013, at 11:01 AM, Josh Fisher wrote:

On 12/20/2013 7:10 PM, Levie, Jim wrote:
I have a group of three CentOS 5.10 boxes and client backups are, well, slow. The hardware specs are that the systems are all Dell R710's (dual quad-core processors w/24GB of memory). The Bacula server/storage node has a Dell PowerVault 124T with an LTO-4 drive and is an authentication-only server (LDAP) for a small (~20 node) network. Each of the clients is a pure file server with dual bonded 1 Gbps links to a Cisco switch. One has 3TB internal, and the other has 3TB internal plus 6TB in an external hardware RAID box. The total to be backed up is about 11TB. Compression is not used, but spooling into a 5GB space on 10k SAS drives in RAID 0 is used. The overall rate on the server/storage node is only about 10-12MB/s and that on either of the clients is only about 9-10MB/s. It takes 5-6 days to get a full backup. It would seem like I should be getting better than that.

Your spool space is probably too small. It does not work like a
reader/writer ring buffer. When the spool fills, the clients must wait
while the storage daemon de-spools to tape. Also, you should be setting
the spool size in the config file, rather than letting Bacula detect the
disk full error to trigger de-spooling. A 5 GB spool size means the
clients will stall more than 2,000 times while writing the 11 TB.
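
If you do enlarge it, the Device resource in bacula-sd.conf is where I would set it. A rough sketch (the names, path, and sizes below are only placeholders for your setup, not recommendations):

    # bacula-sd.conf -- Device resource for the PowerVault drive
    Device {
      Name = LTO4-Drive                     # whatever your existing Device is named
      # ... existing Archive Device, Media Type, etc. stay as they are ...
      Spool Directory = /var/bacula/spool   # the RAID 0 SAS partition
      Maximum Spool Size = 300 GB           # total spool space Bacula may use
      Maximum Job Spool Size = 150 GB       # optional per-job cap
    }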

If the spool space cannot be larger, then I suggest not using data
spooling, but rather attribute-only spooling. Your clients are likely
fast enough to keep the Powervault busy without data spooling. In
general, if your clients are fast enough to keep the tape drive busy,
then there is not much to be gained by data spooling.
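
In the Job (or JobDefs) resource in bacula-dir.conf that would look roughly like this (the job name is just an example; keep your existing one):

    # bacula-dir.conf -- attribute-only spooling
    Job {
      Name = "fileserver1-full"      # example name; use your existing Job
      # ... existing Client, FileSet, Schedule, Storage, Pool ...
      Spool Data = no                # stream data straight to the tape drive
      Spool Attributes = yes         # still spool catalog attributes
    }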

Attribute spooling, though, is critical; otherwise all of the DB's random I/O will
slow down backups. It is also a good idea to have the DB on a different
partition than the Bacula working directory (where the attribute spool file
is located) in order to speed up de-spooling of attributes.
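
By that I mean something along these lines (paths are only examples; the point is that the working directory and the catalog database end up on different disks). On your setup the attribute spool file should be written under the storage daemon's working directory:

    # bacula-sd.conf -- keep the working directory off the DB disk
    Storage {
      Name = backupsrv-sd                       # your existing SD name
      Working Directory = /var/bacula/working   # attribute spool file lands here
      Pid Directory = /var/run
      # ... rest of the existing Storage resource ...
    }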

I can easily make the spool space larger (by a lot), and the size of the spool area is set in the config files. But I'm not so sure that will significantly increase the overall backup rate. The fill time to the 5GB point on the spool yields a data rate only slightly greater than the overall rate.

Spool Data is set to yes, which should automatically turn attribute spooling on. The DB and both spools are on separate partitions which in turn are on separate disks.

I agree that the spool size isn't the likely culprit if filling the first 5GB is already slow. Still, I would run without data spooling just to try to isolate the bottleneck; a one-off bconsole test for that is sketched after the two suggestions below. You see low CPU usage on all boxes, which means compression, encryption, etc. are turned off or are not a factor. Disk rates are fine. The tape rate is much faster than the overall rate. Attributes are being spooled. That suggests a network issue. You could try the following:

Make sure MaximumBandwidthPerJob is not defined in any of the config files. This is just to make sure the jobs are not accidentally being throttled.
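
It would look something like this if it is present (in bacula-fd.conf or in the Director's Client/Job resources, depending on version):

    # If anything like this exists, comment it out while testing:
    #   Maximum Bandwidth Per Job = 10 mb/s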

Adjust the MaximumNetworkBufferSize setting in all the config files. The default could be too small for machines on a fast local network; some people have reported drastic increases in throughput with a larger buffer size. On the other hand, if you have already set it to some large value, try a low value like 32k. I believe the buffer size can be too large, depending on the underlying TCP stack settings.
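
For example (65536 is only a starting point to experiment with, not a recommendation, and it needs to be set on both ends):

    # bacula-fd.conf on each client
    FileDaemon {
      Name = fileserver1-fd          # your existing FD name
      # ... existing directives ...
      Maximum Network Buffer Size = 65536
    }

    # bacula-sd.conf -- Device resource on the storage node
    Device {
      Name = LTO4-Drive
      # ... existing directives ...
      Maximum Network Buffer Size = 65536
    }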

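Also, if your Director is new enough to accept the spooldata keyword on the run command, you can do a one-off no-spooling test from bconsole without touching the Job resource (the job name below is just an example):

    * run job=fileserver1-full level=Full spooldata=no yes

Then compare the rate reported in that job's log with what you are seeing now.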