To get maximum concurrency you need one storage device per pool; only one
pool on a storage device can be open at a time. The other approach is to have
multiple jobs use the same pool with Maximum Concurrent Jobs set high enough.
Multiple jobs will then be written to the pool without waiting.
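For reference, here's a minimal sketch of the one-device-per-pool layout in
bacula-sd.conf; the device names, media types, and paths are illustrative
assumptions, not from your setup:

  # bacula-sd.conf -- one disk-backed Device per pool.
  Device {
    Name = FileStorage-Pool1
    Media Type = File1
    Archive Device = /backup/pool1
    LabelMedia = yes
    Random Access = yes
    AutomaticMount = yes
  }
  Device {
    Name = FileStorage-Pool2
    Media Type = File2    # a distinct Media Type keeps each pool's volumes on its own device
    Archive Device = /backup/pool2
    LabelMedia = yes
    Random Access = yes
    AutomaticMount = yes
  }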
Derek
A complete backup takes several days, so I didn't run the compressed backup to
completion. But if I use examined files as a gauge, it wasn't as bad as 8x:
it took 16 hours to process 300,000+ files with compression enabled and only 4
hours with it disabled, so roughly a 4x slowdown.
Derek
On Jul 1, 2010, at 18:57, "James
...backup jobs this isn't a problem, but for the size of these backups I need
speed over backup size on disk.
Thanks,
Derek
On Jul 1, 2010, at 11:55, Gavin McCullagh wrote:
> On Thu, 01 Jul 2010, Derek Harkness wrote:
>
>> I've seen a very significant slowdown in backup speed by enabling gzip
>> compression: 32MB/s (without gzip) vs 4MB/s (with gzip).
I've seen a very significant slowdown in backup speed by enabling gzip
compression: 32MB/s (without gzip) vs 4MB/s (with gzip). The server I'm
backing up has plenty of CPU (24 cores at 2.6GHz), so compression time
shouldn't be a huge factor. Is this normal for Bacula, or is there an
optimization I'm missing?
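For reference, compression is controlled per FileSet in bacula-dir.conf;
here's a minimal sketch (the FileSet name and path are assumptions) showing
where the gzip level is tuned -- bare GZIP means GZIP6, and GZIP1 is the
cheapest on CPU:

  # bacula-dir.conf -- illustrative FileSet; GZIP1 trades compression ratio
  # for speed, and removing the compression line disables software
  # compression entirely.
  FileSet {
    Name = "Big-Data-Set"
    Include {
      Options {
        signature = MD5
        compression = GZIP1
      }
      File = /data
    }
  }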
Yep, slow drives and lots of data: 5.5TB, and I'm only getting 5-10MB/s, so a
full pass takes roughly 6-13 days.
Derek
On Jun 18, 2010, at 4:49, Dietz Pröpper wrote:
> Derek Harkness:
>> I'm getting the following error between one of my clients and the
>> storage server. All my other clients have been able to back up just fine
>> to this storage device, but this one client always returns "Connection
>> reset by peer".
I'm getting the following error between one of my clients and the storage
server. All my other clients have been able to back up just fine to this
storage device, but this one client always returns "Connection reset by peer".
I've adjusted the heartbeat and keepalive settings on the client.
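For what it's worth, this is the sort of setting I mean -- a minimal sketch of
the client's bacula-fd.conf, where the resource name, paths, and the 60-second
interval are illustrative assumptions:

  # bacula-fd.conf on the problem client. Heartbeat Interval makes the FD
  # send periodic heartbeats so an idle connection isn't dropped by a
  # firewall or NAT device between the client and the storage daemon.
  FileDaemon {
    Name = problem-client-fd
    FDport = 9102
    WorkingDirectory = /var/bacula/working
    Pid Directory = /var/run
    Heartbeat Interval = 60
  }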
I ran into a similar problem, and here's what I did (a sketch of both pieces
follows below):
1) Make sure your autochanger entry has all the drives listed in its Device line:
Device = Drive-1, Drive-2, ...
2) In the Storage resource in bacula-dir.conf, make sure you have a Maximum
Concurrent Jobs entry; 20 seems like a good number.
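Here's a minimal sketch of both pieces; the resource names, changer paths,
address, and password are illustrative assumptions:

  # bacula-sd.conf -- the Autochanger resource lists every drive Device
  # defined in this file (Drive-1 and Drive-2 here).
  Autochanger {
    Name = "LTO-Changer"
    Device = Drive-1, Drive-2
    Changer Device = /dev/sg0
    Changer Command = "/usr/lib/bacula/mtx-changer %c %o %S %a %d"
  }

  # bacula-dir.conf -- point the Storage resource at the autochanger and
  # raise Maximum Concurrent Jobs so several jobs can write at once.
  Storage {
    Name = "Autochanger"
    Address = sd.example.com
    SDPort = 9103
    Password = "changeme"
    Device = "LTO-Changer"
    Media Type = LTO-4
    Autochanger = yes
    Maximum Concurrent Jobs = 20
  }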
Hi all,
I have a backup job that's using data spooling. The job successfully despooled
all the data to tape, but the bacula-director crashed before the attributes
could be put into the database. Is this information lost? Is there any way to
get the attributes into the catalog without rerunning the job?