[Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread ekke85
etApp, the NetApp is mounted via NFS on the backup host and is getting the data from there to write to disk. ekke85 +-- |This was sent by ekk...@gmail.com via Backup Central. |Forward SPAM to ab.

[Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread ekke85
073 GB. Write rate = 59.65 MB/s
btape: btape.c:384 Total Volume bytes=3.221 GB. Total Write rate = 56.51 MB/s
I would have thought Bacula would do speeds similar to this. Please let me know if you need to see the config files. I do not have spooling on and I don't have software com
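The figures above read like output from btape's built-in speed test. A sketch of how such a test is typically run, assuming the storage daemon config lives at /etc/bacula/bacula-sd.conf and the drive is /dev/nst4 (both paths are illustrative, not from the thread):

```shell
# Stop the storage daemon first so btape gets exclusive access to the
# drive, then point btape at the same device definition bacula-sd uses:
btape -c /etc/bacula/bacula-sd.conf /dev/nst4

# At the btape prompt, the "speed" command writes test blocks to the
# tape and reports raw write rates comparable to those quoted above.
```

This requires a mounted scratch tape and will overwrite whatever is on it, so it should only be run against a blank or expendable volume.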

[Bacula-users] 11TB backup run out of memory and dies

2010-06-17 Thread ekke85
sable-batch-insert). It has now been backing up for 13 hours with no problem. It is a bit slow (23 MB/sec), but that is something I'll look into later; if anyone has suggestions, please send them. Thanks for all your help, guys! e
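The truncated option above appears to refer to Bacula's batch-insert build switch, which is toggled at configure time when building from source. A sketch, with the caveat that exact configure flags vary by Bacula version:

```shell
# Rebuild Bacula without batch inserts so catalog file records are
# inserted into the database individually rather than accumulated and
# batched -- slower, but much lighter on memory for huge file counts.
./configure --disable-batch-insert
make
make install
```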

[Bacula-users] 11TB backup run out of memory and dies

2010-06-14 Thread ekke85
00GB and the 11TB is on the same NetApp, I do not spool the 600GB. ekke85

[Bacula-users] 11TB backup run out of memory and dies

2010-06-14 Thread ekke85
B side, but I have 12760 rows in the "File" table. ekke85

[Bacula-users] 11TB backup run out of memory and dies

2010-06-14 Thread ekke85
" function. > There are a lot of files that I need to back up. I do run atop and also atopsar to try to see where and when it dies, but it is really hard to find. The other problem that I think might cause it, is

[Bacula-users] 11TB backup run out of memory and dies

2010-06-11 Thread ekke85
ive Device = /dev/nst4
  Device Type = Tape
  Media Type = ULTRIUM-LTO-4
  Autochanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Drive Index = 4
  RemovableMedia = yes
  Random Access = no
  Maximum Spool Size = 50gb
  Maximum Job Spool Size = 30gb
  Spool
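Filled out, a Device resource of this shape in bacula-sd.conf would typically look like the following. The resource Name and the Spool Directory are illustrative assumptions, since the fragment above is truncated at both ends:

```
Device {
  Name = "LTO4-Drive-4"                # assumed name, not from the thread
  Archive Device = /dev/nst4
  Device Type = Tape
  Media Type = ULTRIUM-LTO-4
  Autochanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Drive Index = 4
  Removable Media = yes
  Random Access = no
  Maximum Spool Size = 50gb
  Maximum Job Spool Size = 30gb
  Spool Directory = /var/spool/bacula  # assumed; the original cuts off at "Spool"
}
```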

[Bacula-users] 11TB backup run out of memory and dies

2010-06-09 Thread ekke85
Hi, I have a Scalar i500 library with 5 drives and 135 slots. I have a Red Hat 5 server with a 1 Gb NIC. The setup works fine for backups on most systems. The problem I have is an NFS share with 11TB of data that I need to back up. Every time I run this job it will write about 600GB of data to a t
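A backup job for a large NFS mount like this is typically driven by a FileSet pointing at the mount point. A minimal sketch in bacula-dir.conf; the FileSet name and the mount path are assumptions, not taken from the thread:

```
FileSet {
  Name = "NFS-11TB"              # assumed name
  Include {
    Options {
      signature = MD5
    }
    File = /mnt/netapp           # assumed mount point for the NFS share
  }
}
```

Since the data is read over NFS by the backup host itself, the client for this job is the backup server, and throughput is bounded by the NFS read path as well as the tape write path.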

[Bacula-users] Maximum Concurrent Jobs

2010-06-04 Thread ekke85
can split the backup to use all 5 drives to write the backup to tapes? Or what should I check to try to make the backup run faster? Any help will be much appreciated. ekke85
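Running jobs on all five drives at once generally requires raising Maximum Concurrent Jobs at each level that would otherwise serialize them. A sketch of the relevant directives in bacula-dir.conf; the values and resource names are illustrative assumptions, and other required directives are omitted:

```
Director {
  Name = backup-dir                # assumed name; other directives omitted
  Maximum Concurrent Jobs = 5
}

Storage {
  Name = Scalar-i500               # assumed name; other directives omitted
  Maximum Concurrent Jobs = 5
}
```

Each Job resource also honors its own Maximum Concurrent Jobs, and jobs writing to the same Volume still serialize, so spreading one backup across drives usually also means splitting it into multiple jobs targeting different drives or pools.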

[Bacula-users] label barcodes on Quantum Scalar i500

2010-04-28 Thread ekke85
the tape, so you can execute all of the steps manually and see if > they work, e.g. pick a slot, pick a drive, load a tape from that slot > into that drive with mtx, then write a small bit of data using dd or > tar or whatever, then rewind and eject/unload with mt and/or mtx. > > Reg
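The manual test described above can be sketched as a shell session. The changer device node /dev/sg3, the drive node /dev/nst0, and the slot/drive numbers are all assumptions to be adjusted for the actual hardware:

```shell
# Load the tape from slot 1 into drive 0 via the changer device
mtx -f /dev/sg3 load 1 0

# Write a small test archive directly to the drive
tar -cf /dev/nst0 /etc/hosts

# Rewind, then verify the data can be read back
mt -f /dev/nst0 rewind
tar -tf /dev/nst0

# Eject the tape and return it to its slot
mt -f /dev/nst0 offline
mtx -f /dev/sg3 unload 1 0
```

If any individual step hangs or errors here, the problem is below Bacula (SCSI wiring, device nodes, changer firmware) rather than in the Bacula configuration.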

[Bacula-users] label barcodes on Quantum Scalar i500

2010-04-26 Thread ekke85
ekke85 wrote: > Hi > > I hope someone can help me or point me in the right direction. I have a brand > new Quantum Scalar i500 with 5 drives and 125 slots. Whenever I try to do > "label barcodes" it fails with timeout errors. Bacula is reading the barcodes > fr
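When "label barcodes" times out, a first sanity check is whether the OS-level changer interface reports the barcodes at all, independently of Bacula. The changer device node is an assumption:

```shell
# List slots, drives, and barcode labels as seen directly by the changer
mtx -f /dev/sg3 status
```

If mtx shows the barcodes but Bacula times out, the issue is more likely in the Changer Command / mtx-changer script wiring or in Bacula's timeout settings than in the library itself.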

[Bacula-users] label barcodes on Quantum Scalar i500

2010-04-26 Thread ekke85
"loaded? drive 0" command. 3302 Autochanger "loaded? drive 0", result: nothing loaded. 3304 Issuing autochanger "load slot 1, drive 0" command. 3992 B