Hello,

On 20 December 2011 10:30, Marcello Romani <mrom...@ottotecnica.com> wrote:
> >> A "direct" backup from FD into a SD tape device is performed with
> >> little buffering and requires more computation, disk seeks and context
> >> switching. It is a single, complicated chain [...]
On 19/12/2011 17:32, gary artim wrote:
> Thanks for the advice, _most_ responsive list I belong to! cheers! gary
>
> 2011/12/19 Radosław Korzeniewski :
>> Hello,
>>
>> 2011/12/16 gary artim
>>>
>>> No, just Spool Attributes = yes. g.
>>>
>>
>> A "direct" backup from FD into a SD tape device [...]
Thanks for the advice, _most_ responsive list I belong to! cheers! gary

2011/12/19 Radosław Korzeniewski :
> Hello,
>
> 2011/12/16 gary artim
>>
>> No, just Spool Attributes = yes. g.
>>
>
> A "direct" backup from FD into a SD tape device is performed with little
> buffering and requires more computation [...]
Hello,
2011/12/16 gary artim
> No, just Spool Attributes = yes. g.
>
>
A "direct" backup from FD into a SD tape device is performed with little
buffering and requires more computation, disk seeks and context switching.
It is a single, complicated chain with two threads, one for the FD and one
for the SD.
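For reference, the spooled alternative being contrasted here is enabled in
two places; a minimal sketch (device/job names, path and sizes are
illustrative, and a real Device resource needs more directives such as
Archive Device and Media Type):

```conf
# bacula-sd.conf -- Device resource (only spool-related directives shown)
Device {
  Name = LTO4-Drive                     # hypothetical name
  Spool Directory = /var/spool/bacula   # ideally a fast, dedicated disk
  Maximum Spool Size = 200G
}

# bacula-dir.conf -- Job (or JobDefs) resource
Job {
  Name = "BigBackup"                    # hypothetical name
  Spool Data = yes                      # stage to disk, then stream to tape
}
```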
No, just Spool Attributes = yes. g.
On Fri, Dec 16, 2011 at 3:27 AM, Marcello Romani
wrote:
On 15/12/2011 20:31, gary artim wrote:
> will do, interestingly I picked another slot on my autochanger
> (different lto vol) and got (see below), running at peak time no less.
> Last night I ran at 9pm [...]
I'm working on going to SATA 3; the current raid is SATA 1 (1.5Gb/s). I have
run benchmarks, but will have to dig them up or rerun them... g.
On Fri, Dec 16, 2011 at 1:18 AM, Uwe Schuerkamp
wrote:
> On Thu, Dec 15, 2011 at 10:25:04AM -0800, gary artim wrote:
>> no, full, doing a mod on run command in bconsole to force full backup
>> on every test. cheers
On 15/12/2011 20:31, gary artim wrote:
> will do, interestingly I picked another slot on my autochanger
> (different lto vol) and got (see below), running at peak time no less.
> Last night I ran at 9pm, the hours of the dead, and was getting about
> 2.4GB/min, today 2.66GB/min. -- go figure...
>
On Thu, Dec 15, 2011 at 10:25:04AM -0800, gary artim wrote:
> no, full, doing a mod on run command in bconsole to force full backup
> on every test. cheers
>
40MB/sec sounds very much like a natural RAID speed limit you're
hitting. Have you tried running some i/o benchmarks on the disks, like
bonnie++?
Interesting... I followed the job and got a big increase in writing the
tape using "Spool Attributes = yes": went from 2.66GB/minute to
3.8GB/minute, but the job took longer to finish, close to 50 minutes
writing out attribute data to mysql. Looks like putting the SQL database
on an SSD would help. Nice.
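As a quick sanity check (my arithmetic, not from the thread; decimal units
assumed), those GB/minute figures convert to MB/s like this:

```shell
# Convert the GB/min job rates quoted above to MB/s for comparison with
# LTO-4's ~120 MB/s native streaming rate (decimal units assumed).
for rate in 2.66 3.8; do
  awk -v r="$rate" 'BEGIN { printf "%s GB/min = %.1f MB/s\n", r, r * 1000 / 60 }'
done
```

So even 3.8GB/minute is only about 63 MB/s, roughly half of what the drive
can stream natively, consistent with the drive waiting on the rest of the
chain.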
will do, interestingly I picked another slot on my autochanger
(different lto vol) and got (see below), running at peak time no less.
Last night I ran at 9pm, the hours of the dead, and was getting about
2.4GB/min, today 2.66GB/min. -- go figure...
15-Dec 10:58 bacula-dir JobId 1: Bacula bacula-dir 5
no, separate drive.
On Thu, Dec 15, 2011 at 10:26 AM, John Drescher wrote:
> On Thu, Dec 15, 2011 at 1:25 PM, gary artim wrote:
>> no, full, doing a mod on run command in bconsole to force full backup
>> on every test. cheers
>>
>
> Is the bacula database on the same array as the source?
>
> John
On Thu, Dec 15, 2011 at 2:24 PM, gary artim wrote:
> no, separate drive. g.
>
Try enabling attribute spooling, like the other poster said.

John
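A minimal sketch of that suggestion (job name is hypothetical; Spool
Attributes is a standard Job-resource directive):

```conf
# bacula-dir.conf -- Job (or JobDefs) resource
Job {
  Name = "BigBackup"         # hypothetical name
  Spool Attributes = yes     # batch catalog inserts until despooling,
                             # instead of one database round-trip per file
}
```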
no, separate drive. g.
On Thu, Dec 15, 2011 at 10:26 AM, John Drescher wrote:
> On Thu, Dec 15, 2011 at 1:25 PM, gary artim wrote:
>> no, full, doing a mod on run command in bconsole to force full backup
>> on every test. cheers
>>
>
> Is the bacula database on the same array as the source?
>
>
> -----Original Message-----
> From: gary artim [mailto:gar...@gmail.com]
> Sent: Thursday, 15 December 2011 19:09
> To: bacula-users@lists.sourceforge.net
> Cc: Gary Artim
> Subject: Re: [Bacula-users] tuning lto-4
>
> using this bacula-sd.conf, the best I get is about 2.4GB a minute. [...]
On Thu, Dec 15, 2011 at 1:25 PM, gary artim wrote:
> no, full, doing a mod on run command in bconsole to force full backup
> on every test. cheers
>
Is the bacula database on the same array as the source?
John
no, full, doing a mod on run command in bconsole to force full backup
on every test. cheers
On Thu, Dec 15, 2011 at 10:21 AM, John Drescher wrote:
> On Thu, Dec 15, 2011 at 1:09 PM, gary artim wrote:
>> using this bacula-sd.conf, the best I get is about 2.4GB a minute. I'm
>> not working with network backups [...]
On Thu, Dec 15, 2011 at 1:09 PM, gary artim wrote:
> using this bacula-sd.conf, the best I get is about 2.4GB a minute. I'm
> not working with network backups, this is just a straight raid 5 to
> lto-4. I'm now thinking that my db (mysql) or raid is the
> drag/slowdown since I can get over 180MB/s with btape. [...]
using this bacula-sd.conf, the best I get is about 2.4GB a minute. I'm
not working with network backups, this is just a straight raid 5 to
lto-4. I'm now thinking that my db (mysql) or raid is the
drag/slowdown since I can get over 180MB/s with btape. Any suggestions
welcomed, I feel I've exhausted [...]
180 MB/s, 256MB min/max blocksize.
[root@genepi1 bacula]# tapeinfo -f /dev/nst0
Product Type: Tape Drive
Vendor ID: 'HP '
Product ID: 'Ultrium 4-SCSI '
Revision: 'B12H'
Attached Changer API: No
SerialNumber: 'HU17450M8L'
MinBlock: 1
MaxBlock: 16777215
SCSI ID: 1
SCSI LUN: 0
Ready: yes
BufferedMode: [...]
Hello,
> blocksize set with mt and in bacula-sd.conf
Unless you are setting "minimum block size" (which you really should
not), Bacula uses the tape drive in variable block size mode, with block
sizes up to the value given in "maximum block size".
Setting a fixed block size with mt (and reading [...]
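Following that advice, a Device-resource sketch would set only the maximum
(value illustrative; the Bacula manual warns that changing block sizes can
make volumes written with other settings unreadable, so test on scratch
media first):

```conf
# bacula-sd.conf -- Device resource (only block-size directives shown)
Device {
  Name = LTO4-Drive             # hypothetical name
  Maximum Block Size = 262144   # 256K blocks; drive stays in variable mode
  # Minimum Block Size deliberately left unset, per the advice above
}
```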
got close to 120 MB/s, using a 64KB buffer and 20GB maximum file size
with btape... now to test with real data... gary
===
blocksize set with mt and in bacula-sd.conf to 65536
===
[root@genepi1 bac [...]
btape getting 89 MB/s, so maybe my disk and sql updating is affecting
the speed? note the drive has a 16384 blocksize; ran tapeinfo on the
drive... gary
[root@genepi1 bacula]# btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: "/dev/nst0"
In the message dated: Thu, 01 Dec 2011 16:27:33 GMT,
the pithy ruminations from Alan Brown were:
=> gary artim wrote:
=> > You guys/gals are great, very responsive! I did try
=> > spooling/despooling and my run times shot up.
=>
=> They will - you're copying everything twice (disk to disk to tape) [...]
gary artim wrote:
> You guys/gals are great, very responsive! I did try
> spooling/despooling and my run times shot up.
They will - you're copying everything twice (disk to disk to tape), but
this is the only way to achieve fast despooling speeds - if you don't do
this then your LTO drive will s[...]
I believe (it's been a while since I have needed to change my
configuration) that my LTO-3 drive does not do hardware compression on
blocks over 512K. I am using 256K blocks right now, and I did not see
any improvement above that. I am using spooling on a pair of striped
hard disks, and despool [...]
You guys/gals are great, very responsive! I did try
spooling/despooling and my run times shot up. I was using a simple
7200rpm drive though, no ssd or raid... I assume the performance gain
happens when you're backing up multiple machines over the network...
wearing multiple hats, so will report back on btape next week, unless I [...]
gary artim wrote:
> thanks much! will try testing with btape.

Please let us know the results.

> btw, I ran with 20GB maximum
> file size/2MB max block (see bacula-sd.conf below) and got these
> results, 20MB/s increase, ran 20 minutes faster, got 50MB/s --

You should be seeing 120MB/s or thereabouts
thanks much! will try testing with btape. btw, I ran with 20GB maximum
file size/2MB max block (see bacula-sd.conf below) and got these
results, a 20MB/s increase, ran 20 minutes faster, got 50MB/s -- now if I
can just double the speed I could backup 15TB in about 45 hrs. I don't
have that much data yet [...]
On 30/11/11 19.43, gary artim wrote:
> Thanks much, I'll try today the block size change first. Then try the
> spooling. Don't have any unused disk, but may have to try on a shared
> drive.
> The "maximum file size" should be okay? g.

Choosing a max file size is mainly a tradeoff between write performance [...]
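The tradeoff being described lives in the Device resource; a sketch with an
illustrative value:

```conf
# bacula-sd.conf -- Device resource
Device {
  Name = LTO4-Drive         # hypothetical name
  Maximum File Size = 20G   # larger = fewer EOF marks and drive stops while
                            # writing, but coarser positioning on restore
}
```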
the block size change didn't make much difference, but I was also running
an rsync against the backup volume (raid 5) at the time. I'll add spooling
and run with both the blocksize change and the spool configuration. -- gary
On Wed, Nov 30, 2011 at 10:43 AM, gary artim wrote:
> Thanks much, I'll try today the block size change first. [...]
Thanks much, I'll try today the block size change first. Then try the
spooling. Don't have any unused disk, but may have to try on a shared
drive.
The "maximum file size" should be okay? g.
On Wed, Nov 30, 2011 at 8:45 AM, Alan Brown wrote:
> gary artim wrote:
>>
>> Hi --
>>
>> Getting about 41.6 MB/s [...]
gary artim wrote:
> Hi --
>
> Getting about 41.6 MB/s and hoping for closer to the max (120 MB/s). I
> tried maximum file sizes of 5, 8, 12GB -- 12GB was the best, the others
> were about 35 MB/s. Any advice welcomed... should I look at max/min
> block sizes?
Don't adjust min size.
Bacula's max block size [...]
Hi --
Getting about 41.6 MB/s and hoping for closer to the max (120 MB/s). I
tried maximum file sizes of 5, 8, 12GB -- 12GB was the best, the others
were about 35 MB/s. Any advice welcomed... should I look at max/min
block sizes?
Most of the data is big genetics data -- file sizes average in the 500MB
to 3-4GB range.