> I got the biggest gain by changing "Maximum File Size" to 5 GB. How
> fast is the disk where your spool file is located?
>
> A different test would be to create a 10 GB file with data from
> /dev/urandom in the spool directory and then write this file to tape
> (e.g. nst0). Note: this will overwrite
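The test suggested above can be sketched as shell commands; the spool path and tape device below are placeholders to adjust for your setup, and the second `dd` deliberately uses incompressible data so hardware compression cannot inflate the apparent speed:

```shell
# Placeholder paths -- adjust for your installation.
SPOOL=/var/spool/bacula/speedtest.bin
TAPE=/dev/nst0

# 1. Create a 10 GB file of incompressible (urandom) data in the
#    spool directory.
dd if=/dev/urandom of="$SPOOL" bs=1M count=10240

# 2. Write it straight to the tape drive to measure raw drive
#    throughput.  WARNING: this overwrites whatever is at the
#    tape's current position.
dd if="$SPOOL" of="$TAPE" bs=256k

rm -f "$SPOOL"
```

`dd` prints the elapsed time and throughput on completion; comparing that figure with the job's despooling rate shows whether the spool disk or the drive is the bottleneck.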
-- Forwarded message --
From: Dennis Hoppe
Date: Thu, Apr 28, 2011 at 6:16 PM
Subject: Re: [Bacula-users] Waiting for a mount request?
To: John Drescher
Hello John,
On 28.04.2011 18:46, John Drescher wrote:
> On Thu, Apr 28, 2011 at 12:36 PM, Dennis Hoppe
> wrote:
>> Am 28.0
Mehma, Thank you very much for your reply.
If the job gets stuck there is no log entry; otherwise there is a backup
report for the completed job.
- Original Message -
From: "Mehma Sarja"
To:
Sent: Thursday, April 28, 2011 2:13 AM
Subject: Re: [Bacula-users] bconsole delayed response
>
>
> Ok, I don't have that setting enabled, but I could try it. Question:
> how do you decide that 5 GB is an optimal value for your LTO-4 tapes? What
> value could I put for my LTO-5 tapes? I don't really understand what
> the appropriate value for this directive should be.
> I don't know how to tell you ho
>
> I got the biggest gain by changing "Maximum File Size" to 5 GB. How
> fast is the disk where your spool file is located?
>
Ok, I don't have that setting enabled, but I could try it. Question:
how do you decide that 5 GB is an optimal value for your LTO-4 tapes? What
value could I put for my LTO-5 tap
Jason Voorhees wrote:
>
> I think I was confusing some terms. The speed I reported was the total
> elapsed time that my backup took. But now according to your comments I
> got this from my logs:
>
> With spooling enabled:
>
> - Job write elapsed time: 102 MB/s average
> - Despooling elapsed ti
>
> To get the maximum speed with your LTO-5 drive you should enable data
> spooling and change the "Maximum File Size" parameter. The spool disk
> must be a fast one, especially if you want to run concurrent jobs.
> Forget hdparm as a benchmark; use bonnie++, tiobench, or iozone.
>
> Then after y
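As a sketch of where "Maximum File Size" lives, here is an illustrative bacula-sd.conf Device fragment; the resource name, device path, and 5 GB value are examples, not taken from the thread:

```conf
# bacula-sd.conf -- illustrative Device fragment
Device {
  Name = LTO5-Drive
  Media Type = LTO-5
  Archive Device = /dev/nst0
  # Write a tape filemark every 5 GB instead of the default,
  # so the drive stops less often during a large job.
  Maximum File Size = 5G
}
```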
Jason Voorhees wrote:
>
> I'm running Bacula 5.0.3 on RHEL 6.0 x86_64 with an IBM TS3100 tape
> library with hardware compression enabled and software (Bacula)
> compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
> network and iperf tests report a bandwidth of 112 MB/s.
>
> I'
> I tried to copy a 10 GB file between both servers (Bacula and
> Fileserver) with scp and I got a 48 MB/s transfer speed. Is this why
> my backups are always near that speed?
>
Try backing up that 10GB file on both servers with bacula.
--
John M. Drescher
--
On 04/28/2011 02:06 PM, Jason Voorhees wrote:
> I tried to copy a 10 GB file between both servers (Bacula and
> Fileserver) with scp and I got a 48 MB/s transfer speed. Is this why
> my backups are always near that speed?
Try it with "scp -c arcfour" - like compression, encryption introduces
eno
On Thu, Apr 28, 2011 at 3:06 PM, Jason Voorhees wrote:
> On Thu, Apr 28, 2011 at 1:43 PM, John Drescher wrote:
>> On Thu, Apr 28, 2011 at 2:38 PM, John Drescher wrote:
/dev/mapper/mpath0:
Timing buffered disk reads: 622 MB in 3.00 seconds = 207.20 MB/sec
>>> That is a raid. But
On Thu, Apr 28, 2011 at 1:43 PM, John Drescher wrote:
> On Thu, Apr 28, 2011 at 2:38 PM, John Drescher wrote:
>>> /dev/mapper/mpath0:
>>> Timing buffered disk reads: 622 MB in 3.00 seconds = 207.20 MB/sec
>>>
>> That is a raid. But you still may not be able to sustain over 100MB/s
>> of somewhat random reads.
On Thu, Apr 28, 2011 at 2:38 PM, John Drescher wrote:
>> /dev/mapper/mpath0:
>> Timing buffered disk reads: 622 MB in 3.00 seconds = 207.20 MB/sec
>>
> That is a raid. But you still may not be able to sustain over 100MB/s
> of somewhat random reads. Remember that hdparm is only measuring
> sequential performance of large reads.
> /dev/mapper/mpath0:
> Timing buffered disk reads: 622 MB in 3.00 seconds = 207.20 MB/sec
>
That is a raid. But you still may not be able to sustain over 100MB/s
of somewhat random reads. Remember that hdparm is only measuring
sequential performance of large reads.
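The point about hdparm measuring only sequential reads can be illustrated with a small, hedged sketch. This is a toy, not a real disk benchmark: the file here is small enough to sit in the page cache, which is exactly the effect tools like bonnie++ and iozone are designed to defeat by using files larger than RAM.

```python
import os
import random
import tempfile
import time

def throughput_mb_s(path, size, block, random_order):
    """Read `size` bytes from `path` in `block`-sized chunks; return MB/s."""
    offsets = list(range(0, size, block))
    if random_order:
        random.shuffle(offsets)  # same data, scattered access pattern
    start = time.monotonic()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block)
    elapsed = max(time.monotonic() - start, 1e-9)
    return size / elapsed / 1e6

# Small demo file; a real benchmark needs a file larger than RAM so
# the page cache cannot hide the disk.
size = 8 * 1024 * 1024
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(size))
    path = tmp.name

seq = throughput_mb_s(path, size, 64 * 1024, random_order=False)
rnd = throughput_mb_s(path, size, 64 * 1024, random_order=True)
print(f"sequential: {seq:.0f} MB/s, random-order: {rnd:.0f} MB/s")
os.unlink(path)
```

On a real spinning disk (with the cache out of the picture) the random-order figure falls far below the sequential one, which is why an hdparm number says little about backing up many small files.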
--
John M. Drescher
-
On Thu, Apr 28, 2011 at 12:01 PM, John Drescher wrote:
>> So do you believe these speeds of my backups are normal? I thought my
>> tape library with LTO-5 tapes could write at approx. 140 MB/s. Isn't it
>> possible to achieve higher speeds?
>
> You need to speed up your source filesystem to achieve better performance.
> So do you believe these speeds of my backups are normal? I thought my
> tape library with LTO-5 tapes could write at approx. 140 MB/s. Isn't it
> possible to achieve higher speeds?
You need to speed up your source filesystem to achieve better
performance. Use raid10 or get an SSD. It has nothing at
> On Thu, 28 Apr 2011 08:52:54 -0700, David Newman said:
>
> On 4/28/11 4:37 AM, Martin Simmons wrote:
> >> On Wed, 27 Apr 2011 21:13:53 -0700, David Newman said:
> >>
> >> bacula 5.0.3, FreeBSD 8.2
> >>
> >> While running a full backup of a file server, bacula keeps issuing
> >> 'Cannot f
On Thu, Apr 28, 2011 at 11:41 AM, John Drescher wrote:
>> How can I tell where the bottleneck is? I'm using an ext4 filesystem.
>> Are these tests useful?
>>
>> [root@qsrpsbk1 ~]# hdparm -t /dev/sda
>>
>> /dev/sda:
>> Timing buffered disk reads: 370 MB in 3.01 seconds = 122.89 MB/sec
>> [root@qs
On Thu, Apr 28, 2011 at 12:36 PM, Dennis Hoppe
wrote:
> Hello John,
>
>> On 28.04.2011 17:47, John Drescher wrote:
>> ...
>> Did you unmount the previous media that was in the device it is
>> complaining about using the umount command?
>
> I am a little bit confused. There should not be any media
> How can I tell where the bottleneck is? I'm using an ext4 filesystem.
> Are these tests useful?
>
> [root@qsrpsbk1 ~]# hdparm -t /dev/sda
>
> /dev/sda:
> Timing buffered disk reads: 370 MB in 3.01 seconds = 122.89 MB/sec
> [root@qsrpsbk1 ~]# hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached re
Hello John,
On 28.04.2011 17:47, John Drescher wrote:
> ...
> Did you unmount the previous media that was in the device it is
> complaining about using the umount command?
I am a little bit confused. There should not be any media mounted,
because the client is using its own device / pool and th
On Thu, Apr 28, 2011 at 10:30 AM, John Drescher wrote:
>> No, there are just a "normal" number of files from a shared folder on
>> my fileserver with spreadsheets, documents, images, PDFs, just
>> ordinary end-user data.
>>
>
> The performance problem is probably filesystem performance. A sing
On 4/28/11 4:37 AM, Martin Simmons wrote:
>> On Wed, 27 Apr 2011 21:13:53 -0700, David Newman said:
>>
>> bacula 5.0.3, FreeBSD 8.2
>>
>> While running a full backup of a file server, bacula keeps issuing
>> 'Cannot find any appendable volumes' messages and prompting me to create
>> a new vol
On Thu, Apr 28, 2011 at 11:25 AM, Dennis Hoppe
wrote:
> Hello John,
>
>> On 28.04.2011 16:07, John Drescher wrote:
>> 2011/4/28 Dennis Hoppe :
>>> this is my first attempt with bacula and I need some advice about my
>>> configs. I am running a file-based backup with an extra device for each
>>> c
> The performance problem is probably filesystem performance. A single
> hard drive will only hit 100 MB/s if you are backing up files that are
> a few hundred MB.
>
>
> --
> John M. Drescher
>
How could I run some tests to verify this? I'm running MySQL server on
the same host where Bacula is installed
2011/4/28 Hugo Letemplier :
> Hi,
>
> I am adding this to make the question precise and "reopen" the topic.
> I have read the documentation chapter many times, but I was never sure
> I understood it correctly.
>
> As you know, when you do an incremental you need a "sequence of jobs": at
> least a full + maybe 1 diff +
Did you activate attribute spooling (and maybe data spooling too, if
you use LTO)?
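Both directives live in the Job (or JobDefs) resource of bacula-dir.conf; a minimal sketch, with the resource name invented for illustration:

```conf
# bacula-dir.conf -- illustrative fragment, not from the thread
Job {
  Name = "BackupClient1"
  # Spool file-attribute inserts and send them to the catalog in one
  # batch at the end of the job, instead of row by row during it.
  Spool Attributes = yes
  # Spool job data to fast local disk first, then despool to tape at
  # full speed -- worthwhile for streaming drives such as LTO.
  Spool Data = yes
}
```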
2011/4/28 Jason Voorhees :
> Hi:
>
> On Thu, Apr 28, 2011 at 10:19 AM, John Drescher wrote:
>> On Thu, Apr 28, 2011 at 11:08 AM, Jason Voorhees
>> wrote:
>>> Hi:
>>>
>>> I'm running Bacula 5.0.3 in RHEL 6.0 x86
> No, there are just a "normal" number of files from a shared folder on
> my fileserver with spreadsheets, documents, images, PDFs, just
> ordinary end-user data.
>
The performance problem is probably filesystem performance. A single
hard drive will only hit 100 MB/s if you are backing up files
Try changing the Maximum Network Buffer Size in your bacula-sd config.
Something like:
Maximum Network Buffer Size = 262144 #65536
Maximum Block Size = 262144
Keep in mind that changing the block size will make your SD unable to
read previous backups, IIRC.
Search the archives for this parameter, e.g.
http://old.
Hello John,
On 28.04.2011 16:07, John Drescher wrote:
> 2011/4/28 Dennis Hoppe :
>> this is my first attempt with bacula and I need some advice about my
>> configs. I am running a file-based backup with an extra device for each
>> client.
>>
>> I thought this would support parallel jobs, but if
Hi:
On Thu, Apr 28, 2011 at 10:19 AM, John Drescher wrote:
> On Thu, Apr 28, 2011 at 11:08 AM, Jason Voorhees wrote:
>> Hi:
>>
>> I'm running Bacula 5.0.3 on RHEL 6.0 x86_64 with an IBM TS3100 tape
>> library with hardware compression enabled and software (Bacula)
>> compression disabled, using L
On 27/04/2011 18:08, Martin Simmons wrote:
>> On Wed, 27 Apr 2011 17:26:45 +0200, le dahut said:
>>
>> On 15/04/2011 16:12, Bruno Friedmann wrote:
>>> On 04/15/2011 03:10 PM, laurent flori wrote:
On Thursday 14 April 2011 at 20:19 +0200, Bruno Friedmann wrote:
> On 04/14/2011 02:57 P
On Thu, Apr 28, 2011 at 11:08 AM, Jason Voorhees wrote:
> Hi:
>
> I'm running Bacula 5.0.3 on RHEL 6.0 x86_64 with an IBM TS3100 tape
> library with hardware compression enabled and software (Bacula)
> compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
> network and iperf tests repo
Hi:
I'm running Bacula 5.0.3 on RHEL 6.0 x86_64 with an IBM TS3100 tape
library with hardware compression enabled and software (Bacula)
compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
network and iperf tests report a bandwidth of 112 MB/s.
I'm not using any spooling configura
2011/4/28 Dennis Hoppe :
> Hello,
>
> this is my first attempt with bacula and I need some advice about my
> configs. I am running a file-based backup with an extra device for each
> client.
>
> I thought this would support parallel jobs, but if I start two backup
> jobs like "run job=bserver" and
Hello,
this is my first attempt with bacula and I need some advice about my
configs. I am running a file-based backup with an extra device for each
client.
I thought this would support parallel jobs, but if I start two backup
jobs like "run job=bserver" and "run job=bclient1", the second job is
w
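For jobs to actually run in parallel, Maximum Concurrent Jobs must be raised in every daemon involved, not only per device; a hedged sketch, with resource names as placeholders:

```conf
# bacula-dir.conf (illustrative)
Director {
  Name = backup-dir
  Maximum Concurrent Jobs = 10
}

# bacula-sd.conf (illustrative)
Storage {
  Name = backup-sd
  Maximum Concurrent Jobs = 10
}
```

The Storage resource in bacula-dir.conf and each Device resource can also limit concurrency; if any one of these is left at 1, the second job queues behind the first.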
Hi,
I am adding this to make the question precise and "reopen" the topic.
I have read the documentation chapter many times, but I was never sure
I understood it correctly.
As you know, when you do an incremental you need a "sequence of jobs": at
least a full + maybe 1 diff + maybe many incs
That would be nice to have
On Thursday, 28 April 2011 06:13:53, David Newman wrote:
> bacula 5.0.3, FreeBSD 8.2
>
> While running a full backup of a file server, bacula keeps issuing
> 'Cannot find any appendable volumes' messages and prompting me to create
> a new volume. I've done that, three times, and tried rerunni
> Hello,
>
> does anyone have an idea?
>
2.2.6 came out in November 2007. It is unlikely that many are still
using such an old version of bacula.
From memory (since I have not used that version since sometime in
2008), 2.2.6 does not support 64-bit windows clients and may have
problems with shadow c
On 28.4.2011 14:37, Martin Simmons wrote:
>> On Wed, 27 Apr 2011 21:13:53 -0700, David Newman said:
>>
>> bacula 5.0.3, FreeBSD 8.2
>>
>> While running a full backup of a file server, bacula keeps issuing
>> 'Cannot find any appendable volumes' messages and prompting me to create
>> a new volu
> On Wed, 27 Apr 2011 21:13:53 -0700, David Newman said:
>
> bacula 5.0.3, FreeBSD 8.2
>
> While running a full backup of a file server, bacula keeps issuing
> 'Cannot find any appendable volumes' messages and prompting me to create
> a new volume. I've done that, three times, and tried rerun
> I have a problem with a Windows 2003 server where backup is
> interrupted quite often. I only see the problem on this machine
> (dev0), but the error looks more like an SD failure. This SD is in use
> for several other backups, which work well. Director, FD and SD are
> all running Bacula 5.0.3.
Hello,
does anyone have an idea?
Thank you
Köksal
Köksal Erdal wrote:
> Hello Folks,
>
> we save our data with bacula 2.2.6 on Solaris.
> We have to back up a Windows 2008 R2 server and I would like to know
> whether the bacula client version 2.2.6 is compatible with it.
>
> Download Link for the Client: