On 10/07/2010 11:03 PM, Mingus Dew wrote:
> All,
> I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
> MySQL 4.1.22 for the database server. I do plan on upgrading to a compatible
> version of MySQL 5, but migrating to PostgreSQL isn't an option at this
> time.
>
> I am
>> Compare against a stock, non-tuned Bacula install. Are you
>> going between buildings when you get the slow transfer speed?
>> UCSC has 1 Gb links between buildings from my recollection. The
>> link to the outside world is not much more than that. Bacula
>> also has a batch mode which you can
On Thu, October 7, 2010 08:21, John Drescher wrote:
[SNIP]
> I see I am running not the latest version of that. I will see if that
> fixes the issue.
>
> dev6 ~ # mtx --version
> mtx version 1.2.18rel
>
> I have mtx-1.3.12 available in my distro repository but the link they
> give me for the upstre
>
> I have been using this setup for a while. You absolutely must disable
> Bacula compression on the ZFS Devices within the SD, or for the specific
> Pools that have volumes on the ZFS. Doubling up encryption can actually
> increase file sizes and also lead to data errors.
>
It can _not_ lead to
> Without attribute spooling or batch (not sure if that
> is postgres only) after each file is read the database
> needs to add records.
We have attribute spooling activated right now.
Tim Gustafson
Baskin School of Engineering
UC Santa Cruz
t...@soe.ucsc.edu
831-459-5354
---
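For context, the attribute spooling mentioned above is a per-Job directive in bacula-dir.conf. A minimal sketch, with resource names invented for illustration:

```conf
# bacula-dir.conf -- Job resource (Name/Client/FileSet etc. are examples)
Job {
  Name = "nightly-backup"
  Type = Backup
  Client = example-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Pool = IncrTapes
  Messages = Standard
  # Queue file attributes in a spool file and insert them into the
  # catalog in one batch at job end, instead of one round trip per file.
  Spool Attributes = yes
}
```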
>> Is the MySQL database storage on the same RAID array you are
>> writing backups to?
>
> Yes and no. Currently, in our "dev" environment, they are both on the same
physical RAID array, but Bacula operates in a separate jail from MySQL. When
> we move to production, the director will probabl
All,
I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a compatible
version of MySQL 5, but migrating to PostgreSQL isn't an option at this
time.
I am trying to backup to tape a very large number of files fo
I'm going to try to reply to all the responses I got together.
> Have you tried backing up other hosts on your network? What are
> the speeds with these hosts? I've noticed that different hosts
> respond with varying speeds despite being on the same network.
> Wondering if this has to do with the client
On 10/07/2010 01:34 PM, Phil Stracchino wrote:
> On 10/07/10 13:40, Lamp Zy wrote:
>> Hi,
>>
>> It will be great (at least for me :-) ) when Bacula looks for previous
>> Full backup on a brand new client to also check if a Full backup is
>> currently running.
>>
>> We do Incr. backups M-F at night
On 10/07/10 13:47, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> I'm planning a Bacula setup with ZFS on the SDs (media being disk,
> not tape), and I just wonder - should I use a smaller recordsize (aka
> largest block size) than the default setting of 128kB?
Actually, there are arguments in favor of
On 10/07/10 13:40, Lamp Zy wrote:
> Hi,
>
> It will be great (at least for me :-) ) when Bacula looks for previous
> Full backup on a brand new client to also check if a Full backup is
> currently running.
>
> We do Incr. backups M-F at night and they use tapes from IncrTapes pool.
> Full back
'Roy Sigurd Karlsbakk' wrote:
>Hi all
>
>I'm planning a Bacula setup with ZFS on the SDs (media being disk, not
>tape), and I just wonder - should I use a smaller recordsize (aka
>largest block size) than the default setting of 128kB?
Setting the recordsize to 64k has worked well for us so far.
I
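For anyone trying the same thing: recordsize is a per-dataset ZFS property, and it only affects files written after it is set. A sketch with a made-up pool/dataset name:

```shell
# Create a dedicated dataset for Bacula volumes and set a 64k
# recordsize before any volumes are written to it.
zfs create tank/bacula
zfs set recordsize=64k tank/bacula
zfs get recordsize tank/bacula
```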
I think at one point Bacula did complain. Thanks for answering my question
though. I think in order to disable the job from being run, I'll have to
make the scripts that call bconsole commands read-only instead of
executable.
-Shon
On Thu, Oct 7, 2010 at 8:12 AM, Phil Stracchino wrote:
> On 10/0
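The read-only-instead-of-executable idea boils down to clearing the execute bits on the wrapper script; the script body and filename below are placeholders:

```shell
# Demo with a throwaway wrapper; swap in the real script path.
printf '#!/bin/sh\necho "run backup" | bconsole\n' > run-backup.sh
chmod +x run-backup.sh      # normal, runnable state
chmod a-x run-backup.sh     # "disabled": cron/operators can no longer exec it
[ -x run-backup.sh ] || echo "run-backup.sh is disabled"
```

Re-enabling is just `chmod +x` again, which is easier to reverse than editing the job out of the config.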
On Thu, Oct 7, 2010 at 2:33 PM, Roy Sigurd Karlsbakk wrote:
>> If the data coming from Bacula are already compressed by the bacula-fd,
>> there's little room for improvement.
>> In your type of setup, I would disable compression on bacula-fd,
>> increasing the speed of backup, your sd doing it by z
I have been using this setup for a while. You absolutely must disable Bacula
compression on the ZFS Devices within the SD, or for the specific Pools that
have volumes on the ZFS. Doubling up encryption can actually increase file
sizes and also lead to data errors.
-Shon
On Thu, Oct 7, 2010 at 2:11
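A sketch of what "compression disabled" looks like in practice. Software compression in Bacula is the `compression` option inside a FileSet's Options block, so leaving it out is all that is needed when the volumes live on ZFS; the FileSet name and path below are invented:

```conf
# bacula-dir.conf -- FileSet whose volumes land on a ZFS-backed SD.
# Note: no "compression = GZIP" line. ZFS compresses the volume files
# itself, and compressing already-compressed data wastes CPU for no gain.
FileSet {
  Name = "zfs-backed-set"
  Include {
    Options {
      signature = MD5
      # compression = GZIP   <- deliberately omitted
    }
    File = /export/home
  }
}
```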
Hi,
On Thu, Oct 7, 2010 at 1:12 PM, Martin Simmons wrote:
>>
>> Now my question:
>>
>>
>> How can I configure my job to continue getting the incremental changes
>> from server2, without running a full job, ie, based on the last
>> backups from server1.
>
> You can't change the Job definition its
> If the data coming from Bacula are already compressed by the bacula-fd,
> there's little room for improvement.
> In your type of setup, I would disable compression on bacula-fd to
> increase backup speed, and let your SD handle it via the ZFS mechanism.
Thing is, I can't find anything about compression
On 10/07/2010 07:47 PM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> I'm planning a Bacula setup with ZFS on the SDs (media being disk, not tape),
> and I just wonder - should I use a smaller recordsize (aka largest block
> size) than the default setting of 128kB?
>
> Also, last I tried, with ZFS o
Hi all
I'm planning a Bacula setup with ZFS on the SDs (media being disk, not tape),
and I just wonder - should I use a smaller recordsize (aka largest block size)
than the default setting of 128kB?
Also, last I tried, with ZFS on a test box, I enabled compression, the lzjb
algorithm (very lig
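For the record, enabling lzjb on the dataset that holds the volumes is a one-liner; the dataset name is an example:

```shell
# lzjb is cheap on CPU; compressratio shows what it actually saves
# once some volumes have been written.
zfs set compression=lzjb tank/bacula
zfs get compression,compressratio tank/bacula
```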
Hi,
It would be great (at least for me :-) ) if, when Bacula looks for a previous
Full backup on a brand-new client, it also checked whether a Full backup is
currently running.
We do Incr. backups M-F at night and they use tapes from IncrTapes pool.
Full backups use tapes from the FullTapes pool. When I add
> On Thu, 7 Oct 2010 11:03:31 -0300, Eduardo Júnior said:
>
> Hi everyone,
>
>
> I have a file server in DRBD (one primary: server1, one secondary: server2).
> The bacula-fd is installed only on the primary.
>
> Yesterday, the secondary took over, becoming the primary.
>
> Now my question:
>
On Wed, Oct 6, 2010 at 11:59 PM, Buskas, Patric wrote:
> Hi,
>
> I'm using the TS3100 with CentOS 5.5 and Bacula 5.0.3 and it's working
> great.
> I don't think there's any mainstream Linux distro that won't work with this
> autochanger unless it's too old.
> It works great with the mtx com
Hi everyone,
I have a file server in DRBD (one primary: server1, one secondary: server2).
The bacula-fd is installed only on the primary.
Yesterday, the secondary took over, becoming the primary.
Now my question:
How can I configure my job to continue getting the incremental changes
from server
On 07/10/10, Dan Langille (d...@langille.org) wrote:
> On 10/5/2010 2:53 PM, Rory Campbell-Lange wrote:
> >> Is there a simple and accurate way of providing a list of files of this
> >> sort to Bacula in order to mark them and proceed with a restore job?
>
> Have you tried scripting it?
>
> echo
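A sketch of the scripting approach being suggested here; the client name, path, and pattern are placeholders. bconsole reads commands from stdin, so a here-document can drive the restore tree, and `mark` accepts wildcards, which sidesteps escaping each file by hand:

```shell
# Feed restore commands to bconsole non-interactively.
bconsole <<'EOF'
restore client=example-fd select current
cd /export/home/project
mark *.conf
done
yes
EOF
```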
On 10/07/10 04:10, Ralf Gross wrote:
> Phil Stracchino schrieb:
>> On 10/06/10 14:35, Mingus Dew wrote:
>>> John,
>>> I think I had to create a bogus schedule, as Bacula wouldn't
>>> accept the job config without one. I think I'll disable the job
>>> in bconsole and try to start it re
On 10/5/2010 2:53 PM, Rory Campbell-Lange wrote:
> Other than Graham's note about how to escape files I'd be grateful to
> know if there is another way of marking files so that they can be
> restored.
>
> I simply can't find a way of marking files easily from the root of the
> restore console (see
Phil Stracchino schrieb:
> On 10/06/10 14:35, Mingus Dew wrote:
> > John,
> > I think I had to create a bogus schedule, as Bacula wouldn't
> > accept the job config without one. I think I'll disable the job
> > in bconsole and try to start it remotely. Just see what happens...
>
> Mi