Answering Danixu86's question: yes, both directors will do their respective
backups on their respective storages, so that all the FDs in SITE2 back up
over the LAN to SD2, and the same in SITE1 (so nothing passes over the
internet)... I may be committing a mistake here, pro
On 11/26/2014 03:07 PM, Thomas Lohman wrote:
> First let me thank you all for your responses, I really appreciate
> them. Like Joe, I think the problem here is the Bacula JobIds: is
> there any way to tell Bacula to start from (let's say) JobId 900?
> I think that's an easy way to fix the whole problem, as I will be able
I am not familiar e
On 11/26/2014 10:36 AM, Martin Simmons wrote:
> Looks good, but I think you should remove
>
> File = /opt/virtual/images
>
> from the Exclude clause.
>
> __Martin
Hi Martin, if that is removed, then the sparse files will be backed up twice:
once without the benefit of Bacula's sparse file suppo
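For anyone following along, the shape being discussed is roughly the FileSet
below. This is only a sketch: the `/opt/virtual/images` path comes from the
thread, but everything else (names, signatures, the wildcard-exclude pattern)
is an assumption, not the poster's actual config.

```
FileSet {
  Name = "Everything-plus-sparse-images"
  # VM disk images, read with Bacula's sparse-file handling
  Include {
    Options {
      Signature = MD5
      Sparse = yes
    }
    File = /opt/virtual/images
  }
  # Everything else; the Exclude = yes Options block keeps the images
  # from being read a second time without Sparse = yes
  Include {
    Options {
      Exclude = yes
      WildDir = "/opt/virtual/images"
      Wild = "/opt/virtual/images/*"
    }
    Options {
      Signature = MD5
    }
    File = /
  }
}
```

Test with `estimate listing` in bconsole before trusting it; wildcard-exclude
semantics are easy to get subtly wrong.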
I see your point, Egoltz. No matter what I think, all the options in my mind
reach a dead end. The only option I'm considering now is an active/inactive
Director (like a failover) over the internet; that way the whole catalog issue
is solved, and then I can replicate the volumes like norma
On 11/26/2014 08:39 AM, Bill Arlofski wrote:
> ---[ tl;dr version ]---
> When considering volumes for the expected volume of an upcoming job, Bacula
> appears to be ignoring the Job/JobDef options:
>
> Full Backup Pool =
> Incremental Backup Pool =
> Differential Backup Pool =
>
> And only
Good afternoon,
Will standard replication work for this purpose? Let me explain: that way you
will of course have a replicated database state, but isn't what you really
need a snapshot of both pieces, database and storage daemon, at the same
moment, in one concrete instant?
Imagine
Guys,
First let me thank you all for your responses, I really appreciate them. Like
Joe, I think the problem here is the Bacula JobIds: is there any way to tell
Bacula to start from (let's say) JobId 900? I think that's an easy way to
fix the whole problem, as I will be able to replicate both
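As an aside: with a MySQL catalog, one way people sometimes force the next
JobId is to raise the auto-increment counter on the catalog's Job table
directly. This is not confirmed anywhere in this thread and is not an official
Bacula knob, so treat it as an assumption and try it on a copy of the catalog
first:

```sql
-- Back up the catalog before running this; it touches Bacula's schema.
-- Afterwards the next inserted job gets JobId 900 (or higher).
ALTER TABLE Job AUTO_INCREMENT = 900;
```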
But are you using both directors to make backups, or is the second only a
mirror waiting for a disaster? Because, for example, I have a script running
every day that dumps the entire database to another server; then if there is
any disaster, I only have to import the last backup into a new director.
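A nightly dump job like the one described might look roughly like this; the
database name, destination host, and paths are all placeholders, not details
from the poster's setup:

```
#!/bin/sh
# Dump the whole Bacula catalog and copy it to a standby machine.
DUMP=/var/backups/bacula-catalog-$(date +%F).sql.gz
mysqldump --single-transaction bacula | gzip > "$DUMP"
scp "$DUMP" standby-host:/var/backups/
```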
MySQL replication may in fact guard against duplicate JobIds, since in theory
the same JobIds will be in both databases. It's the replication lag that will
be the problem: during the lag it will still be possible to create duplicate
Bacula JobIds.
-Original Message-
From: Danixu86 [mail
Won't work. The problem is not MySQL replication (although if MySQL is the DB,
then it does need to be used); the problem is the Bacula JobIds as well.
-Original Message-
From: Danixu86 [mailto:bacula-fo...@backupcentral.com]
Sent: Wednesday, November 26, 2014 11:18 AM
To: bacula-users@lists.
What about MySQL Cluster?
http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-nodes-groups.html
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-online-add-node.html
I've never tried it before, but maybe it's a good solution ;)
Greetings!
Interesting problem.
If we were just talking MyS
Remember that
you will get very little compression a second time. Have you tried
more than 1 tape? I assume you do not have any fixed volume size limit
set on the pool.
Thanks, John, for the reply.
|| Is your data already compressed?
No, I've disabled gzip compression in Bacula to speed up backups (fr
Looks good, but I think you should remove
File = /opt/virtual/images
from the Exclude clause.
__Martin
> On Tue, 25 Nov 2014 18:35:27 +, Polcari, Joe (Contractor) said:
>
> So I can't quite grasp this.
> I have a directory which is part of / that contains virtual disk images,
> /opt/v
> It looks activated, but I'm not able to use it... Bacula is still using only
> 800GB of tape. I know that it depends on the compression ratio of the stored
> files, but if gzip gets about 26% CR, the hardware must get at least 10% CR...
>
Yes, it looks enabled. Is your data already compressed? Remember that
yo
Interesting problem.
If we were just talking MySQL replication, there is an "increment" setting
that you would set to 2 in this case, and no duplicate record numbers would be
generated.
Here we would need an "increment" setting for Bacula JobIds, or perhaps a
"number of sites" or "sites" set
Hi, first of all I'm sorry for my English.
I'm posting here because I'm not able to enable hardware compression on my
LTO-4 tape drive.
I have read a lot of forums with info about how to enable it, but the tape is
still written without compression.
My tape drive is a
[url=http://www.quantum.com/serviceandsupport/softwar
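On Linux, the drive-side compression bit is usually toggled with `mt` from the
mt-st package and checked with `tapeinfo` from mtx. The device names below are
examples; substitute the ones for your drive:

```
# Turn on the drive's hardware compression
mt -f /dev/nst0 compression 1

# Some drives honour the default-compression mode page instead
mt -f /dev/nst0 defcompression 1

# Verify: look for "DataCompEnabled: yes" in the output
tapeinfo -f /dev/sg3
```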
Bacula does not seem to be correctly determining the "Volume" column in the
stat dir output.
---[ tl;dr version ]---
When considering volumes for the expected volume of an upcoming job, Bacula
appears to be ignoring the Job/JobDef options:
Full Backup Pool =
Incremental Backup Pool =
Diffe
Thanks for your replies, folks. I got it to work on CentOS for now
using /etc/sysconfig/bacula-fd.
All the best, Uwe
--
NIONEX --- Ein Unternehmen der Bertelsmann SE & Co. KGaA
SITE 1  <-->  INTERNET  <-->  SITE2
Dir1 ......................... Dir2
(Catalog) <- REPLICATED -> (Catalog)
SD1 ...... (REPLICATED) ........ SD2
FD...