On 05/23/16 11:09, Larrybwoy wrote:
> I see your point. Interesting. So I can make it so that each new differential
> backup that gets created writes itself into a totally new volume for that
> particular job? Is there an option for this that I need to add to the pool?
>
Here's how I do it. This
On 05/23/2016 11:15 AM, shouldbe q931 wrote:
> Several years ago I did something similar where there were 24 hours of
> hourly rsync "backups" and daily runs of Bacula to tape
Thankfully you don't have to deal with rsync anymore: there's now zfs
with COW snapshots and the ability to mirror snapshots.
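For anyone who hasn't used it, the basic loop is just a snapshot plus an
incremental send. A rough sketch (the dataset tank/appdata, the snapshot
names and the host backuphost are placeholders, not from this thread):

  # take a point-in-time, copy-on-write snapshot (cheap and near-instant)
  zfs snapshot tank/appdata@2016-05-23_1100

  # mirror only the blocks changed since the previous snapshot to another box
  zfs send -i tank/appdata@2016-05-23_1000 tank/appdata@2016-05-23_1100 \
      | ssh backuphost zfs receive backup/appdata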
Larrybwoy> Thanks for the replies and good advice. The reason I
Larrybwoy> thought of this backup plan is that what I need to back
Larrybwoy> up are multiple dynamic file systems from about 20
Larrybwoy> servers. These file systems contain data that is always
Larrybwoy> changing since they contain various dynamic application files.
On Mon, May 23, 2016 at 9:20 AM, Larrybwoy wrote:
I do not need to have backups that are old; in case of a disaster, I
need to be able to bring back the data that was lost during the past
hour at most, so that the people working with the applications only
lose 1 hour of work in the worst case scenario.
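If that one-hour objective is to be met by Bacula itself, an hourly schedule
is the usual way to express it. A rough sketch (the resource name, levels and
times here are placeholders, though the hourly keyword in the Run directive
is standard):

  Schedule {
    Name = "HourlyCycle"
    Run = Level=Full 1st sun at 02:05
    Run = Level=Differential 2nd-5th sun at 02:05
    Run = Level=Incremental hourly at 0:15
  }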
I see your point. Interesting. So I can make it so that each new differential
backup that gets created writes itself into a totally new volume for that
particular job? Is there an option for this that I need to add to the pool?
Think about it this way:
If you're backing up to tape, jobs smaller than a full tape waste tape,
because you can't make the tape smaller.
But if you're backing up to disk, putting more than one batch of jobs
into a volume wastes disk, because you can't compact a disk volume and
you can't delete individual jobs out of it; the space only comes back
when the whole volume is recycled, and that can't happen until every
job on it has expired.
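In Pool terms that usually comes down to one job per volume plus automatic
labelling. Something along these lines (a sketch, not a drop-in config: the
pool name, label format and retention value are placeholders, but Maximum
Volume Jobs, Label Format, Volume Retention, AutoPrune and Recycle are all
standard Pool directives):

  Pool {
    Name = "Diff-Disk"
    Pool Type = Backup
    Maximum Volume Jobs = 1     # every job starts a brand-new volume
    Label Format = "Diff-"      # new volumes are auto-labelled Diff-<n>
    Volume Retention = 7 days   # a volume can be recycled once its single job expires
    AutoPrune = yes
    Recycle = yes
  }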
On 05/23/16 04:20, Larrybwoy wrote:
> The only problem is the filesystem that I back up has 91 gigs, and
> the backup keeps getting bigger and bigger with all the differential
> jobs. So far the max vol size is set to 300GB, and with the backups
> running all weekend it has now created a second volume.
On 5/23/2016 5:44 AM, Larrybwoy wrote:
> Hey guys,
>
> Thanks for the replies and good advice. The reason I thought of this backup
> plan is that what I need to back up are multiple dynamic file systems from
> about 20 servers. These file systems contain data that is always changing
> since they contain various dynamic application files.
Hey guys,
Thanks for the replies and good advice. The reason I thought of this backup
plan is that what I need to back up are multiple dynamic file systems from
about 20 servers. These file systems contain data that is always changing since
they contain various dynamic application files. I do
The CLIENT config:
Client {
  Name = server1-fd
  Address = server1.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "NDdkODYyMDM4NTZmNjYzNjYwZmE5MzIwZ"   # password for Remote FileDaemon
  File Retention = 30 days
}
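Two related Client directives worth checking alongside File Retention are Job
Retention and AutoPrune; catalog records are only pruned when AutoPrune is
enabled. Shown here as a sketch, since the values are assumptions rather than
taken from the posting:

  Job Retention = 30 days
  AutoPrune = yes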