On 21.05.2015 19:11, Devin Reade wrote:
> --On Thursday, May 21, 2015 11:09:58 AM -0500 dweimer
> wrote:
>
>> I have three systems, two of which are using disk backup then copy to
>> tape; both of those are running on CentOS with 25G volume sizes [...]
>> The third system is using 46G File volumes
Hello Devin,
On 21.05.2015 17:23, Devin Reade wrote:
> --On Thursday, May 21, 2015 09:06:41 AM +0200 Kern Sibbald
> wrote:
>
>> Bacula does keep 64 bit addresses.
> Excellent. Not surprisingly, I'm not dealing with file sizes near 2^63,
> but I *do* need to back up files that are in the 2^39 ra
On 05/21/2015 12:56 PM, Devin Reade wrote:
...
> Per my other email, the difficult part becomes keeping track
> of what data is where in the non-bacula case.
I haven't used it myself, but backuppc has the "archive" job where you
can periodically copy a dataset to other storage. Though my impr
--On Thursday, May 21, 2015 12:21:23 PM -0500 Dimitri Maziuk
wrote:
> How much work is required to mount a filesystem on such a drive? That is,
> one drive from md raid1 with extX on it "just mounts" when you stick it
> in a usb cradle. Do you need to jump through extra hoops with zfs?
The ZFS ca
--On Thursday, May 21, 2015 06:50:31 PM +0200 "Radosław Korzeniewski"
wrote:
> Why do you need to use 500MB volumes? These days it is like
> distributing movies on floppies instead of DVD/BR.
Some of my older deployments had data patterns where incrementals
are typically small, but at t
On 05/21/2015 12:11 PM, Devin Reade wrote:
> One of the options I'm considering is something like setting up
> pairs of drives in a ZFS mirror in removable drive caddies,
> putting sets of the write-once data on such pairs via rsync or
> some-such, and then making 3 or 4 copies of those pairs of d
On 05/21/2015 11:50 AM, Radosław Korzeniewski wrote:
> ... I do not see any problem
> with currently available filesystems handling a 100GB or 1TB file.
1TB might be overkill, esp. if you're using 1TB disks. My volumes are
~25GB because we originally thought of maybe archiving on BRs but that
(t
--On Thursday, May 21, 2015 11:09:58 AM -0500 dweimer
wrote:
> I have three systems, two of which are using disk backup then copy to
> tape; both of those are running on CentOS with 25G volume sizes [...]
> The third system is using 46G File volumes
Thanks. Good to know.
> I wouldn't worry abo
Hello,
2015-05-21 17:23 GMT+02:00 Devin Reade :
>
> On that note, I've traditionally gone with volume sizes in the ~500MB
> (2^29) range (for disk stores), but in this case that can push the volume
> count in the catalog to more than 512k entries once a minimum number
> of offsite copies have be
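For a sense of scale (back-of-the-envelope numbers of my own, not figures
confirmed in the thread): a dataset in the 2^46-byte range cut into
2^29-byte volumes is already 2^17 = 131072 volumes per copy, so a handful
of copies pushes the catalog past the half-million mark. A quick sketch in
Python, with the dataset size, volume size and copy count all illustrative
assumptions:

    # Rough catalog volume-count estimate; all three inputs are assumptions.
    dataset_bytes = 2 ** 46          # ~64 TiB of data to protect
    volume_bytes  = 2 ** 29          # ~512 MiB per File volume
    copies        = 4                # local + offsite copies

    volumes_per_copy = -(-dataset_bytes // volume_bytes)   # ceiling division
    total_volumes    = volumes_per_copy * copies
    print(volumes_per_copy, total_volumes)                  # 131072 524288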
On 05/21/2015 10:23 am, Devin Reade wrote:
> --On Thursday, May 21, 2015 09:06:41 AM +0200 Kern Sibbald
> wrote:
>
>> Bacula does keep 64 bit addresses.
>
> Excellent. Not surprisingly, I'm not dealing with file sizes near 2^63,
> but I *do* need to back up files that are in the 2^39 range (
Hello,
2015-05-20 0:53 GMT+02:00 Heitor Faria :
> > Hello,
> >
> > I am running bacula v5.2.6.
> > My problem is that I have a big file (25GB) that grows a little longer
> > at the end each day. Is there a way for bacula to back up only the
> > modifications to the file?
> > At
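The question above describes a file that only ever grows at its end. As far
as I know, a plain file-level incremental in community Bacula 5.2.x
re-reads the whole changed file rather than just the new tail, so the idea
being asked about would have to be handled outside Bacula or with a
delta-capable plugin. Purely as an illustration of that idea, and with the
paths and offset bookkeeping entirely my own invention, a minimal Python
sketch:

    # Illustration only: copy just the bytes appended to SRC since the last
    # run, remembering the previous size in a small state file. This is the
    # concept the poster is asking about, not something Bacula does here.
    import os

    SRC   = "/data/big-growing-file"            # hypothetical paths
    DELTA = "/backup/big-growing-file.delta"
    STATE = "/backup/big-growing-file.offset"

    last = int(open(STATE).read()) if os.path.exists(STATE) else 0
    size = os.path.getsize(SRC)

    if size > last:                              # the file grew at the end
        with open(SRC, "rb") as src, open(DELTA, "ab") as out:
            src.seek(last)
            out.write(src.read(size - last))

    with open(STATE, "w") as st:
        st.write(str(size))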
On 2015-05-20 21:08, Cejka Rudolf wrote:
> Dimitri Maziuk wrote (2015/05/20):
>> 160 < 180.
>>
>> you get it now?
>
> It should be "min. 160 < max. 180", which is false.
Actually, no: it's "up to" 160, so it's max. vs. max.
> Tape users usually do not want to switch off hardware tape drive
> compression and
--On Thursday, May 21, 2015 09:06:41 AM +0200 Kern Sibbald
wrote:
> Bacula does keep 64 bit addresses.
Excellent. Not surprisingly, I'm not dealing with file sizes near 2^63,
but I *do* need to back up files that are in the 2^39 range (from
filesystems that are in the 2^46 range onto virtual c
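For anyone who doesn't think in powers of two, the sizes Devin mentions
work out as follows (plain arithmetic, nothing Bacula-specific):

    # 2^39, 2^46 and 2^63 bytes in more familiar units.
    GiB, TiB, EiB = 2 ** 30, 2 ** 40, 2 ** 60
    print(2 ** 39 // GiB, "GiB")   # 512 GiB -- the individual files
    print(2 ** 46 // TiB, "TiB")   # 64 TiB  -- the source filesystems
    print(2 ** 63 // EiB, "EiB")   # 8 EiB   -- the 64-bit signed limit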
Hello,
Well, when backing up and restoring, Bacula really does not care about
the size of the original file. It simply read()s blocks for backup and
write()s them for restore. Bacula does keep 64 bit addresses. On the SD
output end, if you do not limit your Volume size, there will surely be
some
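Kern's point about read()ing and write()ing blocks is worth spelling out: a
streaming, fixed-block copy never needs to hold the whole file, so memory
use is independent of file size. A toy Python sketch of that pattern (block
size and paths are arbitrary; this is not Bacula code):

    # Stream a file in fixed-size blocks; memory use stays constant no
    # matter how large the source file is.
    BLOCK = 64 * 1024
    with open("/path/to/source", "rb") as src, open("/path/to/copy", "wb") as dst:
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            dst.write(block)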