On Friday 15 September 2006 23:20, Jeff LaCoursiere wrote:
> 
> On Fri, 15 Sep 2006, Dan Langille wrote:
> 
> > Large volumes on *disk* slow down the restore if you're trying to get
> > one file back.  GREATLY.  Try it.  I suggest smaller volumes. e.g.
> > 2GB.
> 
> Yup, not only have I tried it, but it was question 1) in my original post.
> It doesn't make any sense, really.  Files on disk can be lseek()'ed
> directly to the appropriate block, and the catalog really should contain
> the block within the volume to seek to.  It should be instant.  I am a bit
> confused as to why it is not.

As the archives will attest, I have tried several times to make it work, but 
there is always some regression that fails.  I gave up trying to make it 
work.  If someone wants to send me a patch that makes it work, I'll treat 
them to a really nice dinner.

> 
> >
> > Consider you need a 30MB file from somewhere in that 200GB volume.
> > Assume it's 1/4 of the way through the volume.  You must read 50GB
> > before you get to that file.  Of course, it could be 170GB into the
> > volume... Either way, you don't want to wait all that time.
> >
> > Tape drives often have skip-ahead functionality, so this problem does
> > not always apply to tape.
> >
> 
> The equivalent "skip-ahead" for disk files is the lseek() system call...
> 
> All that aside, it does make me queasy to have such large files on disk.
> All it takes is for the filesystem to have an fsck problem and suddenly a
> month's worth of backups is gone :) So I am looking into smaller volume
> sizes.  Ideally I would have a different "file" volume written each night
> to the same disk with some kind of auto-labelling.  Does anyone else do
> this?  Then each "file" would just contain the backups run for that day.
> If the auto-labelling could include date information in the filename, that
> would be even better, as once the files in these volumes expire from
> the catalog I could come back later and know which volume I really want.

A good starting point would be:

http://www.bacula.org/dev-manual/Automated_Disk_Backup.html
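That chapter covers cycling disk Volumes automatically.  For the date in 
the Volume name, something along these lines should work as a sketch -- the 
pool name and retention are made up, and the padding modifiers follow the 
manual's variable-expansion chapter:

```
# Sketch: auto-label daily File volumes with the date in the name.
Pool {
  Name = DailyFile
  Pool Type = Backup
  Label Format = "Daily-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
  Volume Retention = 30 days
  AutoPrune = yes
  Recycle = yes
}
```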


> I have also been planning to "archive" the volumes onto actual tape for
> eternal safekeeping...
> 
> Am I approaching this the right way?  I keep running across the comment
> that "if you try to use bacula and force backups to particular volumes you
> will be unhappy".  

This is very true, but it doesn't mean that you cannot use pools and limit the 
size of the Volumes or the number of backups a Volume contains.
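For example, a Pool resource along these lines (a sketch -- the name, 
size cap, and retention are made up) keeps Volumes small and starts a 
fresh one for each job:

```
# Sketch: one Volume per backup job, capped in size for faster restores.
Pool {
  Name = Daily
  Pool Type = Backup
  Maximum Volume Jobs = 1        # start a new Volume for each job
  Maximum Volume Bytes = 2G      # keep Volumes small
  Volume Retention = 30 days
  AutoPrune = yes
  Recycle = yes
}
```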

> The bright idea to dump tape and use detachable USB 
> hard drives instead seems to be biting me in the ***.

Versions of Bacula prior to 1.39.x don't deal very well with File storage 
that may not always be mounted (e.g. USB drives).

> 
> j
> 

-------------------------------------------------------------------------
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
