Alan Brown <[EMAIL PROTECTED]> wrote:

> On Mon, 1 May 2006, Bill Moran wrote:
> 
> > How about our reasons for doing this:
> > 1) We need speedy restores, thus we have a 1T RAID5 array and backup
> >   to disk devices there.
> 
> I backup ~20Tb to LTO2. The tapes run almost as fast as most of the 
> disk arrays (MSA 1000 and Nexsan Atabeast)

I guess I didn't make my point.

The tapes are several orders of magnitude slower than disk.  The
time it takes just to get them back onsite is too long for most
recovery scenarios.

> > 2) We _must_ have offsite backups.  The RAID5 array is a bit heavy
> >   to carry around ... plus it can only be 1 place at a time.
> 
> That's tape-tape migration, OR job cloning - one set locally, one set 
> offsite.

OK.  Call it what you want.

> > Our desired solution is to back up to tape and disk simultaneously.
> > Thus we can take the tapes offsite, while still having fast, local
> > backups on the RAID5.
> 
> Several people have requested simultaneous backups for the same reason.
> Asking for D-D-T just obscures what you really want.

Can you please define these terms, then.  I'm fuzzy on what they mean:
1) simultaneous backups: ?
2) D-D-T: ?

We've been calling what we want "multiplexing".
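For what it's worth, the closest approximation I've found so far is simply
defining two Job resources over the same FileSet, one pointed at a disk
Storage and one at a tape Storage.  A rough bacula-dir.conf fragment (all
resource names here are made up; Messages etc. omitted for brevity):

```
# Hypothetical sketch: two jobs, same data, one per destination.
Job {
  Name = "clients-to-disk"
  Type = Backup
  Client = fileserver-fd      # assumed client name
  FileSet = "Full Set"
  Storage = FileStorage       # disk-backed Storage resource on the RAID5
  Pool = DiskPool
  Schedule = "Nightly"
}
Job {
  Name = "clients-to-tape"
  Type = Backup
  Client = fileserver-fd
  FileSet = "Full Set"
  Storage = LTO2Drive         # tape Storage resource
  Pool = TapePool
  Schedule = "Nightly"
}
```

The obvious downside is that the client gets read twice, which is exactly
what a true simultaneous (single-read, dual-write) backup would avoid.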

> > I expect we're not the only people with such requirements.
> 
> Not at all, but 1Tb is a small amount of disk these days.

I suppose.  We don't expect it to stay that small, and we've got a limited
amount of time to shake the bugs out of the system before our backup
needs balloon.

> In any case, the functionality you need can probably be achieved 
> using bcopy on the backup sets - and given the small dataset you have, it 
> wouldn't take long.

You mean the utterly undocumented bcopy?  Or is there some other bcopy that
you're referring to?

I did do some experiments with bcopy, and wasn't happy with the results.
If I remember correctly, it required me to write a script that:
1) Knew which on-disk volume it needed to duplicate.
2) Was able to generate a label for the tape _prior_ to running bcopy.

While that's certainly possible, it's still far from "ready for general
consumption".  Quite frankly, I'm in the dark on how to do #1 reliably,
and #2 requires btape.  It starts to seem rather clunky.

If anyone has any pointers on how to accomplish #1 reliably, I'd love to
hear them.  Wouldn't upset me at all to find out that there's a simple
way to do this that I'm just not aware of.
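To make the two steps concrete, here's roughly what that script would have
to do, shown in dry-run form (it only echoes the commands).  Device, config,
and volume names are illustrative, and the bcopy/btape invocations are from
memory of their usage output, so verify the flags against your version:

```shell
# Hypothetical sketch of the disk-volume-to-tape duplication script.
SD_CONF=/etc/bacula/bacula-sd.conf
DISK_DEV=FileStorage          # storage device holding the disk volume
TAPE_DEV=LTO2Drive            # tape drive to copy onto
VOLUME="Full-0001"            # step 1: you somehow have to know this name
TAPE_VOL="Offsite-0001"       # name for the destination tape volume

# Step 2: pre-label the destination tape with btape (its interactive
# 'label' command), since bcopy won't label it for you.
echo "would run: btape -c $SD_CONF $TAPE_DEV    # then type: label"

# Step 3: copy the disk volume's contents onto the labeled tape.
echo "would run: bcopy -c $SD_CONF -i $VOLUME -o $TAPE_VOL $DISK_DEV $TAPE_DEV"
```

Step 1 is the part I have no reliable answer for: picking $VOLUME means
querying the catalog (e.g. via bconsole) and deciding which volume holds
the jobs you want offsite.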

> We achieve our data security with a large data safe(*), as the area is 
> geologically and politically stable, the only real worry is fire, in which 
> case instant restoral of service would be impossible anyway, even with 
> offsite storage.

Before the end of the year, we will require redundancy that can survive
major catastrophes that are geographically localized.  IOW: if Pittsburgh
were to get hit with a nuclear warhead, our customers would expect us
to fail over to a second site.  Thus we need truly offsite backups.

As you mentioned, the _size_ of our data is small by comparison to many
other businesses, which is a blessing ...

> (*) http://www.phoenixsafe.com/en/viewproduct/4620.html - model 4623.
> 2+ hour fire resistance and designed to withstand a drop of 10 metres (in 
> case of floor collapse in multistory building). Ours is sitting on a 
> specially installed concrete pad and I wouldn't be surprised to find it 
> standing intact if the building was completely razed around it.

Very cool.  However, it wouldn't suit our needs.  As I mentioned, the unit
would _literally_ have to be nuclear warhead-proof.  Since nobody makes
such a unit, our DRP must include geographically dispersed redundancy.

-- 
Bill Moran
Potential Technologies
http://www.potentialtech.com


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
