On Tue, 2 May 2006, Bill Moran wrote:

>>> The time it takes to get them back onsite alone is too slow for most
>>> recovery scenarios.

>> Hence the necessity for job cloning - AND a decent data firesafe.

> What does a firesafe have to do with this?

Data backed up to disk can still be erased.

A New Zealand ISP was very publicly humiliated in 1999 when a 14-year-old script kiddie did exactly this to their webservers - first he erased the backup disks, and then he erased the data spools themselves.

Even without malicious intent, it's not difficult to come up with a number of "Ooops" causes for mounted filesystems getting trashed - including those being used as backup volumes.
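
For reference, the job cloning I mentioned is just a Run directive in a
Bacula Job resource, which lets the same run land a second copy on
different media. A minimal sketch for bacula-dir.conf - the job, client,
pool and storage names are all invented:

  Job {
    Name = "NightlyToDisk"
    Type = Backup
    Client = fileserver-fd
    FileSet = "Full Set"
    Schedule = "Nightly"
    Storage = DiskStorage
    Pool = DiskPool
    Messages = Standard
    # Spawn a clone of this job at run time; %l passes the backup
    # level through so the clone runs at the same level.
    Run = "NightlyToTape level=%l"
  }

Here "NightlyToTape" would be a second Job defined against your tape
storage and pool.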

>>> 2) D-D-T: ?

>> disk to disk to tape (which isn't done for the reasons you want it done)

> Huh?  I'm not following either your explanation of the term, nor your
> assertion as to why I would want to use it.

You were originally asking for a way of backing up from disk to a backup disk, and from there to tape. That's disk to disk to tape.
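
For what it's worth, the migration work in newer Bacula code can do the
second hop natively: back up to a disk pool, then let a migration job
move aged data onto tape. A rough sketch, assuming a version with
migration support - all names invented:

  # Disk pool that feeds tape; Next Pool is where migrated data goes.
  Pool {
    Name = DiskPool
    Pool Type = Backup
    Storage = DiskStorage
    Next Pool = TapePool
    Migration Time = 7 days      # jobs older than this are eligible
  }

  Job {
    Name = "DiskToTape"
    Type = Migrate
    Pool = DiskPool              # migrate out of this pool
    Selection Type = PoolTime    # select jobs by age, per Migration Time
    # Client and FileSet are required by the parser but do not
    # restrict which jobs get migrated.
    Client = fileserver-fd
    FileSet = "Full Set"
    Messages = Standard
  }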

>> Then you DEFINITELY need faster tape.

> No.  As previously described, the _tapes_ are fast enough; getting them
> back from the offsite storage facility is too slow.

Treat archival/offsite sets differently to your onsite needs.

Local disk backup is fast, but it's vulnerable.

Local tape backup is marginally slower, but it's a lot harder to erase.

Either way is only suitable for file recovery or system rebuilds.

Offsite backup is only ever suitable for historic snapshot rebuilds.
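
In Bacula terms that split mostly comes down to pool definitions: short
retention and aggressive recycling for the onsite set, long retention
and no automatic recycling for the archive set. A sketch, with invented
names and retention periods:

  # Onsite pool: quick file restores, volumes recycled after a month.
  Pool {
    Name = OnsitePool
    Pool Type = Backup
    Volume Retention = 30 days
    AutoPrune = yes
    Recycle = yes
  }

  # Offsite/archive pool: kept for years, never recycled automatically.
  Pool {
    Name = ArchivePool
    Pool Type = Backup
    Volume Retention = 7 years
    AutoPrune = no
    Recycle = no
  }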

> Our need for offsite storage is different than our on-disk backups.
> The offsite backups are for archival and legal requirements.  Our
> on-disk backups are for handling 99% of the data-loss incidents.

It's the 1% which will kill you. Your need for a separate datacentre to be up and running quickly moves you out of the class of a simple backup system and into the realm of replicated storage systems - which doesn't cost much (if anything) more for geographical failover than it does for snapshotting with an eye to being up and running in a few hours. Restoring off the tapes will take you at least a day for 1 TB of data.
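
(Back-of-envelope, assuming a single tape drive restoring at a sustained
~30 MB/s: 1 TB is roughly 1,000,000 MB, so 1,000,000 / 30 comes to about
9 hours of pure streaming. Add tape recalls, mounts, seeks and
verification, and a full day is optimistic rather than pessimistic.)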

>> Your other alternative is to back up twice, to 2 different pools.

> Which is what we're doing now.

Bacula can run both backups simultaneously.
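
Concretely: define two Jobs for the same Client and FileSet, one per
pool, and give the Director, Client and Storage resources enough
Maximum Concurrent Jobs that the two can run side by side. A sketch
with invented names:

  Job {
    Name = "Srv-ToDisk"
    Type = Backup
    Client = srv-fd
    FileSet = "Full Set"
    Schedule = "Nightly"
    Storage = DiskStorage
    Pool = OnsitePool
    Messages = Standard
  }

  Job {
    Name = "Srv-ToTape"
    Type = Backup
    Client = srv-fd
    FileSet = "Full Set"
    Schedule = "Nightly"
    Storage = TapeStorage
    Pool = ArchivePool
    Messages = Standard
  }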

>> If you have that kind of demand then you need fully replicated
>> geographically dispersed filesystems. Bringing things online from backups
>> will take too long.

> You are correct.  But simply implementing geographically redundant
> filesystems would not meet all our needs.  In some ways it would be
> overkill and prohibitively expensive.  Additionally, redundancy
> _never_ replaces backup.  I can have 1000 redundant systems dispersed
> across the entire galaxy, and if a user accidentally deletes an
> important record, I'll still need to go to backup to recover it,
> since the redundant systems will faithfully delete it from all mirrors.

Correct, which is why replicated systems still need backup.

However, once you are at that level, the issue arises that the backup disks themselves are open to a similar erasure possibility, which in turn means that you need local backup tapes. Disks can be used too, but they shouldn't be the only line of local restoration.

> The upshot is that the product is expected to be reliable, period.  It
> has to survive considerable man-made or natural disasters.  Our DRP
> reflects that.

> The supporting stuff: email, financial records, and other "business"
> stuff needs to be reliable, backed up, and archived - but the
> requirements are less.  We can afford _some_ data loss in this area
> (about a week) and we can afford some time to recover from a
> catastrophe.

You'd be surprised, in many ways.

Email is "only email" until it goes down, at which point it becomes "a business-critical application" - and that's only the tip of the iceberg.

Your "business" stuff needs to be treated as importantly as the other items, else things get difficult, quickly.

> If we apply the business DRP to our product, the product will be
> unacceptable.  If we apply the product DRP to the business data,
> we'll incur significant expenses that are unwarranted.

I've seen that argument used many times by IT admins, until things go wrong and they're told in no uncertain terms that it just ain't so.

> Additionally, redundancy and backups serve two different purposes.
> Trying to use backup to solve redundancy problems will not work and
> vice versa.

I never said it would; however, your actual needs become clearer the more you reveal.

Until this response you hadn't stated that you had any replication in place at all, and it seemed you were relying on offsite backups as a failover mechanism.

>> The internet was designed (originally(*)) to withstand nuclear attack and
>> route around the damage.

> Yeah, and the constitution of the US was supposed to protect individuals
> from unreasonable taxation, but look how that ended up.

Do people stripped of (or denied) voting rights still have to pay tax?

>> If you need that kind of durability then relying on a tape backup system
>> is at least an order of magnitude below your _true_ requirements and will
>> leave you caught short if things do go bang.(**)

> Exactly.  No pressure ...

The beauty of the GPL is that if something almost-but-not-quite fits your needs, you can always modify it (and preferably contribute the modifications back to the knowledge pool) to suit your needs.

AB

