To clarify, there are at least two issues with remote replication vs. backups 
in my mind. (Feel free to joke about the state of my mind!  ;-)

The first, which as you point out can be alleviated with snapshots, is the 
ability to "go back" in time. If an accident wipes out a file, the deletion 
will shortly be replicated to the remote end as well. Snapshots help you here 
... as long as you can keep sufficient space online. If your turnover is 
1 TB/day and you require the ability to go back to the end of any week in the 
past year, that's 52 TB.
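
To make that arithmetic explicit, here's a rough Python sketch. It assumes 
(my assumption, not a measurement) that each retained end-of-week snapshot 
pins roughly one day's turnover worth of since-overwritten blocks:

# Back-of-envelope sketch of the snapshot retention cost described above.
# Assumption: each retained end-of-week snapshot pins roughly one day's
# turnover (~1 TB) of blocks that were later overwritten or freed.

TB = 10**12                      # decimal terabyte, in bytes

weekly_points_kept  = 52         # one restore point per week for a year
pinned_per_snapshot = 1 * TB     # assumed unique data held by each snapshot

extra_space = weekly_points_kept * pinned_per_snapshot
print(f"Extra online space for snapshots: {extra_space / TB:.0f} TB")
# -> Extra online space for snapshots: 52 TB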

The second is protection against file system failures. If a bug in the file 
system code, or damage to the metadata structures on disk, renders the master 
unreadable, then that failure can easily be replicated to the remote system. 
(Consider a bug which manifests itself only once 10^9 files have been created; 
both file systems will shortly fail.) Keeping backups in a file-system-independent 
format (e.g. tar, NetBackup) protects against this.
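
As an illustration of what I mean by file-system-independent, here's a minimal 
Python sketch that serializes a directory tree into a tar archive, so a restore 
depends only on tar, not on the on-disk structures of the original file system. 
The paths are placeholders, not real ones:

import tarfile

def backup_to_tar(source_dir: str, archive_path: str) -> None:
    # "w:gz" writes a gzip-compressed tar; the archive can be unpacked on
    # any system with tar, independent of the original file system.
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname="data")

if __name__ == "__main__":
    # Hypothetical source and destination paths for illustration only.
    backup_to_tar("/tank/data", "/backup/data-weekly.tar.gz")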

If you're not concerned about the latter, and you can afford to keep all of 
your backups on rotating rust (with sufficient CPU and I/O bandwidth at the 
remote site to scrub those backups), and you have sufficient bandwidth to 
actually move data between sites (for 1 TB/day, assuming continuous 
modification, that's about 11 MB/second if data is never rewritten during the 
day, or potentially much more in a real environment), then remote replication 
could work.
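
For the bandwidth figure, the back-of-envelope calculation looks like this 
(decimal units assumed: 1 TB = 10^12 bytes, 1 MB = 10^6 bytes):

# Back-of-envelope check of the replication bandwidth figure above.
TB = 10**12
MB = 10**6
seconds_per_day = 24 * 60 * 60           # 86,400

daily_change_bytes = 1 * TB              # 1 TB/day of modified data
sustained_rate = daily_change_bytes / seconds_per_day

print(f"Sustained link rate needed: {sustained_rate / MB:.1f} MB/s")
# -> about 11.6 MB/s, before any data rewritten during the day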