Jonathan Loran wrote:
> 
> We are using zfs compression across 5 zpools, about 45TB of data on 
> iSCSI storage.  I/O is very fast, with small fractional CPU usage (seat 
> of the pants metrics here, sorry).  We have one other large 10TB volume 
> for nearline Networker backups, and that one isn't compressed.  We 
> already compress these data on the backup client, and there wasn't any 
> more compression to be had on the zpool, so it isn't worth it there. 

cool.

> There's no doubt that heavier weight compression would be a problem as 
> you say.  One thing that would be ultra cool on the backup pool would be 
> to have post write compression.  After backups are done, the backup 
> server sits more or less idle. It would be cool to do a compress on 
> scrub operation that could do some real high-level compression.  Then we 
> could zfs send | ssh remote | zfs receive to an off-site location with far 
> less network bandwidth, not to mention the remote storage could be 
> really small.  Data Domain (www.datadomain.com) does block-level 
> checksumming to save files as linked lists of common blocks.  They get 
> very high compression ratios 
> (in our tests about 6/1, but with more frequent full backups, more like 
> 20/1).  Then off site transfers go that much faster.

Do not assume that a compressed file system will send compressed.  IIRC, it
does not.

But since UNIX is a land of pipe dreams, you can always compress anyway :-)
        zfs send ... | compress | ssh ... 'uncompress | zfs receive ...'
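
As for the Data Domain-style block-level scheme in the quote above, you can 
rough out the idea with ordinary tools.  This is a sketch only (my 
illustration, not anything ZFS or Data Domain actually ships): hash each 
fixed-size block of a file and compare total vs. unique blocks to estimate 
the dedup ratio.  It assumes GNU split's --filter option; the file name and 
block size are placeholders.

```shell
#!/bin/sh
# Sketch: estimate block-level dedup potential by hashing fixed-size
# blocks and counting how many are unique.  Placeholder file/block size.
f=backup.img
bs=4096
# GNU split feeds each $bs-byte chunk to the filter on stdin;
# sha256sum prints one hash line per block.
total=$(split -b "$bs" --filter='sha256sum' "$f" | wc -l)
unique=$(split -b "$bs" --filter='sha256sum' "$f" | sort | uniq | wc -l)
echo "blocks=$total unique=$unique approx dedup ratio=$((total / unique))/1"
```

With lots of full backups sharing blocks, unique stays small and the ratio 
climbs, which matches the 6/1-to-20/1 range mentioned above.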

  -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss