Yesterday, Arne Jansen wrote:
Paul Archer wrote:
Because it's easier to change what I'm doing than what my DBA does, I
decided that I would put rsync back in place, but locally. So I changed
things so that the backups go to a staging FS, and then are rsync'ed
over to another FS that I take snapshots on.
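The staging-then-rsync setup described above benefits from one rsync detail: by default rsync writes a whole new temp file and renames it, so every block looks "new" to the snapshot layer. With in-place delta updates, a snapshot only has to reference the blocks that actually changed. A minimal sketch, with assumed paths:

```shell
# Hypothetical paths for the staging FS and the snapshotted FS.
# --inplace: rewrite changed blocks in the existing file instead of
#            building a temp copy and renaming it.
# --no-whole-file: force the delta algorithm even for local copies
#                  (rsync normally skips it when both ends are local).
rsync -a --inplace --no-whole-file /backup/staging/ /backup/snapfs/
```

Both flags are standard rsync options; the paths are placeholders for whatever the staging and snapshot filesystems are actually mounted as.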
Now is probably a good time to mention that dedup likes LOTS of RAM, based on experiences described here. 8 GiB minimum is a good start. And to avoid those obscenely long removal times due to updating the DDT, an SSD-based L2ARC device seems to be highly recommended as well.
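The RAM appetite can be ballparked: the commonly cited rule of thumb is that each unique block costs on the order of 320 bytes of core memory for its DDT entry. A sketch under those assumed figures (the per-entry size and block size are rules of thumb, not guarantees):

```python
# Rough DDT core-memory estimate for ZFS dedup. The ~320 bytes per
# unique block is a widely quoted rule of thumb, not an exact figure.

def ddt_ram_bytes(pool_bytes, avg_block_bytes=128 * 1024,
                  dedup_ratio=1.0, entry_bytes=320):
    """Estimate RAM needed to hold the whole dedup table in core."""
    total_blocks = pool_bytes / avg_block_bytes
    unique_blocks = total_blocks / dedup_ratio   # dedup shrinks the table
    return int(unique_blocks * entry_bytes)

# Example: 4 TiB of data, 128 KiB records, 2x dedup ratio.
gib = ddt_ram_bytes(4 * 2**40, dedup_ratio=2.0) / 2**30
print(f"DDT needs roughly {gib:.1f} GiB of RAM")  # roughly 5.0 GiB
```

When the DDT spills out of RAM, every dedup'd write or free turns into random reads, which is exactly the "obscenely long removal" behavior above; an L2ARC device just makes those misses cheaper.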
That is, of course,
Oops, I meant SHA256. My mind just maps SHA->SHA1, totally forgetting that ZFS
actually uses SHA256 (a SHA-2 variant).
More on ZFS dedup, checksums and collisions:
http://blogs.sun.com/bonwick/entry/zfs_dedup
http://www.c0t0d0s0.org/archives/6349-Perceived-Risk.html
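The collision argument in those posts comes down to the birthday bound. A quick sketch of the standard approximation (this is the generic bound, not anything ZFS-specific):

```python
# Birthday-bound sketch: the chance that any two of n random 256-bit
# hashes collide is at most n*(n-1)/2 / 2**256.

def collision_bound(n_blocks, hash_bits=256):
    pairs = n_blocks * (n_blocks - 1) // 2
    return pairs / 2**hash_bits

# Even a trillion unique blocks gives a vanishingly small bound:
p = collision_bound(10**12)
print(p)  # ~4.3e-54
```

That is why trusting SHA256 alone is considered safe; the paranoid can still set dedup to verify, which does a byte-for-byte compare on hash matches.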
--
This message posted from
Though the rsync switch is probably the answer to your problem...
You might want to consider upgrading to Nexenta 3.0, switching checksums from
fletcher to sha1 and then enabling block level deduplication. You'd probably
use less GB per snapshot even with rsync running inefficiently.
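For reference, the property changes described above look like this (dataset name is hypothetical, and per the correction earlier in the thread, ZFS dedup actually uses sha256, not sha1):

```shell
# Hypothetical dataset "tank/backups". Note these properties only
# affect blocks written after the change; existing data keeps its
# old checksums and is not retroactively deduped.
zfs set checksum=sha256 tank/backups
zfs set dedup=on tank/backups
# or, to byte-compare blocks whose hashes match:
zfs set dedup=sha256,verify tank/backups
```

These are standard `zfs set` invocations; only the dataset name is an assumption.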
Paul Archer wrote:
>
> Because it's easier to change what I'm doing than what my DBA does, I
> decided that I would put rsync back in place, but locally. So I changed
> things so that the backups go to a staging FS, and then are rsync'ed
> over to another FS that I take snapshots on. The only prob