I think you just answered your own question; 2TB with about 1.5x dedup and only 1GB of RAM isn't going to work. ZFS has to consult the dedup tables on every write, and if they don't fit* into the ARC's metadata space (by default the ARC is capped at roughly 3/4 of RAM, and metadata, which includes the dedup tables, at 1/4 of the ARC), you get a storm of extra IOPS as ZFS keeps re-reading the dedup tables from disk instead.
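
If you want to see how bad it is on a given box, something like this shows the DDT's in-core footprint and how much ARC metadata headroom you actually have (just a sketch; substitute your own pool name for "tank"):

  # per-pool dedup table: entry count, on-disk and in-core sizes
  zpool status -D tank

  # current ARC metadata usage vs. arc_meta_limit (run as root)
  echo ::arc | mdb -k | egrep 'arc_meta_(used|limit)'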

I had a similar system, an AMD64 3000 (I think, or a 3200) with 2GB, and when I bumped it up to 8GB the write speed improved dramatically.

* There is a proper calculation you can go through to work out the dedup table's RAM requirements, but the rough rule of thumb is 2GB of RAM for every 1TB of pool data (I'd say 4GB or more if it's a live/nearline system rather than a backup/archive one), and you'll likely still need to tweak arc_meta_limit.
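
Roughly, the sums behind that go like this (a sketch only; the ~320 bytes per in-core DDT entry and the 64K average block size are ballpark assumptions, not gospel):

  # simulate the dedup table for an existing pool (slow, lots of I/O):
  zdb -S tank

  # worst case, every block unique:
  #   1TB / 64K average block size   ~= 16M blocks
  #   16M entries x ~320 bytes each  ~= ~5GB of DDT per TB
  # smaller average blocks push that up fast; larger ones bring it down.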

For a larger example, I have an archive server with a 20TB RAIDZ3 pool (13x 2TB disks), 1.62x dedup, 10.8T used and 12.9T free, and 48GB of RAM, with arc_meta_limit raised to 32GB. Over gigabit Ethernet I get about 20-30MB/s; with dedup off I would normally get 50-60MB/s.
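
For reference, that tweak looks roughly like this on OpenIndiana (0x800000000 is 32GB; adjust to suit, and double-check the symbol name on your build):

  # persistent -- add to /etc/system and reboot:
  set zfs:zfs_arc_meta_limit = 0x800000000

  # or poke the running kernel with mdb (as root):
  echo arc_meta_limit/Z 0x800000000 | mdb -kw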

On 24/08/2012 18:25, Julius Roberts wrote:
> On 24 August 2012 15:03, Robbie Crash <sardonic.smi...@gmail.com> wrote:
>> Are you using compression or dedup on the FS?
> Yes, both.  We're getting about 1.5x dedup on the Backups pool, not
> sure how to calculate compression
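
Re the compression question quoted above: ZFS tracks the ratio itself, e.g. (assuming the pool really is named Backups):

  zfs get compressratio Backups
  # or per dataset:
  zfs get -r compressratio Backups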

