On 08/24/2012 12:25 PM, Julius Roberts wrote:
> On 24 August 2012 15:03, Robbie Crash <sardonic.smi...@gmail.com> wrote:
>> Are you using compression or dedup on the FS?
> 
> Yes, both.  We're getting about 1.5x dedup on the Backups pool,

So let me get this straight: you've got a 2TB dataset with dedup and
compression on a machine with 1GB of RAM, and you're complaining about
poor performance? This is *expected* behavior. Your DDT simply doesn't
fit into RAM, so the machine has to fetch entries from disk on
practically every write to the pool. That means the pool gets hammered
by random-read loads as the kernel re-reads portions of the DDT while
doing dedup.
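
If you want to see how big the table actually is, zdb can report it. A
rough check (this assumes your pool is really named Backups and that
you can run zdb as root):

# zdb -DD Backups

The "in core" sizes in the DDT summary are what has to compete for
your 1GB of RAM.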

See http://constantin.glez.de/blog/2011/07/zfs-dedupe-or-not-dedupe for
a guide on RAM sizing when doing dedup.
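
Back of the envelope, using the often-quoted rule of thumb of roughly
320 bytes of RAM per unique block and assuming a 128K average block
size:

  2TB / 128KB        ~= 16 million unique blocks
  16M blocks * 320B  ~= 5GB of DDT

That's about five times the total RAM in your machine, before ZFS
caches anything else.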

> not sure how to calculate compression

# zfs get compressratio <dataset>
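
which prints something like this (the value here is made up):

NAME     PROPERTY       VALUE  SOURCE
Backups  compressratio  1.53x  -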

Cheers,
--
Saso
