2011/6/1 Edward Ned Harvey <opensolarisisdeadlongliveopensola...@nedharvey.com>:
> (2)  The above is pretty much the best you can do, if your server is going
> to be a "normal" server, handling both reads & writes.  Because the data and
> the metadata are both stored in the ARC, the data has a tendency to push
> the metadata out.  But in a special use case - suppose you only care about
> write performance and saving disk space.  For example, suppose you're the
> destination server of a backup policy.  You only do writes, so you don't
> care about keeping data in cache.  You want to enable dedup to save cost on
> backup disks.  You only care about keeping metadata in ARC.  If you set
> primarycache=metadata ....  I'll go test this now.  The hypothesis is that
> my arc_meta_used should actually climb up to the arc_meta_limit before I
> start hitting any disk reads, so my write performance with and without dedup
> should be pretty much equal up to that point.  I'm sacrificing the potential
> read benefit of caching data in ARC in order to hopefully gain write
> performance - so write performance can be just as good with dedup enabled or
> disabled.  In fact, if there's much duplicate data, the dedup write
> performance in this case should be significantly better than without dedup.
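The experiment described above could be run roughly like this (the pool name
`backup` is a placeholder; the kstat names are the arcstats counters mentioned
in this thread, so treat this as a sketch, not a recipe):

```shell
# Keep only metadata in the ARC and enable dedup on the backup pool
# ("backup" is a placeholder pool name, not from the original post).
zfs set primarycache=metadata backup
zfs set dedup=on backup

# During the write workload, watch arc_meta_used climb toward
# arc_meta_limit (sampled every 5 seconds):
kstat -p zfs:0:arcstats:arc_meta_used zfs:0:arcstats:arc_meta_limit 5
```

If the hypothesis holds, dedup write throughput should stay flat until
arc_meta_used reaches arc_meta_limit and the DDT starts spilling to disk.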

I guess this is pretty much why I have primarycache=metadata and
set zfs:zfs_arc_meta_limit=0x100000000
set zfs:zfs_arc_min=0xC0000000
in /etc/system.
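For anyone not decoding the hex in their head, those two tunables work out to
4 GiB and 3 GiB respectively.  A quick sanity check (plain Python, nothing
ZFS-specific):

```python
# Decode the /etc/system tunables quoted above into human-readable sizes.
# Values are the exact ones from the post.
tunables = {
    "zfs_arc_meta_limit": 0x100000000,  # bytes
    "zfs_arc_min": 0xC0000000,          # bytes
}

GIB = 1024 ** 3
for name, value in tunables.items():
    print(f"{name} = {value:#x} = {value / GIB:g} GiB")
# zfs_arc_meta_limit = 0x100000000 = 4 GiB
# zfs_arc_min = 0xc0000000 = 3 GiB
```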

And the ARC size on this box tends to drop far below arc_min after a
few days, notwithstanding the fact that it's supposed to be a hard limit.

I call for an arc_data_max setting :)

-- 
Frank Van Damme
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means, including but not limited to telepathy
without the benevolence of the author.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss