Nathan Kroenert wrote:
Hm - a ZilArc??
Or, slarc?
Or L2ArZi
I tried something sort of similar to this when fooling around,
adding different *slices* for ZIL / L2ARC, but as I'm too poor to
afford good SSDs my results were poor at best... ;)
Perfectly predictable. zilstat will show you the size of the I/Os
going to the ZIL. Latency-sensitive apps (NFS service, databases)
consistently need many fast, small sync writes. While you do not
pay a seek/rotate latency penalty on an SSD, you may pay a page
erase penalty. Among modern SSDs, the write-optimized ones use DRAM
to help eliminate the page erase penalty, but they also cost a lot
more because they have more parts. Meanwhile, the read-optimized
SSDs tend to use MLC flash and pay a large erase penalty, but can
read very fast.
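To make that workload concrete, here is a rough Python sketch (the
path, record count, and record size are made-up examples, not
anything from this thread) of the kind of small synchronous writes
an NFS server or database generates; on ZFS, each O_DSYNC write has
to be committed through the ZIL (or a dedicated slog) before the
call returns:

import os

def small_sync_writes(path="/tank/db/journal", records=1000, size=4096):
    # O_DSYNC forces each write to be stable on disk before write()
    # returns; on ZFS that means a synchronous ZIL commit per call.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    buf = b"\0" * size
    try:
        for _ in range(records):
            os.write(fd, buf)   # each small write pays ZIL commit latency
    finally:
        os.close(fd)

Run something like that against a pool with and without a fast slog
and zilstat will show the corresponding ZIL traffic.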
The L2ARC is a read cache, so we can optimize the writes to the
L2ARC by making them large, thus reducing the impact of the erase
penalty. And while data is being written to the L2ARC, it is still
in the ARC, so there is no read penalty during the L2ARC fill. The
win shows up after the data is evicted from the ARC and we can read
it faster from the L2ARC than from the main pool. For that case,
not paying the seek/rotate penalty is a huge win for SSDs.
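Roughly speaking, the read path works like this (a conceptual
sketch in Python, not the actual ZFS code; the dict/object shapes
are just for illustration):

def read_block(blk, arc, l2arc, pool):
    if blk in arc:              # hit in DRAM: fastest case
        return arc[blk]
    if blk in l2arc:            # evicted from ARC but cached on SSD:
        data = l2arc[blk]       # no seek/rotate penalty on the read
    else:
        data = pool.read(blk)   # miss everywhere: pay the disk latency
    arc[blk] = data             # repopulate the ARC on the way back
    return data

While a block is being copied into the L2ARC it is still present in
the ARC, so the first branch handles it and nobody waits on the SSD
write.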
To help you understand how this works, you might remember:
+ None of the data in the slog is in the main pool, yet.
+ All of the data in the L2ARC is in the ARC or main pool already.
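Restated as a tiny sketch (the block sets here are illustrative,
not real ZFS structures):

def check_invariants(slog_blocks, l2arc_blocks, arc_blocks, pool_blocks):
    # the slog holds sync data that has not reached the main pool yet...
    assert slog_blocks.isdisjoint(pool_blocks)
    # ...while every L2ARC block is a copy of something in the ARC or
    # the main pool, so losing the L2ARC device never loses data
    assert l2arc_blocks <= (arc_blocks | pool_blocks)

Which is also why losing an L2ARC device only costs you cache hits,
while losing a slog that still holds uncommitted sync writes can
lose data.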
-- richard