On 05/01/2010 18:49, Richard Elling wrote:
On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote:

The problem is that while RAID-Z is really good for some workloads, it is really bad for others.
Sometimes an L2ARC can effectively mitigate the problem, but for some workloads it won't, because the working set is simply too large. In such environments RAID-Z2 offers much worse performance than a similarly configured NetApp (RAID-DP, same number of disk drives). If ZFS provided another RAID-5/RAID-6-style protection with different characteristics, so that writing to a pool would be slower but reading from it would be much faster (comparable to RAID-DP), some customers would be very happy. Then perhaps a new kind of cache device would be needed to buffer writes to NV storage and make them faster (as "HW" arrays have been doing for years).
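To put rough numbers on the read-side gap being described here: a minimal sketch, assuming 7200 rpm disks at ~150 random-read IOPS each, 12 disks per group, and a purely random, cache-miss workload (all figures are illustrative, not measured):

    # RAID-Z stripes every record across the whole vdev and must read
    # the full record to verify its checksum, so each small random read
    # occupies all data spindles; the vdev delivers roughly one disk's
    # worth of IOPS. RAID-DP stores each block on a single disk, so
    # independent small reads hit the data spindles in parallel.
    DISK_IOPS = 150            # assumed per-spindle random-read IOPS
    DISKS = 12                 # assumed disks per group
    PARITY = 2                 # RAID-Z2 and RAID-DP both use two parity disks

    raidz2_iops = DISK_IOPS                      # ~one spindle per vdev
    raiddp_iops = DISK_IOPS * (DISKS - PARITY)   # data spindles in parallel

    print(f"RAID-Z2 vdev : ~{raidz2_iops} random-read IOPS")    # ~150
    print(f"RAID-DP group: ~{raiddp_iops} random-read IOPS")    # ~1500

Under those assumptions the two layouts are an order of magnitude apart on small random reads with the same spindle count, even though both survive two disk failures.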

This still does not address the record checksum: ZFS has to read the
whole record to verify it. This is only a problem for small, random
read workloads, which means L2ARC is a good solution. If the L2ARC is
a set of HDDs, then you could gain some advantage, but IMHO HDDs and
good performance do not belong in the same sentence anymore.
Game over -- SSDs win.
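For reference, cache devices are attached with the standard zpool syntax; the pool and device names below are placeholders:

    zpool add tank cache c4t0d0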


As I wrote, sometimes the working set is so big that, L2ARC or not, there is virtually no difference, and it is not practical to deploy an L2ARC several TBs in size or bigger. For such a workload RAID-DP behaves much better (many small random reads, not that many writes).
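Some arithmetic behind "not practical": a hedged sketch, assuming a 5 TB hot working set and 160 GB cache SSDs (both figures invented for illustration):

    # How many cache SSDs it would take to hold the whole working set.
    working_set_tb = 5.0        # assumed hot-data footprint
    ssd_size_gb = 160.0         # assumed capacity of one L2ARC SSD

    ssds_needed = working_set_tb * 1024 / ssd_size_gb
    print(f"cache SSDs needed: ~{ssds_needed:.0f}")   # ~32

    # A few dozen cache devices per pool is rarely practical, which is
    # the point: without a very high hit rate the spindles still see
    # most of the small random reads.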

