I'm sorry to be asking such a basic question that would seem to be easily found 
on Google, but after 30 minutes of googling and looking through this list's 
archives, I haven't found a definitive answer.

Is the L2ARC caching scheme based on files or blocks?

The reason I ask: we have several databases, each stored in a single large 
file of 500 GB or more.

So, is L2ARC doing us any good if the entire file can't be cached at once?

We're looking at buying some additional SSDs for L2ARC (as well as additional 
RAM to support the increased L2ARC size; rough math below), and I'm wondering 
whether we NEED to plan for them to be large enough to hold the entire file, 
or whether ZFS can cache the most heavily used parts of a single file.
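
For the RAM side, here's my back-of-envelope so far. Both numbers in it are 
assumptions: I'm guessing the default 128K recordsize applies, and the 
per-record header cost is a placeholder since I haven't found an 
authoritative figure for it:

    # Rough RAM cost of indexing an L2ARC. The recordsize and the
    # per-record in-core header size are assumptions, not facts.
    l2arc_bytes  = 200 * 2**30   # hypothetical 200 GB of SSD
    recordsize   = 128 * 2**10   # 128K default recordsize (assumed)
    header_bytes = 200           # placeholder per-record header cost

    records = l2arc_bytes // recordsize
    print("records indexed: %d" % records)
    print("RAM for headers: ~%.0f MB" % (records * header_bytes / 2**20))

If the headers really are in that ballpark, 200 GB of L2ARC at 128K records 
would want a few hundred MB of RAM just for the index, which is part of why 
we're budgeting RAM alongside the SSDs.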

After watching arcstat (Mike Harsch's updated version) and arc_summary, I'm 
still not sure what to make of the numbers. The L2ARC (14 GB) rarely reaches 
double digits in %hit, whereas the ARC (3 GB) is frequently above 80%.
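
For reference, here's the quick sketch I've been using to read those 
percentages straight from the raw arcstats kstat. The hit/miss field names 
are as I understand them from kstat output on this box:

    import subprocess

    # "kstat -p" prints one "module:instance:name:stat<TAB>value" per line.
    out = subprocess.check_output(["kstat", "-p", "zfs:0:arcstats"]).decode()

    stats = {}
    for line in out.splitlines():
        key, value = line.split("\t")
        stats[key.split(":")[-1]] = value

    def pct(hits, misses):
        h, m = int(stats[hits]), int(stats[misses])
        return 100.0 * h / (h + m) if h + m else 0.0

    # Lifetime (since pool import) hit rates, ARC vs. L2ARC.
    print("ARC   %%hit: %.1f" % pct("hits", "misses"))
    print("L2ARC %%hit: %.1f" % pct("l2_hits", "l2_misses"))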


TIA
matt