> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Perhaps I need to specify some usecases more clearly:
Actually, I'm not sure you do need to specify use cases more clearly,
because the idea is obviously awesome. The main prob…
On Thu, 12 Jan 2012, Edward Ned Harvey wrote:
Suppose you have a 1G file open, and a snapshot of this file is on disk from
a previous point in time.
for (long i = 0; i < 1000000000000L; i++) {      /* one trillion iterations */
    lseek(fd, random() % (1L << 30), SEEK_SET);  /* random offset in [0, 1G) */
    write(fd, buf, 4096);                        /* overwrite 4k */
}
Something like this would quickly try to wri…
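To make the point concrete, here is a toy Python model (illustrative only, not ZFS code) of what that write pattern does under copy-on-write: every 4k overwrite of a snapshotted block allocates a fresh block, so the live file quickly diverges from the snapshot and the snapshot pins the old space. The constants and the 500,000-write count are arbitrary choices for the sketch.

```python
import random

BLOCK = 4096
FILE_SIZE = 1 << 30            # 1 GiB file
NBLOCKS = FILE_SIZE // BLOCK   # 262,144 blocks of 4k

# The snapshot pins every original block; the live file starts identical.
diverged = set()               # blocks rewritten since the snapshot

random.seed(42)
for _ in range(500_000):       # far fewer than a trillion writes
    offset = random.randrange(FILE_SIZE)
    diverged.add(offset // BLOCK)   # COW: the overwrite lands in a new block

extra = len(diverged) * BLOCK  # space held beyond the live file's size
print(f"{len(diverged)} of {NBLOCKS} blocks diverged "
      f"({extra / FILE_SIZE:.0%} extra space pinned by the snapshot)")
```

Even this short run touches the large majority of the file's blocks, which is why random rewrites against a snapshotted file consume new space so fast.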
I'm sorry to be asking such a basic question that would seem to be easily found
on Google, but after 30 minutes of googling and looking through this list's
archives, I haven't found a definitive answer.
Is the L2ARC caching scheme based on files or blocks?
The reason I ask: We have several da…
mattba...@gmail.com said:
> We're looking at buying some additional SSDs for L2ARC (as well as
> additional RAM to support the increased L2ARC size), and I'm wondering if we
> NEED to plan for them to be large enough to hold the entire file or if ZFS
> can cache the most heavily used parts of a sin…