Eric D. Mudama writes:
 > On Tue, Jan 20 at 21:35, Eric D. Mudama wrote:
 > > On Tue, Jan 20 at  9:04, Richard Elling wrote:
 > >>
 > >> Yes.  And I think there are many more use cases which are not
 > >> yet characterized.  What we do know is that using an SSD for
 > >> the separate ZIL log works very well for a large number of cases.
 > >> It is not clear to me that the efforts to characterize a large
 > >> number of cases is worthwhile, when we can simply throw an SSD
 > >> at the problem and solve it.
 > >>  -- richard
 > >>
 > >
 > > I think the issue is, like a previous poster discovered, there's not a
 > > lot of available data on exact performance changes of adding ZIL/L2ARC
 > > devices in a variety of workloads, so people wind up spending money
 > > and doing lots of trial and error, without clear expectations of
 > > whether their modifications are working or not.
 > 
 > Sorry for that terrible last sentence, my brain is fried right now.
 > 
 > I was trying to say that most people don't know what they're going to
 > get out of an SSD or other ZIL/L2ARC device ahead of time, since it
 > varies so much by workload, configuration, etc. and it's an expensive
 > problem to solve through trial and error, since these
 > performance-improving devices are many times more expensive than the
 > raw SAS/SATA devices in the main pool.
 > 

I agree with you on the L2ARC front, but not on the SSD for
the ZIL. We clearly expect a 10X gain for lightly threaded
workloads, and that's a big satisfier, because not everything
happens with a large amount of concurrency, and some
high-value tasks do not.
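For reference, attaching an SSD as a separate ZIL is a one-line
operation. A minimal sketch, assuming a pool named "tank" and an SSD at
"c3t0d0" (both names are placeholders; substitute your own):

```shell
# Add the SSD as a separate intent log (slog) device to the pool.
# "tank" and "c3t0d0" are hypothetical names for illustration.
zpool add tank log c3t0d0

# Confirm the log vdev now appears in the pool layout.
zpool status tank
```

The gain shows up mainly on synchronous writes (NFS, databases,
fsync-heavy applications), which is why lightly threaded synchronous
workloads see the largest improvement.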

On the L2ARC side, the benefits are less direct because of
the presence of the L1 ARC. The gains, if present, will be of
a similar nature, with an 8-10X gain for workloads that are
lightly threaded and served from the L2ARC rather than from
disk. Note that it's possible to configure which (higher
business value) filesystems are allowed to install into the
L2ARC.
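A minimal sketch of both steps, again with hypothetical pool and
dataset names ("tank", "tank/db", "tank/scratch", device "c4t0d0"):

```shell
# Add the SSD as an L2ARC (cache) device.
zpool add tank cache c4t0d0

# Restrict which filesystems may install blocks into the L2ARC:
# let the high-value database dataset use it fully, and exclude
# the low-value scratch dataset.
zfs set secondarycache=all tank/db
zfs set secondarycache=none tank/scratch
```

The `secondarycache` property also accepts `metadata` if you want only
a dataset's metadata cached in the L2ARC.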

One quick-and-dirty way to evaluate whether the L2ARC will be
effective in your environment is to consider whether the last
X GB of added memory had a positive impact on your
performance metrics (does nailing down memory reduce
performance?). If so, then on the graph of performance versus
caching you are still on a positive slope, and the L2ARC is
likely to help. When the requests you care most about are
already served from caches, or when something else saturates
(e.g. total CPU), then it's time to stop.
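The heuristic above can be sketched numerically: take throughput
measurements at two cache/memory sizes and check whether the last
increment still improved things by more than a small threshold. The
sample figures below are invented for illustration:

```shell
#!/bin/sh
# Sketch of the "last X GB" test: given ops/sec measured before and
# after the most recent memory addition, report whether we are still
# on the positive slope of the performance-vs-caching curve.
# All numbers are hypothetical sample data.
ops_before=5900    # ops/sec with the smaller cache
ops_after=6400     # ops/sec after adding the last X GB

# Positive slope if the fractional gain exceeds 2%.
awk -v a="$ops_before" -v b="$ops_after" 'BEGIN {
    gain = (b - a) / a;
    if (gain > 0.02)
        print "still on positive slope: L2ARC likely to help";
    else
        print "curve is flat: more cache unlikely to help";
}'
```

With the sample numbers the gain is about 8%, so the script reports
that more cache is likely to help; rerun it with your own measurements
as you grow memory.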

-r



 > -- 
 > Eric D. Mudama
 > edmud...@mail.bounceswoosh.org
 > 
 > _______________________________________________
 > zfs-discuss mailing list
 > zfs-discuss@opensolaris.org
 > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

