>
>
> At present, I do not see async write QoS as being interesting. That leaves
> sync writes and reads
> as the managed I/O. Unfortunately, with HDDs, the variance in response
> time >> queue management
> time, so the results are less useful than the case with SSDs. Control
> theory works, once again.
> For sync writes, they are often latency-sensitive and thus have the
> highest priority. Reads have
> lower priority, with prefetch reads at lower priority still.
>
>
This makes sense for the most part, and I agree that with spinning HDDs
there might be minimal benefit.  That is why I suggested that ARC/L2ARC
might be the reasonable starting place for an idea like this, since the
latencies there are orders of magnitude lower.  Perhaps what I'm really
looking for is a way to give prefetch a higher priority while the system is
below some load threshold.

>
> On a related note (maybe?) I would love to see pool-wide settings that
> control how aggressively data is added/removed from ARC, L2ARC, etc.
>
> Evictions are done on an as-needed basis. Why would you want to evict more
> than needed?
> So you could fetch it again?
>
> Prefetching can be more aggressive, but we actually see busy systems
> disabling prefetch to
> improve interactive performance. Queuing theory works, once again.
>
It's not that I want evictions to occur for no reason... only that the
rate be accelerated when there is contention.  If I recall correctly, ZFS
ships with default values that throttle how quickly the ARC/L2ARC are
populated, and the explanation I read was that those defaults were chosen
because the SSDs of 6+ years ago were not capable of the IOPS and
throughput that today's drives are.
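
If I'm remembering the right knobs, the throttle I mean is the
l2arc_write_max / l2arc_write_boost pair of tunables (I could have the
names slightly wrong).  On illumos I assume something like this would show
the current values, and that they could be raised in /etc/system -- the
example values below are just a guess, not a recommendation:

    # print the current L2ARC feed limits (bytes written per feed interval)
    echo "l2arc_write_max/E" | mdb -k
    echo "l2arc_write_boost/E" | mdb -k

    * /etc/system entries to raise them at boot (values are a guess)
    set zfs:l2arc_write_max = 0x4000000
    set zfs:l2arc_write_boost = 0x8000000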

I know that ZFS has a prefetch capability, but I have seen fairly little
written about it.  Are there any good references you can point me to so I
can better understand it?  In particular, I would like to see some kind of
measurement on my systems showing how often prefetch is actually used.
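
For what it's worth, I have been assuming the zfetchstats kstats are the
place to look for that (hit vs. miss counters for the file-level
prefetcher), though the exact statistic names here are my guess and I may
be misreading them:

    # dump the prefetch counters once
    kstat -p zfs:0:zfetchstats

    # watch hits vs. misses, sampled every 10 seconds
    kstat -p zfs:0:zfetchstats:hits zfs:0:zfetchstats:misses 10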


>  Something that would accelerate the warming of a cold pool of storage or
> be more aggressive in adding/removing cached data on a volatile dataset
> (e.g. where Virtual Machines are turned on/off frequently).  I have heard
> that some of these defaults might be changed in some future release of
> Illumos, but haven't seen any specifics saying that the idea is nearing
> fruition in release XYZ.
>
> It is easy to warm data (dd), even to put it into MFU (dd + dd). For best
> performance with
> VMs, MFU works extremely well, especially with clones.
>

I'm unclear on the best way to warm data... do you mean simply `dd
if=/volumes/myvol/data of=/dev/null`?  I have always been under the
impression that the ARC/L2ARC have rate limits on how much data can be
added to the cache per interval (I can't remember the interval).  Is this
not the case?  If there is some rate limiting in place, dd-ing the data as
in my example above would not necessarily cache all of it... it might take
several iterations to populate the cache, correct?
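
Just to check my understanding of the dd + dd suggestion, is the idea
simply something like the following, with the first pass landing the
blocks on the ARC MRU list and the second pass promoting them to MFU?
(The block size is just a guess for decent sequential throughput.)

    dd if=/volumes/myvol/data of=/dev/null bs=1M
    dd if=/volumes/myvol/data of=/dev/null bs=1M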

Forgive my naivete, but when my pool is under random load I see heavy
activity on the spinning-disk vdevs and relatively little on my L2ARC SSDs,
and I wonder how to make better use of them.  I would think that if my
L2ARC is not yet full and is showing very low IOPS/throughput/busy/wait,
ZFS should use that opportunity to populate the cache more aggressively
from the MRU or some other mechanism.
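
(For reference, this is roughly how I'm watching it; the cache device rows
sit near zero while the data vdevs are busy.  'tank' is just a placeholder
for my pool name.)

    # per-vdev I/O, including the cache (L2ARC) devices, sampled every 5 seconds
    zpool iostat -v tank 5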

Sorry to digress from the original thread!
