> I'm unclear on the best way to warm data... do you mean to simply `dd
> if=/volumes/myvol/data of=/dev/null`? I have always been under the
> impression that ARC/L2ARC has rate limiting on how much data can be added
> to the cache per interval (I can't remember the interval). Is this not the
> case?
>
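Roughly, yes: a `dd` of the dataset to /dev/null pulls it through the ARC, and the L2ARC feed thread then copies eviction candidates to the cache devices on a fixed interval, capped per pass. A sketch of inspecting and raising the throttle; the sysfs paths below are ZFS-on-Linux specific (on illumos the same tunables are set via /etc/system), so treat the exact paths as platform-dependent:

    # L2ARC feed throttle: bytes written per feed pass (default 8 MB)
    cat /sys/module/zfs/parameters/l2arc_write_max
    # Extra headroom while the cache device is still empty (warm-up boost)
    cat /sys/module/zfs/parameters/l2arc_write_boost
    # Feed interval in seconds (default 1)
    cat /sys/module/zfs/parameters/l2arc_feed_secs

    # Temporarily raise the fill rate while warming (64 MB per pass)
    echo $((64 * 1024 * 1024)) > /sys/module/zfs/parameters/l2arc_write_max

    # Buffers that arrived via sequential prefetch (most of what dd
    # generates) are skipped by default; allow them while warming,
    # then set back to 1 afterwards
    echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch

So a sequential read of the working set does warm the cache, but only at l2arc_write_max (plus l2arc_write_boost during warm-up) bytes per l2arc_feed_secs, and only for buffers the feed thread considers eligible.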
> At present, I do not see async write QoS as being interesting. That
> leaves sync writes and reads as the managed I/O. Unfortunately, with
> HDDs, the variance in response time >> queue management time, so the
> results are less useful than the case with SSDs. Control theory works,
> once a [...]
I don't have anything significant to add to this conversation, but wanted
to chime in that I also find the concept of a QoS-like capability very
appealing and that Jim's recent emails resonate with me. You're not alone!
I believe there are many use cases where a granular prioritization that
controls [...]
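Not the per-dataset QoS being asked for, but for anyone who wants to experiment with class-based prioritization: later OpenZFS releases expose the ZIO scheduler's per-class concurrency limits as tunables. The names below are the ZFS-on-Linux spellings and postdate this thread, so take this as an illustration rather than a QoS mechanism:

    # Concurrent I/Os issued per vdev, by class; biasing sync classes
    # over async ones is a crude, pool-wide form of prioritization
    cat /sys/module/zfs/parameters/zfs_vdev_sync_read_max_active
    cat /sys/module/zfs/parameters/zfs_vdev_sync_write_max_active
    cat /sys/module/zfs/parameters/zfs_vdev_async_read_max_active
    cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active

    # e.g. de-prioritize async writes relative to sync traffic
    echo 1 > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active

This operates on I/O classes for the whole pool, not on datasets or tenants, which is exactly the gap the QoS discussion above is about.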
Excellent, thanks to you both. I knew of both those methods and wanted
to make sure I wasn't missing something!
On Wed, Sep 26, 2012 at 11:21 AM, Dan Swartzendruber wrote:
> On 9/26/2012 11:18 AM, Matt Van Mater wrote:
>
> If the added device is slower, you will experience a slight drop in
> per-op performance; however, if your working set needs another SSD,
> it might improve your overall throughput (as the cache hit ratio will
> increase).
>
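Worth noting that cache devices can be added and removed online, so it is cheap to test whether the slower SSD helps. A sketch, with `tank` and an illumos-style device name standing in for your pool and the Crucial:

    # Add the spare SSD as an additional L2ARC device
    zpool add tank cache c4t2d0

    # If per-op latency suffers more than the hit ratio gains,
    # back it out with no harm done
    zpool remove tank c4t2d0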
Thanks for your fast reply! I think I know the answer to this question [...]
I've looked on the mailing list (the evil tuning wikis are down) and
haven't seen a reference to this seemingly simple question...
I have two OCZ Vertex 4 SSDs acting as L2ARC. I have a spare Crucial SSD
(about 1.5 years old) that isn't getting much use, and I'm curious about
adding it to the pool.
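For anyone measuring the result of such a change: `zpool iostat -v` lists each cache device separately, and the L2ARC hit/miss counters live in the arcstats kstat (illumos syntax shown; on Linux the same counters are in /proc/spl/kstat/zfs/arcstats):

    # Per-device ops/bandwidth, cache SSDs broken out, sampled every 5 s
    zpool iostat -v tank 5

    # L2ARC hit/miss counters
    kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses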