[EMAIL PROTECTED] wrote:
> Hi richard,
> using kstat -m zfs as you recommended produces some
> interesting results in the L2 category.
>
> I can see the l2_size field increase immediately
> after doing a:
>   zpool add pool cache cache_device
> and the l2_hits value increase with each test run ...
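A minimal way to watch those counters, assuming they live in the standard
zfs:0:arcstats kstat (the usual location for ARC and L2ARC statistics):

  # print just the L2ARC statistics (l2_size, l2_hits, l2_misses, ...)
  kstat -p zfs:0:arcstats | grep l2_

  # or sample a single counter every 5 seconds
  kstat -p zfs:0:arcstats:l2_hits 5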
On 17 January 2008, Bill Moloney wrote:
> Thanks Marion and richard,
> but I've run these tests with much larger data sets
> and have never had this kind of problem when no
> cache device was involved.
>
> In fact, if I remove the SSD cache device from my
> pool and run the tests, they seem to run with no issues
> (except for some reduced performance ...)
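For reference, pulling the cache device back out of the pool is a one-liner;
pool-name and device-name below are placeholders, as in the original post:

  # cache devices can be removed from a pool at any time
  zpool remove pool-name device-name

  # confirm the cache vdev is gone, and watch per-device IO during a run
  zpool status pool-name
  zpool iostat -v pool-name 5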
Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>> I have a set of threads each doing random reads to about 25% of its own,
>> previously written, large file ... a test run will read in about 20GB on a
>> server with 2GB of RAM
>> . . .
>> after several successful runs of my test application, some run of my test
>> will be ...
Bill Moloney wrote:
I'm using an FC flash drive as a cache device to one of my pools:

  zpool add pool-name cache device-name

and I'm running random IO tests to assess performance on an
snv_78 x86 system.

I have a set of threads each doing random reads to about 25% of
its own, previously written, large file ...
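To make that workload concrete, here is a rough stand-in for one reader. The
real test is multi-threaded with one large, previously written file per
thread; a single dd loop over a random 25% slice of a file exercises the same
access pattern. The path and sizes below are made up for illustration:

  #!/usr/bin/ksh
  # hypothetical sketch of one reader's access pattern
  FILE=/pool-name/testfile        # made-up path to one large, pre-written file
  BLKS=262144                     # file size in 8K blocks (2GB in this sketch)
  REGION=$((BLKS / 4))            # confine reads to ~25% of the file
  i=0
  while [ $i -lt 10000 ]; do
      off=$(( (RANDOM * RANDOM) % REGION ))   # random 8K-aligned offset
      dd if=$FILE of=/dev/null bs=8k iseek=$off count=1 2>/dev/null
      i=$((i + 1))
  done

Running several copies of a loop like this against separate files, with the
working set well beyond the 2GB of RAM, should keep the L2ARC device busy in
the way the test description suggests.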