This is snv_128 x86.
> ::arc
hits = 39811943
misses = 630634
demand_data_hits = 29398113
demand_data_misses = 490754
demand_metadata_hits = 10413660
demand_metadata_misses = 133461
prefetch_data_hits = 0
[rest of output truncated]
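For what it's worth, the demand hit rates computed from those counters look healthy: demand_data is 29398113 / (29398113 + 490754) ≈ 98.4%, and demand_metadata is 10413660 / (10413660 + 133461) ≈ 98.7%, so the ARC itself doesn't look starved. The same counters can also be pulled without mdb (a sketch, assuming the standard zfs:0:arcstats kstat on snv_128):

# kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
# kstat -p 'zfs:0:arcstats:demand_*'    # glob pattern for the demand_* counters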
You are both right. More below...
On Sep 10, 2010, at 2:06 PM, Piotr Jasiukajtis wrote:
> I don't have any errors from fmdump or syslog.
> The machine is a SUN FIRE X4275; I don't use the mpt or lsi drivers.
> It could be a bug in a driver, since I see this on two identical machines.
>
> On Fri, Sep 10, 2010 at 9:51 PM, Carson Gaspar wrote:
I don't have any errors from fmdump or syslog.
The machine is a SUN FIRE X4275; I don't use the mpt or lsi drivers.
It could be a bug in a driver, since I see this on two identical machines.
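To be specific, by "no errors" I mean nothing shows up from the usual checks (a sketch of the commands I have in mind):

# fmdump                      # diagnosed FMA faults, if any
# fmdump -eV | tail -100      # raw FMA error telemetry (ereports), verbose
# egrep -i 'scsi|sata|timeout|retry' /var/adm/messages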
On Fri, Sep 10, 2010 at 9:51 PM, Carson Gaspar wrote:
> On 9/10/10 4:16 PM, Piotr Jasiukajtis wrote:
>>
>> Ok, now I know it's not related to I/O performance, but to ZFS itself.
On 9/10/10 4:16 PM, Piotr Jasiukajtis wrote:
Ok, now I know it's not related to I/O performance, but to ZFS itself.
At some point, all 3 pools were locked up like this:
                 extended device statistics       ---- errors ----
   r/s    w/s   kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  s/w  h/w  trn  tot  device
Ok, now I know it's not related to I/O performance, but to ZFS itself.
At some point, all 3 pools were locked up like this:
                 extended device statistics       ---- errors ----
   r/s    w/s   kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  s/w  h/w  trn  tot  device
   0.0  [rest of output truncated]
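For reference, that layout matches iostat -xne; a sketch of how I'd keep watching for the hang to recur (the sampling intervals are just examples):

# iostat -xne 1     # extended stats + per-device error counters, 1s samples
# iostat -xnze 5    # same, but -z suppresses lines that are all zeros

When the pools wedge, the usual sign is actv stuck nonzero while kr/s and kw/s sit at 0 and %b pegs at 100.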
On 06/09/2010 10:56, Piotr Jasiukajtis wrote:
Hi,
I am looking for ideas on how to check whether the machine was under
high I/O pressure before it panicked (the panic was triggered manually by an NMI).
By I/O I mean the disks and the ZFS stack.
Do you believe ZFS was a key component in the I/O pressure?
I've CC'd zfs-discuss.
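If the NMI-forced dump was saved by savecore, most of this can be answered post-mortem from the dump itself. A sketch, assuming the files landed in the default /var/crash/<hostname> directory:

# cd /var/crash/myhost          # "myhost" is a placeholder
# savecore -f vmdump.0          # only needed if just the compressed dump exists
# mdb unix.0 vmcore.0
> ::arc                         # ARC state at the time of the panic
> ::spa -v                      # pool/vdev state and health
> ::stacks -m zfs               # kernel thread stacks filtered to the zfs module

Lots of threads parked in zio_wait() or txg_wait_open() would point at the ZFS pipeline rather than the disks themselves.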