manoj nayak wrote:
>
> ----- Original Message ----- From: "Richard Elling" 
> <[EMAIL PROTECTED]>
> To: "manoj nayak" <[EMAIL PROTECTED]>
> Cc: <zfs-discuss@opensolaris.org>
> Sent: Wednesday, January 23, 2008 7:20 AM
> Subject: Re: [zfs-discuss] ZFS vq_max_pending value ?
>
>
>> manoj nayak wrote:
>>>
>>>> Manoj Nayak wrote:
>>>>> Hi All.
>>>>>
>>>>> The ZFS documentation says ZFS schedules its I/O in such a way 
>>>>> that it manages to saturate a single disk's bandwidth using enough 
>>>>> concurrent 128K I/Os.
>>>>> The number of concurrent I/Os is decided by vq_max_pending. The 
>>>>> default value for vq_max_pending is 35.
>>>>>
>>>>> We have created a 4-disk raid-z group inside a ZFS pool on a 
>>>>> Thumper. The ZFS record size is set to 128K. When we read/write a 
>>>>> 128K record, it issues a
>>>>> 128K/3 I/O to each of the 3 data disks in the 4-disk raid-z group.
>>>>>
>>>>
>>>> Yes, this is how it works for a read without errors.  For a write, you
>>>> should see 4 writes, each 128KBytes/3.  Writes may also be
>>>> coalesced, so you may see larger physical writes.
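>>>>
>>>> To make that concrete, here is the rough arithmetic for one 128 KB
>>>> record (a sketch; actual column sizes get rounded up to sector
>>>> boundaries):
>>>>
>>>>    128 KB / 3 data disks  = ~42.7 KB per data column
>>>>    + 1 parity column of the same size
>>>>    => 4 physical writes of ~43 KB each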
>>>>
>>>>> We need to saturate the bandwidth of all three data disks in the 
>>>>> raidz group. Is it required to set the vq_max_pending value to 
>>>>> 35*3 = 105?
>>>>>
>>>>
>>>> No.  vq_max_pending applies to each vdev.
>>>
>>> A 4-disk raidz group issues a 128K/3 = ~42.7K I/O to each individual 
>>> data disk. If 35 concurrent 128K I/Os are enough to saturate a disk 
>>> (vdev), then 35*3 = 105 concurrent ~43K I/Os would be required to 
>>> saturate the same disk.
>>
>> ZFS doesn't know anything about disk saturation.  It will send
>> up to vq_max_pending  I/O requests per vdev (usually a vdev is a
>> disk). It will try to keep vq_max_pending I/O requests queued to
>> the vdev.
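>>
>> If you want to inspect or change that limit, the global tunable
>> behind vq_max_pending is zfs_vdev_max_pending (assuming the
>> OpenSolaris vdev_queue.c of this vintage). A minimal sketch with
>> mdb:
>>
>>    # echo zfs_vdev_max_pending/D | mdb -k       (print the current value)
>>    # echo zfs_vdev_max_pending/W0t10 | mdb -kw  (set it to 10)
>>
>> or, to make it persistent across reboots, add
>> "set zfs:zfs_vdev_max_pending = 10" to /etc/system.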
>
> If I can see the "avg pending I/Os" hitting my vq_max_pending limit, 
> then raising the limit would be a good thing. I think it's due to the
> many ~43K read I/Os to the individual disks in the 4-disk raidz group.

You're dealing with a queue here.  iostat's average pending I/Os represents
the queue depth.   Some devices can't handle a large queue.  In any
case, queuing theory applies.
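
To watch the queue yourself, iostat's extended output is the usual
tool; a minimal example:

   # iostat -xnz 1

"wait" is the average number of transactions queued in the host and
"actv" the average number outstanding at the device.  If actv sits
pinned at your vq_max_pending value, ZFS is keeping the device queue
full.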

Note that for reads, the disk will likely have a track cache, so it is
not a good assumption that a read I/O will require a media access.
 -- richard
