Manoj Nayak wrote:
>
>> Manoj Nayak wrote:
>>> Hi All.
>>>
>>> The ZFS documentation says ZFS schedules its I/O in such a way that it
>>> manages to saturate a single disk's bandwidth using enough concurrent 128K I/Os.
>>> The number of concurrent I/Os is decided by vq_max_pending. The default
>>> value for vq_max_pending is 35.
>>>
>>> We have created a 4-disk raid-z group inside a ZFS pool on a Thumper.
>>> The ZFS record size is set to 128k. When we read/write a 128K record, it
>>> issues a 128K/3 I/O to each of the 3 data disks in the 4-disk raid-z group.
>>>
>>
>> Yes, this is how it works for a read without errors.  For a write, you
>> should see 4 writes, each 128KBytes/3.  Writes may also be
>> coalesced, so you may see larger physical writes.
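
To put rough numbers on that split, here's a toy Python sketch (not ZFS
code; real raid-z column sizing rounds to device sector boundaries, so the
sizes below are approximate):

  # Simplified model of how one 128KByte record maps onto a
  # 4-disk raid-z group (3 data disks + 1 parity disk).
  RECORD_SIZE = 128 * 1024   # bytes (recordsize=128k)
  DATA_DISKS = 3             # 4-disk raid-z = 3 data + 1 parity

  chunk = RECORD_SIZE / DATA_DISKS
  print(f"per-data-disk I/O: {chunk / 1024:.1f} KBytes")       # ~42.7
  print(f"I/Os for a full-record write: {DATA_DISKS + 1}")     # data + parity
  print(f"I/Os for a healthy full-record read: {DATA_DISKS}")  # data only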
>>
>>> We need to saturate the bandwidth of all three data disks in the raid-z
>>> group. Is it required to set the vq_max_pending value to 35*3=105?
>>>
>>
>> No.  vq_max_pending applies to each vdev.
>
> A 4-disk raid-z group issues a 128k/3 = 42.6k I/O to each individual data
> disk. If 35 concurrent 128k I/Os are enough to saturate a disk (vdev),
> then 35*3 = 105 concurrent 42k I/Os will be required to saturate the
> same disk.

ZFS doesn't know anything about disk saturation.  It will send
up to vq_max_pending I/O requests per vdev (usually a vdev is a
disk), and it will try to keep vq_max_pending I/O requests queued
to each vdev.
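
Here's a toy model of that per-vdev limit (illustrative Python only, not
the ZFS vdev queue code; the constant name is borrowed from this thread):

  # Toy model: vq_max_pending is enforced per vdev, so a raid-z group
  # with 3 data disks can already have 3 * 35 = 105 I/Os outstanding
  # in total without touching the tunable.
  VQ_MAX_PENDING = 35

  class Vdev:
      def __init__(self, name):
          self.name = name
          self.in_flight = 0     # requests currently issued to the device
          self.waiting = []      # requests held back until a slot frees up

      def issue(self, io):
          if self.in_flight < VQ_MAX_PENDING:
              self.in_flight += 1
          else:
              self.waiting.append(io)

  vdevs = [Vdev(f"disk{i}") for i in range(3)]   # the 3 data disks
  for n in range(300):                           # 100 records -> 300 child I/Os
      vdevs[n % 3].issue(n)

  for v in vdevs:
      print(v.name, "in flight:", v.in_flight, "waiting:", len(v.waiting))
  # Each disk independently hits 35 in flight -> 105 outstanding in total.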

For writes, you should see them become coalesced, so rather than
sending three 42.6kByte write requests to a vdev, you might see one
128kByte write request.
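
A hand-wavy sketch of what that coalescing looks like (the offsets and the
merge rule below are made up for illustration; the real aggregation logic
in the vdev queue is more involved):

  # Adjacent writes sitting in a vdev's queue get merged into one
  # larger request before being issued to the device.
  pending = [                  # (offset, size) of queued child writes
      (0,      43_691),        # ~42.7 KByte pieces of one 128 KByte record
      (43_691, 43_691),
      (87_382, 43_690),
  ]

  coalesced = []
  for off, size in sorted(pending):
      if coalesced and coalesced[-1][0] + coalesced[-1][1] == off:
          prev_off, prev_size = coalesced[-1]
          coalesced[-1] = (prev_off, prev_size + size)   # contiguous: merge
      else:
          coalesced.append((off, size))

  print(coalesced)   # [(0, 131072)] -- one 128 KByte write instead of three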

In other words, ZFS has an I/O scheduler which is responsible
for sending I/O requests to vdevs.
 -- richard
