On May 29, 2007, at 22:59, [EMAIL PROTECTED] wrote:

>> When sequential I/O is done to the disk directly there is no performance
>> degradation at all.

> All filesystems impose some overhead compared to the rate of raw disk
> I/O. It's going to be hard to store data on a disk unless some kind of
> filesystem is used.  All the tests that Eric and I have performed show
> regressions for multiple sequential I/O streams. If you have data that
> shows otherwise, please feel free to share.

>> [I]t does not take any additional time in ldi_strategy(),
>> bdev_strategy(), mv_rw_dma_start().  In some instances it actually
>> takes less time. The only thing that sometimes takes additional time
>> is waiting for the disk I/O.

> Let's be precise about what was actually observed.  Eric and I saw
> increased I/O service times on devices with NCQ enabled when running
> multiple sequential I/O streams.  Everything we observed indicated
> that the disk actually took longer to service requests when many
> sequential I/Os were queued.

> -j



I just posted a comment which might reconcile the positions.

It is taking longer to run each I/O because (possibly) the I/O completion interrupt is delayed until _all_ of the N queued I/Os have effectively completed. This is consistent with the data: with NCQ, each of the 32 queued I/Os takes roughly 25 times longer, yet overall performance degrades by only 10-20%. NCQ as currently implemented interferes with the staged I/O pipelining that ZFS tries to do.
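To make the arithmetic concrete, here is a toy model of that hypothesis. All the numbers in it (the 0.5 ms per-request service time, the 2 ms refill gap between batches) are assumptions for illustration, not measurements:

    # Toy model: per-I/O completion vs. interrupt coalesced over the whole batch.
    # All numbers below are assumed for illustration, not measured data.
    disk_svc_ms  = 0.5    # assumed disk service time per sequential request
    queue_depth  = 32     # NCQ queue depth
    batch_gap_ms = 2.0    # assumed pipeline stall while ZFS refills the queue

    # Per-I/O completion: each request is seen to finish after ~disk_svc_ms.
    lat_per_io = disk_svc_ms
    thr_per_io = 1000.0 / disk_svc_ms                      # IOPS

    # Coalesced completion: the interrupt fires only once the whole batch is
    # done, so every request in the batch appears to take the full batch time.
    batch_ms      = queue_depth * disk_svc_ms
    lat_coalesced = batch_ms
    thr_coalesced = 1000.0 * queue_depth / (batch_ms + batch_gap_ms)

    print("latency inflation : %.0fx"  % (lat_coalesced / lat_per_io))
    print("throughput loss   : %.0f%%" % (100.0 * (1.0 - thr_coalesced / thr_per_io)))

With these made-up numbers the model prints a 32x latency inflation but only an 11% throughput loss, the same shape as the ~25x / 10-20% figures above: per-I/O latency blows up while aggregate bandwidth barely moves.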

Is it possible to have NCQ coalesce interrupts less aggressively? I suspect that would give us the best of both worlds (raw and ZFS).

-r

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
