On May 29, 2007, at 2:59 PM, [EMAIL PROTECTED] wrote:

When sequential I/O is done to the disk directly, there is no performance
degradation at all.

All filesystems impose some overhead compared to the rate of raw disk
I/O. It's going to be hard to store data on a disk unless some kind of
filesystem is used.  All the tests that Eric and I have performed show
regressions for multiple sequential I/O streams. If you have data that
shows otherwise, please feel free to share.

[I]t does not take any additional time in ldi_strategy(),
bdev_strategy(), or mv_rw_dma_start().  In some instances it actually
takes less time. The only thing that sometimes takes additional time
is waiting for the disk I/O.

Let's be precise about what was actually observed.  Eric and I saw
increased service times for I/O on devices with NCQ enabled when
running multiple sequential I/O streams.  Everything we observed
indicated that the disk actually took longer to service requests when
many sequential I/Os were queued.
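
To make that workload concrete, here is a minimal Python sketch (my own
illustration, not part of the measurements above; the stream count,
per-stream block count, and LBA gap are all assumed values) of why several
per-file sequential streams can still reach the device queue as widely
separated requests:

# Hypothetical illustration: several sequential readers, round-robined
# into one device queue.  Each stream reads consecutive blocks from its
# own region of the disk, but consecutive queue entries end up far apart.

STREAMS = 4              # concurrent sequential readers (assumed)
BLOCKS_PER_STREAM = 8    # requests issued per stream (assumed)
REGION_GAP = 1_000_000   # LBA distance between the streams' regions (assumed)

def interleaved_queue(streams, blocks, gap):
    """Round-robin the next block of each stream, as a simple stand-in
    for how concurrent readers' requests can arrive at the drive."""
    queue = []
    for i in range(blocks):
        for s in range(streams):
            queue.append(s * gap + i)
    return queue

if __name__ == "__main__":
    q = interleaved_queue(STREAMS, BLOCKS_PER_STREAM, REGION_GAP)
    jumps = [abs(b - a) for a, b in zip(q, q[1:])]
    print("first queued LBAs:", q[:8])
    print("average LBA jump between consecutive requests:", sum(jumps) // len(jumps))

Each stream is perfectly sequential on its own, yet the drive must either
seek between regions on every request or reorder the queue, which is where
NCQ enters the picture.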

-j

It could very well be that the on-disc cache is being partitioned differently when NCQ is enabled in certain implementations. For example, with NCQ disabled, on-disc look-ahead may be enabled, netting an improvement for sequential I/O. This is just a guess, since this level of disc implementation detail is vendor-specific and generally proprietary. I would not expect the elevator sort algorithm itself to impose any performance penalty unless it were fundamentally flawed.
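
For what it's worth, here is a minimal sketch of an elevator-style (SCAN)
ordering of a pending request queue, the kind of reordering under
discussion. The head position and LBA values are made up purely for
illustration:

def elevator_order(pending, head):
    """Service requests at or above the current head position in
    ascending order, then sweep back down through the remainder."""
    up = sorted(lba for lba in pending if lba >= head)
    down = sorted((lba for lba in pending if lba < head), reverse=True)
    return up + down

if __name__ == "__main__":
    pending = [95, 180, 34, 119, 11, 123, 62, 64]
    head = 50
    print(elevator_order(pending, head))
    # -> [62, 64, 95, 119, 123, 180, 34, 11]

Done correctly, this only shortens head travel relative to servicing the
queue in arrival order, which is why a penalty would point at a flawed
implementation rather than at the sort itself.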

There's a bit of related discussion here.

I'm actually struck by the minimal gains being seen in random I/O. A few years ago, when NCQ was in prototype, I saw better than a 50% improvement in average random I/O response time with large queue depths. My gut feeling is that the issue is farther up the stack.

Bob
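
A back-of-the-envelope simulation of the queue-depth effect Bob describes
(entirely my own illustration; the LBA range, request count, and
nearest-request-first policy are assumptions, not measured drive behavior):

import random

DISK_BLOCKS = 1_000_000   # assumed LBA range
IOS = 20_000              # random requests measured per run (assumed)

def mean_seek(queue_depth, seed=1):
    """Average head travel per I/O when the drive may pick the nearest
    of queue_depth outstanding random requests."""
    rng = random.Random(seed)
    pending = [rng.randrange(DISK_BLOCKS) for _ in range(queue_depth)]
    head, total = 0, 0
    for _ in range(IOS):
        nearest = min(pending, key=lambda lba: abs(lba - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
        pending.append(rng.randrange(DISK_BLOCKS))
    return total / IOS

if __name__ == "__main__":
    for depth in (1, 4, 16, 32):
        print(f"queue depth {depth:2d}: mean seek ~{mean_seek(depth):,.0f} blocks")

Mean head travel per request falls steadily as the queue deepens, which is
consistent with the large random-I/O response-time gains described above;
if that effect is not showing up in practice, it does suggest looking
farther up the stack.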


