Hello Dan,

Thank you very much for this interesting reply.
> roughly speaking, reading through the filesystem does the least work
> possible to return the data. A scrub does the most work possible to
> check the disks (and returns none of the data).

Thanks for the clarification. That's what I had thought.

> For the OP: scrub issues low-priority IO (and the details of how much
> and how low have changed a few times along the version trail).

Is there any documentation about this, besides the source code?

> However, that prioritisation applies only within the kernel; sata disks
> don't understand the prioritisation, so once the requests are with the
> disk they can still saturate out other IOs that made it to the front
> of the kernel's queue faster.

I am not sure what you are hinting at. I initially thought of TCQ vs. NCQ when I read this, but I am not sure which feature of TCQ would allow a kind of I/O discrimination that NCQ lacks. All I know about command queueing is that it is about optimising DMA strategies and reordering the currently outstanding I/O requests so that all data is returned in the least possible time. (??)

> If you're looking for something to tune, you may want to look at
> limiting the number of concurrent IO's handed to the disk to try and
> avoid saturating the heads.

Indeed, that was what I had in mind, with the addition that I think it is also necessary to avoid saturating other components, such as the CPU.

> You also want to confirm that your disks are on an NCQ-capable
> controller (eg sata rather than cmdk) otherwise they will be severely
> limited to processing one request at a time, at least for reads if you
> have write-cache on (they will be saturated at the stop-and-wait
> channel, long before the heads).

I have two systems here: a production system on LSI SAS (mpt) controllers, and another on ICH9 (ahci). The disks are SATA-2, so the plan was that this combination would have NCQ support.
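On the tuning point: I assume the knob in question is the zfs_vdev_max_pending tunable, which caps the number of I/Os ZFS keeps outstanding per vdev. Please correct me if you meant something else. A rough sketch of how one would adjust it (the value 10 is only an illustration, not a recommendation):

```shell
# Read the current per-vdev limit from the live kernel (decimal):
echo "zfs_vdev_max_pending/D" | mdb -k

# Lower it to 10 outstanding I/Os per vdev, effective immediately:
echo "zfs_vdev_max_pending/W0t10" | mdb -kw

# To persist the change across reboots, add this line to /etc/system:
#   set zfs:zfs_vdev_max_pending = 10
```
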
On the other hand, do you know if there is a method to verify that it is actually functioning?

Best regards,

Tonmaus
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss