> > > SATA disks don't understand the prioritisation, so

> Er, the point was exactly that there is no discrimination, once the
> request is handed to the disk.

So, are you saying that SCSI drives do understand prioritisation (i.e. TCQ 
honours the schedule from ZFS) while SATA/NCQ drives don't, or does it just 
boil down to what Richard told us: SATA disks being too slow?

> If the internal-to-disk queue is enough to keep the heads saturated /
> seek bound, then a new high-priority-in-the-kernel request will get to
> the disk sooner, but may languish once there.

Thanks. That makes sense to me.


> You can shorten the number of outstanding IO's per vdev for the pool
> overall, or preferably the number scrub will generate (to avoid
> penalising all IO).

That sounds like a meaningful approach to addressing the bottlenecks caused by 
zpool scrub.

> The tunables for each of these should be found readily, probably in
> the Evil Tuning Guide.

I think I should try to digest the Evil Tuning Guide at some point with respect 
to this topic. Thanks for pointing me in the right direction. Maybe what you 
suggested above (reducing the number of I/Os issued by scrub) is already 
possible? If not, I think it would be a meaningful improvement to request.
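For what it's worth, the Evil Tuning Guide describes this kind of queue-depth 
tuning via /etc/system. A rough sketch of what I have in mind (the tunable 
names zfs_vdev_max_pending and zfs_scrub_limit are my assumption from the 
OpenSolaris sources and may not exist on every build, so please verify against 
your release before trying this):

```
* /etc/system fragment -- verify tunable names against your build first.
*
* Cap the number of outstanding I/Os queued to each vdev
* (applies to all I/O, not just scrub):
set zfs:zfs_vdev_max_pending = 10

* Cap the number of concurrent scrub I/Os per top-level vdev,
* if this tunable exists in your build:
set zfs:zfs_scrub_limit = 5
```

As I understand it, the same variables can usually also be changed on a live 
system with mdb -kw, which would avoid a reboot while experimenting.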

> Disks with write cache effectively do this [command queueing] for
> writes, by pretending they complete immediately, but reads would block
> the channel until satisfied.  (This is all for ATA which lacked this,
> before NCQ.  SCSI has had these capabilities for a long time.)

As scrub is about reads, are you saying that this is still a problem with 
SATA/NCQ drives, or not? I am not sure what you mean at this point.

> > > limiting the number of concurrent IO's handed to the disk to try
> > > and avoid saturating the heads.
> > 
> > Indeed, that was what I had in mind. With the addition that I think
> > it is as well necessary to avoid saturating other components, such
> > as CPU.
> 
> Less important, since prioritisation can be applied there too, but
> potentially also an issue.  Perhaps you want to keep the cpu fan
> speed/noise down for a home server, even if the scrub runs longer.

Well, the only thing that was really remarkable while scrubbing was CPU load 
constantly near 100%. I still think that is at least contributing to the 
collapse of concurrent payload, since it affects all the services that run in 
the kernel: CIFS, ZFS, iSCSI... and above all concurrent load within ZFS 
itself. That means an implicit trade-off whenever a file is being served over 
CIFS, for example.

> AHCI should be fine.  In practice if you see actv > 1 (with a small
> margin for sampling error) then ncq is working.

Ok, and how does that apply to mpt? My assumption that mpt supports NCQ is 
mainly based on the marketing information from LSI that these controllers offer 
NCQ support with SATA drives. With which tool do I get at this "actv" 
parameter?
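To answer my own question partially: if I read the man pages correctly, actv 
should be the column of that name in iostat's extended device statistics. A 
sketch of the invocation (the interpretation of the column is my reading and 
should be double-checked):

```
# Extended per-device statistics, repeated at one-second intervals.
# "actv" is the average number of commands actively being serviced
# by the device; a sustained actv > 1 on a SATA disk would suggest
# that NCQ is actually in effect on that path.
iostat -xn 1
```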

Regards,

Tonmaus
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss