Michael van Elst wrote:
> joel.bertr...@systella.fr (BERTRAND Joël) writes:
> 
>>      fcfs seems to be a trivial FIFO queue. I have found little
>> information about the priocscan strategy. But what is the difference
>> between disksort and priocscan? And between priocscan and readprio?
>> Does priocscan work for both reads and writes, and readprio only for
>> reads?
> 

        Thanks for your answer.

> fcfs       is a simple FIFO
> 
> disksort   sorts requests by block address
> 
>   The queue is executed in ascending order (one-way).
>   When it reaches the end, it continues at the start
>   of the queue.
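
        If I understand correctly, disksort is a one-way elevator
(C-SCAN). Here is a small C model I wrote to check my understanding;
the structures and names are invented for illustration, it is not the
actual kernel code:

#include <stdio.h>

struct req {
	unsigned long blkno;		/* block address of the request */
	struct req *next;
};

/* Insert a request, keeping the queue sorted by block address. */
static void
enqueue_sorted(struct req **head, struct req *r)
{
	struct req **pp = head;

	while (*pp != NULL && (*pp)->blkno < r->blkno)
		pp = &(*pp)->next;
	r->next = *pp;
	*pp = r;
}

/*
 * Serve the next request at or beyond the current position; when the
 * end of the queue is reached, wrap around to the lowest block
 * address (the "one-way" part).
 */
static struct req *
dequeue_oneway(struct req **head, unsigned long pos)
{
	struct req **pp = head;
	struct req *r;

	while (*pp != NULL && (*pp)->blkno < pos)
		pp = &(*pp)->next;
	if (*pp == NULL)
		pp = head;		/* wrap around */
	if ((r = *pp) != NULL)
		*pp = r->next;
	return r;
}

int
main(void)
{
	struct req reqs[] = {
		{ 700, NULL }, { 100, NULL }, { 400, NULL }, { 50, NULL },
	};
	struct req *q = NULL, *r;
	unsigned long pos = 300;	/* pretend the head is at block 300 */

	for (int i = 0; i < 4; i++)
		enqueue_sorted(&q, &reqs[i]);

	/* Expected order from 300: 400, 700, then wrap: 50, 100. */
	while ((r = dequeue_oneway(&q, pos)) != NULL) {
		printf("service block %lu\n", r->blkno);
		pos = r->blkno;
	}
	return 0;
}
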
> 
> 
> readprio   has two priorities
>   1. reads are queued in FIFO order
>   2. writes are sorted by block address like disksort.
> 
>   For reads there is a burst limit of 48 I/O requests, then
>   it will allow up to 16 write requests.
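
        So reads always win until they have used their burst, then the
queued writes get a turn. A minimal sketch of that accounting, assuming
I read the description right (only the counters are modelled; the
actual queues, including the disksort-style sorting of writes, are
elided, and the names are invented):

#include <stdbool.h>
#include <stdio.h>

#define READ_BURST	48	/* reads served before writes get a turn */
#define WRITE_BURST	16	/* writes then allowed through */

struct sched {
	int nreads, nwrites;	/* pending requests (queues elided) */
	int read_run;		/* reads served in the current burst */
	int write_run;		/* writes served in the current turn */
};

/* Should the next request come from the (FIFO) read queue? */
static bool
next_is_read(struct sched *s)
{
	if (s->nreads == 0 || s->nwrites == 0)
		return s->nreads > 0;
	/* Both queues non-empty: reads win until their burst is used. */
	return s->read_run < READ_BURST;
}

int
main(void)
{
	struct sched s = { 100, 40, 0, 0 };

	while (s.nreads > 0 || s.nwrites > 0) {
		if (next_is_read(&s)) {
			s.nreads--;
			s.read_run++;
			s.write_run = 0;
			putchar('R');
		} else {
			s.nwrites--;
			s.write_run++;
			putchar('W');
			if (s.write_run >= WRITE_BURST)
				s.read_run = s.write_run = 0;
		}
	}
	putchar('\n');
	return 0;
}
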
> 
> 
> priocscan  has three priorities
>   1. "time critical"
>   2. "time limited" (the default)
>   3. "noncritical"
> 
>   the priority is chosen by the client. E.g. the filesystem
>   will put synchronous metadata operations into "time critical",
>   most other I/O into "time limited" and asynchronous data
>   writes into "noncritical".
> 
>   Again each priority has a burst limit so that a queue cannot
>   starve a lower priority queue.
>   - "time critical" allows a burst of 4 I/Os,
>   - "time limited" allows a burst of 16 I/Os,
>   - "noncritical" allows a burst of 64 I/Os.

        Clear.
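
        To check that I got the burst idea, here is a small model of
the three-queue selection (simplified: the per-queue sorting by block
address is left out, the names are invented, and this is not the
kernel code):

#include <stdio.h>

enum { PRIO_CRITICAL, PRIO_LIMITED, PRIO_NONCRITICAL, NPRIO };

static const int burst_limit[NPRIO] = { 4, 16, 64 };

struct pqueue {
	int pending;	/* queued requests (sorted lists elided) */
	int served;	/* requests served in the current burst */
};

/*
 * Pick the highest priority that still has pending requests and
 * burst budget left; when every non-empty queue has used up its
 * burst, start a new round.
 */
static int
pick_prio(struct pqueue q[NPRIO])
{
	for (int round = 0; round < 2; round++) {
		for (int p = 0; p < NPRIO; p++)
			if (q[p].pending > 0 && q[p].served < burst_limit[p])
				return p;
		for (int p = 0; p < NPRIO; p++)
			q[p].served = 0;	/* start a new round */
	}
	return -1;	/* all queues are empty */
}

int
main(void)
{
	struct pqueue q[NPRIO] = { { 10, 0 }, { 30, 0 }, { 80, 0 } };
	int p;

	while ((p = pick_prio(q)) != -1) {
		q[p].pending--;
		q[p].served++;
		putchar("CLN"[p]);	/* critical / limited / noncritical */
	}
	putchar('\n');
	return 0;
}

        With 10/30/80 pending requests this prints bursts of C, L and
N in turn, so even the noncritical queue is never starved.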

> 
> 
> For stacked device drivers (like dk, cgd, ccd or vnd), it's usually
> better to use a simple FIFO on the upper layers and to choose the
> buffer strategy only for the lowest layer that actually talks
> to the hardware.
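
        If I follow, and assuming dkctl(8) accepts the strategy name
as an argument to its strategy command, that would mean something like
this (wd0 standing here for the disk at the bottom of the stack):

legendre# dkctl raid0 strategy fcfs
legendre# dkctl wd0 strategy priocscan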

        But I don't understand how this applies here. For example, my
rootfs is on /dev/raid0a.

legendre# dkctl raid0a
strategy:
/dev/rraid0a: priocscan

getcache:
/dev/rraid0a: read cache enabled
/dev/rraid0a: write-back cache enabled
/dev/rraid0a: read cache enable is not changeable
/dev/rraid0a: write cache enable is changeable
/dev/rraid0a: cache parameters are not savable
/dev/rraid0a: cache Force Unit Access (FUA) supported
/dev/rraid0a: cache Disable Page Out (DPO) not supported

listwedges:
/dev/rraid0a: no wedges configured

legendre# dkctl raid0
strategy:
/dev/rraid0d: priocscan
...

        If I change the strategy on the raid0 device, all slices are
changed to the same strategy. And I cannot modify the ccd0 strategy
(nor the strategies of the wedges on this ccd0 device):

legendre# dkctl ccd0
strategy:
dkctl: /dev/rccd0: DIOCGSTRATEGY: Inappropriate ioctl for device
legendre# dkctl ccd0a
strategy:
dkctl: /dev/rccd0a: DIOCGSTRATEGY: Inappropriate ioctl for device
legendre# dkctl dk3
strategy:
dkctl: /dev/rdk3: DIOCGSTRATEGY: Inappropriate ioctl for device
legendre#

> On the other hand, the upper layers rarely queue anything, so
> the difference is just how much CPU time is wasted in processing
> an expensive strategy on multiple layers.
> 
> 
> Modern disks queue many requests themselves, so the strategy
> used by the kernel then has little meaning. Just like above,
> the strategy of the lowest layer is what counts.

        That being said, if I understand correctly, disksort or
readprio could be more efficient than priocscan.

        Regards,

        JB
