Ragnar Kjørstad wrote:
> 
> On Wed, Sep 13, 2000 at 11:22:16AM -0400, Michael T. Babcock wrote:
> > If I may ask a potentially stupid question, how can request latency be
> > anything but a factor of time?  Latency is how /long/ you (or the computer)
> > /waits/ for something.  That defines it as a function of time.
> 
> Latency is of course a factor of time, but the point is that the
> acceptable latency differs from device to device. For a slower device
> longer latency must be acceptable, and if the relationship is linear,
> then using number of requests may be a simpler and better way of doing
> it.
>

Latency is a function of time, but the units of time need not be seconds; in theory
they could be requests. In this particular case, though, measuring time in units of
requests is inappropriate.

We have a choice between having users (or device driver authors, for the defaults)
estimate how many milliseconds of latency is reasonable for a given device, or having
them estimate how many requests are guaranteed not to exceed the time they can wait
for a particular device.  I would rather take responsibility for not making my MP3
buffer unreasonably small on a floppy device that multiple processes are accessing,
because the performance effects would be disastrous, than take responsibility for
estimating how many requests will not add up to more than X milliseconds of buffer on
my hard drive.

Number of requests is irrelevant to realtime I/O scheduling; seconds are relevant to
realtime I/O scheduling.

There are two problems to be solved: fairness, and realtime I/O scheduling.  Fairness
can be handled effectively by measuring time in units of requests.  Guaranteeing
maximum latencies requires traditional time units.

Hans
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/