Peter Williams <[EMAIL PROTECTED]> writes:
>
> If the average usage rate is estimated over longer periods it will be
> lower, allowing lower limits to be used. Also, if the task's own usage
> rate estimates are used to test the limits, then the limit can be lower.
>
> If the default limits can be made sufficiently small, the temptation
> for "ordinary" applications to use this feature will disappear.
>
> I'm not an expert, but I imagine that the CPU usage rates of most RT
> tasks, taken over reasonably long time intervals, are quite low, and
> therefore the default limits could also be quite low without adversely
> affecting the programs this mechanism is meant to help.
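To make that averaging idea concrete, a rough user-space sketch might look
like the code below. The window handling, the 5% figure, and every name in
it are assumptions chosen for illustration, not taken from any posted patch;
the point is only that a longer averaging window lets a short burst of CPU
use sit comfortably under a low long-term limit.

/*
 * Sketch: accumulate a task's CPU time over a long window and compare
 * the average against a per-task limit.  All names and numbers here
 * are illustrative only.
 */
#include <stdio.h>

struct rt_usage {
        unsigned long long busy_ns;     /* CPU time consumed in the window */
        unsigned long long window_ns;   /* wall-clock span of the window */
        unsigned int limit_pct;         /* allowed long-term usage, percent */
};

/* Fold one sampling period into the window. */
static void rt_usage_add(struct rt_usage *u,
                         unsigned long long busy_ns,
                         unsigned long long period_ns)
{
        u->busy_ns += busy_ns;
        u->window_ns += period_ns;
}

/* True if the average usage over the whole window exceeds the limit. */
static int rt_usage_over_limit(const struct rt_usage *u)
{
        if (!u->window_ns)
                return 0;
        return u->busy_ns * 100 >
               (unsigned long long)u->limit_pct * u->window_ns;
}

int main(void)
{
        struct rt_usage u = { 0, 0, 5 };        /* hypothetical 5% default */

        /* A 100% burst for 10 ms ... */
        rt_usage_add(&u, 10000000ULL, 10000000ULL);
        /* ... followed by 990 ms of idling averages ~1% over the second. */
        rt_usage_add(&u, 0, 990000000ULL);

        printf("over limit: %d\n", rt_usage_over_limit(&u));
        return 0;
}

With a window that long, a task that briefly saturates the CPU still stays
under a low default limit, which is the behaviour the paragraph above argues
for.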
True for some tasks, but definitely not for all.

When a system was purchased specifically to do some realtime job, it often
makes sense to dedicate large chunks of the main processor to realtime
number crunching. Mass-produced general-purpose processors have excellent
price/performance ratios, and there is no good reason not to take advantage
of that.

People commonly run heavy Fast Fourier Transform or reverb calculations in
realtime threads, and those threads may use as much of the CPU as the
user/owner is willing to allocate. With soft realtime, it's hard to push
this reliably beyond about 70-80%, but those numbers are definitely
practical.
--
  joq
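For reference, putting that kind of number-crunching loop into a realtime
thread on a POSIX system looks roughly like the sketch below. The priority
value, the loop bound, and the processing stub are placeholders rather than
anyone's actual audio engine, and running it needs CAP_SYS_NICE or a
suitable RLIMIT_RTPRIO (link with -lpthread).

/* Sketch: run a placeholder DSP loop in a SCHED_FIFO thread. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Placeholder for the heavy per-cycle work (FFT, reverb, ...). */
static void process_audio_cycle(void)
{
        /* crunch numbers here */
}

static void *rt_worker(void *arg)
{
        (void)arg;
        for (int i = 0; i < 1000; i++)  /* a real engine loops until shutdown */
                process_audio_cycle();
        return NULL;
}

int main(void)
{
        pthread_t tid;
        pthread_attr_t attr;
        struct sched_param sp;
        int err;

        pthread_attr_init(&attr);
        /* Do not inherit the parent's SCHED_OTHER policy. */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        sp.sched_priority = sched_get_priority_min(SCHED_FIFO) + 10;   /* arbitrary */
        pthread_attr_setschedparam(&attr, &sp);

        err = pthread_create(&tid, &attr, rt_worker, NULL);
        if (err) {
                fprintf(stderr, "pthread_create: %s\n", strerror(err));
                return 1;
        }
        pthread_join(tid, NULL);
        return 0;
}

A thread created this way preempts everything running under SCHED_OTHER,
which is exactly why a usage limit like the one discussed above has to
leave room for workloads that legitimately want most of the CPU.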