On Fri, 2008-01-11 at 11:28 +0100, Andrea Righi wrote:
> Bill Davidsen wrote:
> > Andrea Righi wrote:
> >> Allow limiting the bandwidth of I/O-intensive processes, like backup
> >> tools running in background, large file copies, checksums on huge
> >> files, etc.
> >>
> >> These kinds of processes can noticeably impact system responsiveness
> >> for some time, and playing with task priorities is not always an
> >> acceptable solution.
> >>
> >> This patch allows specifying a maximum I/O rate in sectors per second
> >> for each single process via /proc/<PID>/io_throttle (the default is
> >> zero, which means no limit).
> >>
> > It would seem to me that this would be vastly more useful in the real
> > world if there were a settable default, so that administrators could
> > avoid having to find and tune individual user processes. And it would
> > seem far less common that the admin would want to set the limit *up*
> > for a given process, and that one is likely to be known to the admin,
> > at least by name.
> >
> > Of course, if you want to make the effort to have it fully tunable, it
> > could have a default by UID or GID. Useful on machines shared by
> > students or managers.
>
> At the moment I'm simply using it to back up my PC with this wrapper:
>
> $ cat iothrottle
> #!/bin/sh
> [ $# -lt 2 ] && echo "usage: $0 RATE CMD" && exit 1
> rate=$1
> shift
> $* &
> trap "kill -9 $!" SIGINT SIGTERM
> [ -e /proc/$!/io_throttle ] && echo $rate >/proc/$!/io_throttle
> wait %1
> $ ./iothrottle 100 tar ...
>
> But I totally agree with you that setting the limits per-UID/per-GID,
> instead of per-task, would actually be more useful.
>
> Maybe a nice approach would be to define the UID/GID upper bounds via
> configfs (for example) and allow users to tune the max I/O rate of
> their individual tasks within the defined ranges. In this way it would
> even be possible to define I/O shaping policies, e.g. give a bandwidth
> of 10MB/s to user A, 20MB/s to user B, 30MB/s to group X, etc.
>
> Anyway, I'm wondering whether it's already possible (and how) to do
> this with process containers...
I think there is an I/O controller somewhere based on CFQ.

I don't like this patch, because it throttles requests/s, and that doesn't
say much: if a task generates a very seeky load, it can still tie up the
disk even with a relatively low setting.
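
To make the seek argument concrete, here is a rough, hypothetical userspace
demo (not part of the patch; the file path, size and request rate are
placeholders, and it assumes O_DIRECT works on the target filesystem). It
issues only ~100 small reads per second, but at random offsets, so on a
rotating disk nearly every request costs a full seek:

/*
 * seekload.c - issue a fixed, low number of read requests per second at
 * random offsets in a large pre-existing file.  Even at ~100 requests/s
 * the disk spends most of its time seeking, so a requests/s (or
 * sectors/s) cap says little about how much disk time the task consumes.
 *
 * Build: gcc -O2 -o seekload seekload.c
 * Usage: ./seekload [file] [requests_per_second]
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/tmp/bigfile"; /* placeholder */
	long reqs_per_sec = argc > 2 ? atol(argv[2]) : 100;     /* "low" rate  */
	const off_t file_size = 1L << 30;       /* assume a 1 GiB test file   */
	const size_t blk = 4096;                /* 4 KiB, O_DIRECT-aligned    */
	void *buf;
	int fd;

	fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (posix_memalign(&buf, blk, blk)) {
		fprintf(stderr, "posix_memalign failed\n");
		return 1;
	}

	srandom(getpid());
	for (;;) {
		for (long i = 0; i < reqs_per_sec; i++) {
			/* random 4 KiB-aligned offset: almost every read seeks */
			off_t off = (random() % (file_size / (off_t)blk)) * (off_t)blk;
			if (pread(fd, buf, blk, off) < 0)
				perror("pread");
		}
		sleep(1);	/* keep the request rate roughly constant */
	}
}

The same 100 reads/s issued sequentially would be almost free, which is why
throttling by request (or sector) count alone does not bound the fraction of
disk time a task can consume; it would have to account for seek/service time
as well.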