On Thu, Nov 30, 2000 at 06:10:20PM -0500, Brian Stults wrote:
> However, if one more CPU-intensive job were
> added by user1, all jobs would be slowed proportionately.
Have you tested that this actually happens?  Empirical data is harder
to argue with than theory.

> My argument is that nice'ing the CPU-intensive jobs would cause the
> I/O-intensive job to run faster without slowing the CPU-jobs at all.
> The reason is that the I/O-intensive job doesn't use much CPU-time.
> So when it gets its turn on the CPU it doesn't use all of its
> allotted time.  Generally speaking, is this correct in theory?

Nope.  When a process is blocked waiting for I/O, it's automatically
put to sleep until it can run again.  In theory, the I/O-intensive job
won't receive any CPU time at all until there's more data available for
it to process (or, on the output side, until the hard drive catches up),
and when it does get a turn, it will block and go back to sleep as soon
as it has to wait for I/O again.

Niceness affects how frequently a task is allocated CPU time (and,
sometimes, the amount of CPU time it gets when it comes up).  How the
task spends its time once it comes up is not affected by niceness.

> It seems especially considerate to nice the CPU-intensive jobs, since
> that user gets more aggregate CPU time anyway since they're running
> multiple big jobs.

Agreed, and that's what nice is there for.  It is definitely a Good
Thing to use on big jobs, just not for the reason you think it is.

(Of course, I'm just a userland programmer and have never looked at the
kernel's scheduler, so my understanding of this may be slightly less
than perfect...)

-- 
"Two words: Windows survives." - Craig Mundie, Microsoft senior strategist
"So does syphilis.  Good thing we have penicillin." - Matthew Alton
Geek Code 3.1:  GCS d? s+: a- C++ UL++$ P++>+++ L+++>++++ E- W--(++) N+
o+ !K w---$ O M- V? PS+ PE Y+ PGP t 5++ X+ R++ tv b+ DI++++ D G e* h+
r++ y+
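FWIW, the "a blocked process gets no CPU time at all" point is easy to
check empirically.  A quick Python sketch (time.sleep() standing in for
a real I/O wait; the timings are only illustrative):

```python
import os
import time

# A process that is asleep (blocked) is off the run queue entirely, so
# wall-clock time passes but almost no CPU time is charged to it.  A
# busy loop, by contrast, burns its whole timeslice every time it runs.
start = os.times()
time.sleep(1.0)                     # "blocked on I/O": kernel puts us to sleep
mid = os.times()

deadline = time.monotonic() + 1.0
while time.monotonic() < deadline:  # CPU-bound: spins for ~1 wall-clock second
    pass
end = os.times()

sleep_cpu = (mid.user + mid.system) - (start.user + start.system)
busy_cpu = (end.user + end.system) - (mid.user + mid.system)
print(f"sleeping used {sleep_cpu:.2f}s CPU; spinning used {busy_cpu:.2f}s CPU")
```

The sleeping second shows up as roughly zero CPU seconds, while the
spinning second shows up as roughly one -- which is why nice'ing the
CPU hogs can't give the I/O-bound job time it never asked for.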