Torvald Riegel wrote:
> whenever you have a bounded amount of parallel work, you don't care
> what gets executed first.
> ...
> You control the incoming work. ...
>
> Look at your unit test, for example.  Typically, the transactions would
> be the result of some input operation, not spawned as fast as possible.
> Programs have a choice to not read input, typically.  If you really have
> potentially more work to do than you can compute fast enough, you have
> much bigger problems than no fairness.
> If you want to spawn random work, then you can easily throttle that ...
While I agree that the vast majority of computation tasks can be
controlled in size, some cannot be controlled so easily: think of the
just-in-time compilation threads or garbage-collection threads in a
Java VM, for example.

The rwlock writer starvation problem is not solved by throttling each
reader to a fixed percentage of CPU time: if every reader thread spends
10% of its time with the rwlock held, the program works fine with
4 CPUs, but a writer starves with 24 CPUs (see the sketch in the PS
below).

For comparison:

While it is true that the vast majority of integer overflows in a C
program can be detected by code inspection and value checks before the
operation, I'm very glad that there are techniques (gcc -ftrapv and
multi-precision arithmetic, each for different use-cases) that solve
the problem altogether.

While it is true that the vast majority of memory allocations can be
tracked manually and free()d [C] / delete'd [C++], I'm very glad that
there are techniques (garbage collection, in languages such as Lisp)
that solve the problem altogether.

That's what I'd like to have here as well. Even if it costs
performance.

> This isn't changed by the fact that misuse of the tool can be
> demonstrated (ie, the unit test).

I disagree. The suboptimal unit test clearly shows that the starvation
problem is not solved altogether when one uses plain POSIX APIs on
glibc.

Bruno
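PS: For illustration, here is a minimal sketch of the scenario above
(a toy program of my own, not the unit test discussed in this thread):
each reader holds the rwlock for roughly 10% of its loop, and a single
writer measures how long pthread_rwlock_wrlock() takes. With glibc's
default (reader-preferring) rwlock, one can expect the writer's wait
to grow quickly as the number of reader threads, and CPUs to run them,
grows.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

static void
sleep_ms (long ms)
{
  struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
  nanosleep (&ts, NULL);
}

static void *
reader (void *arg)
{
  (void) arg;
  for (;;)
    {
      pthread_rwlock_rdlock (&lock);
      sleep_ms (1);               /* 1 ms with the lock held ...        */
      pthread_rwlock_unlock (&lock);
      sleep_ms (9);               /* ... 9 ms without: ~10% duty cycle  */
    }
  return NULL;
}

int
main (int argc, char *argv[])
{
  int nreaders = (argc > 1 ? atoi (argv[1]) : 24);
  int i;

  for (i = 0; i < nreaders; i++)
    {
      pthread_t t;
      pthread_create (&t, NULL, reader, NULL);
    }
  sleep_ms (100);                 /* let the readers get going */

  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  pthread_rwlock_wrlock (&lock);  /* can wait a very long time */
  clock_gettime (CLOCK_MONOTONIC, &t1);
  pthread_rwlock_unlock (&lock);

  printf ("writer waited %.3f s with %d readers\n",
          (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9,
          nreaders);
  return 0;
}

Compile with 'gcc -pthread'; try it with 4 readers, then with 24.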