On Fri, 2017-01-06 at 00:43 +0100, Bruno Haible wrote:
> Torvald Riegel wrote:
> > whenever you have a bounded amount of parallel
> > work, you don't care what gets executed first.
> > ...
> > You control the incoming work. ...
> >
> > Look at your unit test, for example.  Typically, the transactions
> > would be the result of some input operation, not spawned as fast as
> > possible.  Programs have a choice to not read input, typically.  If
> > you really have potentially more work to do than you can compute
> > fast enough, you have much bigger problems than no fairness.
> > If you want to spawn random work, then you can easily throttle
> > that ...
>
> While I agree that the vast majority of computation tasks can be
> controlled in size, some may not be controlled so easily: think of the
> just-in-time compilation threads or garbage-collection threads in a
> Java VM, for example.
I think you need to be more precise in your examples, or I can't see
what you have in mind.  I can easily think of JIT scenarios /
implementations that don't need fair rwlocks.  Same for GC; for
example, if it's a stop-the-world GC, then the overall GC work you can
do is bounded (ie, there's only so much that's not used anymore).

> The rwlock writer starvation problem is not solved by throttling to a
> fixed percentage of CPU time: If every reader thread spends 10% of
> its time with the rwlock held, it will work fine with 4 CPUs, but it
> will hang with 24 CPUs.

Well, obviously, you need to throttle in such a way that all work can
be performed eventually before new work arrives.  For example, don't
accept new work for a while if old work hasn't been done yet (there is
a sketch of that at the end of this mail).  Making the whole system
slower will not necessarily change anything about such imbalances.

> For comparison:
>
> While it is true that the vast majority of integer overflows in a C
> program can be detected by code inspection and value checks before
> the operation, I'm very glad that there are techniques (gcc -ftrapv
> and multi-precision arithmetic), each for different use-cases, that
> solve the problem altogether.
>
> While it is true that the vast majority of memory allocations can be
> tracked manually and free()d [C] / delete'd [C++], I'm very glad that
> there are techniques (garbage collection, in languages such as Lisp)
> that solve the problem altogether.

I don't think these are analogies to the rwlock case.  But either way,
you're not claiming that C integer semantics are bad or useless, for
example.  You can of course want to use *a different tool*, such as
multi-precision arithmetic.

> That's what I'd like to have here as well.  Even if it costs
> performance.

ISTM you first need to make up your mind what you actually want, in
precise terms, for example regarding forward progress.  The examples
you bring up, such as GC, suggest that you want abstractions at a much
higher level than the explicit threading and locks provided by POSIX
and C11.  However, you previously also said that you want to make
POSIX and ISO C easier to use, so there's a gap there.

> > This isn't changed by the fact that misuse of the tool can be
> > demonstrated (ie, the unit test).
>
> I disagree.  The suboptimal unit test clearly shows the starvation
> problem is not solved altogether, when one uses plain POSIX APIs on
> glibc.

But that's the point.  You test for a feature that the tool does not
intend to provide to you.  To exaggerate, that's like testing that a
32-bit int on x86_64 overflows.
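
To make the throttling point concrete, here is a minimal sketch, not a
definitive implementation: a counting semaphore caps the number of
unfinished work items, so the producer blocks (ie, stops accepting new
input) until old work has been completed.  MAX_IN_FLIGHT, process_item,
and the other names are made up for illustration:

  /* Throttling by bounded admission: no new work is accepted while
     MAX_IN_FLIGHT items are still unfinished.  Hypothetical names;
     the usleep stands in for real work.  Build with gcc -pthread.  */
  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>
  #include <unistd.h>

  #define MAX_IN_FLIGHT 4   /* at most 4 unfinished work items */

  static sem_t slots;       /* counts free slots for new work */

  static void
  process_item (int i)
  {
    usleep (1000);          /* stand-in for the actual work */
    printf ("done %d\n", i);
  }

  static void *
  worker (void *arg)
  {
    process_item ((int) (long) arg);
    sem_post (&slots);      /* work done; admit one new item */
    return NULL;
  }

  int
  main (void)
  {
    sem_init (&slots, 0, MAX_IN_FLIGHT);
    for (int i = 0; i < 100; i++)
      {
        /* Blocks while MAX_IN_FLIGHT items are still in flight,
           ie, the producer stops reading input for a while.  */
        sem_wait (&slots);
        pthread_t t;
        pthread_create (&t, NULL, worker, (void *) (long) i);
        pthread_detach (&t);
      }
    /* Drain: wait until all slots are free again.  */
    for (int i = 0; i < MAX_IN_FLIGHT; i++)
      sem_wait (&slots);
    return 0;
  }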
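
And on the integer-overflow comparison: the "value checks before the
operation" look roughly like this; the second variant assumes GCC or
Clang, whose __builtin_add_overflow combines the check with the
operation (whereas -ftrapv instead traps at runtime on signed
overflow):

  #include <limits.h>
  #include <stdio.h>

  int
  main (void)
  {
    int a = INT_MAX, b = 1, sum;

    /* Manual pre-check before the addition.  */
    if (b > 0 && a > INT_MAX - b)
      puts ("a + b would overflow");

    /* GCC/Clang builtin doing the check and the addition at once;
       returns true if the result did not fit.  */
    if (__builtin_add_overflow (a, b, &sum))
      puts ("a + b would overflow (builtin)");

    return 0;
  }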
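
Finally, if what you actually want is writer preference rather than
fairness, glibc does offer that as a nonstandard rwlock kind (note the
_NP suffix, and the documented caveat that recursive read-locking can
then deadlock).  A sketch, with a made-up function name:

  /* Initialize a rwlock that prefers waiting writers, avoiding the
     writer starvation the unit test provokes.  glibc-specific.  */
  #define _GNU_SOURCE
  #include <pthread.h>

  pthread_rwlock_t lock;

  int
  init_writer_preferring_lock (void)
  {
    pthread_rwlockattr_t attr;
    pthread_rwlockattr_init (&attr);
    pthread_rwlockattr_setkind_np
      (&attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
    int err = pthread_rwlock_init (&lock, &attr);
    pthread_rwlockattr_destroy (&attr);
    return err;
  }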