On Fri, Nov 15 2019 14:16:44 +0100, Martin Pieuchot wrote:
> > Yes, that was my point exactly. Less jobs didn't fare any better (well,
> > it had less spin time, but took around the same real time), so the
> > conclusion I arrived at was that something in my setup was eventually
> > contending on a small number of locks. My guess is that it's either the
> > filesystem, the IDE driver, something Hyper-V specific, or a combination
> > of the above.
>
> What does it bring to guess? Why don't you look deeper where the
> contention is?
> > This change is all about utilizing CPUs better in parallelizing existing
> > workloads, so I wouldn't expect a very large change in user time (but it
> > should happen over a smaller amount of real time).
>
> Is this change about better parallelizing? Do we see that? Or is it a
> guess? If we want OpenBSD to do a better job at parallelizing maybe we
> should look at where the contention is and then how to get rid of it?
> > > You can also write Makefiles that expose less the limitation of the
> > > system. ktrace(1) is your friend for that.
> > The idea was to test real-world workloads, ie. actual OpenBSD builds. I
> > do have enough memory on this thing to place objs in mfs; maybe I'll try
> > that next time around.
>
> I'd suggest you to spend your time understanding where is the bottleneck
> instead of randomly trying to change stuff :)
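
To make the two suggestions quoted above concrete, this is roughly what I
have in mind for the next run; the job count, trace file name and mfs size
below are illustrative guesses for my setup, not something I have measured
or tested yet:

    # trace a parallel build and everything it forks, then inspect the syscalls
    ktrace -i -f ktrace.out make -j8 build
    kdump -f ktrace.out | less

    # put the object tree on a memory filesystem for the next build
    mount_mfs -s 8g swap /usr/obj
    # or persistently, via an fstab(5) entry like:
    #   swap /usr/obj mfs rw,nodev,nosuid,-s=8g 0 0
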
Beyond that, your points are valid and I agree with them completely. There
are clearly problems with lock contention, and there are also problems with
resource utilisation caused by make not starting enough jobs. The two are
related, but I have to pick my battles one at a time so as not to descend
too far into yak-shaving territory. Given infinite time I'd fix everything;
since that isn't afforded to me, some educated guesses have to be made in
the meantime so that I'm testing the right thing. ;)

--
Lauri Tirkkonen | lotheac @ IRCnet