On 27 April 2017 at 00:12, Scott Smith via lldb-dev <lldb-dev@lists.llvm.org> wrote:
> After dealing with a bunch of micro-optimizations, I'm back to parallelizing
> loading of shared modules. My naive approach was to just create a new thread
> per shared library. I have a feeling some users may not like that; I think I
> read an email from someone who has thousands of shared libraries. That's a
> lot of threads :-)
>
> The problem is that loading a shared library can cause downstream
> parallelization through TaskPool. I can't then also have the loading of a
> shared library itself go through TaskPool, as that could cause a deadlock:
> if all the worker threads are waiting on work that TaskPool needs to run on
> a worker thread, then nothing will happen.
>
> Three possible solutions:
>
> 1. Remove the notion of a single global TaskPool, and instead have a static
> pool at each call site that wants one. That way multiple paths into the same
> code would share the same pool, but different places in the code would have
> their own pool.
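The deadlock described above is the classic recursive-submission problem. Below is a minimal sketch of it; this is not LLDB's actual TaskPool, and every name in it (SimplePool, Submit, etc.) is made up purely for illustration:

  #include <algorithm>
  #include <condition_variable>
  #include <functional>
  #include <future>
  #include <memory>
  #include <mutex>
  #include <queue>
  #include <thread>
  #include <vector>

  // Minimal fixed-size pool. NOT LLDB's TaskPool; names are hypothetical and
  // exist only to illustrate the failure mode.
  class SimplePool {
  public:
    explicit SimplePool(unsigned NumThreads) {
      for (unsigned I = 0; I < NumThreads; ++I)
        Workers.emplace_back([this] { WorkerLoop(); });
    }

    ~SimplePool() {
      {
        std::lock_guard<std::mutex> Lock(Mutex);
        Done = true;
      }
      Cond.notify_all();
      for (std::thread &W : Workers)
        W.join();
    }

    // Queue a task and return a future the caller can block on.
    template <typename Fn> std::future<void> Submit(Fn &&F) {
      auto Task =
          std::make_shared<std::packaged_task<void()>>(std::forward<Fn>(F));
      std::future<void> Result = Task->get_future();
      {
        std::lock_guard<std::mutex> Lock(Mutex);
        Queue.push([Task] { (*Task)(); });
      }
      Cond.notify_one();
      return Result;
    }

  private:
    void WorkerLoop() {
      for (;;) {
        std::function<void()> Job;
        {
          std::unique_lock<std::mutex> Lock(Mutex);
          Cond.wait(Lock, [this] { return Done || !Queue.empty(); });
          if (Done && Queue.empty())
            return;
          Job = std::move(Queue.front());
          Queue.pop();
        }
        Job();
      }
    }

    std::vector<std::thread> Workers;
    std::queue<std::function<void()>> Queue;
    std::mutex Mutex;
    std::condition_variable Cond;
    bool Done = false;
  };

  int main() {
    unsigned N = std::max(1u, std::thread::hardware_concurrency());
    SimplePool Pool(N);

    // Outer tasks stand in for "load one shared module"; each fans out further
    // work into the *same* pool and blocks on it.
    std::vector<std::future<void>> Outer;
    for (unsigned I = 0; I < 2 * N; ++I)
      Outer.push_back(Pool.Submit([&Pool] {
        // The inner task stands in for the downstream parallel work (e.g.
        // symbol indexing). It needs a free worker in order to ever run...
        std::future<void> Inner = Pool.Submit([] { /* downstream work */ });
        // ...but once every worker is parked in a wait like this one, no
        // worker is left to run any inner task.
        Inner.wait();
      }));

    for (std::future<void> &F : Outer)
      F.wait(); // Expected to hang: the pool-within-a-pool deadlock.
  }

With more outer tasks than workers, every worker ends up parked in Inner.wait() while the inner tasks sit unrunnable in the queue, which is exactly the "nothing will happen" scenario above.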
I looked at option 1 in the past, and it was my preferred solution. My suggestion would be to have two task pools: one for low-level parallelism, which spawns std::thread::hardware_concurrency() threads, and another for higher-level tasks, which can only spawn a smaller number of threads (the algorithm for the exact number is TBD). The high-level tasks can submit work to the low-level pool, but not the other way around, which guarantees progress.

I propose to hardcode two pools, as I don't want to make it easy for people to create additional ones -- I think we should have this discussion every time someone tries to add one, with a very good justification for it. (FWIW, I think your justification is good in this case, and I am grateful that you are pursuing this.)

pl
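To make the layering concrete, here is a sketch of the two-pool arrangement described above, reusing the hypothetical SimplePool class from the earlier sketch. This is not LLDB's API; the function names and the /2 sizing of the high-level pool are assumptions, since the exact sizing algorithm is left TBD above.

  #include <algorithm>
  #include <future>
  #include <thread>
  #include <vector>

  // Low-level pool: full hardware parallelism, reserved for leaf work that
  // never blocks on other pool tasks (e.g. parsing one symbol table).
  SimplePool &GetLowLevelPool() {
    static SimplePool Pool(std::max(1u, std::thread::hardware_concurrency()));
    return Pool;
  }

  // High-level pool: a separate, smaller set of threads for coarse tasks such
  // as "load one shared module". The sizing here is only a placeholder.
  SimplePool &GetHighLevelPool() {
    static SimplePool Pool(
        std::max(1u, std::thread::hardware_concurrency() / 2));
    return Pool;
  }

  void LoadOneModule() {
    // A high-level task may fan work out into the low-level pool and wait.
    std::vector<std::future<void>> Work;
    Work.push_back(GetLowLevelPool().Submit([] { /* index symbols */ }));
    Work.push_back(GetLowLevelPool().Submit([] { /* parse unwind info */ }));
    for (std::future<void> &F : Work)
      F.wait();
    // Low-level tasks must never submit to (or wait on) the high-level pool;
    // that one-way dependency is what guarantees forward progress.
  }

  int main() {
    std::vector<std::future<void>> Modules;
    for (int I = 0; I < 1000; ++I) // e.g. a target with many shared libraries
      Modules.push_back(GetHighLevelPool().Submit([] { LoadOneModule(); }));
    for (std::future<void> &F : Modules)
      F.wait();
  }

Because the low-level tasks never block on pool work themselves, the low-level queue always drains, so a high-level worker waiting on it is always eventually woken; that one-way dependency between the two pools is the progress guarantee.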