jasonmolenda wrote:

> > Was the setting intended for testing purposes only, or did you intend to 
> > include that in a final PR?
> 
> The latter. IMO the risks introduced by parallelization are a bit too high 
> to do it without a flag. I'm even thinking about making it opt-in rather 
> than opt-out for some time.

I'm fine with having a temporary setting to disable it, which we can remove 
after it has been in a release or two and we've had time for people to live on 
it in many different environments.  But it should definitely be enabled by 
default by the time we merge it, and the setting should exist only as a safety 
mechanism in case this turns out to cause a problem in a configuration we 
weren't able to test.

We're not at that point yet, but I wanted to outline my thinking on this.  I 
would even put it explicitly under an experimental node (e.g. the existing 
`target.experimental.inject-local-vars`) so that if someone disables it in 
their ~/.lldbinit file, and we remove the setting in a year or two, they won't 
get errors starting lldb; the setting will be silently ignored.
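To make that concrete, here's a sketch of what this looks like from the 
user's side.  The setting name below is invented for the example, not the 
name this PR would use.  Because unrecognized settings under an 
`experimental` node are ignored rather than raising an error, a line like 
this in ~/.lldbinit keeps working even after the setting is removed:

```
# ~/.lldbinit
# Hypothetical opt-out (illustrative name only).  If a future lldb no
# longer has this setting, the line is silently ignored instead of
# producing an error at startup, because it lives under .experimental.
settings set target.experimental.parallel-module-loading false
```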

I was experimenting with the performance in a couple of different scenarios.  
For some reason that I haven't looked into, we're getting less parallelism 
when many of the binaries are in the shared cache in lldb.  Maybe there is 
locking around the code which finds the binary in lldb's own shared cache, so 
when 9 threads try to do it at the same time, we get additional lock 
contention.  That would explain why the simulator speedup is better than the 
macOS-native process speedup, and why the speedup for a remote iOS debug 
process with an expanded shared cache on the Mac (so all the libraries are in 
separate Mach-O files) was greater still.
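To illustrate the contention hypothesis (this is not lldb's actual code; the 
lock and the lookup function are invented for the sketch), a single mutex 
held across a shared-cache lookup serializes the lookups even when nine 
threads issue them concurrently:

```cpp
#include <chrono>
#include <cstdio>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Invented stand-in for a shared-cache lookup: one global lock means
// N threads doing lookups at once run the critical section serially.
std::mutex g_shared_cache_mutex;

void FindBinaryInSharedCache(const std::string &name) {
  (void)name;
  std::lock_guard<std::mutex> guard(g_shared_cache_mutex);
  // Simulate the work of locating the image inside the cache.
  std::this_thread::sleep_for(std::chrono::milliseconds(50));
}

int main() {
  auto start = std::chrono::steady_clock::now();
  std::vector<std::thread> threads;
  // 9 threads, as in the scenario described above.
  for (int i = 0; i < 9; ++i)
    threads.emplace_back(FindBinaryInSharedCache, "lib" + std::to_string(i));
  for (auto &t : threads)
    t.join();
  auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
      std::chrono::steady_clock::now() - start);
  // With the lock held across the whole lookup this takes ~9 * 50ms,
  // not ~50ms: the parallelism is lost despite spawning 9 threads.
  printf("elapsed: %lld ms\n", (long long)elapsed.count());
}
```

If something like this is what's happening, the fix would presumably be 
narrowing the critical section or making the lookup safe for concurrent 
readers, rather than backing off the parallelism itself.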

https://github.com/llvm/llvm-project/pull/110439