ioeric added inline comments.

================
Comment at: clangd/tool/ClangdMain.cpp:86
+      return nullptr;
+    Index = AsyncLoad.get();
+    return Index.get();
----------------
ilya-biryukov wrote:
> ioeric wrote:
> > ilya-biryukov wrote:
> > > ioeric wrote:
> > > > ilya-biryukov wrote:
> > > > > I believe this is a data race (multiple threads may run this line concurrently).
> > > > > You would want some synchronization around this; `std::shared_future` could be a good fit.
> > > > Nice catch.
> > > > 
> > > > I am a bit hesitant about using `shared_future`, though. It could potentially get rid of the mutex but would require much more careful use. For example, `Index` can be set to the same value repeatedly, which makes me a bit nervous.
> > > The following pattern should give proper results:
> > > ```
> > > class AsyncIndex : public Index {
> > > public:
> > >   AsyncIndex(std::shared_future<Index> IndexFut);
> > >   // ...
> > > private:
> > >   const SymbolIndex *index() const {
> > >     if (IndexFut.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
> > >       return nullptr;
> > >     return &IndexFut.get();
> > >   }
> > > 
> > >   std::shared_future<Index> IndexFut;
> > > };
> > > 
> > > AsyncIndex AI(std::async([]() { /* load the index here */ return Index; }));
> > > ```
> > My concern with this approach is that we are still calling `wait` even though it is not necessary once `Index` has been loaded.
> Maybe not bother before we know it causes actual performance issues? My bet is that it would never become a bottleneck, given the amount of work we're doing in addition to every call here.
Sure, the performance overhead is small in both cases. But I would still try to avoid `wait`; e.g., the OS can decide to `yield` even if the timeout is 0.


Repository:
  rCTE Clang Tools Extra

https://reviews.llvm.org/D51475



_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
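
For readers following the thread, below is a minimal, self-contained sketch of the `shared_future`-based pattern discussed above, extended with one possible answer to ioeric's concern: once the future is ready, the pointer is cached in an atomic so later calls skip `wait_for` entirely. This is not the code from D51475; `SymbolIndex` here is a stand-in type, `AsyncIndex` and the atomic caching are illustrative only.

```cpp
#include <atomic>
#include <chrono>
#include <future>
#include <memory>

class SymbolIndex {}; // stand-in for clangd's SymbolIndex

class AsyncIndex {
public:
  explicit AsyncIndex(std::shared_future<std::unique_ptr<SymbolIndex>> Fut)
      : IndexFut(std::move(Fut)) {}

  // Returns nullptr until the background load finishes; afterwards the
  // cached pointer is returned without touching the future again.
  const SymbolIndex *index() const {
    if (const SymbolIndex *Idx = Cached.load(std::memory_order_acquire))
      return Idx;
    if (IndexFut.wait_for(std::chrono::seconds(0)) !=
        std::future_status::ready)
      return nullptr;
    // Several threads may reach this point; they all read the same value
    // from the future and store the same pointer, so repeated stores are
    // harmless.
    const SymbolIndex *Idx = IndexFut.get().get();
    Cached.store(Idx, std::memory_order_release);
    return Idx;
  }

private:
  std::shared_future<std::unique_ptr<SymbolIndex>> IndexFut;
  mutable std::atomic<const SymbolIndex *> Cached{nullptr};
};

// Usage (illustrative): kick off loading in the background.
int main() {
  AsyncIndex AI(std::async(std::launch::async, [] {
                  return std::unique_ptr<SymbolIndex>(new SymbolIndex());
                }).share());
  const SymbolIndex *Idx = AI.index(); // nullptr until loading completes
  (void)Idx;
}
```

Whether the extra atomic is worth it over calling `wait_for` with a zero timeout on every lookup is exactly the trade-off debated in the thread.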