sammccall added a comment.

The cache looks way simpler now, thank you!
As discussed offline, flattening ASTBuilder right into ASTWorker still seems 
like a good idea to me, but I'm happy with whatever you come up with there.



================
Comment at: clangd/TUScheduler.cpp:71
+
+  /// Update the function used to compute the value.
+  void update(std::function<llvm::Optional<ParsedAST>()> ComputeF);
----------------
ilya-biryukov wrote:
> sammccall wrote:
> > I think I understand this more as "updates the value" but the value is 
> > lazy...
> > 
> > I find this API somewhat hard to follow, maybe just because it's unfamiliar.
> > I've mostly seen cache APIs look like one of:
> > 1. `Cache(function<Value(Input)> Compute)`, `Value Cache::get(Input)`
> > 2. `void Cache::put(Key, Value)`, `Optional<Value> Cache::get(Key)`
> > 3. `Handle Cache::put(Value)`, `Optional<Value> Handle::get()`
> > 
> > This one is `Slot Cache::create()`, `void Slot::update(function<Value()> 
> > LazyV)`, `Value Slot::get()`.
> > 
> > It's similar-ish to 3, but has 3 nontrivial operations instead of 2, and 
> > the slot object is externally mutable instead of immutable, so it seems 
> > more complex. What does it buy us in exchange?
> > 
> > (1 & 2 work well with a natural key or inputs that are easy to compare, 
> > which we don't particularly have)
> As discussed offline, now we have a simpler version that keeps `unique_ptr`s 
> to idle ASTs and the clients are responsible for building the ASTs.
> Note that it's not a "cache" per se, so we might want a different name for 
> that.
> @sammccall, you suggested calling it a pool, which I find reasonable. Should 
> we name it `ASTPool` instead of `ASTCache`?
I think the name is actually fine; it's still mostly a cache.
It does have things in common with a pool, but unrelated consumers can't share 
a resource, so I think that name is at least as misleading.


================
Comment at: clangd/TUScheduler.cpp:66
+
+/// Provides an LRU cache of ASTs.
+class TUScheduler::ASTCache {
----------------
I'd say a little more about the interaction here. e.g.
```
/// An LRU cache of idle ASTs.
/// Because we want to limit the overall number of these we retain, the cache
/// owns ASTs (and may evict them) while their workers are idle.
/// Workers borrow them when active, and return them when done.
```


================
Comment at: clangd/TUScheduler.cpp:84
+  /// Store the value in the pool, possibly removing the last used AST.
+  void put(Key K, std::unique_ptr<ParsedAST> V) {
+    std::unique_lock<std::mutex> Lock(Mut);
----------------
Consider `assert(findByKey(K) == LRU.end())` as a precondition.
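To make the borrow/return contract and this precondition concrete, here is a minimal, self-contained sketch of such a cache. It is not the actual clangd code: `IdleCache`, `Value`, `MaxRetained`, and the `take`/`put` names are illustrative stand-ins (with `std::string` in place of `ParsedAST`), and keys are opaque pointers rather than real worker identities.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <memory>
#include <mutex>
#include <string>
#include <utility>

// Stand-in for ParsedAST; any move-only, expensive-to-build type works.
using Value = std::string;

class IdleCache {
public:
  explicit IdleCache(std::size_t MaxRetained) : MaxRetained(MaxRetained) {}

  /// Store an idle value; evict the least-recently-used entry if over budget.
  void put(const void *K, std::unique_ptr<Value> V) {
    std::unique_ptr<Value> Evicted; // destroyed after the lock is released
    {
      std::lock_guard<std::mutex> Lock(Mut);
      // Precondition: a worker never puts while its value is already cached.
      assert(findByKey(K) == LRU.end() && "key already present");
      LRU.emplace_front(K, std::move(V));
      if (LRU.size() > MaxRetained) {
        Evicted = std::move(LRU.back().second);
        LRU.pop_back();
      }
    }
    // The (potentially expensive) destructor of the evicted value runs here,
    // outside the lock, when Evicted goes out of scope.
  }

  /// Borrow the value for K, removing it from the cache. Returns null if it
  /// was evicted or never stored; the caller must then rebuild it.
  std::unique_ptr<Value> take(const void *K) {
    std::lock_guard<std::mutex> Lock(Mut);
    auto It = findByKey(K);
    if (It == LRU.end())
      return nullptr;
    auto V = std::move(It->second);
    LRU.erase(It);
    return V;
  }

private:
  using Entry = std::pair<const void *, std::unique_ptr<Value>>;

  // Linear scan is fine: the cache holds only a handful of entries.
  std::deque<Entry>::iterator findByKey(const void *K) {
    for (auto It = LRU.begin(); It != LRU.end(); ++It)
      if (It->first == K)
        return It;
    return LRU.end();
  }

  std::mutex Mut;
  std::size_t MaxRetained;
  std::deque<Entry> LRU; // front = most recently used
};
```

The key design point matches the doc comment above: ownership ping-pongs between the worker (while active) and the cache (while idle), so the cache can enforce a global retention budget without any notion of sharing.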


================
Comment at: clangd/TUScheduler.cpp:92
+    LRU.pop_back();
+    // AST destructor may need to run, make sure it happens outside the lock.
+    Lock.unlock();
----------------
Just "run the expensive destructor outside the lock"?
The "may not" case seems unimportant and slightly confusing here.
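For the record, the pattern under discussion is the standard one: move the evicted object out of the container while holding the lock, then let its destructor run after the lock is released. A generic sketch (names are illustrative, not the clangd code):

```cpp
#include <memory>
#include <mutex>
#include <utility>
#include <vector>

// Payload whose destructor we pretend is expensive (stands in for an AST).
using Payload = std::vector<int>;

std::mutex Mut;
std::vector<std::unique_ptr<Payload>> LRU;

void evictOldest() {
  std::unique_ptr<Payload> ForCleanup;
  {
    std::lock_guard<std::mutex> Lock(Mut);
    if (LRU.empty())
      return;
    // Transfer ownership out while locked; the container shrinks here.
    ForCleanup = std::move(LRU.back());
    LRU.pop_back();
  }
  // Run the expensive destructor outside the lock: ForCleanup is destroyed
  // at end of scope, with Mut already released.
}
```

Holding the mutex only for the container manipulation keeps the critical section short; no explicit `reset()` is needed, since scope exit destroys the moved-out pointer anyway.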


================
Comment at: clangd/TUScheduler.cpp:94
+    Lock.unlock();
+    ForCleanup.reset();
+  }
----------------
This line isn't actually needed, right?


================
Comment at: clangd/TUScheduler.cpp:342
+    if (!AST)
+      return Action(llvm::make_error<llvm::StringError>(
+          "invalid AST", llvm::errc::invalid_argument));
----------------
This failure doesn't get cached, correct? That's bad for performance.

But if we think this is always a clangd bug, it's probably fine. (and certainly 
simplifies things)


Repository:
  rCTE Clang Tools Extra

https://reviews.llvm.org/D47063


