vsk requested changes to this revision.
vsk added a comment.
This revision now requires changes to proceed.

I'm quite concerned about the design decision to represent a trace as a vector
of TraceInstruction. This bakes considerations that appear to be specific to
Facebook's usage of IntelPT far too deeply into lldb's APIs, and it isn't
workable for our use cases.

At Apple, we use lldb to navigate instruction traces that contain billions of 
instructions. Allocating 16 bytes per instruction simply will not scale for our 
workflows. We require the in-memory overhead to be approximately 1-2 bits per 
instruction. I'm not familiar with how large IntelPT traces can get, but 
presumably (for long enough traces) you will encounter the same scaling problem.
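To put rough numbers on it (the trace length here is only an illustrative
assumption): for a trace of 10 billion instructions, 16 bytes per instruction
is on the order of 160 GB of memory, whereas 1-2 bits per instruction is
closer to 1.25-2.5 GB.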

What alternatives to the vector<TraceInstruction> representation have you 
considered? One idea might be to implement your program analyses on top of a 
generic interface for navigating forward/backward through a trace and 
extracting info about the instruction via a set of API calls; this leaves the 
actual in-memory representation of "TraceInstruction" unspecified.
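Something like the following, purely as an illustration of the shape I have in
mind (all of the names here, TraceCursor, GetLoadAddress, etc., are
hypothetical and not existing lldb API):

  #include <cstdint>

  namespace lldb_private {

  /// A cursor over a single thread's trace. Callers step through the trace
  /// and query the instruction at the current position; how instructions are
  /// actually stored (packets decoded on demand, a compact per-instruction
  /// encoding, a plain vector, ...) is entirely up to the backend.
  class TraceCursor {
  public:
    virtual ~TraceCursor() = default;

    /// Move to the next/previous instruction in the trace. Returns false
    /// once the end (or the beginning) of the trace is reached.
    virtual bool Next() = 0;
    virtual bool Prev() = 0;

    /// Extract information about the instruction at the current position.
    virtual uint64_t GetLoadAddress() const = 0;
    virtual bool IsError() const = 0;
  };

  } // namespace lldb_private

Your program analyses would then be written against the cursor, the IntelPT
plugin could decode packets lazily behind it, and a backend that keeps roughly
1-2 bits per instruction could materialize the details only when asked.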


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D103588/new/

https://reviews.llvm.org/D103588
