[Moving this discussion back to the list. I pressed the wrong button when replying.]

Thanks for the explanation, Ravi. It sounds like a very useful feature indeed. I've found a reference to the debugserver profile data in GDBRemoteCommunicationClient.cpp:1276, so maybe that will help with your investigation. Perhaps someone more knowledgeable can also explain what those A packets are used for?

On 21 October 2015 at 15:48, Ravitheja Addepally <ravithejaw...@gmail.com> wrote:
> Hi,
> Thanks for your reply. Some of the future processors to be released by
> Intel have hardware support for recording the instructions that were
> executed by the processor; the recording process is quite fast and does
> not add much computational load. This hardware is made accessible via
> the perf_event interface, where one maps a region of memory for the
> purpose by passing it as an argument to the interface. The recorded
> instructions are then written to that memory region. This is the raw
> information that can be obtained from the hardware. It can be
> interpreted and presented to the user in the following ways:
>
> 1) Instruction history - the user gets a list of all the instructions
> that were executed.
> 2) Function call history - a list of all the functions called in the
> inferior.
> 3) Reverse debugging with limited information - in GDB this covers only
> the functions executed.
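[Editor's note: the sketch below is not part of the original mail. It illustrates the setup Ravi describes, assuming a Linux kernel with Intel PT support; the helper name and buffer sizes are illustrative only, and most error handling is trimmed.]

// Minimal sketch: open an Intel PT perf event on an inferior and map the
// buffers the processor writes the trace into.
#include <cstdio>
#include <cstring>
#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

// Linux provides no libc wrapper for this system call.
static int perf_event_open(perf_event_attr *attr, pid_t pid, int cpu,
                           int group_fd, unsigned long flags) {
  return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

void *map_intel_pt_trace(pid_t inferior_pid) {
  // intel_pt is a dynamic PMU; the kernel publishes its event type in sysfs.
  int pt_type = 0;
  FILE *f = fopen("/sys/bus/event_source/devices/intel_pt/type", "r");
  if (!f)
    return nullptr; // no Intel PT support on this machine/kernel
  int matched = fscanf(f, "%d", &pt_type);
  fclose(f);
  if (matched != 1)
    return nullptr;

  perf_event_attr attr;
  memset(&attr, 0, sizeof(attr));
  attr.size = sizeof(attr);
  attr.type = pt_type;
  attr.exclude_kernel = 1; // trace user-space instructions only

  int fd = perf_event_open(&attr, inferior_pid, -1, -1, 0);
  if (fd < 0)
    return nullptr;

  // Base mapping: one header page plus a power-of-two number of data pages.
  long page = sysconf(_SC_PAGESIZE);
  size_t base_size = (1 + 4) * page;
  auto *header = static_cast<perf_event_mmap_page *>(
      mmap(nullptr, base_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

  // The AUX area is the region the hardware writes raw PT packets into;
  // its placement is communicated to the kernel through the header page.
  header->aux_offset = base_size;
  header->aux_size = 16 * page; // also a power-of-two number of pages
  return mmap(nullptr, header->aux_size, PROT_READ | PROT_WRITE, MAP_SHARED,
              fd, header->aux_offset);
}

[The kernel advances aux_head in the header page as trace data arrives; a consumer would read the bytes between aux_tail and aux_head and then update aux_tail. How those bytes get from the server to the client is exactly the transport question discussed in this thread.]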
> This raw information also needs to be decoded (even before it can be
> disassembled); there is already a library released by Intel, called
> libipt, which can do that. At the moment we plan to work with the
> instruction history.
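[Editor's note: again an illustrative sketch, not code from the thread. Driving libipt's instruction-flow decoder over the raw bytes looks roughly like this; real code must also describe the CPU and register the inferior's executable image so the decoder can read actual instruction bytes, and should resynchronize after decode errors.]

// Rough sketch: turn raw Intel PT bytes into an instruction history.
#include <cinttypes>
#include <cstdio>
#include <cstring>
#include <intel-pt.h>

void print_instruction_history(uint8_t *begin, uint8_t *end) {
  pt_config config;
  memset(&config, 0, sizeof(config));
  config.size = sizeof(config);
  config.begin = begin; // raw PT packets, e.g. copied out of the AUX buffer
  config.end = end;

  pt_insn_decoder *decoder = pt_insn_alloc_decoder(&config);
  if (!decoder)
    return;

  // Find the first synchronization point in the packet stream, then walk
  // the reconstructed instruction flow one instruction at a time.
  int status = pt_insn_sync_forward(decoder);
  while (status >= 0) {
    pt_insn insn;
    status = pt_insn_next(decoder, &insn, sizeof(insn));
    if (status < 0)
      break;
    printf("0x%" PRIx64 "\n", insn.ip);
  }
  pt_insn_free_decoder(decoder);
}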
> I will look into the debugserver infrastructure and get back to you. I
> guess for the server-client communication we would rely on packets
> only. In case of concerns about too much data being transferred, we can
> limit the number of entries we report; the amount of data recorded is
> in any case too big to present all at once, so we would have to resort
> to something like a viewport.
>
> Since a lot of instructions can be recorded this way, the function call
> history can be quite useful for debugging, especially since it is much
> faster to collect function traces this way.
>
> -ravi
>
> On Wed, Oct 21, 2015 at 3:14 PM, Pavel Labath <lab...@google.com> wrote:
>>
>> Hi,
>>
>> I am not really familiar with the perf_event interface (and I suspect
>> others aren't either), so it might help if you explain what kind of
>> information you plan to collect from there.
>>
>> As for the PtraceWrapper question, I think that really depends on
>> bigger design decisions. My two main questions for a feature like this
>> would be:
>> - How are you going to present this information to the user? (I know
>> debugserver can report some performance data... Have you looked into
>> how that works? Do you plan to reuse some parts of that
>> infrastructure?)
>> - How will you get the information from the server to the client?
>>
>> pl
>>
>> On 21 October 2015 at 13:41, Ravitheja Addepally via lldb-dev
>> <lldb-dev@lists.llvm.org> wrote:
>> > Hello,
>> > I want to implement support for reading performance measurement
>> > information using the perf_event_open system call. The motive is to
>> > add support for the Intel PT hardware feature, which is available
>> > through the perf_event interface. I was thinking of implementing a
>> > new wrapper, like PtraceWrapper, in the NativeProcessLinux files. My
>> > query is: is this the correct place to start? If not, could someone
>> > suggest another place to begin?
>> >
>> > BR,
>> > A Ravi Theja

_______________________________________________
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev