Re: [lldb-dev] LLDB: Unwinding based on Assembly Instruction Profiling

2015-10-30 Thread Jason Molenda via lldb-dev
Hi Abhishek,


> On Oct 30, 2015, at 6:56 AM, Abhishek Aggarwal  wrote:
> 
> When eh_frame has epilogue description as well, the Assembly profiler
> doesn't need to augment it. In this case, is eh_frame augmented unwind
> plan used as Non Call Site Unwind Plan or Assembly based Unwind Plan
> is used?

Yes, you're correct.

If an eh_frame unwind plan describes the epilogue and the prologue, we will use 
it at "non-call sites", that is, the currently executing function.  

If we augment an eh_frame unwind plan by adding epilogue instructions, we will 
use it at non-call sites.

If an eh_frame unwind plan is missing epilogue, and we can't augment it for 
some reason, then it will not be used at non-call sites (the currently 
executing function).

The assembly unwind plan will be used for the currently executing function if 
we can't use the eh_frame unwind plan.
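
The plan-selection logic in those four cases can be sketched as follows.  This 
is a hypothetical simplification for illustration only -- the function name and 
dict shape are invented, and the real logic in 
FuncUnwinders::GetUnwindPlanAtNonCallSite is more involved:

```python
# Illustrative sketch only: which unwind plan lldb prefers for the currently
# executing function (frame 0 / "non-call site").

def choose_non_call_site_plan(eh_frame):
    """eh_frame: {'prologue': bool, 'epilogue': bool}, or None if absent."""
    if eh_frame is None:
        return "assembly"                # no eh_frame: assembly profiler
    if eh_frame["prologue"] and eh_frame["epilogue"]:
        return "eh_frame"                # usable as-is at every instruction
    if eh_frame["prologue"]:
        return "eh_frame+augmented"      # add epilogue rows by inspecting insns
    return "assembly"                    # no prologue described: cannot augment

print(choose_non_call_site_plan({"prologue": True, "epilogue": False}))
# -> eh_frame+augmented
```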



> I checked FuncUnwinders::GetUnwindPlanAtNonCallSite()
> function. When there is nothing to augment in eh_frame Unwind plan,
> then GetEHFrameAugmentedUnwindPlan() function returns nullptr and
> AssemblyUnwindPlan is used as Non Call Site Unwind Plan. Is it the
> expected behavior?


Yes.  FuncUnwinders::GetEHFrameAugmentedUnwindPlan gets the plain eh_frame 
unwind plan and passes it to UnwindAssembly_x86::AugmentUnwindPlanFromCallSite().

UnwindAssembly_x86::AugmentUnwindPlanFromCallSite will verify that the unwind 
plan describes the prologue.  If the prologue isn't described, it says that 
this cannot be augmented.

It then looks to see if the epilogue is described.  If the epilogue is 
described, it says the unwind plan is usable as-is.

If the epilogue is not described, it will use the assembly unwinder to add the 
epilogue unwind instructions.

> 
> About your comments on gcc producing ''asynchronous unwind tables'',
> do you mean that gcc is not producing asynchronous unwind tables as it
> keeps *some* async unwind instructions and not all of them?


"asynchronous" means that the unwind instructions are valid at every 
instruction location.

"synchronous" means that the unwind instructions are only valid at places where 
an exception can be thrown, or a function is called that may throw an exception.


Inside lldb, I use the terminology "non-call site" to mean "asynchronous".  
You're at an arbitrary instruction location, for instance, you're in the 
currently-executing function.  I use "call site" to mean synchronous - a 
function has called another function, so it's in the middle of the function 
body, past the prologue, before the epilogue.  This is a function higher up on 
the stack.

The terms are confusing, I know.

The last time I checked, gcc cannot be made to emit truly asynchronous unwind 
instructions.  This is easy to test on a i386 binary compiled with 
-fomit-frame-pointer.  For instance (the details will be a little different on 
an ELF system but I bet it will be similar if the program runs position 
independent aka pic):

% cat >test.c
#include <stdio.h>
int main () { puts ("HI"); }
^D
% clang  -arch i386 -fomit-frame-pointer test.c
% lldb a.out
(lldb) target create "a.out"
Current executable set to 'a.out' (i386).
(lldb) disass -b -n main
a.out`main:
a.out[0x1f70] <+0>:  83 ec 0c   subl   $0xc, %esp
a.out[0x1f73] <+3>:  e8 00 00 00 00 calll  0x1f78; <+8>
a.out[0x1f78] <+8>:  58 popl   %eax
a.out[0x1f79] <+9>:  8d 80 3a 00 00 00  leal   0x3a(%eax), %eax
a.out[0x1f7f] <+15>: 89 04 24   movl   %eax, (%esp)
a.out[0x1f82] <+18>: e8 0d 00 00 00 calll  0x1f94 ; symbol stub for: puts

Look at the call instruction at +3.  What is this doing?  It calls the next 
instruction, which does a pop %eax. This is loading the address main+8 into eax 
so it can get the address of the "HI" string which is at main+8+0x3a.  It's 
called a "pic base", or position independent code base: because this program 
could be loaded at any address when it is run, the instructions can't directly 
reference the address of the "HI" string.
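
As a sanity check on that arithmetic (addresses taken from the disassembly 
above):

```python
# pic-base arithmetic from the i386 disassembly above
pic_base = 0x1f78              # main+8, popped into %eax by "popl %eax"
hi_string = pic_base + 0x3a    # "leal 0x3a(%eax), %eax"
print(hex(hi_string))          # -> 0x1fb2, the address of the "HI" string
```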

If I run this program and have lldb dump its assembly unwind rules for the 
function:

(lldb) image show-unwind -n main
row[0]:0: CFA=esp +4 => esp=CFA+0 eip=[CFA-4] 
row[1]:3: CFA=esp+16 => esp=CFA+0 eip=[CFA-4] 
row[2]:8: CFA=esp+20 => esp=CFA+0 eip=[CFA-4] 
row[3]:9: CFA=esp+16 => esp=CFA+0 eip=[CFA-4] 
row[4]:   34: CFA=esp +4 => esp=CFA+0 eip=[CFA-4] 

It gets this right.  After the call instruction at +3, the CFA is now esp+20 
because we just pushed a word on to the stack.  And after the pop instruction at 
+8, the CFA is back to esp+16 because we popped that word off the stack.
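
The row-by-row CFA movement can be reproduced with a tiny simulation 
(illustrative only; the i386 word size is 4 bytes):

```python
# Track the CFA's offset from %esp through the instructions shown above.
def cfa_offsets():
    offsets = [4]       # row[0] at +0: CFA = esp+4 (just the return address)
    off = 4
    off += 0xc          # subl $0xc, %esp executed -> row[1]: CFA = esp+16
    offsets.append(off)
    off += 4            # calll at +3 pushes a return address -> row[2]: esp+20
    offsets.append(off)
    off -= 4            # popl %eax at +8 pops it back off -> row[3]: esp+16
    offsets.append(off)
    return offsets

print(cfa_offsets())    # -> [4, 16, 20, 16], matching rows 0-3 of the dump
```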

An asynchronous unwind plan would describe these stack movements.  A 
synchronous unwind plan will not -- they are before any point where we could 
throw an exception, or before we call another function.

(notice that you need to use -fomit-frame-pointer to get this problem.  If ebp 
is set up as the frame pointer, it doesn't matter how we change the stack 
pointer.)

Re: [lldb-dev] Two CLs requiring changes to the Xcode project

2015-11-12 Thread Jason Molenda via lldb-dev
Done in r252998.

I didn't see anything in the xcode project file about a gtest target.

J

> On Nov 12, 2015, at 5:39 PM, Zachary Turner via lldb-dev 
>  wrote:
> 
> Hi all,
> 
> I submitted r252993 and 252994.  These changes will require a corresponding 
> change in the Xcode workspace.  Would anyone mind making those changes for 
> me?  It should be pretty simple, just need to add a .cpp and .h file to the 
> gtest target for ScriptInterpreterPythonTests, and add 
> PythonExceptionState.cpp to Plugins/ScriptInterpreter/Python
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev



Re: [lldb-dev] Two CLs requiring changes to the Xcode project

2015-11-12 Thread Jason Molenda via lldb-dev
Ah, my bad.  It's the lldb-gtest target.


> On Nov 12, 2015, at 5:49 PM, Zachary Turner  wrote:
> 
> Hmm, can you ask Todd about it?  He said he added one, but I'm not sure how 
> it works.
> 
> On Thu, Nov 12, 2015 at 5:46 PM Jason Molenda  wrote:
> Done in r252998.
> 
> I didn't see anything in the xcode project file about a gtest target.
> 
> J
> 
> > On Nov 12, 2015, at 5:39 PM, Zachary Turner via lldb-dev 
> >  wrote:
> >
> > Hi all,
> >
> > I submitted r252993 and 252994.  These changes will require a corresponding 
> > change in the Xcode workspace.  Would anyone mind making those changes for 
> > me?  It should be pretty simple, just need to add a .cpp and .h file to the 
> > gtest target for ScriptInterpreterPythonTests, and add 
> > PythonExceptionState.cpp to Plugins/ScriptInterpreter/Python
> 



Re: [lldb-dev] Benchmark tests

2015-12-09 Thread Jason Molenda via lldb-dev
FWIW, nope, I've never messed with the benchmark tests.

> On Dec 9, 2015, at 1:22 PM, Todd Fiala  wrote:
> 
> Hey Jason,
> 
> Are you the benchmark user?
> 
> -Todd
> 
> On Wed, Dec 9, 2015 at 12:32 PM, Zachary Turner via lldb-dev 
>  wrote:
> Is anyone using the benchmark tests?  None of the command line options 
> related to the benchmark tests were claimed as being used by anyone.  Which 
> makes me wonder if the tests are even being used by anyone.  
> 
> What I really want to know is: Is it really ok to delete the -x and -y 
> command line options?  And what is the status of these tests?  Does anyone 
> use them?
> 
> 
> 
> 
> 
> -- 
> -Todd



Re: [lldb-dev] lldb 340.4.119 unable to attach (El Capitan)

2015-12-29 Thread Jason Molenda via lldb-dev

> On Dec 26, 2015, at 3:53 AM, Andre Vergison via lldb-dev 
>  wrote:
> 

> I tried the above because in fact I had a process which a segmentation fault 
> 11, here’s what lldb makes out of the core dump:
>  
> txt$ lldb /cores/core.33158
> (lldb) target create "/cores/core.33158"
> warning: (x86_64) /cores/core.33158 load command 175 LC_SEGMENT_64 has a 
> fileoff + filesize (0x31c57000) that extends beyond the end of the file 
> (0x31c56000), the segment will be truncated to match
> warning: (x86_64) /cores/core.33158 load command 176 LC_SEGMENT_64 has a 
> fileoff (0x31c57000) that extends beyond the end of the file (0x31c56000), 
> ignoring this section
> Current executable set to '/cores/core.33158' (x86_64).
> (lldb)


For what it's worth, this is often a harmless warning message when debugging a 
user process core dump.  The core creator code in the kernel adds an extra 
memory segment to the core file when it writes it out.  There's a bug report 
tracking the issue but it's pretty much cosmetic so it hasn't been addressed 
yet.  Try debugging your core file and see if it works.  You may want to 
specify the name of your binary on the lldb cmd line like 'lldb a.out -c 
/tmp/core.33158'.

J



Re: [lldb-dev] lldb 340.4.119 unable to attach (El Capitan)

2015-12-29 Thread Jason Molenda via lldb-dev
 /Users/tst/a.out
>   502  3705 1   0 10:27AM ttys000  0:00.00 /Users/tst/a.out
>   502  3724 1   0 10:27AM ttys000  0:00.00 /Users/tst/a.out
> tst2$
>  
>  
> > To have lldb use the official Xcode version of lldb's debugserver (assuming 
> > you have Xcode installed and aren't trying to use just the command line 
> > tools), you should be able to build with a command line like this:
>  
> xcodebuild -scheme desktop -configuration Debug DEBUGSERVER_USE_FROM_SYSTEM=1 
> <
>  
> tst$ xcodebuild
> xcode-select: error: tool 'xcodebuild' requires Xcode, but active developer 
> directory '/Library/Developer/CommandLineTools' is a command line tools instance
> tst $
>  
> tst$ xcodebuild -scheme desktop -configuration Debug DEBUGSER
> VER_USE_FROM_SYSTEM=1
> xcode-select: error: tool 'xcodebuild' requires Xcode, but active developer 
> directory '/Library/Developer/CommandLineTools' is a command line tools instance
> tst$
>  
> > Or from within Xcode itself, locally adjust your Xcode project to set the 
> > "DEBUGSERVER_USE_FROM_SYSTEM" user variable to 1. <
>  
> Not sure what “from within Xcode itself” means (GUI?) but I tried this:
>  
> tst$ export DEBUGSERVER_USE_FROM_SYSTEM=1
> tst$ set|grep DEBUG
> DEBUGSERVER_USE_FROM_SYSTEM=1
>  
> This doesn’t seem to help.
>  
> > I'm not sure if you already did this, but you may need to turn on your dev 
> > tools security via:
> sudo DevToolSecurity --enable <
>  
> tst$ sudo find / -name DevToolSecurity
> find: /dev/fd/3: Not a directory
> find: /dev/fd/4: Not a directory
> find: /Volumes/VMware Shared Folders: Input/output error
> tst$
>  
> I have a feeling that my install is not complete. What can I do from within 
> the ssh session? As I’m remote (ssh only) I’d have to ask the local admin to 
> tweak settings using the Xcode gui, if needed. What would you suggest?
>  
> Thanks,
> Andre
>  
> From: Todd Fiala [mailto:todd.fi...@gmail.com] 
> Sent: maandag 28 december 2015 19:19
> To: Andre Vergison
> Cc: lldb-dev@lists.llvm.org
> Subject: Re: [lldb-dev] lldb 340.4.119 unable to attach (El Capitan)
>  
> Hi Andre,
>  
> On Sat, Dec 26, 2015 at 3:53 AM, Andre Vergison via lldb-dev 
>  wrote:
> Hi,
> I tried Jason Molenda’s test code on El Capitan, lldb-340.4.119 (Jason 
> Molenda via lldb-dev | 3 Oct 02:59 2015).
> I’m connected to a remote VM using ssh.
>  
> tst$ echo 'int main () { }' > /tmp/a.c
> tst$ xcrun clang /tmp/a.c -o /tmp/a.out
> tst$ xcrun lldb /tmp/a.out
> (lldb) target create "/tmp/a.out"
> Current executable set to '/tmp/a.out' (x86_64).
> (lldb) r
> error: process exited with status -1 (unable to attach)
> (lldb) run
> error: process exited with status -1 (unable to attach)
> (lldb) quit
> tst$ ps -ef|grep a.out
>   502 33174 1   0 12:20PM ttys000  0:00.00 /tmp/a.out
>   502 33187 1   0 12:20PM ttys000  0:00.00 /tmp/a.out
>  
> Just shooting in the dark, but perhaps the a.out is either not in a state 
> where it can be touched (yet), could be zombified or something.  Have you 
> tried 'sudo kill -9' on them?  Also, if you look for a debugserver or lldb in 
> the process list (either of which could be a parent of it), are they hanging 
> around?  If so, killing them might allow the a.out processes to die.
>  
> Are you using an lldb that you built?  If so, the underlying attach problem 
> could be due to some kind signing/permissions with debugserver.  To have lldb 
> use the official Xcode version of lldb's debugserver (assuming you have Xcode 
> installed and aren't trying to use just the command line tools), you should 
> be able to build with a command line like this:
>  
> xcodebuild -scheme desktop -configuration Debug DEBUGSERVER_USE_FROM_SYSTEM=1
>  
> Or from within Xcode itself, locally adjust your Xcode project to set the 
> "DEBUGSERVER_USE_FROM_SYSTEM" user variable to 1.
>  
> I'm not sure if you already did this, but you may need to turn on your dev 
> tools security via:
> sudo DevToolSecurity --enable
>  
> Let us know if that gets you any further.
>  
> Thanks!
>  
> -Todd
>  
>  



Re: [lldb-dev] lldb 340.4.119 unable to attach (El Capitan)

2016-01-05 Thread Jason Molenda via lldb-dev

> On Jan 5, 2016, at 10:17 AM, Greg Clayton via lldb-dev 
>  wrote:
> 
> 
>> So how about:
>> 
>> (lldb) run
>> error: developer mode not enabled
> 
> We should be able to do this. The main issue is detecting that the user is in 
> a remote scenario where they don't have access to the UI. A dialog box will 
> be popped up if the user is on the system, but when remotely connected we 
> would need to detect this and return a correct error. This is a little harder 
> as well because "debugserver", our GDB remote protocol debug stub, is what is 
> requesting the debugging privilege. This is a program that is spawned by LLDB 
> as a child process. But it should be able to be done somehow.


I have a low priority Todo to implement this - when I looked into it, it was 
just a few CF calls and I could retrieve whether developer mode was enabled in 
debugserver and report that to the user.


Re: [lldb-dev] No breakpoints after update

2016-02-16 Thread Jason Molenda via lldb-dev
If you can pause the remote process while debugging, I would start by seeing if 
lldb knows about any images being loaded:

(lldb) image list

I'd also try 'target list', 'platform status', and maybe 'log enable lldb dyld' 
from the start.

If lldb can't figure out where the binaries are loaded in memory, it'll never 
set breakpoints.


J


> On Feb 16, 2016, at 11:40 AM, Carlo Kok via lldb-dev 
>  wrote:
> 
> After updating lldb to latest (from ~october) i'm not getting any hits 
> anymore for any breakpoints.
> 
> I'm remote debugging from Windows to OSX, the Platform (MacOS), ABI (sysV) 
> all seems fine. the language runtime doesn't load yet but from what I've seen 
> during debugging it never actually gets a dyld breakpoint hit.
> 
> Log is here:
> 
> http://pastebin.com/raw/NyUUed0v
> 
> checked every line, didn't see anything obvious. Any hints at what I can try 
> would be appreciated.
> 
> Thanks.
> 
> -- 
> Carlo Kok
> RemObjects Software



Re: [lldb-dev] Inquiry regarding AddOneMoreFrame function in UnWindLLDB

2016-05-31 Thread Jason Molenda via lldb-dev

> On May 31, 2016, at 11:31 AM, jing...@apple.com wrote:
> 
> 
>> On May 31, 2016, at 12:52 AM, Ravitheja Addepally via lldb-dev 
>>  wrote:
>> 
>> Hello,
>>  I posted this query a while ago, i still have no answers, I am 
>> currently working on Bug 27687 (PrintStackTraces), so the reason for the 
>> failure is the erroneous unwinding of the frames from the zeroth frame. The 
>> error is not detected in AddOneMoreFrame, since it only checks for 2 more 
>> frames, if it was checking more frames in AddOneMoreFrame, it would have 
>> detected the error. Now my questions are ->
>> 
>> ->  is there any specific reason for only checking 2 frames instead 
>> of more?
> 
> The stepping machinery uses the unwinder on each stop to figure out whether 
> it has stepped in or out, which is fairly performance sensitive, so we don't 
> want AddOneMoreFrame to do more work than it has to.  


Most common case for a bad unwind, where the unwinder is stuck in a loop, is a 
single stack frame repeating.  I've seen loops as much as six frames repeating 
(which are not actually a series of recursive calls) but it's less common.
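
One way to detect such repeating-frame loops is a cycle check over the trailing 
frames.  This is a hypothetical sketch, not lldb's actual implementation (which 
deliberately checks only a couple of frames for performance); note also that 
genuine deep recursion looks identical from the outside, which is part of what 
makes this hard:

```python
# Hypothetical sketch of detecting a repeating cycle at the end of a stack
# walk.  frames is a list of (pc, cfa) tuples, innermost frame first.
def has_repeating_cycle(frames, max_cycle=6):
    n = len(frames)
    for cycle in range(1, max_cycle + 1):
        if n < 3 * cycle:
            continue            # require the pattern to appear three times
        tail = frames[-cycle:]
        if (frames[-2 * cycle:-cycle] == tail
                and frames[-3 * cycle:-2 * cycle] == tail):
            return True
    return False

# A two-frame loop repeated three times is flagged; a normal stack is not.
print(has_repeating_cycle([(0x10, 0xff0), (0x20, 0xfe0)] * 3))   # -> True
```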

> 
>> ->  Why not make the EH CFI based unwinder the default one and make the 
>> assembly the fallback?


Sources of unwind information fall into two categories.  They can describe the 
unwind state at every instruction of a function (asynchronous) or they can 
describe the unwind state only at function call boundaries (synchronous).

Think of "asynchronous" here as the fact that the debugger can interrupt the 
program at any point in time.

Most unwind information is designed for exception handling -- it is 
synchronous, it can only throw an exception in the body of the function, or an 
exception is passed up through it when it is calling another function.  

For exception handling, there is no need/requirement to describe the prologue 
or epilogue instructions, for instance.

eh_frame (and DWARF's debug_frame from which it derives) splits the difference 
and makes things quite unclear.  It is guaranteed to be correct for exception 
handling -- it is synchronous, and is valid in the middle of the function and 
when it is calling other functions -- but it is a general format that CAN be 
asynchronous if the emitter includes information about the prologue or epilogue 
or mid-function stack changes.  But eh_frame is not guaranteed to be that way, 
and in fact there's no way for it to indicate what it describes, beyond the 
required unwind info for exception handling.

On x86, gcc and clang have always described the prologue unwind info in their 
eh_frame.  gcc has recently started describing the epilogue too (clang does 
not).  There's code in lldb (e.g. 
UnwindAssembly_x86::AugmentUnwindPlanFromCallSite) written by Tong Shen when 
interning at Google which will try to detect if the eh_frame describes the 
prologue and epilogue.  If it does, it will use eh_frame for frame 0.  If it 
only describes the prologue, it will use the instruction emulation code to add 
epilogue instructions and use that at frame 0.


There are other sources of unwind information similar to eh_frame that are only 
for exception handling.  Tamas added ArmUnwindInfo last year which reads the 
.ARM.exidx unwind tables.  I added compact unwind importing - an Apple specific 
format that uses a single 4-byte word to describe the unwind state for each 
function, which can't describe anything in the prologue/epilogue.  These 
formats definitely can't be used to unwind at frame 0 because we could be 
stopped anywhere in the prologue/epilogue where they are not accurate.


It's unfortunate that eh_frame doesn't include a way for the producer to 
declare how async the unwind info is, it makes the debugger's job a lot more 
difficult.


J


Re: [lldb-dev] Inquiry regarding AddOneMoreFrame function in UnWindLLDB

2016-06-01 Thread Jason Molenda via lldb-dev
It gets so tricky!  It's hard for the unwinder to tell the difference between a 
real valid stack unwind and random data giving lots of "frames".

It sounds like the problem that needs fixing is to figure out why the assembly 
unwind is wrong for frame 0.  What do you get for 

disass -a 

image show-unwind -a 

?


> On Jun 1, 2016, at 12:56 AM, Ravitheja Addepally  
> wrote:
> 
> Ok , currently the problem that I am facing is that there are cases in which 
> eh_frame should have been used for frame 0 but it isn't and the assembly 
> unwind just gives wrong information which could only be detected if the 
> debugger tried to extract more frames. Now the usage of AddOneMoreFrame in 
> UnwindLLDB is to try to get more than one frames in the stack. I want to run 
> both the unwinders and select the one that gives more number of frames.
> 
> On Wed, Jun 1, 2016 at 12:27 AM, Jason Molenda  wrote:
> 
> > On May 31, 2016, at 11:31 AM, jing...@apple.com wrote:
> >
> >
> >> On May 31, 2016, at 12:52 AM, Ravitheja Addepally via lldb-dev 
> >>  wrote:
> >>
> >> Hello,
> >>  I posted this query a while ago, i still have no answers, I am 
> >> currently working on Bug 27687 (PrintStackTraces), so the reason for the 
> >> failure is the erroneous unwinding of the frames from the zeroth frame. 
> >> The error is not detected in AddOneMoreFrame, since it only checks for 2 
> >> more frames, if it was checking more frames in AddOneMoreFrame, it would 
> >> have detected the error. Now my questions are ->
> >>
> >> ->  is there any specific reason for only checking 2 frames 
> >> instead of more?
> >
> > The stepping machinery uses the unwinder on each stop to figure out whether 
> > it has stepped in or out, which is fairly performance sensitive, so we 
> > don't want AddOneMoreFrame to do more work than it has to.
> 
> 
> Most common case for a bad unwind, where the unwinder is stuck in a loop, is 
> a single stack frame repeating.  I've seen loops as much as six frames 
> repeating (which are not actually a series of recursive calls) but it's less 
> common.
> 
> >
> >> ->  Why not make the EH CFI based unwinder the default one and make the 
> >> assembly the fallback?
> 
> 
> Sources of unwind information fall into two categories.  They can describe 
> the unwind state at every instruction of a function (asynchronous) or they 
> can describe the unwind state only at function call boundaries (synchronous).
> 
> Think of "asynchronous" here as the fact that the debugger can interrupt the 
> program at any point in time.
> 
> Most unwind information is designed for exception handling -- it is 
> synchronous, it can only throw an exception in the body of the function, or 
> an exception is passed up through it when it is calling another function.
> 
> For exception handling, there is no need/requirement to describe the prologue 
> or epilogue instructions, for instance.
> 
> eh_frame (and DWARF's debug_frame from which it derives) splits the 
> difference and makes things quite unclear.  It is guaranteed to be correct 
> for exception handling -- it is synchronous, and is valid in the middle of 
> the function and when it is calling other functions -- but it is a general 
> format that CAN be asynchronous if the emitter includes information about the 
> prologue or epilogue or mid-function stack changes.  But eh_frame is not 
> guaranteed to be that way, and in fact there's no way for it to indicate what 
> it describes, beyond the required unwind info for exception handling.
> 
> On x86, gcc and clang have always described the prologue unwind info in their 
> eh_frame.  gcc has recently started describing the epilogue too (clang does 
> not).  There's code in lldb (e.g. 
> UnwindAssembly_x86::AugmentUnwindPlanFromCallSite) written by Tong Shen when 
> interning at Google which will try to detect if the eh_frame describes the 
> prologue and epilogue.  If it does, it will use eh_frame for frame 0.  If it 
> only describes the prologue, it will use the instruction emulation code to 
> add epilogue instructions and use that at frame 0.
> 
> 
> There are other sources of unwind information similar to eh_frame that are 
> only for exception handling.  Tamas added ArmUnwindInfo last year which reads 
> the .ARM.exidx unwind tables.  I added compact unwind importing - an Apple 
> specific format that uses a single 4-byte word to describe the unwind state 
> for each function, which can't describe anything in the prologue/epilogue.  
> These formats definitely can't be used to unwind at frame 0 because we could 
> be stopped anywhere in the prologue/epilogue where they are not accurate.
> 
> 
> It's unfortunate that eh_frame doesn't include a way for the producer to 
> declare how async the unwind info is, it makes the debugger's job a lot more 
> difficult.
> 
> 
> J
> 


Re: [lldb-dev] Inquiry regarding AddOneMoreFrame function in UnWindLLDB

2016-06-02 Thread Jason Molenda via lldb-dev
This has no eh_frame unwind instructions.  Even if we were using eh_frame at 
frame 0, you'd be out of luck.

I forget the exact order of fallbacks.  I think for frame 0 we try to use the 
assembly profile unwind ("async unwind plan") and if we can't do that we fall 
back to the eh_frame unwind ("sync unwind plan") and as a last resort we'll use 
the architecture default unwind plan.  Which, for a stack frame like this that 
doesn't do the usual push rbp; mov rsp, rbp sequence, means we'll skip at least 
one stack frame.

The assembly inspection unwind plan from AssemblyParse_x86 looks correct to me. 
This function saves some registers on the stack (all of them argument or 
volatile registers, so that's weird & the assembly profiler probably won't 
record them; whatever), calls a function, restores the register values, and 
then jumps to the function pointer returned by that first call.  Maybe this is 
some dynamic loader fixup routine for the first time an external function is 
called and the solib needs to be paged in.

You're stopped in the body of the function (offset 86) where the stack pointer 
is still as expected.  I'd have to think about that unwind entry for offset +94 
(if you were stopped on the jmp instruction) a bit more - that's a bit unusual. 
 But unless you're on the jmp, I can't see this unwind going wrong.


J

> On Jun 2, 2016, at 1:48 AM, Ravitheja Addepally  
> wrote:
> 
> Hello,
>  This is happening in TestPrintStackTraces, where we can end up here:
> ld-linux-x86-64.so.2`___lldb_unnamed_symbol95$$ld-linux-x86-64.so.2:
> 0x77df04e0 <+0>:  48 83 ec 38   subq   $0x38, %rsp
> 0x77df04e4 <+4>:  48 89 04 24   movq   %rax, (%rsp)
> 0x77df04e8 <+8>:  48 89 4c 24 08  movq   %rcx, 0x8(%rsp)
> 0x77df04ed <+13>: 48 89 54 24 10  movq   %rdx, 0x10(%rsp)
> 0x77df04f2 <+18>: 48 89 74 24 18  movq   %rsi, 0x18(%rsp)
> 0x77df04f7 <+23>: 48 89 7c 24 20  movq   %rdi, 0x20(%rsp)
> 0x77df04fc <+28>: 4c 89 44 24 28  movq   %r8, 0x28(%rsp)
> 0x77df0501 <+33>: 4c 89 4c 24 30  movq   %r9, 0x30(%rsp)
> 0x77df0506 <+38>: 48 8b 74 24 40  movq   0x40(%rsp), %rsi
> 0x77df050b <+43>: 48 8b 7c 24 38  movq   0x38(%rsp), %rdi
> 0x77df0510 <+48>: e8 4b 8f ff ff  callq  0x77de9460 ; ___lldb_unnamed_symbol54$$ld-linux-x86-64.so.2
> 0x77df0515 <+53>: 49 89 c3  movq   %rax, %r11
> 0x77df0518 <+56>: 4c 8b 4c 24 30  movq   0x30(%rsp), %r9
> 0x77df051d <+61>: 4c 8b 44 24 28  movq   0x28(%rsp), %r8
> 0x77df0522 <+66>: 48 8b 7c 24 20  movq   0x20(%rsp), %rdi
> 0x77df0527 <+71>: 48 8b 74 24 18  movq   0x18(%rsp), %rsi
> 0x77df052c <+76>: 48 8b 54 24 10  movq   0x10(%rsp), %rdx
> 0x77df0531 <+81>: 48 8b 4c 24 08  movq   0x8(%rsp), %rcx
> ->  0x77df0536 <+86>: 48 8b 04 24   movq   (%rsp), %rax
> 0x77df053a <+90>: 48 83 c4 48   addq   $0x48, %rsp
> 0x77df053e <+94>: 41 ff e3  jmpq   *%r11
> 0x77df0541 <+97>: 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00  nopw   %cs:(%rax,%rax)
> 
> 
> image show-unwind --address 0x77df0536
> UNWIND PLANS for 
> ld-linux-x86-64.so.2`___lldb_unnamed_symbol95$$ld-linux-x86-64.so.2 (start 
> addr 0x77df04e0)
> 
> Asynchronous (not restricted to call-sites) UnwindPlan is 'assembly insn 
> profiling'
> Synchronous (restricted to call-sites) UnwindPlan is 'eh_frame CFI'
> 
> Assembly language inspection UnwindPlan:
> This UnwindPlan originally sourced from assembly insn profiling
> This UnwindPlan is sourced from the compiler: no.
> This UnwindPlan is valid at all instruction locations: yes.
> Address range of this UnwindPlan: [ld-linux-x86-64.so.2..text + 
> 88576-0x00015a70)
> row[0]:0: CFA=rsp +8 => rsp=CFA+0 rip=[CFA-8] 
> row[1]:4: CFA=rsp+64 => rsp=CFA+0 rip=[CFA-8] 
> row[2]:   94: CFA=rsp -8 => rsp=CFA+0 rip=[CFA-8] 
> 
> eh_frame UnwindPlan:
> This UnwindPlan originally sourced from eh_frame CFI
> This UnwindPlan is sourced from the compiler: yes.
> This UnwindPlan is valid at all instruction locations: no.
> Address range of this UnwindPlan: [ld-linux-x86-64.so.2..text + 
> 88576-0x00015a61)
> row[0]:0: CFA=rsp+24 => rip=[CFA-8] 
> row[1]:4: CFA=rsp+80 => rip=[CF

Re: [lldb-dev] Remote Kernel Debugging using LLDB

2016-07-28 Thread Jason Molenda via lldb-dev
Hi, the KDK from Apple includes a README file (.txt or .html, I forget) which 
describes how to set up kernel debugging.  I'd start by looking at those notes. 
 There have also been WWDC sessions that talk about kernel debugging, e.g.

https://developer.apple.com/videos/play/wwdc2013/707/

(there are PDFs of the slides of the presentation - the lldb part comes at the 
end)


> On Jul 27, 2016, at 10:31 PM, hrishikesh chaudhari via lldb-dev 
>  wrote:
> 
> Hi,
> I have been trying to debug my kernel Extension. In order to enter a kernel 
> into a panic mode, I have put hard debug point using (int $3). When the 
> target system starts, the kernel waits into panic mode for debugger to attach.
> 
> Now the problem is:
> 
> What should I set target in lldb command? I have mach_kernel from KDK (kernel 
> debug kit) and also have my own kernel extension. if I set mach_kernel a 
> target I am not able put breakpoint in my kernel extension and if I make 
> target as my kernel ext ..i can put breakpoint but then after hitting 
> continue it says invalid process . So the question is how to proceed 
> after connecting the debugger in panic mode??? – hrishikesh chaudhari Jul 22 at 
> 12:52   
> Thanks
> -- 
> Hrishikesh Chaudahri
> 



Re: [lldb-dev] Remote Kernel Debugging using LLDB

2016-07-28 Thread Jason Molenda via lldb-dev
Is your kext loaded in lldb when you're connected to the kernel?  If you do 
'image list' do you see your kext there?  Does it show the dSYM, which has all 
of the debug information, also loaded for your kext?  If your kext with its 
dSYM is on the local filesystem, you can add a line to your ~/.lldbinit file 
giving it the file path to your kext,

settings set platform.plugin.darwin-kernel.kext-directories  
/your/directory/here

and lldb will index the kexts in that directory when it starts the kernel debug 
session.


> On Jul 28, 2016, at 5:55 AM, hrishikesh chaudhari via lldb-dev 
>  wrote:
> 
> Ya. I have followed the .html README file for OSX 10.9. It has given the 
> target path for lldb should be the mach_kernel in KDK.
> 
> Now my question is ... As i have put the hard debug point in my kernel 
> extension, which leads the kernel to go into panic mode and there it is 
> waiting for debugger to connect. Now i want to put the breakpoints in my 
> kernel extension. Here what should be the target for lldb command? if target 
> i put as mentioned in README file i could not put breakpoints in my Kext and 
> if i put my Kext as a target i could put the breakpoint but when i do 
> continue , lldb shows invalid process.
> 
> Help needed
> Hrishikesh
> 
> On Thu, Jul 28, 2016 at 12:47 PM, Jason Molenda  wrote:
> Hi, the KDK from Apple includes a README file (.txt or .html, I forget) which 
> describes how to set up kernel debugging.  I'd start by looking at those 
> notes.  There have also been WWDC sessions that talk about kernel debugging, 
> e.g.
> 
> https://developer.apple.com/videos/play/wwdc2013/707/
> 
> (there are PDFs of the slides of the presentation - the lldb part comes at 
> the end)
> 
> 
> > On Jul 27, 2016, at 10:31 PM, hrishikesh chaudhari via lldb-dev 
> >  wrote:
> >
> > Hi,
> > I have been trying to debug my kernel Extension. In order to enter a kernel 
> > into a panic mode, I have put hard debug point using (int $3). When the 
> > target system starts, the kernel waits into panic mode for debugger to 
> > attach.
> >
> > Now the problem is:
> >
> > What should I set target in lldb command? I have mach_kernel from KDK 
> > (kernel debug kit) and also have my own kernel extension. if I set 
> > mach_kernel a target I am not able put breakpoint in my kernel extension 
> > and if I make target as my kernel ext ..i can put breakpoint but then after 
> > hitting continue it says invalid process . So the question is how to 
> > proceed after connecting the debugger in panic mode??? – hrishikesh chaudhari 
> > Jul 22 at 12:52
> > Thanks
> > --
> > Hrishikesh Chaudahri
> >
> 
> 
> 
> 
> -- 
> Hrishikesh Chaudahri
> 



Re: [lldb-dev] Remote Kernel Debugging using LLDB

2016-07-28 Thread Jason Molenda via lldb-dev
Ah, I don't know how to do kernel debugging with macOS running under a VM.  This 
sounds more like a question for the Apple devforums; it's really a "how do I do 
kext debugging with a VM" question, not so much an lldb question.

> On Jul 28, 2016, at 7:38 PM, hrishikesh chaudhari  
> wrote:
> 
> Hi,
> I m able to do remote kernel debugging but on two physical mac 10.11 El 
> Capitan. why is it giving me problems on two VMs or 1  physical machine n one 
> VM ?? Is it necessary to have both physical machines n not VM ??
> 
> 
> On Jul 29, 2016 3:34 AM, "Jason Molenda"  wrote:
> Is your kext loaded in lldb when you're connected to the kernel?  If you do 
> 'image list' do you see your kext there?  Does it show the dSYM, which has 
> all of the debug information, also loaded for your kext?  If your kext with 
> its dSYM is on the local filesystem, you can add a line to your ~/.lldbinit 
> file giving it the file path to your kext,
> 
> settings set platform.plugin.darwin-kernel.kext-directories  
> /your/directory/here
> 
> and lldb will index the kexts in that directory when it starts the kernel 
> debug session.
> 
> 
> > On Jul 28, 2016, at 5:55 AM, hrishikesh chaudhari via lldb-dev 
> >  wrote:
> >
> > Ya. I have followed the .html README file for OSX 10.9. It has given the 
> > target path for lldb should be the mach_kernel in KDK.
> >
> > Now my question is ... As i have put the hard debug point in my kernel 
> > extension, which leads the kernel to go into panic mode and there it is 
> > waiting for debugger to connect. Now i want to put the breakpoints in my 
> > kernel extension. Here what should be the target for lldb command? if 
> > target i put as mentioned in README file i could not put breakpoints in my 
> > Kext and if i put my Kext as a target i could put the breakpoint but when i 
> > do continue , lldb shows invalid process.
> >
> > Help needed
> > Hrishikesh
> >
> > On Thu, Jul 28, 2016 at 12:47 PM, Jason Molenda  wrote:
> > Hi, the KDK from Apple includes a README file (.txt or .html, I forget) 
> > which describes how to set up kernel debugging.  I'd start by looking at 
> > those notes.  There have also been WWDC sessions that talk about kernel 
> > debugging, e.g.
> >
> > https://developer.apple.com/videos/play/wwdc2013/707/
> >
> > (there are PDFs of the slides of the presentation - the lldb part comes at 
> > the end)
> >
> >
> > > On Jul 27, 2016, at 10:31 PM, hrishikesh chaudhari via lldb-dev 
> > >  wrote:
> > >
> > > Hi,
> > > I have been trying to debug my kernel Extension. In order to enter a 
> > > kernel into a panic mode, I have put hard debug point using (int $3). 
> > > When the target system starts, the kernel waits into panic mode for 
> > > debugger to attach.
> > >
> > > Now the problem is:
> > >
> > > What should I set target in lldb command? I have mach_kernel from KDK 
> > > (kernel debug kit) and also have my own kernel extension. if I set 
> > > mach_kernel a target I am not able put breakpoint in my kernel extension 
> > > and if I make target as my kernel ext ..i can put breakpoint but then 
> > > after hitting continue it says invalid process . So the question is 
> > > how to proceed after connecting the debugger in panic mode??? – hrishikesh 
> > > chaudhari Jul 22 at 12:52
> > > Thanks
> > > --
> > > Hrishikesh Chaudahri
> > >
> >
> >
> >
> >
> > --
> > Hrishikesh Chaudahri
> >
> 



[lldb-dev] A problem with the arm64 unwind plans I'm looking at

2016-11-04 Thread Jason Molenda via lldb-dev
Hi Tamas & Pavel, I thought you might have some ideas so I wanted to show a 
problem I'm looking at right now.  The arm64 instruction unwinder forwards the 
unwind state based on branch instructions within the function.  So if one block 
of code ends in an epilogue, the next instruction (which is presumably a branch 
target) will have the correct original unwind state.  This change went into 
UnwindAssemblyInstEmulation.cpp in mid-2015 in r240533; the code it replaced was 
poorly written, and we're better off with this approach.

However I'm looking at a problem where clang will come up with a branch table 
for a bunch of case statements.  e.g. this function:

0x17df0 <+0>:   stp    x22, x21, [sp, #-0x30]!
0x17df4 <+4>:   stp    x20, x19, [sp, #0x10]
0x17df8 <+8>:   stp    x29, x30, [sp, #0x20]
0x17dfc <+12>:  add    x29, sp, #0x20   ; =0x20 
0x17e00 <+16>:  sub    sp, sp, #0x10    ; =0x10 
0x17e04 <+20>:  mov    x19, x1
0x17e08 <+24>:  mov    x20, x0
0x17e0c <+28>:  add    w21, w20, w20, lsl #2
0x17e10 <+32>:  bl     0x17f58          ; symbol stub for: getpid
0x17e14 <+36>:  add    w0, w0, w21
0x17e18 <+40>:  mov    w8, w20
0x17e1c <+44>:  cmp    w20, #0x1d       ; =0x1d 
0x17e20 <+48>:  b.hi   0x17e4c          ; <+92> at a.c:112
0x17e24 <+52>:  adr    x9, #0x90        ; switcher + 196
0x17e28 <+56>:  nop
0x17e2c <+60>:  ldrsw  x8, [x9, x8, lsl #2]
0x17e30 <+64>:  add    x8, x8, x9
0x17e34 <+68>:  br     x8
0x17e38 <+72>:  sub    sp, x29, #0x20   ; =0x20 
0x17e3c <+76>:  ldp    x29, x30, [sp, #0x20]
0x17e40 <+80>:  ldp    x20, x19, [sp, #0x10]
0x17e44 <+84>:  ldp    x22, x21, [sp], #0x30
0x17e48 <+88>:  ret
0x17e4c <+92>:  add    w0, w0, #0x1     ; =0x1 
0x17e50 <+96>:  b      0x17e38          ; <+72> at a.c:115
0x17e54 <+100>: orr    w8, wzr, #0x7
0x17e58 <+104>: str    x8, [sp, #0x8]
0x17e5c <+108>: sxtw   x8, w19
0x17e60 <+112>: str    x8, [sp]
0x17e64 <+116>: adr    x0, #0x148       ; "%c %d\n"
0x17e68 <+120>: nop
0x17e6c <+124>: bl     0x17f64          ; symbol stub for: printf
0x17e70 <+128>: sub    sp, x29, #0x20   ; =0x20 
0x17e74 <+132>: ldp    x29, x30, [sp, #0x20]
0x17e78 <+136>: ldp    x20, x19, [sp, #0x10]
0x17e7c <+140>: ldp    x22, x21, [sp], #0x30
0x17e80 <+144>: b      0x17f38          ; f3 at b.c:4
0x17e84 <+148>: sxtw   x8, w19
0x17e88 <+152>: str    x8, [sp]
0x17e8c <+156>: adr    x0, #0x127       ; "%c\n"
0x17e90 <+160>: nop
0x17e94 <+164>: bl     0x17f64          ; symbol stub for: printf
0x17e98 <+168>: bl     0x17f40          ; f4 at b.c:7
0x17e9c <+172>: sxtw   x8, w19
0x17ea0 <+176>: str    x8, [sp]
0x17ea4 <+180>: adr    x0, #0x10f       ; "%c\n"
0x17ea8 <+184>: nop
0x17eac <+188>: bl     0x17f64          ; symbol stub for: printf
0x17eb0 <+192>: bl     0x17f4c          ; symbol stub for: abort


It loads data from the jump table and branches to the correct block in the +52 
.. +68 instructions.  We have epilogues at 88, 144, and 192.  And we get an 
unwind plan like

row[0]:    0: CFA=sp +0 =>
row[1]:    4: CFA=sp+48 => x21=[CFA-40] x22=[CFA-48]
row[2]:    8: CFA=sp+48 => x19=[CFA-24] x20=[CFA-32] x21=[CFA-40] x22=[CFA-48]
row[3]:   12: CFA=sp+48 => x19=[CFA-24] x20=[CFA-32] x21=[CFA-40] x22=[CFA-48] fp=[CFA-16] lr=[CFA-8]
row[4]:   20: CFA=sp+64 => x19=[CFA-24] x20=[CFA-32] x21=[CFA-40] x22=[CFA-48] fp=[CFA-16] lr=[CFA-8]
row[5]:   80: CFA=sp+64 => x19=[CFA-24] x20=[CFA-32] x21=[CFA-40] x22=[CFA-48] fp=<same> lr=<same>
row[6]:   84: CFA=sp+64 => x19=<same> x20=<same> x21=[CFA-40] x22=[CFA-48] fp=<same> lr=<same>
row[7]:   88: CFA=sp +0 => x19=<same> x20=<same> x21=<same> x22=<same> fp=<same> lr=<same>
row[8]:   92: CFA=sp+64 => x19=[CFA-24] x20=[CFA-32] x21=[CFA-40] x22=[CFA-48] fp=[CFA-16] lr=[CFA-8]
row[9]:  108: CFA=sp+64 => x8=[CFA-56] x19=[CFA-24] x20=[CFA-32] x21=[CFA-40] x22=[CFA-48] fp=[CFA-16] lr=[CFA-8]
row[10]:  136: CFA=sp+64 => x8=[CFA-56] x19=[CFA-24] x20=[CFA-32] x21=[CFA-40] x22=[CFA-48] fp=<same> lr=<same>
row[11]:  140: CFA=sp+64 => x8=[CFA-56] x19=<same> x20=<same> x21=[CFA-40] x22=[CFA-48] fp=<same> lr=<same>
row[12]:  144: CFA=sp +0 => x8=[CFA-56] x19=<same> x20=<same> x21=<same> x22=<same> fp=<same> lr=<same>

where we have no unwind state for the range 148..192 (I complicated it a little 
by calling a noreturn function that ended up being the last one -- that's why 
it doesn't do an epilogue sequence at the very end of the function).
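For reference, source along these lines is the kind of thing clang lowers to such a branch table.  This is a hypothetical reconstruction (the original a.c isn't shown, and the getpid() call is replaced by a parameter to keep the sketch deterministic); with a few dozen dense case values, clang emits the adr/ldrsw/br dispatch seen at +52..+68 rather than a chain of compares:

```cpp
#include <cstdio>

// Hypothetical reconstruction of the 'switcher' function disassembled above.
// A switch with many dense case values is lowered to a jump table: the
// selector indexes a table of offsets, and control reaches each case block
// only through the indirect 'br x8'.
int switcher(int pid, int sel, int ch) {
  int v = pid + sel * 5;
  switch (sel) {
  case 0:
    return v + 1;                 // shares the epilogue block at +72..+88
  case 1:
    printf("%c %d\n", ch, v);     // the "%c %d\n" block, tail-calling f3
    return v;
  case 2:
    printf("%c\n", ch);           // one of the "%c\n" blocks
    return v * 2;
  // ... cases 3..29 elided; dense values keep the table form ...
  default:
    return v + 1;                 // the b.hi fallthrough target at +92
  }
}
```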


I'm not sure how we should address this one - our branch-target approach can't 
do the right thing here, there is no indication (for lldb) of t

Re: [lldb-dev] A problem with the arm64 unwind plans I'm looking at

2016-11-08 Thread Jason Molenda via lldb-dev
Yeah I was thinking that maybe if we spot an epilogue instruction (ret, b 
), and the next instruction doesn't have a reinstated 
register context, we could backtrack to the initial register context of this 
block of instructions (and if it's not the beginning of the function), 
re-instate that register context for the next instruction.

It doesn't help if we have a dynamic dispatch after the initial part of the 
function.  For that, we'd need to do something like your suggestion of finding 
the biggest collection of register saves.

e.g. if I rearrange/modify my example function a little to make it more 
interesting (I didn't fix up the +offsets)

prologue:
> 0x17df0 <+0>:   stp    x22, x21, [sp, #-0x30]!
> 0x17df4 <+4>:   stp    x20, x19, [sp, #0x10]
> 0x17df8 <+8>:   stp    x29, x30, [sp, #0x20]
> 0x17dfc <+12>:  add    x29, sp, #0x20   ; =0x20

direct branch:
> 0x17e1c <+44>:  cmp    w20, #0x1d       ; =0x1d
> 0x17e20 <+48>:  b.hi   0x17e4c          ; <+92>  { block #3 }

dynamic dispatch:
> 0x17e24 <+52>:  adr    x9, #0x90        ; switcher + 196
> 0x17e28 <+56>:  nop
> 0x17e2c <+60>:  ldrsw  x8, [x9, x8, lsl #2]
> 0x17e30 <+64>:  add    x8, x8, x9
> 0x17e34 <+68>:  br     x8

block #1
> 0x17e9c <+172>: sxtw   x8, w19
> 0x17ea0 <+176>: str    x8, [sp]
> 0x17ea4 <+180>: adr    x0, #0x10f       ; "%c\n"
> 0x17ea8 <+184>: nop
> 0x17eac <+188>: bl     0x17f64          ; symbol stub for: printf
> 0x17e70 <+128>: sub    sp, x29, #0x20   ; =0x20
> 0x17e74 <+132>: ldp    x29, x30, [sp, #0x20]
> 0x17e78 <+136>: ldp    x20, x19, [sp, #0x10]
> 0x17e7c <+140>: ldp    x22, x21, [sp], #0x30
> 0x17eb0 <+192>: b      0x17f4c          ; symbol stub for: abort

block #2
> 0x17e38 <+72>:  sub    sp, x29, #0x20   ; =0x20
> 0x17e3c <+76>:  ldp    x29, x30, [sp, #0x20]
> 0x17e40 <+80>:  ldp    x20, x19, [sp, #0x10]
> 0x17e44 <+84>:  ldp    x22, x21, [sp], #0x30
> 0x17e48 <+88>:  ret


block #3
> 0x17e4c <+92>:  add    w0, w0, #0x1     ; =0x1
> 0x17e50 <+96>:  b      0x17e38          ; <+72> at a.c:115
> 0x17e54 <+100>: orr    w8, wzr, #0x7
> 0x17e58 <+104>: str    x8, [sp, #0x8]
> 0x17e5c <+108>: sxtw   x8, w19
> 0x17e60 <+112>: str    x8, [sp]
> 0x17e64 <+116>: adr    x0, #0x148       ; "%c %d\n"
> 0x17e68 <+120>: nop
> 0x17e6c <+124>: bl     0x17f64          ; symbol stub for: printf
> 0x17e70 <+128>: sub    sp, x29, #0x20   ; =0x20
> 0x17e74 <+132>: ldp    x29, x30, [sp, #0x20]
> 0x17e78 <+136>: ldp    x20, x19, [sp, #0x10]
> 0x17e7c <+140>: ldp    x22, x21, [sp], #0x30
> 0x17e80 <+144>: b      0x17f38          ; f3 at b.c:4

block #4
> 0x17e38 <+72>:  sub    sp, x29, #0x20   ; =0x20
> 0x17e3c <+76>:  ldp    x29, x30, [sp, #0x20]
> 0x17e40 <+80>:  ldp    x20, x19, [sp, #0x10]
> 0x17e44 <+84>:  ldp    x22, x21, [sp], #0x30
> 0x17e48 <+88>:  ret

First, an easy one: when we get to the first instruction of 'block #4', we've 
seen a complete epilogue ending in 'b other-function', and the first instruction 
of block #4 is not directly branched to.  If we look back for the most recent 
direct branch target (the first instruction of 'block #3' was conditionally 
branched to), we can reuse that register context for block #4.  This could 
easily go wrong for hand-written assembly, where you might undo the stack state 
part-way and then branch to another part of the function, but I doubt compiler 
generated code is ever going to do that.

Second, a trickier one: When we get to the first instruction of 'block #2', we 
have no previous branch target register context to re-instate.  We could look 
further into the function (to block target #3 again) and reuse that register 
state, the assumption being that a function has one prologue that sets up the 
complete register state and then doesn't change anything outside mid-function 
epilogues.  I'm not opposed to that idea.  The other way would be to look 
backwards in the instruction stream for the row with the most registers saved, 
as you suggested, maybe reusing the earliest one if there are multiple entries 
with the same # of registers (this would need to ignore IsSame registers).
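The "most registers saved" heuristic above could be sketched like this.  The types are hypothetical stand-ins, not lldb's actual UnwindPlan API:

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Hypothetical stand-in for an UnwindPlan row: the set of registers
// saved to the stack at a given instruction offset (IsSame registers
// would already be excluded from saved_regs).
struct Row {
  uint32_t offset;
  std::set<int> saved_regs;
};

// Return the earliest row (lowest offset) that saves the most registers;
// this is the register context we would re-instate at an orphaned block.
// Using strict '>' while scanning in offset order keeps the earliest row
// when several rows tie on the number of saved registers.
const Row *RowWithMostSaves(const std::vector<Row> &rows) {
  const Row *best = nullptr;
  for (const Row &r : rows)
    if (!best || r.saved_regs.size() > best->saved_regs.size())
      best = &r;
  return best;
}
```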


Let me see if I can code something along these lines and we can look at how 
that turns out.



> On Nov 7, 2016, at 8:10 AM, Tamas Berghammer  wrote:
> 
> Hi Jason,
> 
> I thought about this situation when implemented the original branch following 
> code and haven't been able to come up with a really good solution.
> 
> My only idea is the same what you mentioned. We should try to recognize all 
> unconditional branche

Re: [lldb-dev] A problem with the arm64 unwind plans I'm looking at

2016-11-09 Thread Jason Molenda via lldb-dev
I like that idea.  A bunch of other work just landed on my desk so it might be 
a bit before I do it, but I'll see how that patch looks.

> On Nov 9, 2016, at 3:54 AM, Tamas Berghammer  wrote:
> 
> Based on your comments I have one more idea for a good heuristic. What if we 
> detect a dynamic branch (e.g. "br ", "tbb ...", etc...) and store the 
> register state for that place. Then when we find a block with no unwind info 
> for the first instruction then we use the one we saved for the dynamic branch 
> (as we know that the only way that block can be reached is through a dynamic 
> branch). If there is exactly 1 dynamic branch in the code then this should 
> give us the "perfect" result while if we have multiple dynamic branches then 
> we will pick one "randomly" but for compiler generated code I think it will 
> be good enough. The only tricky case is if we fail to detect the dynamic 
> branch but that should be easy to fix as we already track every branch on ARM 
> (for single stepping) and doing it on AArch64 should be easy as well.
> 
> On Tue, Nov 8, 2016 at 11:10 PM Jason Molenda  wrote:
> Yeah I was thinking that maybe if we spot an epilogue instruction (ret, b 
> ), and the next instruction doesn't have a reinstated 
> register context, we could backtrack to the initial register context of this 
> block of instructions (and if it's not the beginning of the function), 
> re-instate that register context for the next instruction.
> 
> It doesn't help if we have a dynamic dispatch after the initial part of the 
> function.  For that, we'd need to do something like your suggestion of 
> finding the biggest collection of register saves.
> 
> e.g. if I rearrange/modify my example function a little to make it more 
> interesting (I didn't fix up the +offsets)
> 
> prologue:
> > 0x17df0 <+0>:   stp    x22, x21, [sp, #-0x30]!
> > 0x17df4 <+4>:   stp    x20, x19, [sp, #0x10]
> > 0x17df8 <+8>:   stp    x29, x30, [sp, #0x20]
> > 0x17dfc <+12>:  add    x29, sp, #0x20   ; =0x20
> 
> direct branch:
> > 0x17e1c <+44>:  cmp    w20, #0x1d       ; =0x1d
> > 0x17e20 <+48>:  b.hi   0x17e4c          ; <+92>  { block #3 }
> 
> dynamic dispatch:
> > 0x17e24 <+52>:  adr    x9, #0x90        ; switcher + 196
> > 0x17e28 <+56>:  nop
> > 0x17e2c <+60>:  ldrsw  x8, [x9, x8, lsl #2]
> > 0x17e30 <+64>:  add    x8, x8, x9
> > 0x17e34 <+68>:  br     x8
> 
> block #1
> > 0x17e9c <+172>: sxtw   x8, w19
> > 0x17ea0 <+176>: str    x8, [sp]
> > 0x17ea4 <+180>: adr    x0, #0x10f       ; "%c\n"
> > 0x17ea8 <+184>: nop
> > 0x17eac <+188>: bl     0x17f64          ; symbol stub for: printf
> > 0x17e70 <+128>: sub    sp, x29, #0x20   ; =0x20
> > 0x17e74 <+132>: ldp    x29, x30, [sp, #0x20]
> > 0x17e78 <+136>: ldp    x20, x19, [sp, #0x10]
> > 0x17e7c <+140>: ldp    x22, x21, [sp], #0x30
> > 0x17eb0 <+192>: b      0x17f4c          ; symbol stub for: abort
> 
> block #2
> > 0x17e38 <+72>:  sub    sp, x29, #0x20   ; =0x20
> > 0x17e3c <+76>:  ldp    x29, x30, [sp, #0x20]
> > 0x17e40 <+80>:  ldp    x20, x19, [sp, #0x10]
> > 0x17e44 <+84>:  ldp    x22, x21, [sp], #0x30
> > 0x17e48 <+88>:  ret
> 
> 
> block #3
> > 0x17e4c <+92>:  add    w0, w0, #0x1     ; =0x1
> > 0x17e50 <+96>:  b      0x17e38          ; <+72> at a.c:115
> > 0x17e54 <+100>: orr    w8, wzr, #0x7
> > 0x17e58 <+104>: str    x8, [sp, #0x8]
> > 0x17e5c <+108>: sxtw   x8, w19
> > 0x17e60 <+112>: str    x8, [sp]
> > 0x17e64 <+116>: adr    x0, #0x148       ; "%c %d\n"
> > 0x17e68 <+120>: nop
> > 0x17e6c <+124>: bl     0x17f64          ; symbol stub for: printf
> > 0x17e70 <+128>: sub    sp, x29, #0x20   ; =0x20
> > 0x17e74 <+132>: ldp    x29, x30, [sp, #0x20]
> > 0x17e78 <+136>: ldp    x20, x19, [sp, #0x10]
> > 0x17e7c <+140>: ldp    x22, x21, [sp], #0x30
> > 0x17e80 <+144>: b      0x17f38          ; f3 at b.c:4
> 
> block #4
> > 0x17e38 <+72>:  sub    sp, x29, #0x20   ; =0x20
> > 0x17e3c <+76>:  ldp    x29, x30, [sp, #0x20]
> > 0x17e40 <+80>:  ldp    x20, x19, [sp, #0x10]
> > 0x17e44 <+84>:  ldp    x22, x21, [sp], #0x30
> > 0x17e48 <+88>:  ret
> 
> First, an easy one:  When we get to the first instruction of 'block #4', 
> we've seen a complete epilogue ending in 'B other-function' and the first 
> instruction of block #4 is not branched to.  If we find the previous direct 
> branch target -- to the first instruction of 'block #3' was conditionally 
> branched to, we reuse that register context for block #4.  This could easily 
> go wrong for hand-written assembly where yo

Re: [lldb-dev] Bug in StackFrame::UpdateCurrentFrameFromPreviousFrame

2016-11-14 Thread Jason Molenda via lldb-dev
Looks incorrect to me.  It was introduced with this change.  Adding Greg.


Author: Greg Clayton 
Date:   Fri Aug 27 21:47:54 2010 +

Made it so we update the current frames from the previous frames by doing 
STL
swaps on the variable list, value object list, and disassembly. This avoids
us having to try and update frame indexes and other things that were getting
out of sync.



git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@112301 
91177308-0d34-0410-b5e6-96231b3b80d8



> On Nov 13, 2016, at 4:48 PM, Zachary Turner  wrote:
> 
> I was going through doing some routine StringRef changes and I ran across 
> this function:
> 
>   std::lock_guard guard(m_mutex);
>   assert(GetStackID() ==
>  prev_frame.GetStackID()); // TODO: remove this after some testing
>   m_variable_list_sp = prev_frame.m_variable_list_sp;
>   
> m_variable_list_value_objects.Swap(prev_frame.m_variable_list_value_objects);
>   if (!m_disassembly.GetString().empty()) {
> m_disassembly.Clear();
> m_disassembly.GetString().swap(m_disassembly.GetString());
>   }
> 
> Either I'm crazy or that last line is a bug.  Is it supposed to be 
> prev_frame.m_disassembly.GetString()?
> 
> What would the implications of this bug be?  i.e. how can we write a test for 
> this?
> 
> Also, as a matter of curiosity, why is it swapping?  That means it's 
> modifying the input frame, when it seems like it really should just be 
> modifying the current frame.



Re: [lldb-dev] Bug in StackFrame::UpdateCurrentFrameFromPreviousFrame

2016-11-14 Thread Jason Molenda via lldb-dev
For reference, the original code that Greg wrote in r112301 was

+if (!m_disassembly.GetString().empty())
+m_disassembly.GetString().swap (m_disassembly.GetString());
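A quick illustration of why the self-swap is a no-op, using a plain std::string in place of lldb's StreamString (the function name and 'buggy' flag are illustrative, not lldb code):

```cpp
#include <string>

// Model of the frame-update step: prev_disassembly is the cached
// disassembly from the previous stop's frame; the new frame starts with
// an empty cache.  Returns the new frame's cache after the update.
std::string UpdateDisassemblyCache(std::string &prev_disassembly, bool buggy) {
  std::string cur;  // newly created frame: empty cache
  if (buggy)
    cur.swap(cur);              // self-swap, as written: a no-op, cur stays empty
  else
    cur.swap(prev_disassembly); // presumably intended: transfers the cache
  return cur;
}
```

So with the bug, the previous frame's cached disassembly is never carried forward, and (as Jim notes) it just gets regenerated on demand at every stop.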




> On Nov 14, 2016, at 1:44 PM, Zachary Turner  wrote:
> 
> If the swap is correct, then wouldn't we also need to swap the variable list?
> 
> On Mon, Nov 14, 2016 at 10:58 AM Jim Ingham  wrote:
> 
> > On Nov 13, 2016, at 4:48 PM, Zachary Turner via lldb-dev 
> >  wrote:
> >
> > I was going through doing some routine StringRef changes and I ran across 
> > this function:
> >
> >   std::lock_guard guard(m_mutex);
> >   assert(GetStackID() ==
> >  prev_frame.GetStackID()); // TODO: remove this after some testing
> >   m_variable_list_sp = prev_frame.m_variable_list_sp;
> >   
> > m_variable_list_value_objects.Swap(prev_frame.m_variable_list_value_objects);
> >   if (!m_disassembly.GetString().empty()) {
> > m_disassembly.Clear();
> > m_disassembly.GetString().swap(m_disassembly.GetString());
> >   }
> >
> > Either I'm crazy or that bolded line is a bug.  Is it supposed to be 
> > prev_frame.m_disassembly.GetString()?
> >
> > What would the implications of this bug be?  i.e. how can we write a test 
> > for this?
> >
> > Also, as a matter of curiosity, why is it swapping?  That means it's 
> > modifying the input frame, when it seems like it really should just be 
> > modifying the current frame.
> 
> What lldb does is store the stack frame list it calculated from a previous 
> stop, and copy as much as is relevant into the new stack frame when it stops, 
> which will then become the stack frame list that gets used.  So this is a 
> transfer of information from the older stop's stack frame to the new one.  
> Thus the swap.
> 
> To be clear, current here means "the stack frame we are calculating from this 
> stop" and previous here means "the stack frame from the last stop".  That's 
> confusing because previous & next also get used for up and down the current 
> stack frame list.  That's why I always try to use "younger" and "older" for 
> ordering in one stack (that and it makes the ordering unambiguous.)
> 
> So while this is definitely a bug, this is just going to keep the frames in 
> the newly calculated stack frame list from taking advantage of any 
> disassembly that was done on frames from the previous stop.  Since this will 
> get created on demand if left empty, it should have no behavioral effect.  To 
> test this you would have to count the number of times you disassembled the 
> code for a given frame.  If this were working properly, you'd only do it once 
> for the time that frame lived on the stack.  With this bug you will do it 
> every time you stop and ask for disassembly for this frame.
> 
> Jim
> 
> 
> 



Re: [lldb-dev] logging in lldb

2016-12-15 Thread Jason Molenda via lldb-dev
Hi Pavel, sorry for not keeping up with the thread, I've been super busy all 
this week.  I'm not going to object to where this proposal has ended up.  I 
personally have a preference for the old system but not based on any 
justifiable reasons.


> On Dec 15, 2016, at 7:13 AM, Pavel Labath  wrote:
> 
> Just to let you know, I will be on vacation until the end of the year,
> so probably will not respond to any comments until January. If you
> have any concerns, do let me know, as I'd like to get started when I
> get back.
> 
> pl
> 
> On 13 December 2016 at 16:32, Pavel Labath  wrote:
>> Hello again,
>> 
>> I'd like to get back to the question of unifying llvm's and lldb's logging
>> mechanisms that Chris asked about. In the way these two are
>> implemented now, they have a number of similarities, but also a number
>> of differences. Among the differences, there is one that I think will
>> be most painful to resolve, so I'll start with that one:
>> 
>> I am talking about how to disable logging at compile-time. Currently,
>> llvm's logging mechanism can be controlled both at runtime and
>> compile-time. lldb's can be only controlled at runtime. While we may
>> not want/need the compile-time knob, it is a very hard requirement for
>> llvm, which tries to squeeze every ounce of performance from the
>> hardware. So, if we are going to have a common logging API, we will
>> need to support being compiled without it.
>> 
>> This has impact on the kind of syntax we are able to use. I see two
>> problems here.
>> 
>> 1. The first one is that our log patterns are split into independent
>> parts. Currently the pattern is:
>> Log *log = GetLogIf(Flag);
>> ...
>> if (log) log->Printf(...);
>> 
>> The API we (mostly?) converged to above is:
>> Log *log = GetLogIf(Flag);
>> ...
>> LLDB_LOG(log, ...);
>> 
>> If we want to compile the logging away, getting rid of the second part
>> is easy, as it is already a macro. However, for a completely clean
>> compile, we would need to remove the first part as well. Since
>> wrapping it in #ifdef would be too ugly, I think the easiest solution
>> would be to just make it go away completely.
>> 
>> The way I understand it, the reason we do it in two steps now is to
>> make the code fast if logging is off. My proposal here would be to
>> make the code very fast even without the additional local variable. If
>> we could use the macro like:
>> LLDB_LOG(Flag, ...)
>> where the macro would expand to something like:
>> if (LLVM_UNLIKELY(Flag & lldb_private::enabled_channels)) log_stuff(...)
>> where `enabled_channels` is just a global integral variable then the
>> overhead of a disabled log statement would be three instructions
>> (load, and, branch), some of which could be reused if we had more
>> logging statements in a function. Plus the macro could hint the cpu
>> and compiler to optimize for the "false" case. This is still an
>> increase over the status quo, where the overhead of a log statement is
>> one or two instructions, but I am not sure if this is significant.
>> 
>> 2. The second, and probably bigger, issue is the one mentioned by
>> Jason earlier in this thread -- the ability to insert random code into
>> if(log) blocks. Right now, writing the following is easy:
>> if (log) {
>>  do_random_stuff();
>>  log->Printf(...);
>> }
>> 
>> In the first version of the macro, this is still easy to write, as we
>> don't have to worry about compile-time. But if we need this to go
>> away, we will need to resort to the same macro approach as llvm:
>> LLDB_DEBUG( { do_random_stuff(); LLDB_LOG(...); });
>> Which has all the disadvantages Jason mentioned. Although, I don't
>> think this has to be that bad, as we probably will not be doing this
>> very often, and the issues can be mitigated by putting the actual code
>> in a function, and only putting the function calls inside the macro.
>> 
>> 
>> 
>> So, my question to you is: Do you think these two drawbacks are worth
>> sacrificing for the sake of having a unified llvm-wide logging
>> infrastructure? I am willing to drive this, and implement the llvm
>> side of things, but I don't want to force this onto everyone, if it is
>> not welcome. If you do not think this is a worthwhile investment then
>> I'd rather proceed with the previous lldb-only solution we discussed
>> above, as that is something I am more passionate about, will already
>> be a big improvement, and a good stepping stone towards implementing
>> an llvm-wide solution in the future.
>> 
>> Of course, if you have alternative proposals on how to implement
>> llvm-wide logging, I'd love to hear about it.
>> 
>> Let me know what you think,
>> pavel
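Pavel's proposed macro from point 1 above might look roughly like this.  This is a sketch of the idea, not a final API: the names `enabled_channels` and `LLDB_LOG` come from his description, `log_stuff`, `log_calls`, and the `LLDB_DISABLE_LOGGING` guard are illustrative, and `__builtin_expect` is the GCC/Clang builtin behind `LLVM_UNLIKELY`:

```cpp
#include <cstdio>

// Illustrative global bitmask of enabled log channels.
unsigned enabled_channels = 0;

int log_calls = 0;  // counter, just to demonstrate when logging fires

void log_stuff(const char *msg) {
  ++log_calls;
  std::puts(msg);
}

#ifdef LLDB_DISABLE_LOGGING
// Compile-time off: the whole statement vanishes.
#define LLDB_LOG(flag, msg) do { } while (0)
#else
// Runtime check: a load, an and, and a branch, hinted as usually-false
// so the disabled-logging path stays cheap.
#define LLDB_LOG(flag, msg)                                            \
  do {                                                                 \
    if (__builtin_expect(((flag) & enabled_channels) != 0, 0))         \
      log_stuff(msg);                                                  \
  } while (0)
#endif
```

This keeps the two-part GetLogIf/Printf pattern down to a single statement at each call site, at the cost of the extra load-and-branch Pavel mentions.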



Re: [lldb-dev] Is anything using the REPL?

2017-03-21 Thread Jason Molenda via lldb-dev
It's used in the swift lldb, https://github.com/apple/swift-lldb

The idea is to keep all non-Swift-specific code at llvm.org; the GitHub 
repository holds the Swift-specific additions.


> On Mar 21, 2017, at 6:19 PM, Zachary Turner via lldb-dev 
>  wrote:
> 
> AFAICT this is all dead code.  Unless someone is using it out of tree?  There 
> is a way to register repl support for various languages, but no code in tree 
> is actually doing this.  It's possible I'm just not finding the code though.
> 
> It appears this code was all added about 18 months ago, and if it hasn't 
> found any use in that time frame, it would be great to remove it to reduce 
> technical debt.
> 
> That said, if it's actually being used in tree somewhere and I'm just 
> overlooking it, let me know.



Re: [lldb-dev] Is anything using the REPL?

2017-03-21 Thread Jason Molenda via lldb-dev
I don't follow REPL issues very closely, but I think some people may have hopes 
of doing a REPL in a language other than Swift in the future, which is why it 
was upstreamed to llvm.org.

J

> On Mar 21, 2017, at 6:34 PM, Zachary Turner  wrote:
> 
> Thanks, I had a suspicion it might be used in Swift.  Given that swift seems 
> to be the only consumer, and there are no plans for support for any other 
> languages, would it be reasonable to say that it's a swift-specific addition 
> and could be in the swift repo?
> 
> If not, I will need to come up with a good way to get REPL.h to not #include 
> code from the source tree (and ideally, not #include code from Commands at 
> all).
> 
> On Tue, Mar 21, 2017 at 6:24 PM Jason Molenda  wrote:
> It's used in the swift lldb, https://github.com/apple/swift-lldb
> 
> The idea is to keep all non-Swift-specific code at llvm.org; the GitHub 
> repository holds the Swift-specific additions.
> 
> 
> > On Mar 21, 2017, at 6:19 PM, Zachary Turner via lldb-dev 
> >  wrote:
> >
> > AFAICT this is all dead code.  Unless someone is using it out of tree?  
> > There is a way to register repl support for various languages, but no code 
> > in tree is actually doing this.  It's possible I'm just not finding the 
> > code though.
> >
> > It appears this code was all added about 18 months ago, and if it hasn't 
> > found any use in that time frame, it would be great to remove it to reduce 
> > technical debt.
> >
> > That said, if it's actually being used in tree somewhere and I'm just 
> > overlooking it, let me know.
> 



Re: [lldb-dev] LLDB performance drop from 3.9 to 4.0

2017-04-12 Thread Jason Molenda via lldb-dev
I don't know exactly when the 3.9 / 4.0 branches were cut, and what was done 
between those two points, but in general we don't expect/want to see 
performance regressions like that.  I'm more familiar with the perf 
characteristics on macos, Linux is different in some important regards, so I 
can only speak in general terms here.

In your example, you're measuring three things, assuming you have debug 
information for MY_PROGRAM.  The first is "Do the initial read of the main 
binary and its debug information".  The second is "Find all symbol names 
'main'".  The third is "Scan a newly loaded solib's symbols" (assuming you 
don't have debug information from solibs from /usr/lib etc).  Technically 
there's some additional stuff here -- launching the process, detecting solibs 
as they're loaded, looking up the symbol context when we hit the breakpoint, 
backtracing a frame or two, etc, but that stuff is rarely where you'll see perf 
issues on a local debug session.

Which of these is likely to be important will depend on your MY_PROGRAM.  If 
you have a 'int main(){}', it's not going to be dwarf parsing.  If your binary 
only pulls in three solibs by the time it is running, it's not going to be new 
module scanning. A popular place to spend startup time is in C++ name 
demangling if you have a lot of solibs with C++ symbols.


On Darwin systems, we have a nonstandard accelerator table in our DWARF emitted 
by clang that lldb reads.  The "apple_types", "apple_names" etc tables.  So 
when we need to find a symbol named "main", for Modules that have a SymbolFile, 
we can look in the accelerator table.  If that SymbolFile has a 'main', the 
accelerator table gives us a reference into the DWARF for the definition, and 
we can consume the DWARF lazily.  We should never need to do a full scan over 
the DWARF, that's considered a failure.
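The effect of a name-keyed accelerator table can be sketched in a few lines (a toy illustration only, not lldb's implementation; the index contents are made up): the table maps a symbol name straight to the offsets of the DIEs that define it, so only those DIEs ever get parsed.

```python
# Hypothetical pre-built index, as an apple_names-style hash table
# would provide: name -> list of DIE offsets.
accel_index = {
    "main": [0x2B],
    "helper": [0x9F, 0x1C4],
}

parsed_dies = []  # record which DIEs we actually had to parse

def parse_die(offset):
    """Stand-in for lazily parsing a single DIE at a DWARF offset."""
    parsed_dies.append(offset)
    return {"offset": offset}

def lookup(name):
    # O(1) hash lookup instead of walking every DIE in every CU --
    # a full scan is never needed.
    return [parse_die(off) for off in accel_index.get(name, [])]

dies = lookup("main")
print(len(dies), len(parsed_dies))  # 1 1
```

A miss (`lookup("nosuch")`) parses nothing at all, which is exactly the property a full DWARF scan lacks.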

(In fact, I'm working on a branch of the llvm.org sources from mid-October, and 
I suspect Darwin lldb is often consuming a LOT more DWARF than it should be 
when I'm debugging; I need to figure out what is causing that, as it's a big 
problem.)


In general, I've been wanting to add a new "perf counters" infrastructure & 
testsuite to lldb, but haven't had time.  One thing I work on a lot is 
debugging over a bluetooth connection; it turns out that BT is very slow, and 
any extra packets we send between lldb and debugserver are very costly.  The 
communication is so fast over a local host, or over a usb cable, that it's easy 
for regressions to sneak in without anyone noticing.  So the original idea was 
hey, we can have something that counts packets for distinct operations.  Like, 
this "next" command should take no more than 40 packets, that kind of thing.  
And it could be expanded -- "b main should fully parse the DWARF for only 1 
symbol", or "p *this should only look up 5 types", etc.




> On Apr 12, 2017, at 11:26 AM, Scott Smith via lldb-dev 
>  wrote:
> 
> I worked on some performance improvements for lldb 3.9, and was about to 
> forward port them so I can submit them for inclusion, but I realized there 
> has been a major performance drop from 3.9 to 4.0.  I am using the official 
> builds on an Ubuntu 16.04 machine with 16 cores / 32 hyperthreads.
> 
> Running: time lldb-4.0 -b -o 'b main' -o 'run' MY_PROGRAM > /dev/null
> 
> With 3.9, I get:
> real0m31.782s
> user0m50.024s
> sys0m4.348s
> 
> With 4.0, I get:
> real0m51.652s
> user1m19.780s
> sys0m10.388s
> 
> (with my changes + 3.9, I got real down to 4.8 seconds!  But I'm not 
> convinced you'll like all the changes.)
> 
> Is this expected?  I get roughly the same results when compiling llvm+lldb 
> from source.
> 
> I guess I can spend some time trying to bisect what happened.  5.0 looks to 
> be another 8% slower.
> 


Re: [lldb-dev] gdb-remote incompatibility with gdbserver?

2017-12-04 Thread Jason Molenda via lldb-dev
lldb doesn't know what register set exists - if you do 'register read' you'll 
see that there are no registers.  Maybe gdbserver doesn't implement the 
target.xml request (it's their packet definition!  c'mon!)  

Download the x86_64 target def file from

http://llvm.org/svn/llvm-project/lldb/trunk/examples/python/

and load it in your .lldbinit file or on the cmd line

lldb -O 'settings set plugin.process.gdb-remote.target-definition-file 
/path/to/def-file.py' 



> On Dec 4, 2017, at 5:56 PM, David Manouchehri via lldb-dev 
>  wrote:
> 
> My apologizes if this is mentioned somewhere already, couldn't find
> anything on the subject; it seems that gdb-remote doesn't work very
> well (or at all in my tests) with gdbserver.
> 
> Tim Hammerquist was also able to reproduce issues when attempting to
> use gdb-remote with gdbserver. (Test with freebsd/gdbserver,
> freebsd/lldb38, freebsd/gdbserver, and macos/lldb-900.0.57.)
> 
> Could we document this somewhere? It's not a large issue since there's
> alternatives like lldb-server and Facebook's ds2, but it's a bit
> confusing to new users who fairly expect a command called "gdb-remote"
> to work with gdbserver.
> 
> root@17e840390f4d:~# lldb-3.8 date
> (lldb) target create "date"
> Current executable set to 'date' (x86_64).
> (lldb) # In another terminal: gdbserver localhost: /tmp/date
> (lldb) gdb-remote localhost:
> Process 6593 stopped
> * thread #1: tid = 6593, stop reason = signal SIGTRAP
>frame #0: 0x
> (lldb) c
> Process 6593 resuming
> Process 6593 stopped
> * thread #1: tid = 6593, stop reason = signal SIGTRAP
>frame #0: 0x
> (lldb) c
> Process 6593 resuming
> Process 6593 stopped
> * thread #1: tid = 6593, stop reason = signal SIGSEGV
>frame #0: 0x
> (lldb) c
> Process 6593 resuming
> Process 6593 exited with status = 11 (0x000b)
> 
> Thanks,
> 
> David Manouchehri


Re: [lldb-dev] Dlopen extremely slow while LLDB is attached

2018-04-24 Thread Jason Molenda via lldb-dev
Was liblldb.so built with debug information?  You're probably looking at lldb 
scanning the DWARF to make up its symbol table.  That would be re-used on 
subsequent reruns so you're only seeing the cost that first time through.  gdb 
may be using the standard dwarf accelerator tables, or it may be delaying the 
cost of the scan until you try to do something like a breakpoint by name.  


J

> On Apr 24, 2018, at 12:26 PM, Scott Funkenhauser via lldb-dev 
>  wrote:
> 
> Hey guys,
> 
> I'm trying to track down an issue I'm seeing where dlopen takes significantly 
> longer to execute when LLDB is attached vs GDB (I've attached a simple 
> program that I used to reproduce the issue).
> I was wondering if anybody had any idea what might be contributing to the 
> additional execution time?
> 
> Running without any debugger attached:
> $ ./lldb-load-sample
> Handle: 0x55768c80
> Done loading. 848.27ms
> $ ./lldb-load-sample
> Handle: 0x55768c80
> Done loading. 19.6047ms
> 
> I noticed that the first run was significantly slower than any subsequent 
> runs. Most likely due to some caching in Linux.
> 
> 
> For LLDB:
> (lldb) file lldb-load-sample
> Current executable set to 'lldb-load-sample' (x86_64).
> (lldb) run
> Process 82804 launched: '/lldb-load-sample' (x86_64)
> Handle: 0x55768c80
> Done loading. 5742.78ms
> Process 82804 exited with status = 0 (0x) 
> (lldb) run
> Process 83454 launched: '/lldb-load-sample' (x86_64)
> Handle: 0x55768c80
> Done loading. 19.4184ms
> Process 83454 exited with status = 0 (0x)
> 
> I noticed that subsequent runs were much faster (most likely due to some 
> caching in Linux / LLDB), but that isn't relevant in my situation. Exiting 
> LLDB and starting a new LLDB process still has an extremely long first run 
> (In this case ~5.5s). There are other real world cases (initializing Vulkan 
> which does a bunch of dlopens) where this can add 10s of seconds really 
> slowing down iteration time.
> 
> 
> For GDB:
> (gdb) file lldb-load-sample
> Reading symbols from a.out...done.
> (gdb) run
> Starting program: /lldb-load-sample
> Handle: 0x55768c80
> Done loading. 79.7276ms
> [Inferior 1 (process 85063) exited normally]
> (gdb) run
> Starting program: /lldb-load-sample
> Handle: 0x55768c80
> Done loading. 80.325ms
> [Inferior 1 (process 85063) exited normally]
> 
> As you can see the first run is slightly slower than running without a 
> debugger attached, but it's not enough to be noticeable.
> 
> Thanks,
> Scott
> 


Re: [lldb-dev] Do we have any infrastructure for creating mini dump files from a loaded process or from a core file?

2018-06-13 Thread Jason Molenda via lldb-dev
fwiw I had to prototype a new LC_NOTE load command a year ago in Mach-O core 
files, to specify where the kernel binary was located.  I wrote a utility to 
add the data to an existing corefile - both load command and payload - and it 
was only about five hundred lines of C++.  I didn't link against anything but 
libc; it's such a simple task that I didn't sweat trying to find an 
object-file-reader/writer library.  ELF may be more complicated though.  
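For a sense of scale: the fixed-size LC_NOTE load command itself is trivial to construct; the few hundred lines are mostly in splicing it into an existing file (growing the load command area, updating ncmds/sizeofcmds in the mach header, appending the payload). A sketch of just the command, assuming the note_command layout from <mach-o/loader.h> (cmd, cmdsize, a 16-byte data_owner, then 64-bit file offset and size of the payload); the data_owner string and offsets here are made up for illustration:

```python
import struct

LC_NOTE = 0x31  # load command type for LC_NOTE

def build_lc_note(data_owner: bytes, payload_offset: int,
                  payload_size: int) -> bytes:
    """Pack a note_command: cmd, cmdsize, data_owner[16], offset, size."""
    if len(data_owner) > 16:
        raise ValueError("data_owner must fit in 16 bytes")
    cmdsize = 4 + 4 + 16 + 8 + 8  # fixed-size command: 40 bytes
    return struct.pack("<II16sQQ", LC_NOTE, cmdsize,
                       data_owner.ljust(16, b"\0"),
                       payload_offset, payload_size)

cmd = build_lc_note(b"kern ver str", 0x4000, 0x100)
print(len(cmd))  # 40
```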

> On Jun 13, 2018, at 2:51 PM, Zachary Turner via lldb-dev 
>  wrote:
> 
> What about the case where you already have a Unix core file and you aren't in 
> a debugger but just want to convert it?  It seems like we could have a 
> standalone utility that did that (one could imagine doing the reverse too).  
> I'm wondering if it wouldn't be possible to do this as a library or something 
> that didn't have any dependencies on LLDB, that way a standalone tool could 
> link against this library, and so could LLDB.  I think this would improve its 
> usefulness quite a bit.
> 
> On Wed, Jun 13, 2018 at 2:42 PM Greg Clayton  wrote:
> The goal is to take a live process (regular process just stopped, or a core 
> file) and run "save_minidump ..." as a command and export a minidump file 
> that can be sent elsewhere. Unix core files are too large to always send and 
> they are less useful if they are not examined in the machine that they were 
> produced on. So LLDB gives us the connection to the live process, and we can 
> then create a minidump file. I am going to create a python module that can do 
> this for us.
> 
> Greg 
> 
> 
>> On Jun 13, 2018, at 2:29 PM, Zachary Turner via lldb-dev 
>>  wrote:
>> 
>> Also, if the goal is to have this upstream somewhere, it would be nice to 
>> have a tool this be a standalone tool.  This seems like something that you 
>> shouldn't be required to start up a debugger to do, and probably doesn't 
>> have many (or any for that matters) on the rest of LLDB.
>> 
>> On Wed, Jun 13, 2018 at 1:58 PM Leonard Mosescu  wrote:
>> That being said, it's not exactly trivial to produce a good minidump. 
>> Crashpad has a native & cross-platform minidump writer, that's what I'd 
>> start with. 
>> 
>> Addendum: I realized after sending the email that if the goal is to convert 
>> core files -> LLDB -> minidump a lot of the complexity found in Crashpad can 
>> be avoided, so perhaps writing an LLDB minidump writer from scratch would 
>> not be too bad.
>> 
>> On Wed, Jun 13, 2018 at 1:50 PM, Leonard Mosescu  wrote:
>> The minidump format is more or less documented in MSDN. 
>> 
>> That being said, it's not exactly trivial to produce a good minidump. 
>> Crashpad has a native & cross-platform minidump writer, that's what I'd 
>> start with.
>> 
>> On Wed, Jun 13, 2018 at 1:38 PM, Adrian McCarthy via lldb-dev 
>>  wrote:
>> Zach's right.  On Windows, lldb can produce a minidump, but it just calls 
>> out to a Microsoft library to do so.  We don't have any platform-agnostic 
>> code for producing a minidump.
>> 
>> I've also pinged another Googler who I know might be interested in 
>> converting between minidumps and core files (the opposite direction) to see 
>> if he has any additional info.  I don't think he's on lldb-dev, though, so 
>> I'll act as a relay if necessary.
>> 
>> On Wed, Jun 13, 2018 at 12:07 PM, Zachary Turner via lldb-dev 
>>  wrote:
>> We can’t produce them, but you should check out the source code of google 
>> breakpad / crashpad which can.
>> 
>> That said it’s a pretty simple format, there may be enough in our consumer 
>> code that should allow you to produce them
>> 
>> 


[lldb-dev] I'm going to sort the two big lists of files in the lldb xcode project file later today

2018-06-15 Thread Jason Molenda via lldb-dev
We maintain a few different branches of the lldb sources, e.g. to add swift 
support, and the xcode project files have diverged over time from llvm.org to 
our github repositories making git merging a real pain.  The two biggest 
sources of pain are the BuildFiles and FileReferences sections of the project 
file, where we've managed to get the files in completely different orders 
across our different branches.

I threw together a quick script to sort the files in these sections, and to 
segregate the swift etc files that aren't on llvm.org into their own sections 
so they don't cause merge headaches as often.

If anyone is maintaining a fork/branch of lldb where they've added files to the 
xcode project file, this sorting will need hand cleanup from them too.  I 
suspect that everyone outside apple would be using cmake and ignoring the xcode 
project files -- but if this is a problem, please let me know and I'll delay.

Right now my cleanup script looks like this

#! /usr/bin/ruby
#


## Sort the BuildFile and FileReference sections of an Xcode project file,
## putting Apple/github-local files at the front to avoid merge conflicts.
#
## Run this in a directory with a project.pbxproj file.  The sorted version
## is printed on standard output.
#


# Files with these words in the names will be sorted into a separate section;
# they are only present in some repositories and so having them intermixed 
# can lead to merge failures.
segregated_filenames = ["Swift", "repl", "RPC"]

if !File.exists?("project.pbxproj")
  puts "ERROR: project.pbxproj does not exist."
  exit(1)
end

beginning  = Array.new   # All lines before "PBXBuildFile section"
files  = Array.new   # PBXBuildFile section lines -- sort these
middle = Array.new   # All lines between PBXBuildFile and PBXFileReference sections
refs   = Array.new   # PBXFileReference section lines -- sort these
ending = Array.new   # All lines after PBXFileReference section

all_lines = File.readlines 'project.pbxproj'

state = 1 # "begin"
all_lines.each do |l|
  l.chomp
  if state == 1 && l =~ /Begin PBXBuildFile section/
    beginning.push(l)
    state = 2
    next
  end
  if state == 2 && l =~ /End PBXBuildFile section/
    middle.push(l)
    state = 3
    next
  end
  if state == 3 && l =~ /Begin PBXFileReference section/
    middle.push(l)
    state = 4
    next
  end
  if state == 4 && l =~ /End PBXFileReference section/
    ending.push(l)
    state = 5
    next
  end

  if state == 1
    beginning.push(l)
  elsif state == 2
    files.push(l)
  elsif state == 3
    middle.push(l)
  elsif state == 4
    refs.push(l)
  else
    ending.push(l)
  end
end

# Sort FILES by the filename, putting swift etc in front

# key is filename
# value is array of text lines for that filename in the FILES text
# (libraries like libz.dylib seem to occur multiple times, probably
# once each for different targets).

files_by_filename = Hash.new { |k, v| k[v] = Array.new }

files.each do |l|
  # 2669421A1A6DC2AC0063BE93 /* MICmdCmdTarget.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 266941941A6DC2AC0063BE93 /* MICmdCmdTarget.cpp */; };

  if l =~ /^\s+([A-F0-9]{24})\s+\/\*\s+(.*?)\sin.*?\*\/.*?fileRef = ([A-F0-9]{24})\s.*$/
    uuid = $1
    filename = $2
    fileref = $3
    files_by_filename[filename].push(l)
  end
end

# clear the FILES array

files = Array.new

# add the lines in sorted order.  First swift/etc, then everything else.

segregated_filenames.each do |keyword|
  filenames = files_by_filename.keys
  filenames.select {|l| l.include?(keyword) }.sort.each do |fn|
    # re-add all the lines for the filename FN to our FILES array that we'll
    # be outputting.
    files_by_filename[fn].sort.each do |l|
      files.push(l)
    end
    files_by_filename.delete(fn)
  end
end

# All segregated filenames have been added to the FILES output array.
# Now add all the other lines, sorted by filename.

files_by_filename.keys.sort.each do |fn|
  files_by_filename[fn].sort.each do |l|
    files.push(l)
  end
end

# Sort REFS by the filename, putting swift etc in front

refs_by_filename = Hash.new { |k, v| k[v] = Array.new }
refs.each do |l|
  # 2611FF12142D83060017FEA3 /* SBValue.i */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c.preprocessed; path = SBValue.i; sourceTree = ""; };

  if l =~ /^\s+([A-F0-9]{24})\s+\/\*\s+(.*?)\s\*\/.*$/
    uuid = $1
    filename = $2
    refs_by_filename[filename].push(l)
  end
end

# clear the refs array

refs = Array.new

# add the lines in sorted order.  First swift/etc, then everything else.


segregated_filenames.each do |keyword|
  filenames = refs_by_filename.keys
  filenames.select {|l| l.include?(keyword) }.sort.each do |fn|
    # re-add all the lines for the filename FN to our refs array that we'll

Re: [lldb-dev] [llvm-dev] RFC: libtrace

2018-06-26 Thread Jason Molenda via lldb-dev


> On Jun 26, 2018, at 2:00 PM, Jim Ingham via lldb-dev 
>  wrote:
> 
> 
>> * unwinding and backtrace generation
> 
> Jason says this will be somewhat tricky to pull out of lldb.  OTOH much of 
> the complexity of unwind is reconstructing all the non-volatile registers, 
> and if you don't care about values, you don't really need that.  So some kind 
> of lightweight pc/sp only backtrace would be more appropriate, and probably 
> faster for your needs.

If it were me & performance were the utmost concern, and I had a restricted 
platform set that I needed to support where I can assume the presence of 
eh_frame and that it is trustworthy in prologue/epilogues, then I'd probably 
just write a simple Unwind/RegisterContext plugin pair that exclusively live 
off of that.

If it's just stack walking, and we can assume no omit-frame-pointer code and we 
can assume the 0th function is always stopped in a non-prologue/epilogue 
location, then something even simpler, the old 
RegisterContextMacOSXFrameBackchain plugin, would get you there.  That's what we 
used before we had the modern unwind/registercontext plugin that we use today.  
It doesn't track spilled registers at all, it just looks for saved 
pc/framepointer values on the stack.
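The backchain walk is worth sketching: with frame pointers present, each frame's fp points at a two-word record of [saved fp, saved pc], so a backtrace is just a chain of loads. Memory is faked with a dict below, and every address is made up for illustration:

```python
memory = {
    # fp        : (saved fp,  saved pc)
    0x7ffcf000: (0x7ffcf040, 0x400011a0),  # frame 1
    0x7ffcf040: (0x7ffcf080, 0x40000f30),  # frame 2
    0x7ffcf080: (0x0,        0x40000100),  # frame 3: chain ends
}

def backtrace(pc, fp):
    frames = [pc]                 # frame 0: the current pc
    while fp in memory:
        saved_fp, saved_pc = memory[fp]
        frames.append(saved_pc)
        if saved_fp == 0:         # reached the bottom of the backchain
            break
        fp = saved_fp
    return frames

bt = backtrace(pc=0x40002a10, fp=0x7ffcf000)
print(len(bt))  # 4
```

Note the assumptions this bakes in, matching the caveats above: no spilled-register recovery, a valid frame pointer in every frame, and frame 0 stopped outside prologue/epilogue.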


A general problem with stopping the inferior process and examining things is 
that it is slow.  Even if you use a NativeHost approach and get 
debugserver/lldb-server out of the equation, if you stop in a hot location it's 
very difficult to make this performant.  We've prototyped things like this in 
the past and it was always far too slow.  I don't know what your use case looks 
like, but I do worry about having one process controlling an inferior process 
in general for fast-turnaround data collection/experiments; it doesn't seem 
like the best way to go about it. 




> 
> Jim
> 
>> 
>> 
>> 
>>> At the same time we think that in doing so we can break things up into more 
>>> granular pieces, ultimately exposing a larger testing surface and enabling 
>>> us to create exhaustive tests, giving LLDB more fine grained testing of 
>>> important subsystems.
>> 
>> Are you thinking of the new utility as something that would naturally live 
>> in llvm/tools or as something that would live in the LLDB repository?
>> I would rather put it under LLDB and then link LLDB against certain pieces 
>> in cases where that makes sense.
>> 
>> 
>>> 
>>> A good example of this would be LLDB’s DWARF parsing code, which is more 
>>> featureful than LLVM’s but has kind of evolved in parallel.  Sinking this 
>>> into LLVM would be one early target of such an effort, although over time 
>>> there would likely be more.
>> 
>> As you are undoubtedly aware we've been carefully rearchitecting LLVM's 
>> DWARF parser over the last few years to eventually become featureful enough 
>> so that LLDB could use it, so any help on that front would be most welcome. 
>> As long as we are careful to not regress in performance/lazyness, features 
>> and fault-tolerance, deduplicating the implementations can only be good for 
>> LLVM and LLDB.
>> 
>> Yea, this is the general idea.   Has anyone actively been working on this 
>> specific effort recently?  To my knowledge someone started and then never 
>> finished, but the efforts also never made it upstream, so my understanding 
>> is that it's a goal, but one that nobody has made significant headway on.


Re: [lldb-dev] Xcode project creates too many diffs if I just add or remove a file...

2018-07-23 Thread Jason Molenda via lldb-dev
Yeah, I wrote the script to accept stdin or a filename argument, and print its 
results.  I'll fix it to hardcode looking for lldb.xcodeproj/project.pbxproj or 
project.pbxproj in cwd and update it in place.
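The clobbering comes from shell redirection: `script > project.pbxproj` truncates the file before the script ever reads it. The usual fix for updating in place is write-to-temp-then-rename, sketched here in Python rather than the script's Ruby (file names below are scratch files for the demo):

```python
import os
import tempfile

def rewrite_in_place(path, transform):
    """Write transform(old contents) to a temp file in the same
    directory, then rename it over the original, so an interrupted
    run can never leave a truncated project file behind."""
    with open(path, "r") as f:
        old = f.read()
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, "w") as f:
        f.write(transform(old))
    os.rename(tmp, path)  # atomic on POSIX

# Demo on a scratch file in a temporary directory.
workdir = tempfile.mkdtemp()
demo = os.path.join(workdir, "project.pbxproj")
with open(demo, "w") as f:
    f.write("b\na\n")
rewrite_in_place(demo, lambda s: "".join(sorted(s.splitlines(True))))
print(open(demo).read())  # a then b, each on its own line
```

The temp file is created in the same directory as the target so the final rename stays on one filesystem.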


> On Jul 23, 2018, at 3:37 PM, Raphael “Teemperor” Isemann 
>  wrote:
> 
> That’s just how IO redirection works in sh. But I agree that the expected 
> default behavior is to just overwrite the file without having to redirect any 
> streams/tmp files (+ Jason because he probably can easily fix this).
> 
> - Raphael
> 
>> On Jul 23, 2018, at 3:24 PM, Greg Clayton  wrote:
>> 
>> The script will nuke the project.pbxproj file if you do:
>> 
>> ../scripts/sort-pbxproj.rb > project.pbxproj
>> 
>> So it seems you must do:
>> 
>> ../scripts/sort-pbxproj.rb > project.pbxproj2
>> mv project.pbxproj2 project.pbxproj
>> 
>> Is this expected??
>> 
>>> On Jul 23, 2018, at 3:07 PM, Raphael “Teemperor” Isemann 
>>>  wrote:
>>> 
>>> See Jason’s email from two weeks ago:
>>> 
 I didn't intend for it, but when you add a file to the xcode project file, 
 Xcode will reorder all the currently-sorted files and we'll get all the 
 same merge conflicts we've had for the past couple years again.
 We could either back out my sorting of the project files (and continue to 
 have constant merge conflicts like always) or we can sort the xcode 
 project file every time we have to add something to it.  That's what we're 
 doing.
 scripts/sort-pbxproj.rb is the script I threw together to do this.  Run it 
 in the lldb.xcodeproj directory and it will output a new project file on 
 stdout.  Not the most friendly UI, I can go back and revisit that later.
 J
>>> 
 On Jul 23, 2018, at 3:04 PM, Greg Clayton via lldb-dev 
  wrote:
 
 Anyone know if something has happened to the Xcode project file? Did 
 someone sort or try to manually do something to the Xcode project? If I 
 add or remove a file, then I end up with 1000s of diffs...
 
 Greg
 
>>> 
>> 
> 



Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-14 Thread Jason Molenda via lldb-dev
I'd argue against this approach because it's exactly why the lit tests don't 
run against the lldb driver -- they're hardcoding the output of the lldb driver 
command into the testsuite and these will eventually make it much more 
difficult to change and improve the driver as we've accumulated this style of 
test.

This is a perfect test for a normal SB API.  Run to your breakpoints and check 
the stack frames.

  f0 = thread.GetFrameAtIndex(0)
  check that f0.GetFunctionName() == sink
  check that f0.IsArtificial() == True
  check that f0.GetLineEntry().GetLine() == expected line number


it's more verbose, but it's also much more explicit about what it's checking, 
and easy to see what has changed if there is a failure.


J

> On Aug 14, 2018, at 5:31 PM, Vedant Kumar via lldb-dev 
>  wrote:
> 
> Hello,
> 
> I'd like to make FileCheck available within lldb inline tests, in addition to 
> existing helpers like 'runCmd' and 'expect'.
> 
> My motivation is that several tests I'm working on can't be made as rigorous 
> as they need to be without FileCheck-style checks. In particular, the 
> 'matching', 'substrs', and 'patterns' arguments to runCmd/expect don't allow 
> me to verify the ordering of checked input, to be stringent about line 
> numbers, or to capture & reuse snippets of text from the input stream.
> 
> I'd be curious to know if anyone else is interested or would be willing to 
> review this (https://reviews.llvm.org/D50751).
> 
> Here's an example of an inline test which benefits from FileCheck-style 
> checking. This test is trying to check that certain frames appear in a 
> backtrace when stopped inside of the "sink" function. Notice that without 
> FileCheck, it's not possible to verify the order in which frames are printed, 
> and that dealing with line numbers would be cumbersome.
> 
> ```
> --- 
> a/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
> +++ 
> b/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
> @@ -9,16 +9,21 @@
>  
>  volatile int x;
>  
> +// CHECK: frame #0: {{.*}}sink() at main.cpp:[[@LINE+2]] [opt]
>  void __attribute__((noinline)) sink() {
> -  x++; //% self.expect("bt", substrs = ['main', 'func1', 'func2', 'func3', 
> 'sink'])
> +  x++; //% self.filecheck("bt", "main.cpp")
>  }
>  
> +// CHECK-NEXT: frame #1: {{.*}}func3() {{.*}}[opt] [artificial]
>  void __attribute__((noinline)) func3() { sink(); /* tail */ }
>  
> +// CHECK-NEXT: frame #2: {{.*}}func2() at main.cpp:[[@LINE+1]] [opt]
>  void __attribute__((disable_tail_calls, noinline)) func2() { func3(); /* 
> regular */ }
>  
> +// CHECK-NEXT: frame #3: {{.*}}func1() {{.*}}[opt] [artificial]
>  void __attribute__((noinline)) func1() { func2(); /* tail */ }
>  
> +// CHECK-NEXT: frame #4: {{.*}}main at main.cpp:[[@LINE+2]] [opt]
>  int __attribute__((disable_tail_calls)) main() {
>func1(); /* regular */
>return 0;
> ```
> 
> For reference, here's the output of the "bt" command:
> 
> ```
> runCmd: bt
> output: * thread #1, queue = 'com.apple.main-thread', stop reason = 
> breakpoint 1.1
>   * frame #0: 0x00010c6a6f64 a.out`sink() at main.cpp:14 [opt]
> frame #1: 0x00010c6a6f70 a.out`func3() at main.cpp:15 [opt] 
> [artificial]
> frame #2: 0x00010c6a6f89 a.out`func2() at main.cpp:21 [opt]
> frame #3: 0x00010c6a6f90 a.out`func1() at main.cpp:21 [opt] 
> [artificial]
> frame #4: 0x00010c6a6fa9 a.out`main at main.cpp:28 [opt]
> ```
> 
> thanks,
> vedant


Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-14 Thread Jason Molenda via lldb-dev
It's more verbose, and it does mean test writers need to learn the public API, 
but it's also much more stable and debuggable in the future.  It's a higher 
upfront cost, but we're paid back in being able to develop lldb more quickly in 
the future, because our published API behaviors, the things that must not be 
broken, are being tested directly.  The lldb driver's output isn't a contract, 
and treating it like one makes the debugger harder to innovate in the future.

It's also helpful when adding new features to ensure you've exposed the feature 
through the API sufficiently.  The first thing I thought to try when writing 
the example below was SBFrame::IsArtificial() (see SBFrame::IsInlined()) which 
doesn't exist.  If a driver / IDE is going to visually indicate artificial 
frames, they'll need that.

J

> On Aug 14, 2018, at 5:56 PM, Vedant Kumar  wrote:
> 
> It'd be easy to update FileCheck tests when changing the debugger (this 
> happens all the time in clang/swift). OTOH, the verbosity of the python API 
> means that fewer tests get written. I see a real need to make expressive 
> tests easier to write.
> 
> vedant
> 
>> On Aug 14, 2018, at 5:38 PM, Jason Molenda  wrote:
>> 
>> I'd argue against this approach because it's exactly why the lit tests don't 
>> run against the lldb driver -- they're hardcoding the output of the lldb 
>> driver command into the testsuite and these will eventually make it much 
>> more difficult to change and improve the driver as we've accumulated this 
>> style of test.
>> 
>> This is a perfect test for a normal SB API.  Run to your breakpoints and 
>> check the stack frames.
>> 
>> f0 = thread.GetFrameAtIndex(0)
>> check that f0.GetFunctionName() == sink
>> check that f0.IsArtificial() == True
>> check that f0.GetLineEntry().GetLine() == expected line number
>> 
>> 
>> it's more verbose, but it's also much more explicit about what it's 
>> checking, and easy to see what has changed if there is a failure.
>> 
>> 
>> J
>> 
>>> On Aug 14, 2018, at 5:31 PM, Vedant Kumar via lldb-dev 
>>>  wrote:
>>> 
>>> Hello,
>>> 
>>> I'd like to make FileCheck available within lldb inline tests, in addition 
>>> to existing helpers like 'runCmd' and 'expect'.
>>> 
>>> My motivation is that several tests I'm working on can't be made as 
>>> rigorous as they need to be without FileCheck-style checks. In particular, 
>>> the 'matching', 'substrs', and 'patterns' arguments to runCmd/expect don't 
>>> allow me to verify the ordering of checked input, to be stringent about 
>>> line numbers, or to capture & reuse snippets of text from the input stream.
>>> 
>>> I'd be curious to know if anyone else is interested or would be willing to 
>>> review this (https://reviews.llvm.org/D50751).
>>> 
>>> Here's an example of an inline test which benefits from FileCheck-style 
>>> checking. This test is trying to check that certain frames appear in a 
>>> backtrace when stopped inside of the "sink" function. Notice that without 
>>> FileCheck, it's not possible to verify the order in which frames are 
>>> printed, and that dealing with line numbers would be cumbersome.
>>> 
>>> ```
>>> --- 
>>> a/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
>>> +++ 
>>> b/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
>>> @@ -9,16 +9,21 @@
>>> 
>>> volatile int x;
>>> 
>>> +// CHECK: frame #0: {{.*}}sink() at main.cpp:[[@LINE+2]] [opt]
>>> void __attribute__((noinline)) sink() {
>>> -  x++; //% self.expect("bt", substrs = ['main', 'func1', 'func2', 'func3', 
>>> 'sink'])
>>> +  x++; //% self.filecheck("bt", "main.cpp")
>>> }
>>> 
>>> +// CHECK-NEXT: frame #1: {{.*}}func3() {{.*}}[opt] [artificial]
>>> void __attribute__((noinline)) func3() { sink(); /* tail */ }
>>> 
>>> +// CHECK-NEXT: frame #2: {{.*}}func2() at main.cpp:[[@LINE+1]] [opt]
>>> void __attribute__((disable_tail_calls, noinline)) func2() { func3(); /* 
>>> regular */ }
>>> 
>>> +// CHECK-NEXT: frame #3: {{.*}}func1() {{.*}}[opt] [artificial]
>>> void __attribute__((noinline)) func1() { func2(); /* tail */ }
>>> 
>>> +// CHECK-NEXT: frame #4: {{.*}}main at main.cpp:[[@LINE+2]] [opt]
>>> int __attribute__((disable_tail_calls)) main() {
>>>  func1(); /* regular */
>>>  return 0;
>>> ```
>>> 
>>> For reference, here's the output of the "bt" command:
>>> 
>>> ```
>>> runCmd: bt
>>> output: * thread #1, queue = 'com.apple.main-thread', stop reason = 
>>> breakpoint 1.1
>>> * frame #0: 0x00010c6a6f64 a.out`sink() at main.cpp:14 [opt]
>>>   frame #1: 0x00010c6a6f70 a.out`func3() at main.cpp:15 [opt] 
>>> [artificial]
>>>   frame #2: 0x00010c6a6f89 a.out`func2() at main.cpp:21 [opt]
>>>   frame #3: 0x00010c6a6f90 a.out`func1() at main.cpp:21 [opt] 
>>> [artificial]
>>>   frame #4: 0x00010c6a6fa9 a.out`main at main.cpp:28 [opt]
>>> ```
>>> 
>>> thanks,
>>> vedant

Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-14 Thread Jason Molenda via lldb-dev


> On Aug 14, 2018, at 6:39 PM, Zachary Turner  wrote:
> 
> Having bugs also makes the debugger harder to innovate in the future: not 
> having tests leads to having bugs, and SB API tests lead to not having 
> tests. At the end of the day, it doesn't matter how stable the tests are if 
> there aren't enough of them. There should be about 10x-20x as many tests as 
> there are currently, and that will simply never happen under the current 
> approach. If it means we need to have multiple different styles of test, so 
> be it. The problem we face right now has nothing to do with command output 
> changing, and in fact I don't think that we've *ever* had this problem. So 
> we should be focusing on problems we have, not problems we don't have.


I think we've had this discussion many times over the years, so I apologize for 
not reiterating what I've said in the past.  I worked on gdb for a decade 
before this, where the entire testsuite was filecheck style tests based on 
gdb's output.  It made it easy for them to write, and after a couple decades of 
tests had accumulated, it became a nightmare to change or improve any of gdb's 
commands; we all avoided it like the plague because it was so onerous.  The 
tests themselves would accumulate more and more complicated regular expressions 
to handle different output that happened to be seen, so debugging WHY a given 
test was failing required an amazing amount of work.

Yes, lldb does not have these problems -- because we learned from our decades 
working on gdb, and did not repeat that mistake.  To be honest, lldb is such a 
young debugger - barely a decade old, depending on how you count it, that ANY 
testsuite approach would be fine at this point.  Add a couple more decades and 
we'd be back into the hole that gdb was in.  {I have not worked on gdb in over 
a decade, so I don't know how their testing methodology may be today}

It's always important to remember that lldb is first and foremost a debugger 
library.  It also includes a driver program, lldb, but it is designed as a 
library and it should be tested as a library.


> 
> Note that it is not strictly necessary for a test to check the debuggers 
> command output. There could be a different set of commands whose only purpose 
> is to print information for the purposes of debugging. One idea would be to 
> introduce the notion of a debugger object model, where you print various 
> aspects of the debuggers state with an object like syntax. For example,

This was the whole point of the lit tests, wasn't it?  To have a driver 
program, or driver programs, designed explicitly for filecheck, where the 
output IS API and can be relied on.  There hasn't been disagreement about this.

> 
> (lldb) p debugger.targets
> ~/foo (running, pid: 123)
> 
> (lldb) p debugger.targets[0].threads[0].frames[1]
> int main(int argc=3, char **argv=0x12345678) + 0x72
> 
> (lldb) p debugger.targets[0].threads[0].frames[1].params[0]
> int argc=3
> 
> (lldb) p debugger.targets[0].breakpoints
> [1] main.cpp:72
> 
> Etc. you can get arbitrarily granular and expose every detail of the 
> debugger's internal state this way, and the output is so simple that you never 
> have to worry about it changing.
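[A pure-Python sketch of the "debugger object model" proposed above: state is
exposed as attribute/index chains whose printed form is deliberately terse, so
tests that match it rarely need to change. All class and attribute names here
are hypothetical illustrations, not lldb API.]

```python
# Toy model of "p debugger.targets[0].threads[0].frames[0]"-style access.
# Each object's repr() is the minimal, stable string a test would match.

class Frame:
    def __init__(self, description):
        self.description = description

    def __repr__(self):
        return self.description


class Thread:
    def __init__(self, frames):
        self.frames = frames


class Target:
    def __init__(self, path, pid, threads):
        self.path, self.pid, self.threads = path, pid, threads

    def __repr__(self):
        return "%s (running, pid: %d)" % (self.path, self.pid)


class Debugger:
    def __init__(self, targets):
        self.targets = targets


debugger = Debugger([
    Target("~/foo", 123, [
        Thread([
            Frame("int main(int argc=3, char **argv=0x12345678) + 0x72"),
        ]),
    ]),
])

print(debugger.targets[0])                       # ~/foo (running, pid: 123)
print(debugger.targets[0].threads[0].frames[0])
```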
> 
> That said, I think history has shown that limiting ourselves to sb api tests, 
> despite all the theoretical benefits, leads to insufficient test coverage. So 
> while it has benefits, it also has problems for which we need a better 
> solution.
> On Tue, Aug 14, 2018 at 6:19 PM Jason Molenda via lldb-dev 
>  wrote:
> It's more verbose, and it does mean test writers need to learn the public 
> API, but it's also much more stable and debuggable in the future.  It's a 
> higher up front cost but we're paid back in being able to develop lldb more 
> quickly in the future, where our published API behaviors are being tested 
> directly, and the things that must not be broken.  The lldb driver's output 
> isn't a contract, and treating it like one makes the debugger harder to 
> innovate in the future.
> 
> It's also helpful when adding new features to ensure you've exposed the 
> feature through the API sufficiently.  The first thing I thought to try when 
> writing the example below was SBFrame::IsArtificial() (see 
> SBFrame::IsInlined()) which doesn't exist.  If a driver / IDE is going to 
> visually indicate artificial frames, they'll need that.
> 
> J
> 
> > On Aug 14, 2018, at 5:56 PM, Vedant Kumar  wrote:
> > 
> > It'd be easy to update FileCheck tests when changing the debugger (this 
> > happens all the time in clang/swift). OTOH, the verbosity of the python API 
> > means that fewer tests get written. I see a real need to make expressive 
> > tests easier to write.

Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-15 Thread Jason Molenda via lldb-dev


> On Aug 15, 2018, at 11:34 AM, Vedant Kumar  wrote:
> 
> 
> 
>> On Aug 14, 2018, at 6:19 PM, Jason Molenda  wrote:
>> 
>> It's more verbose, and it does mean test writers need to learn the public 
>> API, but it's also much more stable and debuggable in the future.
> 
> I'm not sure about this. Having looked at failing sb api tests for a while 
> now, I find them about as easy to navigate and fix as FileCheck tests in llvm.

I don't find that to be true.  I see a failing test on line 79 or whatever, and 
depending on what line 79 is doing, I'll throw in some self.runCmd("bt")'s or 
self.runCmd("fr v") to the test, re-run, and see what the relevant context is 
quickly. For most simple tests, I can usually spot the issue in under a minute. 
 dotest.py likes to eat output when it's run in multiprocess mode these days, 
so I have to remember to add --no-multiprocess.  If I'm adding something that I 
think is generally useful to debug the test case, I'll add a conditional block 
testing against self.TraceOn() and print things that may help people who are 
running dotest.py with -t trace mode enabled.

Sometimes there is a test written so it has a "verify this value" function that 
is run over a variety of different variables during the test timeframe, and 
debugging that can take a little more work to understand the context that is 
failing.  But that kind of test would be harder (or at least much more 
redundant) to express in a FileCheck style system anyway, so I can't ding it.


As for the difficulty of writing SB API tests, you do need to know the general 
architecture of lldb (a target has a process, a process has threads, a thread 
has frames, a frame has variables), the public API which quickly becomes second 
nature because it is so regular, and then there's the testsuite specific setup 
and template code.  But is that so intimidating to anyone familiar with lldb? 
 packages/Python/lldbsuite/test/sample_test/TestSampleTest.py is 50 lines 
including comments; there's about ten lines of source related to initializing / 
setting up the testsuite, and then 6 lines is what's needed to run to a 
breakpoint, get a local variable, check the value. 


J



> 
> 
>> It's a higher up front cost but we're paid back in being able to develop 
>> lldb more quickly in the future, where our published API behaviors are being 
>> tested directly, and the things that must not be broken.
> 
> I think the right solution here is to require API tests when new 
> functionality is introduced. We can enforce this during code review. Making 
> it impossible to write tests against the driver's output doesn't seem like 
> the best solution. It means that far fewer tests will be written (note that a 
> test suite run of lldb gives less than 60% code coverage). It also means that 
> the driver's output isn't tested as much as it should be.
> 
> 
>> The lldb driver's output isn't a contract, and treating it like one makes 
>> the debugger harder to innovate in the future.
> 
> I appreciate your experience with this (pattern matching on driver input) in 
> gdb. That said, I think there are reliable/maintainable ways to do this, and 
> proven examples we can learn from in llvm/clang/etc.
> 
> 
>> It's also helpful when adding new features to ensure you've exposed the 
>> feature through the API sufficiently.  The first thing I thought to try when 
>> writing the example below was SBFrame::IsArtificial() (see 
>> SBFrame::IsInlined()) which doesn't exist.  If a driver / IDE is going to 
>> visually indicate artificial frames, they'll need that.
> 
> Sure. That's true, we do need API exposure for new features, and again we can 
> enforce that during code review. The reason you didn't find IsArtificial() is 
> because it's sitting on my disk :). Haven't shared the patch yet.
> 
> vedant
> 
>> 
>> J
>> 
>>> On Aug 14, 2018, at 5:56 PM, Vedant Kumar  wrote:
>>> 
>>> It'd be easy to update FileCheck tests when changing the debugger (this 
>>> happens all the time in clang/swift). OTOH, the verbosity of the python API 
>>> means that fewer tests get written. I see a real need to make expressive 
>>> tests easier to write.
>>> 
>>> vedant
>>> 
 On Aug 14, 2018, at 5:38 PM, Jason Molenda  wrote:
 
 I'd argue against this approach because it's exactly why the lit tests 
 don't run against the lldb driver -- they're hardcoding the output of the 
 lldb driver command into the testsuite and these will eventually make it 
 much more difficult to change and improve the driver as we've accumulated 
 this style of test.
 
 This is a perfect test for a normal SB API.  Run to your breakpoints and 
 check the stack frames.
 
 f0 = thread.GetFrameAtIndex(0)
 check that f0.GetFunctionName() == sink
 check that f0.IsArtifical() == True
 check that f0.GetLineEntry().GetLine() == expected line number
 
 
 it's more verbose, but it's also much more explicit about what it's 
 checking, 

Re: [lldb-dev] When should ArchSpecs match?

2018-12-06 Thread Jason Molenda via lldb-dev
I think the confusing thing is when "unspecified" means "there is no OS" or 
"there is no vendor" versus "vendor/OS is unspecified".

Imagine debugging a firmware environment where we have a cpu arch, and we may 
have a vendor, but we specifically do not have an OS.  Say armv7-apple-none (I 
make up "none", I don't think that's valid).  If lldb is looking for a binary 
and it finds one with armv7-apple-ios, it should reject that binary, they are 
incompatible.

As opposed to a triple of "armv7-*-*" saying "I know this is an armv7 system 
target, but I don't know anything about the vendor or the OS" in which case an 
armv7-apple-ios binary is compatible.

My naive reading of "arm64-*-*" means vendor & OS are unspecified and should 
match anything.

My naive reading of "arm64" is that it is the same as "arm64-*-*".

I don't know what a triple string looks like where we specify "none" for a 
field.  Is it armv7-apple-- ?  I know Triple has Unknown enums, but "Unknown" 
is ambiguous between "I don't know it yet" versus "It is not any Vendor/OS".

Some of the confusion is the textual representation of the triples, some of it 
is the llvm Triple class not having a way to express (afaik) "do not match this 
field against anything" aka "none".
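[A toy model of the distinction being argued for here: "*" (or an absent
field) means "unspecified, match anything", while a concrete value such as
"none" only matches itself. This models the desired semantics from the thread,
not what llvm::Triple or ArchSpec actually implements today.]

```python
# Wildcard vs. "none" semantics for triple fields (arch-vendor-os).

def fields_compatible(a, b):
    if a == "*" or b == "*":
        return True        # unspecified: compatible with anything
    return a == b          # concrete values (including "none") must agree

def triples_compatible(a, b):
    # Pad missing fields with "*", so "arm64" behaves like "arm64-*-*".
    fa = (a.split("-") + ["*", "*", "*"])[:3]
    fb = (b.split("-") + ["*", "*", "*"])[:3]
    return all(fields_compatible(x, y) for x, y in zip(fa, fb))

# A bare arch behaves like arch-*-*:
assert triples_compatible("arm64", "arm64-apple-ios")
assert triples_compatible("armv7-*-*", "armv7-apple-ios")
# An explicit "no OS" rejects binaries built for a concrete OS:
assert not triples_compatible("armv7-apple-none", "armv7-apple-ios")
```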



> On Dec 6, 2018, at 3:19 PM, Adrian Prantl via lldb-dev 
>  wrote:
> 
> I was puzzled by the behavior of ArchSpec::IsExactMatch() and 
> IsCompatibleMatch() yesterday, so I created a couple of unit tests to 
> document the current behavior. Most of the tests make perfect sense, but a 
> few edge cases really don't behave like I would have expected them to.
> 
>>  {
>>ArchSpec A("arm64-*-*");
>>ArchSpec B("arm64-apple-ios");
>>ASSERT_FALSE(A.IsExactMatch(B));
>>// FIXME: This looks unintuitive and we should investigate whether
>>// this is the desired behavior.
>>ASSERT_FALSE(A.IsCompatibleMatch(B));
>>  }
>>  {
>>ArchSpec A("x86_64-*-*");
>>ArchSpec B("x86_64-apple-ios-simulator");
>>ASSERT_FALSE(A.IsExactMatch(B));
>>// FIXME: See above, though the extra environment complicates things.
>>ASSERT_FALSE(A.IsCompatibleMatch(B));
>>  }
>>  {
>>ArchSpec A("x86_64");
>>ArchSpec B("x86_64-apple-macosx10.14");
>>// FIXME: The exact match also looks unintuitive.
>>ASSERT_TRUE(A.IsExactMatch(B));
>>ASSERT_TRUE(A.IsCompatibleMatch(B));
>>  }
>> 
> 
> Particularly, I believe that:
> - ArchSpec("x86_64-*-*") and ArchSpec("x86_64") should behave the same.
> - ArchSpec("x86_64").IsExactMatch("x86_64-apple-macosx10.14") should be false.
> - ArchSpec("x86_64-*-*").IsCompatibleMatch("x86_64-apple-macosx") should be 
> true.
> 
> Does anyone disagree with any of these statements?
> 
> I fully understand that changing any of these behaviors will undoubtedly 
> break one or the other edge case, but I think it would be important to build 
> on a foundation that actually makes sense if we want to be able to reason 
> about the architecture matching logic at all.
> 
> let me know what you think!
> -- adrian
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev



Re: [lldb-dev] When should ArchSpecs match?

2018-12-06 Thread Jason Molenda via lldb-dev
There is genuinely no OS in some cases, like people who debug the software that 
runs in a keyboard or a mouse.  The same goes for the higher-level coprocessors 
in modern phones; the SoCs on all these devices have a cluster of processors, 
and only some of them are running an identifiable operating system, like iOS or 
Android.

I'll be honest, it's not often that we'll be debugging an arm64-apple-none 
target and have to decide whether an arm64-apple-ios binary should be loaded or 
not.  But we need some way to express this kind of environment.


> On Dec 6, 2018, at 3:50 PM, Zachary Turner  wrote:
> 
> Is there some reason we can’t define vendors, environments, arches, and oses 
> for all supported use cases? That way “there is no os” would not ever be a 
> thing.
> On Thu, Dec 6, 2018 at 3:37 PM Jason Molenda via lldb-dev 
>  wrote:
> I think the confusing thing is when "unspecified" means "there is no OS" or 
> "there is no vendor" versus "vendor/OS is unspecified".
> 
> Imagine debugging a firmware environment where we have a cpu arch, and we may 
> have a vendor, but we specifically do not have an OS.  Say armv7-apple-none 
> (I make up "none", I don't think that's valid).  If lldb is looking for a 
> binary and it finds one with armv7-apple-ios, it should reject that binary, 
> they are incompatible.
> 
> As opposed to a triple of "armv7-*-*" saying "I know this is an armv7 system 
> target, but I don't know anything about the vendor or the OS" in which case 
> an armv7-apple-ios binary is compatible.
> 
> My naive reading of "arm64-*-*" means vendor & OS are unspecified and should 
> match anything.
> 
> My naive reading of "arm64" is that it is the same as "arm64-*-*".
> 
> I don't know what a triple string looks like where we specify "none" for a 
> field.  Is it armv7-apple-- ?  I know Triple has Unknown enums, but "Unknown" 
> is ambiguous between "I don't know it yet" versus "It is not any Vendor/OS".
> 
> Some of the confusion is the textual representation of the triples, some of 
> it is the llvm Triple class not having a way to express (afaik) "do not match 
> this field against anything" aka "none".
> 
> 
> 
> > On Dec 6, 2018, at 3:19 PM, Adrian Prantl via lldb-dev 
> >  wrote:
> > 
> > I was puzzled by the behavior of ArchSpec::IsExactMatch() and 
> > IsCompatibleMatch() yesterday, so I created a couple of unit tests to 
> > document the current behavior. Most of the tests make perfect sense, but a 
> > few edge cases really don't behave like I would have expected them to.
> > 
> >>  {
> >>ArchSpec A("arm64-*-*");
> >>ArchSpec B("arm64-apple-ios");
> >>ASSERT_FALSE(A.IsExactMatch(B));
> >>// FIXME: This looks unintuitive and we should investigate whether
> >>// this is the desired behavior.
> >>ASSERT_FALSE(A.IsCompatibleMatch(B));
> >>  }
> >>  {
> >>ArchSpec A("x86_64-*-*");
> >>ArchSpec B("x86_64-apple-ios-simulator");
> >>ASSERT_FALSE(A.IsExactMatch(B));
> >>// FIXME: See above, though the extra environment complicates things.
> >>ASSERT_FALSE(A.IsCompatibleMatch(B));
> >>  }
> >>  {
> >>ArchSpec A("x86_64");
> >>ArchSpec B("x86_64-apple-macosx10.14");
> >>// FIXME: The exact match also looks unintuitive.
> >>ASSERT_TRUE(A.IsExactMatch(B));
> >>ASSERT_TRUE(A.IsCompatibleMatch(B));
> >>  }
> >> 
> > 
> > Particularly, I believe that:
> > - ArchSpec("x86_64-*-*") and ArchSpec("x86_64") should behave the same.
> > - ArchSpec("x86_64").IsExactMatch("x86_64-apple-macosx10.14") should be 
> > false.
> > - ArchSpec("x86_64-*-*").IsCompatibleMatch("x86_64-apple-macosx") should be 
> > true.
> > 
> > Does anyone disagree with any of these statements?
> > 
> > I fully understand that changing any of these behaviors will undoubtedly 
> > break one or the other edge case, but I think it would be important to 
> > build on a foundation that actually makes sense if we want to be able to 
> > reason about the architecture matching logic at all.
> > 
> > let me know what you think!
> > -- adrian
> 



Re: [lldb-dev] When should ArchSpecs match?

2018-12-06 Thread Jason Molenda via lldb-dev
Oh sorry I missed that.  Yes, I think a value added to the OSType for NoOS or 
something would work.  We need to standardize on a textual representation for 
this in a triple string as well, like 'none'.  Then with arm64-- and arm64-*-* 
as UnknownVendor + UnknownOS we can have these marked as "compatible" with any 
other value in the case Adrian is looking at.


> On Dec 6, 2018, at 4:05 PM, Zachary Turner  wrote:
> 
> That's what I mean though, perhaps we could add a value to the OSType 
> enumeration like BareMetal or None to explicitly represent this.  the 
> SubArchType enum has NoSubArch, so it's not without precedent.  As long as 
> you can express it in the triple format, the problem goes away.  
> 
> On Thu, Dec 6, 2018 at 3:55 PM Jason Molenda  wrote:
> There is genuinely no OS in some cases, like people who debug the software 
> that runs in a keyboard or a mouse.  The same goes for the higher-level 
> coprocessors in modern phones; the SoCs on all these devices have a cluster 
> of processors, and only some of them are running an identifiable operating 
> system, like iOS or Android.
> 
> I'll be honest, it's not often that we'll be debugging an arm64-apple-none 
> target and have to decide whether an arm64-apple-ios binary should be loaded 
> or not.  But we need some way to express this kind of environment.
> 
> 
> > On Dec 6, 2018, at 3:50 PM, Zachary Turner  wrote:
> > 
> > Is there some reason we can’t define vendors, environments, arches, and 
> > oses for all supported use cases? That way “there is no os” would not ever 
> > be a thing.
> > On Thu, Dec 6, 2018 at 3:37 PM Jason Molenda via lldb-dev 
> >  wrote:
> > I think the confusing thing is when "unspecified" means "there is no OS" or 
> > "there is no vendor" versus "vendor/OS is unspecified".
> > 
> > Imagine debugging a firmware environment where we have a cpu arch, and we 
> > may have a vendor, but we specifically do not have an OS.  Say 
> > armv7-apple-none (I make up "none", I don't think that's valid).  If lldb 
> > is looking for a binary and it finds one with armv7-apple-ios, it should 
> > reject that binary, they are incompatible.
> > 
> > As opposed to a triple of "armv7-*-*" saying "I know this is an armv7 
> > system target, but I don't know anything about the vendor or the OS" in 
> > which case an armv7-apple-ios binary is compatible.
> > 
> > My naive reading of "arm64-*-*" means vendor & OS are unspecified and 
> > should match anything.
> > 
> > My naive reading of "arm64" is that it is the same as "arm64-*-*".
> > 
> > I don't know what a triple string looks like where we specify "none" for a 
> > field.  Is it armv7-apple-- ?  I know Triple has Unknown enums, but 
> > "Unknown" is ambiguous between "I don't know it yet" versus "It is not any 
> > Vendor/OS".
> > 
> > Some of the confusion is the textual representation of the triples, some of 
> > it is the llvm Triple class not having a way to express (afaik) "do not 
> > match this field against anything" aka "none".
> > 
> > 
> > 
> > > On Dec 6, 2018, at 3:19 PM, Adrian Prantl via lldb-dev 
> > >  wrote:
> > > 
> > > I was puzzled by the behavior of ArchSpec::IsExactMatch() and 
> > > IsCompatibleMatch() yesterday, so I created a couple of unit tests to 
> > > document the current behavior. Most of the tests make perfect sense, but 
> > > a few edge cases really don't behave like I would have expected them to.
> > > 
> > >>  {
> > >>ArchSpec A("arm64-*-*");
> > >>ArchSpec B("arm64-apple-ios");
> > >>ASSERT_FALSE(A.IsExactMatch(B));
> > >>// FIXME: This looks unintuitive and we should investigate whether
> > >>// this is the desired behavior.
> > >>ASSERT_FALSE(A.IsCompatibleMatch(B));
> > >>  }
> > >>  {
> > >>ArchSpec A("x86_64-*-*");
> > >>ArchSpec B("x86_64-apple-ios-simulator");
> > >>ASSERT_FALSE(A.IsExactMatch(B));
> > >>// FIXME: See above, though the extra environment complicates things.
> > >>ASSERT_FALSE(A.IsCompatibleMatch(B));
> > >>  }
> > >>  {
> > >>ArchSpec A("x86_64");
> > >>ArchSpec B("x86_64-apple-macosx10.14");
> > >>// FIXME: The exact match also looks unintuitive.

Re: [lldb-dev] RFC: Unwinding + Symbol Files: How To?

2019-02-07 Thread Jason Molenda via lldb-dev
Hi Pavel,

I'm open to this. I don't think there was any specific reason why UnwindTable 
is in the ObjectFile over the Module - it was very likely not an intentional 
choice when I put it there.

To recap, each "binary" in the system has an UnwindTable. The UnwindTable has a 
list of functions it has unwind plans for, and it has a list of unwind sources 
that it can get from object files. The UnwindTable starts with the full list of 
unwind sources and no FuncUnwinders objects.

For every function whose unwind information we've parsed, we create a 
FuncUnwinders object which the UnwindTable stores. Some unwind information is 
expensive to get -- scanning the instructions, for instance -- so we only want 
to do this on-demand, and we don't want to do it twice in the same debug 
session.

FuncUnwinders has a variety of public methods (probably more than it should) 
but at a high level, users of this object ask it "give me the best unwind plan 
if this function is currently-executing aka on the 0th stack frame" or "give me 
the best unwind plan if this function is in the middle of the stack". 
RegisterContextLLDB / UnwindLLDB are the main bits of code that call these 
methods. 

FuncUnwinders gets the different unwind sources from UnwindTable (e.g. 
eh_frame, compact_unwind, debug_info) or via a Thread object at run-time for 
unwind sources that require an executing runtime (instruction profiling) (and 
some unwind plans from the ABI plugin via the Thread).

Are you proposing removing the hardcoded rules in FuncUnwinders of which unwind 
plan sources to prefer in different situations? I'm not against it, but the # 
of unwind sources has been small enough that it hasn't been too much of a 
problem imo. If we wanted to do it algorithmically, I think the important 
attributes for an unwind plan are whether it is (1) known to be accurate at 
every instruction location, (2) known to be accurate only at throwable 
locations. And whether it is (a) sourced directly from the compiler or (b) 
constructed heuristically (assembly instruction analysis, eh_frame augmentation 
via instruction analysis). 

eh_frame and debug_frame are the most annoying formats because they CAN be 
accurate at every instruction location (often called "asynchronous unwind 
tables"), if the producer decided to do that, or they may only be accurate for 
the prologue. But there's nothing in the eh_frame/debug_frame spec that tells 
the consumer (lldb) what kind of unwind source this is.

Well anyway, that's a lot of words - but in short, if it makes something easier 
for you to move UnwindTable into Module, I don't see any problems with that. 
Making a cleaner way of getting unwind sources, or deciding between them, I'm 
interested to see what that might look like. The current setup is a lot of 
manual coding, but then again there aren't a very large number of unwind 
sources so far.
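[A hypothetical sketch of picking an unwind source from the two attributes
named above: (1)/(2) accuracy — valid at every instruction vs. only at
throwable (call-site) locations — and (a)/(b) provenance — compiler-emitted
vs. heuristically reconstructed. This is not lldb's FuncUnwinders logic, just
the shape an attribute-driven version of it could take.]

```python
# Attribute-driven selection among unwind sources.

from dataclasses import dataclass

@dataclass
class UnwindSource:
    name: str
    valid_at_every_instruction: bool   # (1) every instruction vs (2) call sites
    from_compiler: bool                # (a) compiler-emitted vs (b) heuristic

def pick(sources, currently_executing):
    # Frame 0 may be stopped mid-prologue/epilogue, so only plans valid
    # at every instruction are usable there; up the stack, anything works.
    usable = [s for s in sources
              if s.valid_at_every_instruction or not currently_executing]
    # Prefer compiler-emitted information over heuristic reconstruction.
    usable.sort(key=lambda s: s.from_compiler, reverse=True)
    return usable[0] if usable else None

sources = [
    UnwindSource("eh_frame (prologue only)", False, True),
    UnwindSource("assembly scan", True, False),
]
print(pick(sources, currently_executing=True).name)   # assembly scan
print(pick(sources, currently_executing=False).name)  # eh_frame (prologue only)
```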



J 




On 02/07/19 08:54 AM, Pavel Labath   wrote: 
> 
> Hello all,
> 
> currently I am at at stage where the only large piece of functionality 
> missing from the Breakpad symbol file plugin is the unwind information. The 
> PDB plugin has been in that state for a while. I'd like to rectify this 
> situation, but doing that in the present lldb model is somewhat challenging.
> 
> The main problem is that in the current model, the unwind strategies are a 
> property of the ObjectFile. I am guessing the reason for that is historical: 
> most of our unwind strategies (instruction emulation, eh_frame, ...) are 
> independent of symbol file information. One exception here is debug_frame, 
> which is technically a part of DWARF, but in reality is completely 
> independent of it, so it's not completely unreasonable to for it to be 
> implemented outside of SymbolFileDWARF.
> 
> However, I don't think this is a good strategy going forward. Following the 
> current approach the parsing code for breakpad and pdb unwind info would have 
> to live somewhere under source/Symbol, but these are pretty specialist 
> formats, and I don't think it makes sense to pollute generic code with 
> something like this.
> 
> Therefore, I propose to modify the current model to allow SymbolFile plugins 
> to provide additional unwind strategies. A prerequisite for that would be 
> moving the unwind data source handling code (UnwindTable class) from 
> ObjectFile to Module level.
> 
> The overall idea I have is to have the Module, when constructing the 
> UnwindTable, consult the chosen SymbolFile plugin. The plugin would provide 
> an additional data source (hidden behind some abstract interface), which 
> could be stored in the UnwindTable, next to all other sources of unwind info. 
> Then the unwind strategy (class UnwindPlan) generated by this source would be 
> considered for unwinding along with all existing strategies (relative 
> priorities TBD).
> 
> Any thoughts on this approach?
> 
> regards,
> pavel
> 


Re: [lldb-dev] Unwinding call frames with separated data and return address stacks

2019-03-04 Thread Jason Molenda via lldb-dev
Hi Tom, interesting problem you're working on there.

I'm not sure any of the DWARF expression operators would work here.  You want 
to have an expression that works for a given frame, saying "to find the 
caller's pc value, look at the saved-pc stack, third entry from the bottom of 
that stack."  But that would require generating a different DWARF expression 
for the frame each time it shows up in a backtrace - which is unlike lldb's 
normal design of having an UnwindPlan for a function which is computed once and 
reused for the duration of the debug session.

I supposed you could add a user-defined DW_OP which means "get the current 
stack frame number" and then have your expression deref the emulated saved-pc 
stack to get the value?

lldb uses an intermediate representation of unwind information (UnwindPlan) 
which will use a DWARF expression, but you could also add an entry to 
UnwindPlan::Row::RegisterLocation::RestoreType which handled this, I suppose.


> On Mar 4, 2019, at 2:46 AM, Thomas Goodfellow via lldb-dev 
>  wrote:
> 
> I'm adding LLDB support for an unconventional platform which uses two
> stacks: one purely for return addresses and another for frame context
> (spilled registers, local variables, etc). There is no explicit link
> between the two stacks, i.e. the frame context doesn't include any
> pointer or index to identify the return address: the epilog for a
> subroutine amounts to unwinding the frame context then finally popping
> the top return address from the return stack. It has some resemblance
> to the Intel CET scheme of shadow stacks, but without the primary
> stack having a copy of the return address.
> 
> I can extend the emulation of the platform to better support LLDB. For
> example while the real hardware platform provides no access to the
> return address stack the emulation can expose it in the memory map,
> provide an additional debug register for querying it, etc, which DWARF
> expressions could then extract return addresses from. However doing
> this seems to require knowing the frame number and I haven't found a
> way of doing this (a pseudo-register manipulated by DWARF expressions
> worked but needed some LLDB hacks to sneak it through the existing
> link register handling, also seemed likely to be unstable against LLDB
> implementation changes)
> 
> Is there a way to access the call frame number (or a reliable proxy)
> from a DWARF expression? Or an existing example of unwinding a shadow
> stack?
> 
> Thanks,
> Tom



Re: [lldb-dev] Unwinding call frames with separated data and return address stacks

2019-03-05 Thread Jason Molenda via lldb-dev
Yeah, if you don't need to find a way to express this in DWARF, then adding a 
type to RestoreType would be very simple.  lldb maps all the different unwind 
sources (debug_frame, eh_frame, arm index, compact unwind, assembly instruction 
scanning) into its internal intermediate representation (UnwindPlan) - so if 
you had an assembly-scanning unwind implementation for your target, you could 
add the appropriate RestoreType's.  There are also architectural default unwind 
plans that are provided by the ABI plugin, both a default one (usually 
appropriate for frames up the stack) and an unwind plan that is valid at the 
first instruction of a function.  These are good starting points for a new 
port, where you won't step through the prologue/epilogue correctly, but once 
you're in the middle of a function they can do a correct unwind on most 
architectures.

J

> On Mar 5, 2019, at 12:09 AM, Thomas Goodfellow  
> wrote:
> 
> Hi Jason
> 
> Thanks for the advice - I've been surprised overall how capable DWARF
> expressions are so wouldn't have been surprised to learn that there is
> also a category of pseudo-variables (not that I can think of any
> others, or other circumstances where it would be useful: the usual
> combined code/data stack is ubiquitous). The RestoreType suggestion is
> interesting as it might be a less-intrusive change.
> 
> Cheers,
> Tom
> 
> On Mon, 4 Mar 2019 at 22:05, Jason Molenda  wrote:
>> 
>> Hi Tom, interesting problem you're working on there.
>> 
>> I'm not sure any of the DWARF expression operators would work here.  You 
>> want to have an expression that works for a given frame, saying "to find the 
>> caller's pc value, look at the saved-pc stack, third entry from the bottom 
>> of that stack."  But that would require generating a different DWARF 
>> expression for the frame each time it shows up in a backtrace - which is 
>> unlike lldb's normal design of having an UnwindPlan for a function which is 
>> computed once and reused for the duration of the debug session.
>> 
>> I supposed you could add a user-defined DW_OP which means "get the current 
>> stack frame number" and then have your expression deref the emulated 
>> saved-pc stack to get the value?
>> 
>> lldb uses an intermediate representation of unwind information (UnwindPlan) 
>> which will use a DWARF expression, but you could also add an entry to 
>> UnwindPlan::Row::RegisterLocation::RestoreType which handled this, I suppose.
>> 
>> 
>>> On Mar 4, 2019, at 2:46 AM, Thomas Goodfellow via lldb-dev 
>>>  wrote:
>>> 
>>> I'm adding LLDB support for an unconventional platform which uses two
>>> stacks: one purely for return addresses and another for frame context
>>> (spilled registers, local variables, etc). There is no explicit link
>>> between the two stacks, i.e. the frame context doesn't include any
>>> pointer or index to identify the return address: the epilog for a
>>> subroutine amounts to unwinding the frame context then finally popping
>>> the top return address from the return stack. It has some resemblance
>>> to the Intel CET scheme of shadow stacks, but without the primary
>>> stack having a copy of the return address.
>>> 
>>> I can extend the emulation of the platform to better support LLDB. For
>>> example while the real hardware platform provides no access to the
>>> return address stack the emulation can expose it in the memory map,
>>> provide an additional debug register for querying it, etc, which DWARF
>>> expressions could then extract return addresses from. However doing
>>> this seems to require knowing the frame number and I haven't found a
>>> way of doing this (a pseudo-register manipulated by DWARF expressions
>>> worked but needed some LLDB hacks to sneak it through the existing
>>> link register handling, also seemed likely to be unstable against LLDB
>>> implementation changes)
>>> 
>>> Is there a way to access the call frame number (or a reliable proxy)
>>> from a DWARF expression? Or an existing example of unwinding a shadow
>>> stack?
>>> 
>>> Thanks,
>>> Tom
>> 



Re: [lldb-dev] Evaluating the same expression at the same breakpoint gets slower after a certain number of steps

2019-07-08 Thread Jason Molenda via lldb-dev
Hm, that's interesting.

I tried running a debug lldb on /bin/ls.  then I attached from another lldb.  I 
put a breakpoint on CommandObjectTargetModulesLookup::DoExecute and resumed 
execution.  In the debuggee lldb, I did

tar mod loo -a 0


and auto-repeated return so the same command would be executed over and over.

In the debugger lldb, I tried adding a command to the breakpoint,

br comm add 
p does_not_exist
DONE

and continuing - after a couple dozen times, I didn't see a slowdown.  I tried 
adding a breakpoint condition,

br mod -c 'doesNotExist == 1' 1

and continuing, and didn't see a slowdown after a few dozen repetitions.

I'm on macOS using .o file DWARF debugging.

I'm sure there's a bug here, but it may be more specific to the platform and 
type of debug info that you're using. It could also be that lldb is too small a 
project to repro this problem.



> On Jul 4, 2019, at 11:38 AM, Guilherme Andrade via lldb-dev 
>  wrote:
> 
> I have two breakpoint inside methods that are called every frame (C++ project 
> using Unreal), and every time one of them is reached, I evaluate one 
> expression (I'm being able to reproduce this using an undefined name, say 
> "undefinedVariable"). After a few iterations (usually tens), the time it 
> takes for LLDB to say that name doesn't exist increases, despite being the 
> same expression, at the same breakpoint and the call stack remaining 
> unchanged.
> 
> I've noticed that the number of lexical Decl queries and imports conducted by 
> Clang reported in 'Local metrics' increase.
> 
> They go from:
> Number of visible Decl queries by name : 29
> Number of lexical Decl queries: 9
> Number of imports initiated by LLDB   : 15
> Number of imports conducted by Clang  : 827
> Number of Decls completed: 5
> Number of records laid out  : 2 
> 
> To:
> Number of visible Decl queries by name : 29
> Number of lexical Decl queries: 14
> Number of imports initiated by LLDB   : 15
> Number of imports conducted by Clang  : 1342
> Number of Decls completed: 5
> Number of records laid out  : 2
> 
> Also, the number of SymbolFileDWARF operations in the logs jumps from 366 to 
> 406.
> 
> So, I've got two questions. 1) Is it safe to say that those extra imports and 
> Decl queries are responsible for the performance loss? 2) Why do they happen?
> 
> Thanks!



Re: [lldb-dev] Signal stack unwinding and __kernel_rt_sigreturn

2019-07-15 Thread Jason Molenda via lldb-dev


> On Jul 12, 2019, at 2:04 PM, Greg Clayton  wrote:
> 
> 
> 
>> On Jun 21, 2019, at 1:24 PM, Joseph Tremoulet via lldb-dev 
>>  wrote:
>> 
> 
>> 2 - When the user handler is invoked, its return address is set to the very 
>> first byte of __kernel_rt_sigreturn, which throws off unwinding because we 
>> assume that frame must really be at a call in the preceding function.  I 
>> asked about this on IRC, where Jan Kratochvil mentioned that the decrement 
>> shouldn't happen for frames with S in the eh_frame's augmentation.  I've 
>> verified that __kernel_rt_sigreturn indeed has the S.  I'm not sure where 
>> I'd find official documentation about that, but the DWARF Standards 
>> Committee's wiki[1] does link to Ian Lance Taylor's blog[2] which says "The 
>> character ‘S’ in the augmentation string means that this CIE represents a 
>> stack frame for the invocation of a signal handler. When unwinding the 
>> stack, signal stack frames are handled slightly differently: the instruction 
>> pointer is assumed to be before the next instruction to execute rather than 
>> after it."  So I'm interested in encoding that knowledge in LLDB, but not 
>> sure architecturally whether it would be more appropriate to dig into the 
>> eh_frame record earlier, or to just have this be a property of symbols 
>> flagged as trap handlers, or something else.
> 
> If we have hints that unwinding should not backup the PC, then this is fine 
> to use. We need the ability to indicate that a lldb_private::StackFrame frame 
> behaves like frame zero even when it is in the middle. I believe the code for 
> sigtramp already does this somehow. I CC'ed Jason Molenda so he can chime in.


Sorry for the delay in replying, yes the discussion over on 
https://reviews.llvm.org/D63667 is also related - we should record the S flag 
in the UnwindPlan but because of the order of operations, always getting the 
eh_frame UnwindPlan to see if this is a signal handler would be expensive (we 
try to delay fetching the eh_frame as much as possible because we pay a 
one-time cost per binary to scan the section).


> 
> 
>> I'd very much appreciate any feedback on this.  I've put up a patch[3] on 
>> Phab with a testcase that demonstrates the issue (on aarch64 linux) and an 
>> implementation of the low-churn "communicate this in the trap handler symbol 
>> list" approach.
>>  
>> Thanks,
>> -Joseph
>>  
>> [1] - http://wiki.dwarfstd.org/index.php?title=Exception_Handling
>> [2] - https://www.airs.com/blog/archives/460
>> [3] - https://reviews.llvm.org/D63667
>>  
> 



Re: [lldb-dev] [RFC] Fast Conditional Breakpoints (FCB)

2019-08-22 Thread Jason Molenda via lldb-dev


> On Aug 22, 2019, at 3:58 PM, Ismail Bennani via lldb-dev 
>  wrote:
> 
> Hi Greg,
> 
> Thanks for your suggestion!
> 
>> On Aug 22, 2019, at 3:35 PM, Greg Clayton  wrote:
>> 
>> Another possibility is to have the IDE insert NOP opcodes for you when you 
>> write a breakpoint with a condition and compile NOPs into your program. 
>> 
>> So the flow is:
>> - set a breakpoint in IDE
>> - modify breakpoint to add a condition
>> - compile and debug, the IDE inserts NOP instructions at the right places
> 
> We’re trying to avoid rebuilding every time we want to debug, but I’ll keep
> this in mind as an eventual fallback.

It's also valuable to use FCBs on third party code.  You might want to put an 
FCB on dlopen(), strcmp'ing the first argument against a specific value, without 
rebuilding the C libraries.  Recompilation/instrumentation makes this a lot 
simpler, but it also reduces the usefulness of the feature.


> 
>> - now when you debug you have a NOP you can use and not have to worry about 
>> moving instructions
>> 
>> 
>>> On Aug 22, 2019, at 5:29 AM, Pedro Alves via lldb-dev 
>>>  wrote:
>>> 
>>> On 8/22/19 12:36 AM, Ismail Bennani via lldb-dev wrote:
> On Aug 21, 2019, at 3:48 PM, Pedro Alves  wrote:
>>> 
> Say, you're using a 5 bytes jmp instruction to jump to the
> trampoline, so you need to replace 5 bytes at the breakpoint address.
> But the instruction at the breakpoint address is shorter than
> 5 bytes.  Like:
> 
> ADDR | BEFORE   | AFTER
> ---
>  | INSN1 (1 byte)   | JMP (5 bytes)
> 0001 | INSN2 (2 bytes)  |   <<< thread T's PC points here
> 0002 |  |
> 0003 | INSN3 (2 bytes)  |
> 
> Now once you resume execution, thread T is going to execute a bogus
> instruction at ADDR 0001.
 
 That’s a relevant point.
 
 I haven’t thought of it, but I think this can be mitigated by checking at
 the time of replacing the instructions if any thread is within the copied
 instructions bounds.
 
 If so, I’ll change all the threads' pcs that are in the critical region to
 point to new copied instruction location (inside the trampoline).
 
 This way, it won’t change the execution flow of the program.
>>> 
>>> Yes, I think that would work, assuming that you can stop all threads, 
>>> or all threads are already stopped, which I believe is true with
>>> LLDB currently.  If any thread is running (like in gdb's non-stop mode)
>>> then you can't do that, of course.
>>> 
 
 Thanks for pointing out this issue, I’ll make sure to add a fix to my
 implementation.
 
 If you have any other suggestion on how to tackle this problem, I’d like
 really to know about it :).
>>> 
>>> Not off hand.  I think I'd take a look at Dyninst, see if they have
>>> some sophisticated way to handle this scenario.
>>> 
>>> Thanks,
>>> Pedro Alves
>> 
> 
> Sincerely,
> 
> Ismail



Re: [lldb-dev] does vFile:Open actually work?

2019-10-10 Thread Jason Molenda via lldb-dev
Yeah, this is a bug in lldb's implementation of vFile:open.  lldb talks to 
lldb-server (in platform mode) so it will work with itself, but it will not 
interoperate with any other implementations.  That's the downside to having 
the client and server literally built from the same sources. :) 


I have a small self-contained platform implementation I wrote locally from the 
specification, and I stumbled across the bug last winter.  We'll need to add a 
packet so lldb can request standards-correct vFile:open: behavior, and fall 
back to its original implementation when that packet is not understood, so that 
for a while we keep interoperating with existing platforms in the wild.

A similar type of bug is lldb's incorrect implementation of the A packet, see 
https://bugs.llvm.org/show_bug.cgi?id=42471 - Spencer had the good suggestion 
of creating a protocol fixes packet to query for these so additional ones can 
be added in the future, he suggested:


> Request: "qProtocolFixes:fix;…" where each 'fix' is one of the following 
> strings:
>   - "GDBFlagsInvFileOpen"
>   - "GDBBaseInAPacket"
>   - ...
> Unknown strings are acceptable, but ignored. If a fix string is not present, 
> it is assumed that that fix is not present.
> 
> Response: "fix;…", same definition as 'fix' above.


I have a little TODO on myself to do this ().



J



> On Oct 10, 2019, at 2:39 PM, Larry D'Anna via lldb-dev 
>  wrote:
> 
> The comments in File.h say:
> 
>   // NB this enum is used in the lldb platform gdb-remote packet
>   // vFile:open: and existing values cannot be modified.
>   enum OpenOptions {
> eOpenOptionRead = (1u << 0),  // Open file for reading
> eOpenOptionWrite = (1u << 1), // Open file for writing
> eOpenOptionAppend =
> (1u << 2), // Don't truncate file when opening, append to end of file
> 
> And In GDBRemoteCommunicationServerCommon.cpp it says:
> 
>   uint32_t flags = packet.GetHexMaxU32(false, 0);
>   if (packet.GetChar() == ',') {
> mode_t mode = packet.GetHexMaxU32(false, 0600);
> FileSpec path_spec(path);
> FileSystem::Instance().Resolve(path_spec);
> // Do not close fd.
> auto file = FileSystem::Instance().Open(path_spec, flags, mode, 
> false);
> 
> 
> But in the GDB documentation it says:
> 
> @node Open Flags
> @unnumberedsubsubsec Open Flags
> @cindex open flags, in file-i/o protocol
> 
> All values are given in hexadecimal representation.
> 
> @smallexample
>   O_RDONLY0x0
>   O_WRONLY0x1
>   O_RDWR  0x2
>   O_APPEND0x8
>   O_CREAT   0x200
>   O_TRUNC   0x400
>   O_EXCL0x800
> @end smallexample
> 
> 
> Does vFile:Open actually work?  Are there any tests that cover it?
> 



Re: [lldb-dev] gdb-remote protocol questions

2020-01-26 Thread Jason Molenda via lldb-dev
I suspect your problem may be related to lldb not knowing how to walk the stack 
on this target.  Is  mips-unknown-linux-gnu correct?  What do you see if you 
turn on unwind logging, 'log enable lldb unwind'.  You can also have lldb show 
you the unwind rules at the current pc value with 'image show-unwind -a $pc'.  
I don't know what unwinders we have defined for this target in lldb right now 
-- if you have eh_frame information in your binary, lldb should read & use 
that.  Otherwise, if you have an assembly instruction profiler in lldb for 
mips, and start addresses for your functions, lldb should be able to inspect 
the instruction stream and figure out how to unwind out of the function 
correctly.  As a last resort, it will fall back to architecture rules for how 
to backtrace out of a function (defined in the ABI plugin) but those are often 
incorrect in prologue/epilogues (start & end of a function).



(btw if you support no-ack mode, there's a lot less packet traffic between your 
stub and lldb - recommended.)


J




> On Jan 25, 2020, at 3:08 PM, Alexander Zhang via lldb-dev 
>  wrote:
> 
> Hi,
> 
> I've been implementing a basic RSP protocol server for remotely debugging a 
> MIPS simulator, and have been having some trouble getting certain lldb 
> features to work there, in particular backtraces (bt) and instruction step 
> over (ni). Does someone know what packets these commands rely on to work? 
> I've attached some communication logs, and if it helps my code is at 
> https://github.com/CQCumbers/nmulator/blob/master/src/debugger.h
> 
> Please forgive me if this isn't the right place to ask - I know this isn't 
> directly part of lldb development but I've tried several other places and 
> haven't been able to find anyone familiar with the subject.
> 
> Also, just a user question, but is there a way to show register values in hex 
> format without leading zeros?
> 
> Thanks,
> Alexander



Re: [lldb-dev] gdb-remote protocol questions

2020-01-28 Thread Jason Molenda via lldb-dev
Hi Alexander, sorry for the delay in replying.

The attached unwind log shows that lldb is using the architectural default 
unwind plan for this target.  You can see where this unwind plan in constructed 
at

ABISysV_mips::CreateDefaultUnwindPlan

it says to find the caller's pc value at *($r29),

  // Our Call Frame Address is the stack pointer value
  row->GetCFAValue().SetIsRegisterPlusOffset(dwarf_r29, 0);

The default unwind plan says to find the caller's pc value in $r31,

  // The previous PC is in the RA
  row->SetRegisterLocationToRegister(dwarf_pc, dwarf_r31, true);
  unwind_plan.AppendRow(row);

which is fine for frame 0, we can look at $r31, but as soon as we move off of 
frame 0, we have to find the saved $r31 value on the stack (frame 0 had to 
spill it to the stack, right).  

Unfortunately we don't have the function bounds of frame 0, we only have the 
architectural default unwind plan.  This default unwind plan cannot do the 
right thing except on frame 0.


On other architectures where a return address register is used, like arm, the 
default unwind plan assumes that the pc and saved frame pointer have been 
spilled to stack, and there is a convention that they're the first two things 
spilled to stack.  So we see in ABISysV_arm::CreateDefaultUnwindPlan,

  row->SetRegisterLocationToAtCFAPlusOffset(fp_reg_num, ptr_size * -2, true);
  row->SetRegisterLocationToAtCFAPlusOffset(pc_reg_num, ptr_size * -1, true);

We also have a ABISysV_arm::CreateFunctionEntryUnwindPlan unwind plan that is 
guaranteed to be valid at the first instruction of a function; it says that the 
saved PC is in the return address register,

  // Our Call Frame Address is the stack pointer value
  row->GetCFAValue().SetIsRegisterPlusOffset(sp_reg_num, 0);

  // The previous PC is in the LR
  row->SetRegisterLocationToRegister(pc_reg_num, lr_reg_num, true);
  unwind_plan.AppendRow(row);

although I should warn that I'm 99% sure that "nexti" doesn't currently record 
the fact that it is potentially stepping into a function, so lldb doesn't know 
to use the FunctionEntryUnwindPlan.  We probably should.


fwiw the 0xffffffffffffffff value is lldb's LLDB_INVALID_ADDRESS.

J



> On Jan 27, 2020, at 10:43 AM, Alexander Zhang via lldb-dev 
>  wrote:
> 
> Hi,
> 
> Thanks for pointing me towards stack unwinding. I don't have debug 
> information much of the time, so I'm depending on the architecture rules for 
> backtracing. A look at the mips ABI plugin shows it uses dwarf register 
> numbers to get the register values it needs, and I wasn't including them in 
> my qRegisterInfo responses. After fixing this, step over and step out appear 
> to work correctly, which is a great help.
> 
> However, backtraces only show 2 frames with the current pc and ra values, no 
> matter where I am, so it seems there's some problem getting stack frame info 
> from the actual stack. I've attached an unwind log from running bt inside a 
> function that should have a deeper backtrace. The afa value of 
> 0x looks suspicious to me, but I don't really understand 
> where it comes from. The frame before 0x8002ee70 should, I think, be 
> 0x80026a6c, as that's the pc after stepping out twice.
> 
> Thanks,
> Alexander 
> 
> On Sun, Jan 26, 2020 at 4:21 PM Jason Molenda  wrote:
> I suspect your problem may be related to lldb not knowing how to walk the 
> stack on this target.  Is  mips-unknown-linux-gnu correct?  What do you see 
> if you turn on unwind logging, 'log enable lldb unwind'.  You can also have 
> lldb show you the unwind rules at the current pc value with 'image 
> show-unwind -a $pc'.  I don't know what unwinders we have defined for this 
> target in lldb right now -- if you have eh_frame information in your binary, 
> lldb should read & use that.  Otherwise, if you have an assembly instruction 
> profiler in lldb for mips, and start addresses for your functions, lldb 
> should be able to inspect the instruction stream and figure out how to unwind 
> out of the function correctly.  As a last resort, it will fall back to 
> architecture rules for how to backtrace out of a function (defined in the ABI 
> plugin) but those are often incorrect in prologue/epilogues (start & end of a 
> function).
> 
> 
> 
> (btw if you support no-ack mode, there's a lot less packet traffic between 
> your stub and lldb - recommended.)
> 
> 
> J
> 
> 
> 
> 
> > On Jan 25, 2020, at 3:08 PM, Alexander Zhang via lldb-dev 
> >  wrote:
> > 
> > Hi,
> > 
> > I've been implementing a basic RSP protocol server for remotely debugging a 
> > MIPS simulator, and have been having some trouble getting certain lldb 
> > features to work there, in particular backtraces (bt) and instruction step 
> > over (ni). Does someone know what packets these commands rely on to work? 
> > I've attached some communication logs, and if it helps my code is at 
> > https://github.com/CQCumbers/nmulator/blob/master/src/debugger.h
> > 
> > Please forgive me if this isn't the right

Re: [lldb-dev] LLDB C++ API causes SIGSEGV

2020-03-08 Thread Jason Molenda via lldb-dev
Hi Rui, you need to call SBDebugger::Terminate() before your program exits.


On 03/08/20 01:54 PM, Rui Liu via lldb-dev   wrote: 
> 
> 
> Hi LLDB devs,
> 
> I'm trying to build a debugger integration that uses LLDB C++ API. Due to 
> lack of documentation, I'm basically using the examples in the python API as 
> a guidance.
> 
> I had following code, which basically contains one line creating a 
> SBDebugger, but it generates a SIGSEGV fault..
> 
> #include 
> using namespace lldb;
> 
> int main()
> {
>  SBDebugger debugger = SBDebugger::Create();
> }
> 
> Did I do something wrong?
> 
> Kind regards,
> Rui
> 
> 


Re: [lldb-dev] RFC: AArch64 Linux Memory Tagging Support for LLDB

2020-08-10 Thread Jason Molenda via lldb-dev
Hi David, thanks for the great writeup.  I hadn't been following the gdb MTE 
support.

This all looks reasonable to me.  A few quick thoughts --

The initial idea of commands like "memory showptrtag", "memory showtag", 
"memory checktag" - it might be better to put all of these under "memory tag 
...", similar to how "breakpoint command ..." works.

It makes sense to have lldb read/write the control pseudo-register as if it 
were a normal reg, in its own register grouping.  You mentioned that you had 
some thoughts about how to make it more readable to users - I know this is 
something Greg has been hoping to do / see done at some point, for control 
registers where we could annotate the registers a lot better.  I noticed that 
qemu for x86 provides exactly this kind of annotation information in its 
register target.xml definitions (v. 
lldb/test/API/functionalities/gdb_remote_client/TestRegDefinitionInParts.py ) 
but I don't THINK we do anything with these annotations today.  Not at all 
essential to this work, but just noting that this is something we all would 
like to see better support for.

As for annotating the reason the program stopped on an MTE exception, Ismail 
was working on something similar in the past - although as you note, the really 
cool thing would be decoding the faulting instruction to understand what target 
register was responsible for the fault (and maybe even working back via the 
debug info to figure out what user-level variable it was??) to annotate it, 
which is something we're not doing anywhere right now.  There was a little 
proof-of-concept thing that Sean Callanan did years ago "frame diagnose" which 
would try to annotate to the user in high-level source terms why a fault 
happened, but I think it was using some string matching of x86 instructions to 
figure out what happened. :)

We're overdue to upstream the PAC support for lldb that we're using, it's 
dependent on some other work being upstreamed that hasn't been done yet, but 
the general scheme involves querying the lldb-server / debugserver / corefile 
to get the number of bits used for virtual addressing, and then it just stomps 
on all the other high bits when trying to dereference values.  If you do 
'register read' of a function pointer, we show the actual value with PAC bits, 
then we strip the PAC bits off and if it resolves to a symbol, we print the 
stripped value and symbol that we're pointing to. It seems similar to what MTE 
will need -- if you have a variable pointing to heap using MTE, and you do `x/g 
var`, lldb should strip off the MTE bits before sending the address to read to 
lldb-server. The goal with the PAC UI design is to never hide the PAC details 
from the user, but to additionally show the PAC-less address when we're sure 
that it's an address in memory.  Tougher to do that with MTE because we'll 
never be pointing to a symbol, it will be heap or stack.

J





> On Aug 10, 2020, at 3:41 AM, David Spickett via lldb-dev 
>  wrote:
> 
> Hi all,
> 
> What follows is my proposal for supporting AArch64's memory tagging
> extension in LLDB. I think the link in the first paragraph is a good
> introduction if you haven't come across memory tagging before.
> 
> I've also put the document in a Google Doc if that's easier for you to
> read: 
> https://docs.google.com/document/d/13oRtTujCrWOS_2RSciYoaBPNPgxIvTF2qyOfhhUTj1U/edit?usp=sharing
> (please keep comments to this list though)
> 
> Any and all comments welcome. Particularly I would like opinions on
> the naming of the commands, as this extension is AArch64 specific but
> the concept of memory tagging itself is not.
> (I've added some people on Cc who might have particular interest)
> 
> Thanks,
> David Spickett.
> 
> 
> 
> # RFC: AArch64 Linux Memory Tagging Support for LLDB
> 
> ## What is memory tagging?
> 
> Memory tagging is an extension added in the Armv8.5-a architecture for 
> AArch64.
> It allows tagging pointers and storing those tags so that hardware can 
> validate
> that a pointer matches the memory address it is trying to access. These paired
> tags are stored in the upper bits of the pointer (the “logical” tag) and in
> special memory in hardware (the “allocation” tag). Each tag is 4 bits in size.
> 
> https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/enhancing-memory-safety
> 
> ## Definitions
> 
> * memtag - This is the clang name for the extension as in
> “-march=armv8.5-a+memtag”
> * mte - An alternative name for memtag, also the llvm backend name for
> the extension.
>  This document may use memtag/memory tagging/MTE at times, they mean
> the same thing.
> * logical tag - The tag stored inside a pointer variable (accessible
> via normal shift and mask)
> * allocation tag - The tag stored in tag memory (which the hardware provides)
>  for a particular tag granule
> * tag granule - The amount of memory that a single tag applies to,
> which is 16 bytes.
> 
> ## Existing Tool Support
> 
> * GCC/

Re: [lldb-dev] RFC: AArch64 Linux Memory Tagging Support for LLDB

2020-08-13 Thread Jason Molenda via lldb-dev
read/write 
> pauth Cmask/Dmask registers when available. I am currently investigating 
> unwinder support which means any further implementation from my side will be 
> an overlap with what you guys have done already. There can also be design 
> conflicts and I would really appreciate it if we can come on some common 
> ground regarding upstreaming of Apple's downstream pointer authentication 
> patches. We may collaborate and upstream unwinder support.
> 
> Thanks!
> 
> On Tue, 11 Aug 2020 at 04:13, Jason Molenda via lldb-dev 
>  wrote:
> Hi David, thanks for the great writeup.  I hadn't been following the gdb MTE 
> support.
> 
> This all looks reasonable to me.  A few quick thoughts --
> 
> The initial idea of commands like "memory showptrtag", "memory showtag", 
> "memory checktag" - it might be better to put all of these under "memory tag 
> ...", similar to how "breakpoint command ..." works.
> 
> It makes sense to have lldb read/write the control pseudo-register as if it 
> were a normal reg, in its own register grouping.  You mentioned that you had 
> some thoughts about how to make it more readable to users - I know this is 
> something Greg has been hoping to do / see done at some point, for control 
> registers where we could annotate the registers a lot better.  I noticed that 
> qemu for x86 provides exactly this kind of annotation information in its 
> register target.xml definitions (v. 
> lldb/test/API/functionalities/gdb_remote_client/TestRegDefinitionInParts.py ) 
> but I don't THINK we do anything with these annotations today.  Not at all 
> essential to this work, but just noting that this is something we all would 
> like to see better support for.
> 
> As for annotating the reason the program stopped on an MTE exception, Ismail 
> was working on something similar in the past - although as you note, the 
> really cool thing would be decoding the faulting instruction to understand 
> what target register was responsible for the fault (and maybe even working 
> back via the debug info to figure out what user-level variable it was??) to 
> annotate it, which is something we're not doing anywhere right now.  There 
> was a little proof-of-concept thing that Sean Callanan did years ago "frame 
> diagnose" which would try to annotate to the user in high-level source terms 
> why a fault happened, but I think it was using some string matching of x86 
> instructions to figure out what happened. :)
> 
> We're overdue to upstream the PAC support for lldb that we're using, it's 
> dependent on some other work being upstreamed that hasn't been done yet, but 
> the general scheme involves querying the lldb-server / debugserver / corefile 
> to get the number of bits used for virtual addressing, and then it just 
> stomps on all the other high bits when trying to dereference values.  If you 
> do 'register read' of a function pointer, we show the actual value with PAC 
> bits, then we strip the PAC bits off and if it resolves to a symbol, we print 
> the stripped value and symbol that we're pointing to. It seems similar to 
> what MTE will need -- if you have a variable pointing to heap using MTE, and 
> you do `x/g var`, lldb should strip off the MTE bits before sending the 
> address to read to lldb-server. The goal with the PAC UI design is to never 
> hide the PAC details from the user, but to additionally show the PAC-less 
> address when we're sure that it's an address in memory.  Tougher to do that 
> with MTE because we'll never be pointing to a symbol, it will be heap or 
> stack.
> 
> J
> 
> 
> 
> 
> 
> > On Aug 10, 2020, at 3:41 AM, David Spickett via lldb-dev 
> >  wrote:
> > 
> > Hi all,
> > 
> > What follows is my proposal for supporting AArch64's memory tagging
> > extension in LLDB. I think the link in the first paragraph is a good
> > introduction if you haven't come across memory tagging before.
> > 
> > I've also put the document in a Google Doc if that's easier for you to
> > read: 
> > https://docs.google.com/document/d/13oRtTujCrWOS_2RSciYoaBPNPgxIvTF2qyOfhhUTj1U/edit?usp=sharing
> > (please keep comments to this list though)
> > 
> > Any and all comments welcome. Particularly I would like opinions on
> > the naming of the commands, as this extension is AArch64 specific but
> > the concept of memory tagging itself is not.
> > (I've added some people on Cc who might have particular interest)
> > 
> > Thanks,
> > David Spickett.
> > 
> > 
> > 
> > # RFC: AArch64 Linux Memory Tagging Support for LLDB

Re: [lldb-dev] RFC: AArch64 Linux Memory Tagging Support for LLDB

2020-08-13 Thread Jason Molenda via lldb-dev
> There's some additional changes for jitting expressions, and to be honest 
> it's been ages since I've looked at that code so I can't speak on it very 
> authoritatively without re-reading a bunch (and I authored very little of 
> it).  
> 
> 
> 
> A good place to start IMO, is the base idea of Process being the knower of 
> the # of addressing bits, and the ABI being the knower of how to strip PAC 
> bits off.  I chose the ABI for this method because this *is* ABI, but given 
> my ideas about an all-encompassing ABI::GetAsVirtualAddress that different 
> parts of lldb pass things that can be signed in every ABI, maybe it doesn't 
> even make sense to bother putting it in the ABI, it could go in Process and 
> only strip off bits if the # of virtual addressing bits has been set.
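
The bit-stripping scheme described above can be sketched as follows. `StripNonAddressableBits` and its parameters are illustrative names, not actual lldb API, and a real implementation may need to sign-extend high (kernel) addresses rather than simply zero the top bits:

```cpp
#include <cstdint>

// Sketch of the scheme described above: the Process knows the number of
// bits used for virtual addressing; everything above that is treated as
// PAC (or MTE) metadata and cleared before the value is dereferenced.
// Names are illustrative, not actual lldb API.
static uint64_t StripNonAddressableBits(uint64_t addr, uint32_t num_va_bits) {
  if (num_va_bits == 0 || num_va_bits >= 64)
    return addr; // addressing width unknown; leave the value untouched
  const uint64_t mask = (1ULL << num_va_bits) - 1;
  return addr & mask; // stomp on all the high (signature/tag) bits
}
```

For example, with 47 addressing bits, a signed function-pointer value of 0xe7f6800100004000 strips to 0x0000000100004000 before any symbol lookup is attempted.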
> 
> 
>> On Aug 13, 2020, at 12:56 AM, Omair Javaid  wrote:
>> 
>> Hi Jason,
>> 
>> I wanted to bring this to your attention that we are also working on pointer 
>> authentication support. We have so far only done register context changes to 
>> allow for enabling/disabling pointer authentication features and read/write 
>> pauth Cmask/Dmask registers when available. I am currently investigating 
>> unwinder support which means any further implementation from my side will be 
>> an overlap with what you guys have done already. There can also be design 
>> conflicts and I would really appreciate it if we can come on some common 
>> ground regarding upstreaming of Apple's downstream pointer authentication 
>> patches. We may collaborate and upstream unwinder support.
>> 
>> Thanks!
>> 
>> On Tue, 11 Aug 2020 at 04:13, Jason Molenda via lldb-dev 
>>  wrote:
>> Hi David, thanks for the great writeup.  I hadn't been following the gdb MTE 
>> support.
>> 
>> This all looks reasonable to me.  A few quick thoughts --
>> 
>> The initial idea of commands like "memory showptrtag", "memory showtag", 
>> "memory checktag" - it might be better to put all of these under "memory tag 
>> ...", similar to how "breakpoint command ..." works.
>> 
>> It makes sense to have lldb read/write the control pseudo-register as if it 
>> were a normal reg, in its own register grouping.  You mentioned that you had 
>> some thoughts about how to make it more readable to users - I know this is 
>> something Greg has been hoping to do / see done at some point, for control 
>> registers where we could annotate the registers a lot better.  I noticed 
>> that qemu for x86 provides exactly this kind of annotation information in 
>> its register target.xml definitions (v. 
>> lldb/test/API/functionalities/gdb_remote_client/TestRegDefinitionInParts.py 
>> ) but I don't THINK we do anything with these annotations today.  Not at all 
>> essential to this work, but just noting that this is something we all would 
>> like to see better support for.
>> 
>> As for annotating the reason the program stopped on an MTE exception, Ismail 
>> was working on something similar in the past - although as you note, the 
>> really cool thing would be decoding the faulting instruction to understand 
>> what target register was responsible for the fault (and maybe even working 
>> back via the debug info to figure out what user-level variable it was??) to 
>> annotate it, which is something we're not doing anywhere right now.  There 
>> was a little proof-of-concept thing that Sean Callanan did years ago "frame 
>> diagnose" which would try to annotate to the user in high-level source terms 
>> why a fault happened, but I think it was using some string matching of x86 
>> instructions to figure out what happened. :)
>> 
>> We're overdue to upstream the PAC support for lldb that we're using, it's 
>> dependent on some other work being upstreamed that hasn't been done yet, but 
>> the general scheme involves querying the lldb-server / debugserver / 
>> corefile to get the number of bits used for virtual addressing, and then it 
>> just stomps on all the other high bits when trying to dereference values.  
>> If you do 'register read' of a function pointer, we show the actual value 
>> with PAC bits, then we strip the PAC bits off and if it resolves to a 
>> symbol, we print the stripped value and symbol that we're pointing to. It 
>> seems similar to what MTE will need -- if you have a variable pointing to 
>> heap using MTE, and you do `x/g var`, lldb should strip off the MTE bits 
>> before sending the address to read to lldb-server. The goal with the PAC UI 
>> design is to never hide the PAC details from the user, but to additionally 
>> show the PAC-less address when we're sure that it's an address in memory.  
>> Tougher to do that with MTE because we'll never be pointing to a symbol, it 
>> will be heap or stack.

Re: [lldb-dev] LLDB might not be handling DW_CFA_restore or DW_CFA_restore_extended correctly in all cases

2020-10-08 Thread Jason Molenda via lldb-dev
Good bug find!

It seems to me that DWARFCallFrameInfo should push the initial CIE register 
setup instructions as being the state at offset 0 in the function (in fact I'd 
say it's a bug if it's not). If that were done, then getting RowAtIndex(0) 
should be synonymous with get-the-CIE-register-unwind-rules, and this code 
would be correct.

Looking at DWARFCallFrameInfo::FDEToUnwindPlan, we do

  unwind_plan.SetPlanValidAddressRange(range);
  UnwindPlan::Row *cie_initial_row = new UnwindPlan::Row;
  *cie_initial_row = cie->initial_row;
  UnwindPlan::RowSP row(cie_initial_row);

  unwind_plan.SetRegisterKind(GetRegisterKind());
  unwind_plan.SetReturnAddressRegister(cie->return_addr_reg_num);

cie->initial_row is set by DWARFCallFrameInfo::HandleCommonDwarfOpcode.

I think the bug here is DWARFCallFrameInfo::FDEToUnwindPlan not pushing that 
initial row at offset 0, isn't it? We don't really use DWARF CFI on darwin any 
more so I don't have a lot of real world experience here.
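
The proposed fix can be sketched with a toy model. `Row` and `UnwindPlan` here are simplified stand-ins for the lldb classes, not the real API; the point is only that appending the CIE-derived row at function offset 0 makes `GetRowAtIndex(0)` synonymous with the CIE register unwind rules:

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <vector>

// Toy stand-ins for lldb's UnwindPlan/Row, illustrative only.
struct Row {
  int64_t offset = 0;
  std::map<uint32_t, int32_t> reg_rules; // reg num -> CFA offset of save
};

struct UnwindPlan {
  std::vector<std::shared_ptr<Row>> rows;
  void AppendRow(std::shared_ptr<Row> row) { rows.push_back(std::move(row)); }
  std::shared_ptr<Row> GetRowAtIndex(size_t i) { return rows[i]; }
};

inline UnwindPlan FDEToUnwindPlan(const Row &cie_initial_row) {
  UnwindPlan plan;
  // The fix under discussion: row 0 is the CIE state, unconditionally.
  auto row = std::make_shared<Row>(cie_initial_row);
  row->offset = 0;
  plan.AppendRow(row);
  // ... FDE opcodes would then append further rows as the pc advances ...
  return plan;
}
```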



> On Oct 8, 2020, at 4:01 PM, Greg Clayton  wrote:
> 
> Hello LLDB devs,
> 
> This is a deep dive into an issue I found in the LLDB handling of DWARF call 
> frame information, so no need to read further if this doesn't interest you!
> 
> I am in the process of adding some support to LLVM for parsing the opcode 
> state machines for CIE and FDE objects that produces unwind rows. While 
> making unit tests to test DW_CFA_restore and DW_CFA_restore_extended opcodes, 
> I read the DWARF spec that states:
> 
> "The DW_CFA_restore instruction takes a single operand (encoded with the 
> opcode) that represents a register number. The required action is to change 
> the rule for the indicated register to the rule assigned it by the 
> initial_instructions in the CIE."
> 
> Looking at the LLDB code in DWARFCallFrameInfo.cpp I see code that is 
> simplified to:
> 
> case DW_CFA_restore:
>  if (unwind_plan.IsValidRowIndex(0) && 
>  unwind_plan.GetRowAtIndex(0)->GetRegisterInfo(reg_num, reg_location))
>  row->SetRegisterInfo(reg_num, reg_location);
>  break;
> 
> 
> The issue is, the CIE contains initial instructions, but it doesn't push a 
> row after doing these instructions, the FDE will push a row when it emits a 
> DW_CFA_advance_loc, DW_CFA_advance_loc1, DW_CFA_advance_loc2, 
> DW_CFA_advance_loc4 or DW_CFA_set_loc opcode. So the DWARF spec says we 
> should restore the register rule to be what it was in the CIE's initial 
> instructions, but we are restoring it to the first row that was parsed. This 
> will mostly not get us into trouble because .debug_frame and .eh_frame 
> usually have a DW_CFA_advance_locXXX or DW_CFA_set_loc opcode as the first 
> opcode, but it isn't a requirement and a FDE could modify a register value 
> prior to pushing the first row at index zero. So we might be restoring the 
> register incorrectly in some cases according to the spec. Also, what if there 
> was no value specified in the CIE's initial instructions for a register? 
> Should we remove the register value to match the state of the CIE's initial 
> instructions if there is no rule for the register? We are currently leaving 
> this register as having the same value if there is no value for the register 
> in the first row.
> 
> Let me know what you think.
> 
> Greg Clayton
> 

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB might not be handling DW_CFA_restore or DW_CFA_restore_extended correctly in all cases

2020-10-08 Thread Jason Molenda via lldb-dev


> On Oct 8, 2020, at 9:17 PM, Greg Clayton  wrote:
> 
> 
> 
>> On Oct 8, 2020, at 8:55 PM, Jason Molenda  wrote:
>> 
>> Good bug find!
>> 
>> It seems to me that DWARFCallFrameInfo should push the initial CIE register 
>> setup instructions as being the state at offset 0 in the function (in fact 
>> I'd say it's a bug if it's not). If that were done, then getting 
>> RowAtIndex(0) should be synonymous with get-the-CIE-register-unwind-rules, 
>> and this code would be correct.
>> 
>> Looking at DWARFCallFrameInfo::FDEToUnwindPlan, we do
>> 
>> unwind_plan.SetPlanValidAddressRange(range);
>> UnwindPlan::Row *cie_initial_row = new UnwindPlan::Row;
>> *cie_initial_row = cie->initial_row;
>> UnwindPlan::RowSP row(cie_initial_row);
>> 
>> unwind_plan.SetRegisterKind(GetRegisterKind());
>> unwind_plan.SetReturnAddressRegister(cie->return_addr_reg_num);
>> 
>> cie->initial_row is set by DWARFCallFrameInfo::HandleCommonDwarfOpcode.
>> 
>> I think the bug here is DWARFCallFrameInfo::FDEToUnwindPlan not pushing that 
>> initial row at offset 0, isn't it? We don't really use DWARF CFI on darwin 
>> any more so I don't have a lot of real world experience here.
> 
> The only opcodes that push a row are DW_CFA_advance_locXXX and 
> DW_CFA_set_loc, so I don't think that is the right fix. I think we need to 
> pass a copy of just the registers from the "cie->initial_row" object around 
> to the opcode parsing code for restoration purposes.

The Rows in an UnwindPlan at each function offset describes the register unwind 
rules there.  If the CIE has a register unwind rule, we should push that Row 
for offset 0 in the function.  The function today may only push a Row when the 
pc has been advanced, but that is not (IMO) correct.

I can't think of any register rule like this on arm64 or x86_64.  You could say 
the CFA is $sp+8 at function entry, and the caller's $sp value is CFA+0 (aka 
$sp+8) to reflect the return-pc value that was pushed by the CALLQ.  But I 
don't know if actual CFI on x86_64 includes an SP unwind rule in the CIE like 
this.

> 
> 
>> 
>> 
>> 
>>> On Oct 8, 2020, at 4:01 PM, Greg Clayton  wrote:
>>> 
>>> Hello LLDB devs,
>>> 
>>> This is a deep dive into an issue I found in the LLDB handling of DWARF 
>>> call frame information, so no need to read further if this doesn't interest 
>>> you!
>>> 
>>> I am in the process of adding some support to LLVM for parsing the opcode 
>>> state machines for CIE and FDE objects that produces unwind rows. While 
>>> making unit tests to test DW_CFA_restore and DW_CFA_restore_extended 
>>> opcodes, I read the DWARF spec that states:
>>> 
>>> "The DW_CFA_restore instruction takes a single operand (encoded with the 
>>> opcode) that represents a register number. The required action is to change 
>>> the rule for the indicated register to the rule assigned it by the 
>>> initial_instructions in the CIE."
>>> 
>>> Looking at the LLDB code in DWARFCallFrameInfo.cpp I see code that is 
>>> simplified to:
>>> 
>>> case DW_CFA_restore:
>>> if (unwind_plan.IsValidRowIndex(0) && 
>>>unwind_plan.GetRowAtIndex(0)->GetRegisterInfo(reg_num, reg_location))
>>>row->SetRegisterInfo(reg_num, reg_location);
>>> break;
>>> 
>>> 
>>> The issue is, the CIE contains initial instructions, but it doesn't push a 
>>> row after doing these instructions, the FDE will push a row when it emits a 
>>> DW_CFA_advance_loc, DW_CFA_advance_loc1, DW_CFA_advance_loc2, 
>>> DW_CFA_advance_loc4 or DW_CFA_set_loc opcode. So the DWARF spec says we 
>>> should restore the register rule to be what it was in the CIE's initial 
>>> instructions, but we are restoring it to the first row that was parsed. 
>>> This will mostly not get us into trouble because .debug_frame and .eh_frame 
>>> usually have a DW_CFA_advance_locXXX or DW_CFA_set_loc opcode as the first 
>>> opcode, but it isn't a requirement and a FDE could modify a register value 
>>> prior to pushing the first row at index zero. So we might be restoring the 
>>> register incorrectly in some cases according to the spec. Also, what if 
>>> there was no value specified in the CIE's initial instructions for a 
>>> register? Should we remove the register value to match the state of the 
>>> CIE's initial instructions if there is no rule for the register? We are 
>>> currently leaving this register as having the same value if there is no 
>>> value for the register in the first row.
>>> 
>>> Let me know what you think.
>>> 
>>> Greg Clayton
>>> 
>> 
> 



Re: [lldb-dev] LLDB might not be handling DW_CFA_restore or DW_CFA_restore_extended correctly in all cases

2020-10-08 Thread Jason Molenda via lldb-dev


> On Oct 8, 2020, at 9:17 PM, Greg Clayton  wrote:
> 
> 
> 
>> On Oct 8, 2020, at 8:55 PM, Jason Molenda  wrote:
>> 
>> Good bug find!
>> 
>> It seems to me that DWARFCallFrameInfo should push the initial CIE register 
>> setup instructions as being the state at offset 0 in the function (in fact 
>> I'd say it's a bug if it's not). If that were done, then getting 
>> RowAtIndex(0) should be synonymous with get-the-CIE-register-unwind-rules, 
>> and this code would be correct.
>> 
>> Looking at DWARFCallFrameInfo::FDEToUnwindPlan, we do
>> 
>> unwind_plan.SetPlanValidAddressRange(range);
>> UnwindPlan::Row *cie_initial_row = new UnwindPlan::Row;
>> *cie_initial_row = cie->initial_row;
>> UnwindPlan::RowSP row(cie_initial_row);
>> 
>> unwind_plan.SetRegisterKind(GetRegisterKind());
>> unwind_plan.SetReturnAddressRegister(cie->return_addr_reg_num);
>> 
>> cie->initial_row is set by DWARFCallFrameInfo::HandleCommonDwarfOpcode.
>> 
>> I think the bug here is DWARFCallFrameInfo::FDEToUnwindPlan not pushing that 
>> initial row at offset 0, isn't it? We don't really use DWARF CFI on darwin 
>> any more so I don't have a lot of real world experience here.
> 
> The only opcodes that push a row are DW_CFA_advance_locXXX and 
> DW_CFA_set_loc, so I don't think that is the right fix. I think we need to 
> pass a copy of just the registers from the "cie->initial_row" object around 
> to the opcode parsing code for restoration purposes.


I think everything I'm saying here is besides the point, though.  Unless we 
ALWAYS push the initial unwind state (from the CIE) to an UnwindPlan, the 
DW_CFA_restore is not going to work.  If an unwind rule for r12 says 
"DW_CFA_restore" and at offset 0 in the function, the unwind rule for r12 was 
"same" (i.e. no rule), but we return the RowAtIndex(0) and the first 
instruction, I don't know, spills it or something, then the DW_CFA_restore 
would set the r12 rule to "r12 was spilled" instead of "r12 is same".

So the only way DW_CFA_restore would behave correctly, with this, is if we 
always push a Row at offset 0 with the rules from the CIE, or with no rules at 
all, just the initial unwind state showing how the CFA is set and no register 
rules.
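
The restore semantics being argued for can be sketched in a toy model: the restore must come from a saved copy of the CIE's initial register rules, not from whatever happens to be in the row at index 0. `Rule`, `kSame`, and `HandleRestore` are illustrative names; `kSame` stands in for "no rule / register is unmodified":

```cpp
#include <cstdint>
#include <map>

// Illustrative model of DW_CFA_restore, not lldb API.
enum Rule : int { kSame = 0, kSpilled = 1 };

using RegRules = std::map<uint32_t, Rule>;

inline void HandleRestore(uint32_t reg, const RegRules &cie_initial,
                          RegRules &current_row) {
  auto it = cie_initial.find(reg);
  if (it != cie_initial.end())
    current_row[reg] = it->second; // restore the rule from the CIE
  else
    current_row.erase(reg); // CIE had no rule: back to "same as caller"
}
```

With this, if r12 had no rule in the CIE but the first instruction spilled it (so it shows as spilled in row 0), a DW_CFA_restore correctly yields "r12 is same", where restoring from RowAtIndex(0) would wrongly yield "r12 was spilled".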


> 
> 
>> 
>> 
>> 
>>> On Oct 8, 2020, at 4:01 PM, Greg Clayton  wrote:
>>> 
>>> Hello LLDB devs,
>>> 
>>> This is a deep dive into an issue I found in the LLDB handling of DWARF 
>>> call frame information, so no need to read further if this doesn't interest 
>>> you!
>>> 
>>> I am in the process of adding some support to LLVM for parsing the opcode 
>>> state machines for CIE and FDE objects that produces unwind rows. While 
>>> making unit tests to test DW_CFA_restore and DW_CFA_restore_extended 
>>> opcodes, I read the DWARF spec that states:
>>> 
>>> "The DW_CFA_restore instruction takes a single operand (encoded with the 
>>> opcode) that represents a register number. The required action is to change 
>>> the rule for the indicated register to the rule assigned it by the 
>>> initial_instructions in the CIE."
>>> 
>>> Looking at the LLDB code in DWARFCallFrameInfo.cpp I see code that is 
>>> simplified to:
>>> 
>>> case DW_CFA_restore:
>>> if (unwind_plan.IsValidRowIndex(0) && 
>>>unwind_plan.GetRowAtIndex(0)->GetRegisterInfo(reg_num, reg_location))
>>>row->SetRegisterInfo(reg_num, reg_location);
>>> break;
>>> 
>>> 
>>> The issue is, the CIE contains initial instructions, but it doesn't push a 
>>> row after doing these instructions, the FDE will push a row when it emits a 
>>> DW_CFA_advance_loc, DW_CFA_advance_loc1, DW_CFA_advance_loc2, 
>>> DW_CFA_advance_loc4 or DW_CFA_set_loc opcode. So the DWARF spec says we 
>>> should restore the register rule to be what it was in the CIE's initial 
>>> instructions, but we are restoring it to the first row that was parsed. 
>>> This will mostly not get us into trouble because .debug_frame and .eh_frame 
>>> usually have a DW_CFA_advance_locXXX or DW_CFA_set_loc opcode as the first 
>>> opcode, but it isn't a requirement and a FDE could modify a register value 
>>> prior to pushing the first row at index zero. So we might be restoring the 
>>> register incorrectly in some cases according to the spec. Also, what if 
>>> there was no value specified in the CIE's initial instructions for a 
>>> register? Should we remove the register value to match the state of the 
>>> CIE's initial instructions if there is no rule for the register? We are 
>>> currently leaving this register as having the same value if there is no 
>>> value for the register in the first row.
>>> 
>>> Let me know what you think.
>>> 
>>> Greg Clayton
>>> 
>> 
> 



Re: [lldb-dev] LLDB might not be handling DW_CFA_restore or DW_CFA_restore_extended correctly in all cases

2020-10-08 Thread Jason Molenda via lldb-dev


> On Oct 8, 2020, at 10:06 PM, Jason Molenda  wrote:
> 
> 
> 
>> On Oct 8, 2020, at 9:17 PM, Greg Clayton  wrote:
>> 
>> 
>> 
>>> On Oct 8, 2020, at 8:55 PM, Jason Molenda  wrote:
>>> 
>>> Good bug find!
>>> 
>>> It seems to me that DWARFCallFrameInfo should push the initial CIE register 
>>> setup instructions as being the state at offset 0 in the function (in fact 
>>> I'd say it's a bug if it's not). If that were done, then getting 
>>> RowAtIndex(0) should be synonymous with get-the-CIE-register-unwind-rules, 
>>> and this code would be correct.
>>> 
>>> Looking at DWARFCallFrameInfo::FDEToUnwindPlan, we do
>>> 
>>> unwind_plan.SetPlanValidAddressRange(range);
>>> UnwindPlan::Row *cie_initial_row = new UnwindPlan::Row;
>>> *cie_initial_row = cie->initial_row;
>>> UnwindPlan::RowSP row(cie_initial_row);
>>> 
>>> unwind_plan.SetRegisterKind(GetRegisterKind());
>>> unwind_plan.SetReturnAddressRegister(cie->return_addr_reg_num);
>>> 
>>> cie->initial_row is set by DWARFCallFrameInfo::HandleCommonDwarfOpcode.
>>> 
>>> I think the bug here is DWARFCallFrameInfo::FDEToUnwindPlan not pushing 
>>> that initial row at offset 0, isn't it? We don't really use DWARF CFI on 
>>> darwin any more so I don't have a lot of real world experience here.
>> 
>> The only opcodes that push a row are DW_CFA_advance_locXXX and 
>> DW_CFA_set_loc, so I don't think that is the right fix. I think we need to 
>> pass a copy of just the registers from the "cie->initial_row" object around 
>> to the opcode parsing code for restoration purposes.
> 
> 
> I think everything I'm saying here is besides the point, though.  Unless we 
> ALWAYS push the initial unwind state (from the CIE) to an UnwindPlan, the 
> DW_CFA_restore is not going to work.  If an unwind rule for r12 says 
> "DW_CFA_restore" and at offset 0 in the function, the unwind rule for r12 was 
> "same" (i.e. no rule), but we return the RowAtIndex(0) and the first 
> instruction, I don't know, spills it or something, then the DW_CFA_restore 
> would set the r12 rule to "r12 was spilled" instead of "r12 is same".
> 
> So the only way DW_CFA_restore would behave correctly, with this, is if we 
> always push a Row at offset 0 with the rules from the CIE, or with no rules 
> at all, just the initial unwind state showing how the CFA is set and no 
> register rules.



just to be clear, though, my initial reaction to this bug is "we should always 
push a row at offset 0."  I don't want to sound dumb or anything, but I don't 
understand how unwinding would work if we didn't have a Row at offset 0.  You 
step into the function, you're at the first instruction, you want to find the 
caller stack frame, and without knowing the rule for establishing the CFA and 
finding the saved pc, I don't know how you get that.  And the only way to get 
the CFA / saved pc rule is to get the Row.  Do we really not have a Row at 
offset 0 when an UnwindPlan is created from CFI?  I might be forgetting some 
trick of UnwindPlans, but I don't see how it works.


> 
> 
>> 
>> 
>>> 
>>> 
>>> 
 On Oct 8, 2020, at 4:01 PM, Greg Clayton  wrote:
 
 Hello LLDB devs,
 
 This is a deep dive into an issue I found in the LLDB handling of DWARF 
 call frame information, so no need to read further if this doesn't 
 interest you!
 
 I am in the process of adding some support to LLVM for parsing the opcode 
 state machines for CIE and FDE objects that produces unwind rows. While 
 making unit tests to test DW_CFA_restore and DW_CFA_restore_extended 
 opcodes, I read the DWARF spec that states:
 
 "The DW_CFA_restore instruction takes a single operand (encoded with the 
 opcode) that represents a register number. The required action is to 
 change the rule for the indicated register to the rule assigned it by the 
 initial_instructions in the CIE."
 
 Looking at the LLDB code in DWARFCallFrameInfo.cpp I see code that is 
 simplified to:
 
 case DW_CFA_restore:
 if (unwind_plan.IsValidRowIndex(0) && 
   unwind_plan.GetRowAtIndex(0)->GetRegisterInfo(reg_num, reg_location))
   row->SetRegisterInfo(reg_num, reg_location);
 break;
 
 
 The issue is, the CIE contains initial instructions, but it doesn't push a 
 row after doing these instructions, the FDE will push a row when it emits 
 a DW_CFA_advance_loc, DW_CFA_advance_loc1, DW_CFA_advance_loc2, 
 DW_CFA_advance_loc4 or DW_CFA_set_loc opcode. So the DWARF spec says we 
 should restore the register rule to be what it was in the CIE's initial 
 instructions, but we are restoring it to the first row that was parsed. 
 This will mostly not get us into trouble because .debug_frame and 
 .eh_frame usually have a DW_CFA_advance_locXXX or DW_CFA_set_loc opcode as 
 the first opcode, but it isn't a requirement and a FDE could modify a 
 register value prior to pushing the first row at index zero. So we might 
 be restoring the register incorrectly in some cases according to the spec.

Re: [lldb-dev] LLDB might not be handling DW_CFA_restore or DW_CFA_restore_extended correctly in all cases

2020-10-08 Thread Jason Molenda via lldb-dev


> On Oct 8, 2020, at 10:37 PM, Greg Clayton  wrote:
> 
> 
> 
>> On Oct 8, 2020, at 10:29 PM, Jason Molenda  wrote:
>> 
>> 
>> 
>>> On Oct 8, 2020, at 10:06 PM, Jason Molenda  wrote:
>>> 
>>> 
>>> 
 On Oct 8, 2020, at 9:17 PM, Greg Clayton  wrote:
 
 
 
> On Oct 8, 2020, at 8:55 PM, Jason Molenda  wrote:
> 
> Good bug find!
> 
> It seems to me that DWARFCallFrameInfo should push the initial CIE 
> register setup instructions as being the state at offset 0 in the 
> function (in fact I'd say it's a bug if it's not). If that were done, 
> then getting RowAtIndex(0) should be synonymous with 
> get-the-CIE-register-unwind-rules, and this code would be correct.
> 
> Looking at DWARFCallFrameInfo::FDEToUnwindPlan, we do
> 
> unwind_plan.SetPlanValidAddressRange(range);
> UnwindPlan::Row *cie_initial_row = new UnwindPlan::Row;
> *cie_initial_row = cie->initial_row;
> UnwindPlan::RowSP row(cie_initial_row);
> 
> unwind_plan.SetRegisterKind(GetRegisterKind());
> unwind_plan.SetReturnAddressRegister(cie->return_addr_reg_num);
> 
> cie->initial_row is set by DWARFCallFrameInfo::HandleCommonDwarfOpcode.
> 
> I think the bug here is DWARFCallFrameInfo::FDEToUnwindPlan not pushing 
> that initial row at offset 0, isn't it? We don't really use DWARF CFI on 
> darwin any more so I don't have a lot of real world experience here.
 
 The only opcodes that push a row are DW_CFA_advance_locXXX and 
 DW_CFA_set_loc, so I don't think that is the right fix. I think we need to 
 pass a copy of just the registers from the "cie->initial_row" object 
 around to the opcode parsing code for restoration purposes.
>>> 
>>> 
>>> I think everything I'm saying here is besides the point, though.  Unless we 
>>> ALWAYS push the initial unwind state (from the CIE) to an UnwindPlan, the 
>>> DW_CFA_restore is not going to work.  If an unwind rule for r12 says 
>>> "DW_CFA_restore" and at offset 0 in the function, the unwind rule for r12 
>>> was "same" (i.e. no rule), but we return the RowAtIndex(0) and the first 
>>> instruction, I don't know, spills it or something, then the DW_CFA_restore 
>>> would set the r12 rule to "r12 was spilled" instead of "r12 is same".
>>> 
>>> So the only way DW_CFA_restore would behave correctly, with this, is if we 
>>> always push a Row at offset 0 with the rules from the CIE, or with no rules 
>>> at all, just the initial unwind state showing how the CFA is set and no 
>>> register rules.
>> 
>> 
>> 
>> just to be clear, though, my initial reaction to this bug is "we should 
>> always push a row at offset 0."  I don't want to sound dumb or anything, but 
>> I don't understand how unwinding would work if we didn't have a Row at 
>> offset 0.  You step into the function, you're at the first instruction, you 
>> want to find the caller stack frame, and without knowing the rule for 
>> establishing the CFA and finding the saved pc, I don't know how you get 
>> that.  And the only way to get the CFA / saved pc rule is to get the Row.  
>> Do we really not have a Row at offset 0 when an UnwindPlan is created from 
>> CFI?  I might be forgetting some trick of UnwindPlans, but I don't see how 
>> it works.
> 
> What you are saying makes sense, but that isn't how it is encoded. A quick 
> example:
> 
>  0010  CIE
>   Version:   4
>   Augmentation:  ""
>   Address size:  4
>   Segment desc size: 0
>   Code alignment factor: 1
>   Data alignment factor: -4
>   Return address column: 14
> 
>   DW_CFA_def_cfa: reg13 +0
>   DW_CFA_nop:
>   DW_CFA_nop:
> 
> 0014 0024  FDE cie= pc=0001bb2c...0001bc90
>   DW_CFA_advance_loc: 4
>   DW_CFA_def_cfa_offset: +32
>   DW_CFA_offset: reg14 -4
>   DW_CFA_offset: reg10 -8
>   DW_CFA_offset: reg9 -12
>   DW_CFA_offset: reg8 -16
>   DW_CFA_offset: reg7 -20
>   DW_CFA_offset: reg6 -24
>   DW_CFA_offset: reg5 -28
>   DW_CFA_offset: reg4 -32
>   DW_CFA_advance_loc: 2
>   DW_CFA_def_cfa_offset: +112
>   DW_CFA_nop:
>   DW_CFA_nop:
> 
> 
> DW_CFA_advance_loc is what pushes a row. As you can see in the FDE, it is the 
> first thing it does.


Ah, cool, thanks for the example CFI.  I believe 
DWARFCallFrameInfo::FDEToUnwindPlan is doing the right thing here.  We start by 
initializing the local variable 'row' to the CIE's initial register state:

  UnwindPlan::Row *cie_initial_row = new UnwindPlan::Row;
  *cie_initial_row = cie->initial_row;
  UnwindPlan::RowSP row(cie_initial_row);

The first instruction we hit is the advance loc,

  if (primary_opcode) {
switch (primary_opcode) {
case DW_CFA_advance_loc: // (Row Creation Instruction)
  unwind_plan.AppendRow(row);
  UnwindPlan::Row *newrow = new UnwindPlan::Row;
  *newrow = *row.get();
  row.reset(newrow);
  row->SlideOffset(extended_opcode * code_align);
  break;

an
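
Tracing Greg's example FDE through the quoted loop makes the row sequence concrete. `SimpleRow` and `WalkExampleFDE` are illustrative names in a toy model, not lldb types; the first DW_CFA_advance_loc appends the CIE-derived row at offset 0 before sliding, and the final row is pushed when the FDE ends:

```cpp
#include <cstdint>
#include <vector>

// Toy walk-through of the example CFI, illustrative only.
struct SimpleRow {
  int64_t offset;  // function offset where this row takes effect
  int64_t cfa_off; // CFA = reg13 + cfa_off, per the example CFI
};

inline std::vector<SimpleRow> WalkExampleFDE() {
  std::vector<SimpleRow> rows;
  SimpleRow row{0, 0};  // initialized from the CIE: DW_CFA_def_cfa reg13 +0
  rows.push_back(row);  // DW_CFA_advance_loc: 4 appends the current row...
  row.offset += 4;      //   ...then slides the new row to offset 4
  row.cfa_off = 32;     // DW_CFA_def_cfa_offset: +32 (register saves omitted)
  rows.push_back(row);  // DW_CFA_advance_loc: 2 appends the current row...
  row.offset += 2;      //   ...then slides to offset 6
  row.cfa_off = 112;    // DW_CFA_def_cfa_offset: +112
  rows.push_back(row);  // end of FDE: the final row is pushed
  return rows;
}
```

So row 0 carries the CIE's CFA rule (reg13+0), matching the claim that FDEToUnwindPlan ends up doing the right thing for this FDE.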

Re: [lldb-dev] [RFC] Segmented Address Space Support in LLDB

2020-10-22 Thread Jason Molenda via lldb-dev
Hi Greg, Pavel.

I think it's worth saying that this is very early in this project.  We know 
we're going to need the ability to track segments on addresses, but honestly a 
lot of the finer details aren't clear yet.  It's such a fundamental change that 
we wanted to start a discussion, even though I know it's hard to have detailed 
discussions still.

In the envisioned environment, there will be a default segment, and most 
addresses will be in the default segment.  DWARF, user input (lldb cmdline), SB 
API, and clang expressions are going to be the places where segments are 
specified --- Dump methods and ProcessGDBRemote will be the main place where 
the segments are displayed/used.  There will be modifications to the memory 
read/write gdb RSP packets to include these.

This early in the project, it's hard to tell what will be upstreamed to the 
llvm.org monorepo, or when.  My personal opinion is that we don't actually want 
to add segment support to llvm.org lldb at this point.  We'd be initializing 
every address object with LLDB_INVALID_SEGMENT or LLDB_DEFAULT_SEGMENT, and 
then testing that each object is initialized this way?  I don't see this 
actually being useful.

However, changing lldb's target addresses to be strictly handled in terms of 
objects will allow us to add a segment discriminator ivar to Address and 
ProcessAddress on our local branch while this is in development, and minimize 
the places where we're diverging from the llvm.org sources.  We'll need to have 
local modifications at the places where a segment is input (DWARF, cmdline, SB 
API, compiler type) or output (Dump, ProcesssGDBRemote) and, hopefully, the 
vast majority of lldb can be unmodified.
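
The kind of segment-carrying address object the proposal envisions might look like the sketch below. All names here (ProcessAddress, kDefaultSegment) are hypothetical; none of this is lldb API today, and the eventual design may differ:

```cpp
#include <cstdint>

// Hypothetical sketch: a load address plus a segment discriminator, with a
// default segment that most addresses live in. Illustrative only.
constexpr uint32_t kDefaultSegment = 0;

class ProcessAddress {
public:
  explicit ProcessAddress(uint64_t addr, uint32_t segment = kDefaultSegment)
      : m_addr(addr), m_segment(segment) {}
  uint64_t GetAddress() const { return m_addr; }
  uint32_t GetSegment() const { return m_segment; }
  bool operator==(const ProcessAddress &rhs) const {
    // The same numeric address in two different segments is two locations.
    return m_addr == rhs.m_addr && m_segment == rhs.m_segment;
  }

private:
  uint64_t m_addr;
  uint32_t m_segment;
};
```

Most of lldb would construct these with the default segment, which is why moving off bare addr_t first, and adding the discriminator ivar on a local branch, keeps the divergence from llvm.org small.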

The proposal was written in terms of what we need to accomplish based on our 
current understanding for this project, but I think there will be a lot of 
details figured out as we get more concrete experience of how this all works.  
And when it's appropriate to upstream to llvm.org, we'll be better prepared to 
discuss the tradeoffs of the approaches we took in extending 
Address/ProcessAddress to incorporate a segment.

My hope is that this generic OO'ification of target addresses will not change 
lldb beyond moving off of addr_t for now.

I included a couple of inlined comments, but I need to address more of yours & 
Pavel's notes later, I've been dealing with a few crazy things and am way 
behind on emails but didn't want to wait any longer to send something out.



> On Oct 19, 2020, at 4:11 PM, Greg Clayton via lldb-dev 
>  wrote:
> 
> 
> 
>> On Oct 19, 2020, at 2:56 PM, Jonas Devlieghere via lldb-dev 
>>  wrote:
>> 
>> We want to support segmented address spaces in LLDB. Currently, all of 
>> LLDB’s external API, command line interface, and internals assume that an 
>> address in memory can be addressed unambiguously as an addr_t (aka 
>> uint64_t). To support a segmented address space we’d need to extend addr_t 
>> with a discriminator (an aspace_t) to uniquely identify a location in 
>> memory. This RFC outlines what would need to change and how we propose to do 
>> that.
>> 
>> ### Addresses in LLDB
>> 
>> Currently, LLDB has two ways of representing an address:
>> 
>> - Address object. Mostly represents addresses as Section+offset for a binary 
>> image loaded in the Target. An Address in this form can persist across 
>> executions, e.g. an address breakpoint in a binary image that loads at a 
>> different address every execution. An Address object can represent memory 
>> not mapped to a binary image. Heap, stack, jitted items, will all be 
>> represented as the uint64_t load address of the object, and cannot persist 
>> across multiple executions. You must have the Target object available to get 
>> the current load address of an Address object in the current process run. 
>> Some parts of lldb do not have a Target available to them, so they require 
>> that the Address can be devolved to an addr_t (aka uint64_t) and passed in.
>> - The addr_t (aka uint64_t) type. Primarily used when receiving input (e.g. 
>> from a user on the command line) or when interacting with the inferior 
>> (reading/writing memory) for addresses that need not persist across runs. 
>> Also used when reading DWARF and in our symbol tables to represent file 
>> offset addresses, where the size of an Address object would be objectionable.
>> 
> 
> Correction: LLDB has 3 kinds of uint64_t addresses:
> - "file address" which are always mapped to a section + offset if put into a 
> Address object. This value only makes sense to the lldb_private::Module that 
> contains it. The only way to pass this around is as a lldb_private::Address. 
> You can make queries on a file address using "image lookup --address" before 
> you are debugging, but a single file address can result in multiple matches 
> in multiple modules because each module might contain something at this 
> virtual address. This object might be able to be converted to a "load 
> address" i

[lldb-dev] RFC: packet to identify a standalone aka firmware binary UUID / location

2021-03-22 Thread Jason Molenda via lldb-dev
Hi, I'm working with an Apple team that has a gdb RSP server for JTAG 
debugging, and we're working to add the ability for it to tell lldb about the 
UUID and possibly address of a no-dynamic-linker standalone binary, or firmware 
binary.  Discovery of these today is ad-hoc and each different processor has a 
different way of locating the main binary (and possibly sliding it to the 
correct load address).

We have two main ways of asking the remote stub about binary images today:  
jGetLoadedDynamicLibrariesInfos on Darwin systems with debugserver, and 
qXfer:libraries-svr4: on Linux. 

 jGetLoadedDynamicLibrariesInfos has two modes: "tell me about all libraries" 
and "tell me about libraries at these load addresses" (we get notified about 
libraries being loaded/unloaded as a list of load addresses of the binary 
images; binaries are loaded in waves on a Darwin system).  The returned JSON 
packet is heavily tailored to include everything lldb needs to know about the 
binary image so it can match a file it finds on the local disk to the 
description and not read any memory at debug time -- we get the mach-o header, 
the UUID, the deployment target OS version, the load address of all the 
segments.  The packets lldb sends to debugserver look like
jGetLoadedDynamicLibrariesInfos:{"fetch_all_solibs":true}
or
jGetLoadedDynamicLibrariesInfos:{"solib_addresses":[4294967296,140733735313408,..]}


qXfer:libraries-svr4: returns an XML description of all binary images loaded, 
tailored towards an ELF view of binaries from a brief skim of ProcessGDBRemote. 
 I chose not to use this because we'd have an entirely different set of values 
returned in our xml reply for Mach-O binaries and to eliminate extraneous read 
packets from lldb, plus we needed a way of asking for a subset of all binary 
images.  A rich UI app these days can link to five hundred binary images, so 
fetching the full list when only a couple of binaries was just loaded would be 
unfortunate.


I'm trying to decide whether to (1) add a new qStandaloneBinaryInfo packet 
which returns the simple gdb RSP style "uuid:;address:0xADDR;" response, 
or (2) if we add a third mode to jGetLoadedDynamicLibrariesInfos 
(jGetLoadedDynamicLibrariesInfos:{"standalone_binary_image_info":true}) or (3) 
have the JTAG stub support a qXfer XML request (I wouldn't want to reuse the 
libraries-svr4 name and return completely different XML, but it could have a 
qXfer:standalone-binary-image-info: or whatever).  


I figured folks might have opinions on this so I wanted to see if anyone cares 
before I pick one and get everyone to implement it.  For me, I'm inclined 
towards adding a qStandaloneBinaryInfo packet - the jtag stub already knows how 
to construct these traditional gdb RSP style responses - but it would be 
trivially easy for the stub to also assemble a fake XML response as raw text 
with the two fields.



J
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: packet to identify a standalone aka firmware binary UUID / location

2021-03-23 Thread Jason Molenda via lldb-dev
Hi Ted, I think that group's code is really specific to our environment and I 
don't see it being open sourced. 

> On Mar 23, 2021, at 7:04 AM, Ted Woodward  wrote:
> 
> Hi Jason,
> 
> A bit of a tangent here, but would you guys consider making your JTAG RSP 
> server a bit more generic and releasing it open source for use with OpenOCD? 
> They've got a stub for gdb, but it needs some work to behave better with lldb.
> 
> Ted
> 



Re: [lldb-dev] RFC: packet to identify a standalone aka firmware binary UUID / location

2021-03-23 Thread Jason Molenda via lldb-dev


> On Mar 23, 2021, at 1:36 PM, Greg Clayton  wrote:
> 
> 
> 
>> On Mar 22, 2021, at 11:01 PM, Jason Molenda  wrote:
> 
> 
> Any reason to not just return any stand alone binary image information along 
> with the dynamic libraries from the 
> "jGetLoadedDynamicLibrariesInfos:{"fetch_all_solibs":true}" or 
> "qXfer:libraries-svr4" packet? If all of the information is the same anyway, 
> no need to treat them any differently. We already return the main 
> executable's info in those packets and that isn't a shared library.

My preference for an entirely different packet (or different qXfer request) is 
that it simplifies the ProcessGDBRemote decision of whether there is a 
user-process DynamicLoader in effect, and it simplifies the parsing of the 
returned values because we can't expect the stub to provide everything that 
lldb-server/debugserver return in jGetLoadedDynamicLibrariesInfos and 
libraries-svr4; it's a lot of stuff.  At the beginning of the debug session 
when we're sniffing out what type of connection this is, we can try a dedicated 
packet for getting the standalone binary information and that tells us what it 
is.  Or we can send the "tell me about all the libraries" darwin/elf packet and 
get back a result which has two possible formats -- the ones from 
debugserver/lldb-server with all of the information they include, or the 
minimal response that this JTAG stub can supply.

It may just be laziness on my part, which is why I wanted to raise this here -- 
whether to create a new packet or to have 
jGetLoadedDynamicLibrariesInfos/libraries-svr4 return a new style of result and 
have the parsing code detect which style it is, and decide the dynamic linker 
based on that.  I think the implementation of the former approach, adding a 
qStandaloneBinaryInfo packet (or whatever), would be easier than reusing one of 
the existing packets for a really different purpose.

> 
> I would vote to stay with t

Re: [lldb-dev] [RFC] Improving protocol-level compatibility between LLDB and GDB

2021-04-22 Thread Jason Molenda via lldb-dev


> On Apr 20, 2021, at 11:39 PM, Pavel Labath via lldb-dev 
>  wrote:
> 
> I am very happy to see this effort and I fully encourage it.


Completely agree.  Thanks for Cc:'ing me Pavel, I hadn't seen Michał's thread.


> 
> On 20/04/2021 09:13, Michał Górny via lldb-dev wrote:
>> On Mon, 2021-04-19 at 16:29 -0700, Greg Clayton wrote:
 I think the first blocker towards this project are existing
 implementation bugs in LLDB. For example, the vFile implementation is
 documented as using incorrect data encoding and open flags. This is not
 something that can be trivially fixed without breaking compatibility
 between different versions of LLDB.
>>> 
>>> We should just fix this bug in LLDB in both LLDB's logic and lldb-server 
>>> IMHO. We typically distribute both "lldb" and "lldb-server" together so 
>>> this shouldn't be a huge problem.
>> Hmm, I've focused on this because I recall hearing that OSX users
>> sometimes run new client against system server... but now I realized
>> this isn't relevant to LLGS ;-).  Still, I'm happy to do things
>> the right way if people feel like it's needed, or the easy way if it's
>> not.
> 
> The vFile packets are, used in the "platform" mode of the connection (which, 
> btw, is also something that gdb does not have), and that is implemented by 
> lldb-server on all hosts (although I think apple may have some custom 
> platform implementations as well). In any case though, changing flag values 
> on the client will affect all servers that it communicates with, regardless 
> of the platform.
> 
> At one point, Jason cared enough about this to add a warning about not 
> changing these constants to the code. I'd suggest checking with him whether 
> this is still relevant.
> 
> Or just going with your proposed solution, which sounds perfectly reasonable 
> to me


The main backwards compatibility issue for Apple is that lldb needs to talk to 
old debugservers on iOS devices, where debugserver can't be updated.  I know of 
three protocol bugs we have today:

vFile:open flags
vFile:pread/pwrite base   https://bugs.llvm.org/show_bug.cgi?id=47820
A packet base   https://bugs.llvm.org/show_bug.cgi?id=42471

debugserver doesn't implement vFile packets.  So for those, we only need to 
worry about lldb/lldb-server/lldb-platform.


lldb-platform is a freestanding platform packets stub I wrote for Darwin 
systems a while back.  Real smol, it doesn't link to/use any llvm/lldb code.  I 
never upstreamed it because it doesn't really fit in with llvm/lldb projects in 
any way and it's not super interesting, it is very smol and simple.  I was 
tired of tracking down complicated bugs and wanted easier bugs.  It implements 
the vFile packets; it only does the platform packets and runs debugserver for 
everything else.

Technically a modern lldb could need to communicate with an old lldb-platform, 
but it's much more of a corner case and I'm not super worried about it, we can 
deal with that inside Apple (that is, I can be responsible for worrying about 
it.)

For vFile:open and vFile:pread/pwrite, I say we just change them in 
lldb/lldb-server and it's up to me to change them in lldb-platform at the same 
time.


For the A packet, debugserver is using base 10,

errno = 0;
arglen = strtoul(buf, &c, 10);
if (errno != 0 && arglen == 0) {
  return HandlePacket_ILLFORMED(__FILE__, __LINE__, p,
"arglen not a number on 'A' pkt");
}
[..]
errno = 0;
argnum = strtoul(buf, &c, 10);
if (errno != 0 && argnum == 0) {
  return HandlePacket_ILLFORMED(__FILE__, __LINE__, p,
"argnum not a number on 'A' pkt");
}

as does lldb,

packet.PutChar('A');
for (size_t i = 0, n = argv.size(); i < n; ++i) {
  arg = argv[i];
  const int arg_len = strlen(arg);
  if (i > 0)
packet.PutChar(',');
  packet.Printf("%i,%i,", arg_len * 2, (int)i);
  packet.PutBytesAsRawHex8(arg, arg_len);


and lldb-server,

// Decode the decimal argument string length. This length is the number of
// hex nibbles in the argument string value.
const uint32_t arg_len = packet.GetU32(UINT32_MAX);
if (arg_len == UINT32_MAX)
  success = false;
else {
  // Make sure the argument hex string length is followed by a comma
  if (packet.GetChar() != ',')
success = false;
  else {
// Decode the argument index. We ignore this really because who would
// really send down the arguments in a random order???
const uint32_t arg_idx = packet.GetU32(UINT32_MAX);

uint32_t StringExtractor::GetU32(uint32_t fail_value, int base) {
  if (m_index < m_packet.size()) {
char *end = nullptr;
const char *start = m_packet.c_str();
const char *cstr = start + m_index;
    uint32_t result = static_cast<uint32_t>(::strtoul(cstr, &end, base));

where 'base' defaults to 0, which strtoul treats as base 10 unless the number 
starts with 0x (or as octal if it starts with a plain 0).


The A pa

Re: [lldb-dev] [RFC] Improving protocol-level compatibility between LLDB and GDB

2021-04-25 Thread Jason Molenda via lldb-dev
I was looking at lldb-platform and I noticed I implemented the A packet in it, 
and I was worried I might have the same radix error as lldb in there, but this 
code I wrote made me laugh:

const char *p = pkt.c_str() + 1;   // skip the 'A'
std::vector<std::string> packet_contents = get_fields_from_delimited_string (p, ',');
std::vector<std::string> inferior_arguments;
std::string executable_filename;

if (packet_contents.size() % 3 != 0)
{
log_error ("A packet received with fields that are not a multiple of 3: 
 %s\n", pkt.c_str());
}

unsigned long tuples = packet_contents.size() / 3;
for (int i = 0; i < tuples; i++)
{
std::string length_of_argument_str = packet_contents[i * 3];
std::string argument_number_str = packet_contents[(i * 3) + 1];
std::string argument = decode_asciihex (packet_contents[(i * 3) + 
2].c_str());

int len_of_argument;
if (ascii_to_i (length_of_argument_str, 16, len_of_argument) == false)
log_error ("Unable to parse length-of-argument field of A packet: 
%s in full packet %s\n",
   length_of_argument_str.c_str(), pkt.c_str());

int argument_number;
if (ascii_to_i (argument_number_str, 16, argument_number) == false)
log_error ("Unable to parse argument-number field of A packet: %s 
in full packet %s\n",
   argument_number_str.c_str(), pkt.c_str());

if (argument_number == 0)
{
executable_filename = argument;
}
inferior_arguments.push_back (argument);
}


These A packet fields give you the name of the binary and the arguments to pass 
on the cmdline.  My guess is at some point in the past the arguments were not 
asciihex encoded, so you genuinely needed to know the length of each argument.  
But now, of course, they are, and you could write a perfectly fine client that mostly 
ignores argnum and arglen altogether.


I wrote a fix for the A packet for debugserver using an 'a-packet-base16' 
feature in qSupported to activate it, and tested it by hand; it works correctly.  
If we're all agreed that this is how we'll request/indicate these protocol 
fixes, I can put up a phab etc and get this started.

diff --git a/lldb/tools/debugserver/source/RNBRemote.cpp 
b/lldb/tools/debugserver/source/RNBRemote.cpp
index 586336a21b6..996ce2f96cf 100644
--- a/lldb/tools/debugserver/source/RNBRemote.cpp
+++ b/lldb/tools/debugserver/source/RNBRemote.cpp
@@ -176,7 +176,7 @@ RNBRemote::RNBRemote()
   m_extended_mode(false), m_noack_mode(false),
   m_thread_suffix_supported(false), m_list_threads_in_stop_reply(false),
   m_compression_minsize(384), m_enable_compression_next_send_packet(false),
-  m_compression_mode(compression_types::none) {
+  m_compression_mode(compression_types::none), m_a_packet_base16(false) {
   DNBLogThreadedIf(LOG_RNB_REMOTE, "%s", __PRETTY_FUNCTION__);
   CreatePacketTable();
 }
@@ -1530,8 +1530,9 @@ void RNBRemote::NotifyThatProcessStopped(void) {
 
  6,0,676462,4,1,2d71,10,2,612e6f7574
 
- Note that "argnum" and "arglen" are numbers in base 10.  Again, that's
- not documented either way but I'm assuming it's so.  */
+ lldb would use base 10 for "argnum" and "arglen" but that is incorrect.
+ Default behavior is currently still base 10, but when m_a_packet_base16 is set
+ via the qSupported packet negotiation, use base 16. */
 
 rnb_err_t RNBRemote::HandlePacket_A(const char *p) {
   if (p == NULL || *p == '\0') {
@@ -1548,6 +1549,7 @@ rnb_err_t RNBRemote::HandlePacket_A(const char *p) {
2nd arg has to be non-const which makes it problematic to step
through the string easily.  */
   char *buf = const_cast<char *>(p);
+  const char *end_of_buf = buf + strlen(buf);
 
   RNBContext &ctx = Context();
 
@@ -1557,7 +1559,7 @@ rnb_err_t RNBRemote::HandlePacket_A(const char *p) {
 char *c;
 
 errno = 0;
-arglen = strtoul(buf, &c, 10);
+arglen = strtoul(buf, &c, m_a_packet_base16 ? 16 : 10);
 if (errno != 0 && arglen == 0) {
   return HandlePacket_ILLFORMED(__FILE__, __LINE__, p,
 "arglen not a number on 'A' pkt");
@@ -1569,7 +1571,7 @@ rnb_err_t RNBRemote::HandlePacket_A(const char *p) {
 buf = c + 1;
 
 errno = 0;
-argnum = strtoul(buf, &c, 10);
+argnum = strtoul(buf, &c, m_a_packet_base16 ? 16 : 10);
 if (errno != 0 && argnum == 0) {
   return HandlePacket_ILLFORMED(__FILE__, __LINE__, p,
 "argnum not a number on 'A' pkt");
@@ -1582,6 +1584,10 @@ rnb_err_t RNBRemote::HandlePacket_A(const char *p) {
 
 c = buf;
 buf = buf + arglen;
+
+if (buf > end_of_buf)
+  break;
+
 while (c < buf && *c != '\0' && c + 1 < buf && *(c + 1) != '\0') {
   char smallbuf[3];
   smallbuf[0] = *c;
@@ -3651,8 +3657,12 @@ rnb_err_t RNBRemote::HandlePacket_qSupported(const char 
*p) {
 snprintf(numbuf, sizeof(numbuf), "%zu", m_compression_minsize);
 numbuf[sizeof(numbuf)

Re: [lldb-dev] [RFC] Improving protocol-level compatibility between LLDB and GDB

2021-04-27 Thread Jason Molenda via lldb-dev


> On Apr 27, 2021, at 4:56 AM, Pavel Labath  wrote:
> 
> I think that's fine, though possibly changing the servers to just ignore the 
> length fields, like you did above, might be even better, as then they will 
> work fine regardless of which client they are talking to. They still should 
> advertise their non-brokenness so that the client can form the right packet, 
> but this will be just a formality to satisfy protocol purists (or pickier 
> servers), and not make a functional difference.


Ah, good point.  Let me rework the debugserver patch and look at lldb-server.  
I wrote lldb-platform to spec and hadn't even noticed at the time that it was 
expecting (and ignoring) base 16 here when lldb was using base 10.

The only possible wrinkle I can imagine is if someone took advantage of the 
argnum to specify a zero-length string argument.  Like they specify args 0, 1, 
3, and expect the remote stub to pass an empty string as arg 2.  It's weird 
that the packet even includes argnum tbh, I can't think of any other reason why 
you would do it except this.  


J


Re: [lldb-dev] Can't debug with a -g compiled binary as a non-root user on OS X 10.11 El Capitan

2015-10-02 Thread Jason Molenda via lldb-dev
Hi Tony.  There are new security mechanisms in Mac OS X 10.11 El Capitan, 
called System Integrity Protection, but I don't recognize this failure mode.  
Try a stripped down example, like

$ echo 'int main () { }' > /tmp/a.c
$ xcrun clang /tmp/a.c -o /tmp/a.out
$ xcrun lldb /tmp/a.out
(lldb) br s -n main
(lldb) r

That should work.  That's a baseline, then we can start working on why your 
example isn't working.



> On Oct 2, 2015, at 2:35 PM, Tony Perrie via lldb-dev 
>  wrote:
> 
> I can only seem to debug our binary as the root user on 10.11.  I rebooted at 
> one point, and lldb did work briefly with a system user, but then after the 
> machine ran for a bit, it proceeded to not work again.  Rebooted again, and 
> again, lldb fails with this error...
> 
> lldb /opt/aspera/bin/ascp
> (lldb) target create "/opt/aspera/bin/ascp"
> 2015-10-02 14:24:17.091 lldb[1721:12884] Metadata.framework [Error]: couldn't 
> get the client port
> Current executable set to '/opt/aspera/bin/ascp' (x86_64).
> (lldb) r -i ~/.ssh/id_rsa /tmp/mp_source/* localhost:/tmp/mp_dest/
> error: process exited with status -1 (unable to attach)
> 
> As root, I can reproduce the error:
> 
> root# lldb /opt/aspera/bin/ascp
> (lldb) target create "/opt/aspera/bin/ascp"
> 2015-10-02 14:30:40.515 lldb[1864:14630] Metadata.framework [Error]: couldn't 
> get the client port
> Current executable set to '/opt/aspera/bin/ascp' (x86_64).
> (lldb) r -i /var/root/.ssh/id_rsa /tmp/mp_source/* localhost:/tmp/mp_dest/
> Process 1866 launched: '/opt/aspera/bin/ascp' (x86_64)
> 
> Session Stop  (Error: Session initiation failed, Server process failed to 
> start: permissions?)
> Process 1866 exited with status = 1 (0x0001) 
> 
> I have another machine running OS X 10.9 and lldb where everything works 
> flawlessly.
> 
> The problem with our binary seems to be that OS X is prohibiting our binary 
> from starting another process (even as root).  Not sure if this is the right 
> list for that question though.  Assume it's something to do with 10.11's 
> security model.
> 



Re: [lldb-dev] Can't debug with a -g compiled binary as a non-root user on OS X 10.11 El Capitan

2015-10-02 Thread Jason Molenda via lldb-dev
The fact that it doesn't work as root makes it less likely it's an unsigned 
debugserver / mis-signed debugserver issue.  You can run an unsigned / 
mis-signed lldb as root and it will still work on os x 10.11, as well as a 
signed one run by a user account.

Is the binary you're running under the debugger signed?  I think it needs the 
get-task-allow entitlement if the debugger is going to attach/run it.


> On Oct 2, 2015, at 5:58 PM, Todd Fiala via lldb-dev  
> wrote:
> 
> Hi Tony,
> 
> This is the right list.
> 
> Are you using an LLDB that you built locally?  If so, can you move aside the 
> debugserver that you find somewhere under your LLDB.framework bundle 
> directory, and make a symlink to the debugserver that comes out of your 
> /Applications/Xcode.app bundle?  Your official Xcode.app one should be in a 
> location like:
> /Applications/Xcode.app/Contents/SharedFrameworks/LLDB.framework/Versions/A/Resources/debugserver
> 
> The other thing it could be is that I think your lldb_codesign cert may need 
> to be recreated on a new OS.  I seem to recall the instructions there 
> indicate the code signing cert does not survive OS updates but I might be 
> mistaken.
> 
> I suspect the symlink will resolve your issue, though.  With tighter 
> security, it is likely that a home-built debugserver is no longer going to 
> work without being Apple signed.  We may need to adjust the Xcode build to 
> create a symlink to the official one if that's the case.
> 
> -Todd
> 
> On Fri, Oct 2, 2015 at 2:35 PM, Tony Perrie via lldb-dev 
>  wrote:
> 
> 
> 
> 
> 
> 
> -- 
> -Todd
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev



Re: [lldb-dev] Can't debug with a -g compiled binary as a non-root user on OS X 10.11 El Capitan

2015-10-03 Thread Jason Molenda via lldb-dev
(resending my reply with a Cc to lldb-dev - System Integrity Protection in Mac 
OS X 10.11 El Capitan has impact on lldb in several important ways, I think 
others will be interested.)


Yes, System Integrity Protection in El Capitan includes the restriction that 
nothing can be modified in directories like /usr, /System, /bin, /sbin, even by 
the root user.  /usr/local is excepted from these restrictions.

SIP also restricts the debugger from being able to attach to system processes.  
e.g. trying to attach to Mail.app will fail, regardless of whether you're root 
or a regular user, official lldb or self-built/self-signed lldb.

More complete details on exactly what SIP includes:

https://developer.apple.com/library/prerelease/ios/documentation/Security/Conceptual/System_Integrity_Protection_Guide/System_Integrity_Protection_Guide.pdf


J



> On Oct 2, 2015, at 6:34 PM, Tony Perrie via lldb-dev 
>  wrote:
> 
> The lldb I'm using is from XCode 7.0.1.
> 
> I can debug my binary if I run lldb as root.
> 
> I eventually figured out my actual bug from log messages without lldb.  Turns 
> out, Mac OS X 10.11 El Capitan doesn't allow the root user to deploy binaries 
> to /usr/bin which our installer does so that our binary is in the default ssh 
> environment path.  I setup a custom ~/.ssh/environment and configured 
> PermitUserEnvironment to yes in /etc/sshd_config and that let my binary run 
> normally.
> 
> But, still, I can't seem to run lldb as a normal system user with our binary.
> 
> Tony
> 
>> On Oct 2, 2015, at 6:05 PM, Jason Molenda  wrote:
>> 
>> The fact that it doesn't work as root makes it less likely it's an unsigned 
>> debugserver / mis-signed debugserver issue.  You can run an unsigned / 
>> mis-signed lldb as root and it will still work on os x 10.11, as well as a 
>> signed one run by a user account.
>> 
>> Is the binary you're running under the debugger signed?  I think it needs 
>> the get-task-allow entitlement if the debugger is going to attach/run it.
>> 
>> 
>>> On Oct 2, 2015, at 5:58 PM, Todd Fiala via lldb-dev 
>>>  wrote:
>>> 
>>> Hi Tony,
>>> 
>>> This is the right list.
>>> 
>>> Are you using an LLDB that you built locally?  If so, can you move aside 
>>> the debugserver that you find somewhere under in your LLDB.framework bundle 
>>> directory, and make a symlink to the debugserver that comes out of your 
>>> /Applications/Xcode.app bundle?  Your official Xcode.app one should be in a 
>>> location like:
>>> /Applications/Xcode.app/Contents/SharedFrameworks/LLDB.framework/Versions/A/Resources/debugserver
>>> 
>>> The other thing it could be is that I think your lldb_codesign cert may 
>>> need to be recreated on a new OS.  I seem to recall the instructions there 
>>> indicate the code signing cert does not survive OS updates but I might be 
>>> mistaken.
>>> 
>>> I suspect the symlink will resolve your issue, though.  With tighter 
>>> security, it is likely that a home-built debugserver is no longer going to 
>>> work without being Apple signed.  We may need to adjust the Xcode build to 
>>> create a symlink to the official one if that's the case.
>>> 
>>> -Todd
>>> 
>>> On Fri, Oct 2, 2015 at 2:35 PM, Tony Perrie via lldb-dev 
>>>  wrote:
>>> I can only seem to debug our binary as the root user on 10.11.  I rebooted 
>>> at one point, and lldb did work briefly with a system user but then after 
>>> the machine ran for a bit, it proceeded to not work again.  Rebooted again, 
>>> and again, lldb fails with this error...
>>> 
>>> lldb /opt/aspera/bin/ascp
>>> (lldb) target create "/opt/aspera/bin/ascp"
>>> 2015-10-02 14:24:17.091 lldb[1721:12884] Metadata.framework [Error]: 
>>> couldn't get the client port
>>> Current executable set to '/opt/aspera/bin/ascp' (x86_64).
>>> (lldb) r -i ~/.ssh/id_rsa /tmp/mp_source/* localhost:/tmp/mp_dest/
>>> error: process exited with status -1 (unable to attach)
>>> 
>>> As root, I can reproduce the error:
>>> 
>>> root# lldb /opt/aspera/bin/ascp
>>> (lldb) target create "/opt/aspera/bin/ascp"
>>> 2015-10-02 14:30:40.515 lldb[1864:14630] Metadata.framework [Error]: 
>>> couldn't get the client port
>>> Current executable set to '/opt/aspera/bin/ascp' (x86_64).
>>> (lldb) r -i /var/root/.ssh/id_rsa /tmp/mp_source/* localhost:/tmp/mp_dest/
>>> Process 1866 launched: '/opt/aspera/bin/ascp' (x86_64)
>>> 
>>> Session Stop  (Error: Session initiation failed, Server process failed to 
>>> start: permissions?)
>>> Process 1866 exited with status = 1 (0x0001) 
>>> 
>>> I have another machine running OS X 10.9 and lldb where everything works 
>>> flawlessly.
>>> 
>>> The problem with our binary seems to be that OS X is prohibiting our binary 
>>> from starting another process (even as root).  Not sure if this is the 
>>> right list for that question though.  Assume it's something to do with 
>>> 10.11's security model.
>>> 
>>> 

Re: [lldb-dev] LLDB: Unwinding based on Assembly Instruction Profiling

2015-10-19 Thread Jason Molenda via lldb-dev
Hi all, sorry I missed this discussion last week, I was a little busy.

Greg's original statement isn't correct -- about a year ago Tong Shen changed 
lldb to using eh_frame for the currently-executing frame.  While it is true 
that eh_frame is not guaranteed to describe the prologue/epilogue, in practice 
eh_frame always describes the epilogue (gdb wouldn't couldn't without this, 
with its much more simplistic unwinder).  Newer gcc's also describe the 
epilogue.  clang does not (currently) describe the epilogue.  Tong's changes 
*augment* the eh_frame with an epilogue description if it doesn't already have 
one.

gcc does have an "asynchronous unwind tables" option -- "asynchronous" meaning 
the unwind rules are defined at every instruction location.  But the last time 
I tried it, it did nothing.  They've settled on an unfortunate middle ground 
where eh_frame (which should be compact and only describe enough for exception 
handling) has *some* async unwind instructions.  And the same unwind rules are 
emitted into the debug_frame section, even if -fasynchronous-unwind-tables is 
used.  

In the ideal world, eh_frame should be extremely compact and only sufficient 
for exception handling.  debug_frame should be extremely verbose and describe 
the unwind rules at all unwind locations.

As Tamas says, there's no indication in eh_frame or debug_frame as to how much 
is described:  call-sites only (for exception handling), call-sites + prologue, 
call-sites + prologue + epilogue, or fully asynchronous.  It's a drag; if the 
DWARF committee ever has enough reason to break open the debug_frame format for 
some other changes, I'd like to get more information in there.


Anyway, point is, we're living off of eh_frame (possibly "augmented") for the 
currently-executing stack frame these days.  lldb may avoid using the assembly 
unwinder altogether in an environment where it finds eh_frame unwind 
instructions for every stack frame.


(on Mac, we've switched to a format called "compact unwind" -- much like the 
ARM unwind info that Tamas recently added support for, this is an extremely 
small bit of information which describes one unwind rule for the entire 
function.  It is only applicable for exception handling; it has no way to 
describe prologues/epilogues.  compact unwind is two 4-byte words per function. 
 lldb will use compact unwind / ARM unwind info for the non-zeroth stack 
frames.  It will use its assembly instruction profiler for the 
currently-executing stack frame.)

Hope that helps.

J


> On Oct 15, 2015, at 2:56 AM, Tamas Berghammer via lldb-dev 
>  wrote:
> 
> If we are trying to unwind from a non call site (frame 0 or signal handler) 
> then the current implementation first tries to use the non call site unwind 
> plan (usually assembly emulation), and if that one fails it will fall 
> back to the call site unwind plan (eh_frame, compact unwind info, etc.) 
> instead of the architecture default unwind plan, because it 
> should be a better guess in general, and we usually fail with the assembly 
> emulation based unwind plan for hand-written assembly functions, where 
> eh_frame is usually valid at all addresses.
> 
> Generating asynchronous eh_frame (valid at all addresses) is possible with gcc 
> (I am not sure about clang), but there is no way to tell whether a given eh_frame 
> inside an object file is valid at all addresses or only at call sites. The best 
> approximation we can make is to say that each eh_frame entry is valid only 
> at the address it specifies as its start address, but we don't make use of 
> it in LLDB at the moment.
> 
> For the 2nd part of the original question, I think changing the eh_frame 
> based unwind plan after a failed unwind using instruction emulation is only a 
> valid option for the PC where we tried to unwind from because the assembly 
> based unwind plan could be valid at other parts of the function. Making the 
> change for that one concrete PC address would make sense, but it would have 
> practically no effect because the next time we want to unwind from the given 
> address we use the same fallback mechanism as in the first case, so the change 
> would yield only a very small performance gain.
> 
> Tamas
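The fallback order Tamas describes can be sketched as follows (a simplification with invented names, not the real selection logic in `FuncUnwinders`):

```c
#include <assert.h>

/* Simplified model of frame-0 unwind-plan selection as described above:
   try the non-call-site plan (assembly emulation) first, fall back to the
   call-site plan (eh_frame, compact unwind info, etc.), and only reach
   the architecture default when both are unavailable.  Names invented. */
enum Plan { PLAN_ASM_EMULATION, PLAN_CALL_SITE, PLAN_ARCH_DEFAULT };

enum Plan pick_frame_zero_plan(int asm_plan_valid, int call_site_plan_valid)
{
    if (asm_plan_valid)
        return PLAN_ASM_EMULATION;
    if (call_site_plan_valid)
        return PLAN_CALL_SITE;
    return PLAN_ARCH_DEFAULT;
}
```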
> 
> On Wed, Oct 14, 2015 at 9:36 PM Greg Clayton via lldb-dev 
>  wrote:
> 
> > On Oct 14, 2015, at 1:02 PM, Joerg Sonnenberger via lldb-dev 
> >  wrote:
> >
> > On Wed, Oct 14, 2015 at 11:42:06AM -0700, Greg Clayton via lldb-dev wrote:
> >> EH frame can't be used to unwind when we are in the first frame because
> >> it is only valid at call sites. It also can't be used in frames that
> >> are asynchronously interrupted like signal handler frames.
> >
> > This is not necessarily true, GCC can build them like that. I don't
> > think we have a flag for clang/LLVM to create full async unwind tables.
> 
> Most compilers don't generate stuff that is complete, and if it is complete, 
> I am not aware of any markings on EH frame that states it is complete. So w

Re: [lldb-dev] LLDB: Unwinding based on Assembly Instruction Profiling

2015-10-19 Thread Jason Molenda via lldb-dev

> On Oct 19, 2015, at 2:54 PM, Jason Molenda via lldb-dev 
>  wrote:

> Greg's original statement isn't correct -- about a year ago Tong Shen changed 
> lldb to using eh_frame for the currently-executing frame.  While it is true 
> that eh_frame is not guaranteed to describe the prologue/epilogue, in 
> practice eh_frame always describes the epilogue (gdb couldn't work 
> without this, with its much more simplistic unwinder).  Newer versions of gcc also 
> describe the epilogue.  clang does not (currently) describe the epilogue.  
> Tong's changes *augment* the eh_frame with an epilogue description if it 
> doesn't already have one.


Ahhh that paragraph was not clear.  I wrote that "in practice eh_frame 
always describes the epilogue".  I meant "always describes the prologue".

lldb needs the prologue description to step into/step over functions 
correctly, at least at the first instruction of the function.
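Concretely, the unwind rule changes instruction by instruction across a typical x86-64 prologue (`push %rbp; mov %rsp,%rbp`), which is what the prologue rows of an unwind plan capture. A toy model (illustrative offsets and structures, not LLDB code):

```c
#include <assert.h>

/* Toy model of prologue-sensitive CFA rules for the common x86-64 entry
   sequence: push %rbp (1 byte) followed by mov %rsp,%rbp (3 bytes).
   At function entry only the return address is on the stack, so the CFA
   rule there differs from the rule in the function body. */
typedef struct {
    const char *cfa_register;  /* register the CFA is computed from */
    int cfa_offset;            /* CFA = cfa_register + cfa_offset */
} UnwindRow;

UnwindRow row_at(int byte_offset)
{
    if (byte_offset == 0)                 /* entry: only return addr pushed */
        return (UnwindRow){ "rsp", 8 };
    if (byte_offset < 4)                  /* after push %rbp */
        return (UnwindRow){ "rsp", 16 };
    return (UnwindRow){ "rbp", 16 };      /* after mov %rsp,%rbp */
}
```

A debugger stopped at `byte_offset == 0` that applied the body rule would compute the wrong caller frame, which is why the prologue rows matter for stepping.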

It's been five or six years since I worked on gdb's unwinder, but back when I 
worked on it, it didn't have multiple unwind schemes it could pick from, or the 
ability to use different unwind schemes in different contexts, or the ability 
to fall back to different unwind schemes.  That may not be true any longer, I 
don't know.  But back then it was an all-or-nothing approach, so if it was 
going to use eh_frame, it had to use it for everything.




> 
> gcc does have an "asynchronous unwind tables" option -- "asynchronous" 
> meaning the unwind rules are defined at every instruction location.  But the 
> last time I tried it, it did nothing.  They've settled on an unfortunate 
> middle ground where eh_frame (which should be compact and only describe 
> enough for exception handling) has *some* async unwind instructions.  And the 
> same unwind rules are emitted into the debug_frame section, even if 
> -fasynchronous-unwind-tables is used.  
> 
> In the ideal world, eh_frame should be extremely compact and only sufficient 
> for exception handling.  debug_frame should be extremely verbose and describe 
> the unwind rules at all unwind locations.
> 
> As Tamas says, there's no indication in eh_frame or debug_frame as to how 
> much is described:  call-sites only (for exception handling), call-sites + 
> prologue, call-sites + prologue + epilogue, or fully asynchronous.  It's a 
> drag; if the DWARF committee ever has enough reason to break open the 
> debug_frame format for some other changes, I'd like to get more information 
> in there.
> 
> 
> Anyway, point is, we're living off of eh_frame (possibly "augmented") for the 
> currently-executing stack frame these days.  lldb may avoid using the 
> assembly unwinder altogether in an environment where it finds eh_frame unwind 
> instructions for every stack frame.
> 
> 
> (on Mac, we've switched to a format called "compact unwind" -- much like the 
> ARM unwind info that Tamas recently added support for, this is an extremely 
> small bit of information which describes one unwind rule for the entire 
> function.  It is only applicable for exception handling; it has no way to 
> describe prologues/epilogues.  compact unwind is two 4-byte words per 
> function.  lldb will use compact unwind / ARM unwind info for the non-zeroth 
> stack frames.  It will use its assembly instruction profiler for the 
> currently-executing stack frame.)
> 
> Hope that helps.
> 
> J
> 
> 
>> On Oct 15, 2015, at 2:56 AM, Tamas Berghammer via lldb-dev 
>>  wrote:
>> 
>> If we are trying to unwind from a non call site (frame 0 or signal handler) 
>> then the current implementation first tries to use the non call site unwind 
>> plan (usually assembly emulation), and if that one fails it will fall 
>> back to the call site unwind plan (eh_frame, compact unwind info, etc.) 
>> instead of the architecture default unwind plan, because it 
>> should be a better guess in general, and we usually fail with the assembly 
>> emulation based unwind plan for hand-written assembly functions, where 
>> eh_frame is usually valid at all addresses.
>> 
>> Generating asynchronous eh_frame (valid at all addresses) is possible with gcc 
>> (I am not sure about clang), but there is no way to tell whether a given 
>> eh_frame inside an object file is valid at all addresses or only at call 
>> sites. The best approximation we can make is to say that each eh_frame entry 
>> is valid only at the address it specifies as its start address, but we don't 
>> make use of it in LLDB at the moment.
>> 
>> For the 2nd part of the original question, I think changing the eh_frame 
>> based

Re: [lldb-dev] Using DYLD_LIBRARY_PATH and lldb

2015-10-27 Thread Jason Molenda via lldb-dev
If it's on Mac OS X 10.11, I saw this the other day.  e.g.

sh-3.2$ cat a.c
#include <stdio.h>
#include <stdlib.h>
int main() 
{
    printf("%s\n", getenv("DYLD_LIBRARY_PATH"));
}
sh-3.2$ clang a.c

sh-3.2$ lldb -x a.out
(lldb) target create "a.out"
Current executable set to 'a.out' (x86_64).
(lldb) pro lau -v DYLD_LIBRARY_PATH=/tmp 
Process 66509 launched: '/private/tmp/a.out' (x86_64)
/tmp
Process 66509 exited with status = 0 (0x00000000) 
(lldb) q

sh-3.2$ DYLD_LIBRARY_PATH=/tmp lldb -x -- a.out
(lldb) target create "a.out"
Current executable set to 'a.out' (x86_64).
(lldb) r
Process 66776 launched: '/private/tmp/a.out' (x86_64)
(null)
Process 66776 exited with status = 0 (0x00000000) 
(lldb) q


The DYLD_LIBRARY_PATH isn't being passed into lldb, it seems.  If I attach to 
that lldb with another lldb,

(lldb) pro att -n lldb
Executable module set to 
"/Applications/Xcode.app/Contents/Developer/usr/bin/lldb".
Architecture set to: x86_64-apple-macosx.
(lldb) p (char*)getenv("DYLD_LIBRARY_PATH")
(char *) $0 = 0x0000000000000000
(lldb) ^D

yep, it's not being passed through.
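For comparison, what `process launch -v VAR=value` effectively does is hand the inferior an explicitly constructed environment at spawn time rather than relying on inheritance. A minimal POSIX sketch (assumes `/bin/sh` exists; this is an illustration, not lldb's actual launch code):

```c
#include <assert.h>
#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Spawn a child with an explicitly built environment, the way a debugger
   can inject DYLD_LIBRARY_PATH even when its own copy of the variable was
   stripped.  The child prints the variable back so we can verify it. */
int spawn_with_env(const char *value, char *out, size_t outlen)
{
    char var[256];
    snprintf(var, sizeof var, "DYLD_LIBRARY_PATH=%s", value);
    char *envp[] = { var, NULL };  /* the child's entire environment */
    char *argv[] = { "sh", "-c", "printf %s \"$DYLD_LIBRARY_PATH\"", NULL };

    int fds[2];
    if (pipe(fds) != 0)
        return -1;

    posix_spawn_file_actions_t fa;
    posix_spawn_file_actions_init(&fa);
    posix_spawn_file_actions_adddup2(&fa, fds[1], 1);  /* child stdout -> pipe */
    posix_spawn_file_actions_addclose(&fa, fds[0]);
    posix_spawn_file_actions_addclose(&fa, fds[1]);

    pid_t pid;
    int rc = posix_spawn(&pid, "/bin/sh", &fa, NULL, argv, envp);
    posix_spawn_file_actions_destroy(&fa);
    close(fds[1]);
    if (rc != 0) {
        close(fds[0]);
        return -1;
    }
    ssize_t n = read(fds[0], out, outlen - 1);
    out[n > 0 ? n : 0] = '\0';
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```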




> On Oct 26, 2015, at 10:02 AM, Greg Clayton via lldb-dev 
>  wrote:
> 
> I am surprised that this doesn't work as we make an effort to pass the 
> current environment down to any processes that you spawn by default (at least 
> on MacOSX we do), but the solution is easy: use the --environment variable 
> with the "process launch" command:
> 
> (lldb) process launch --environment DYLD_LIBRARY_PATH=<path> -- arg1 
> arg2 arg3
> 
> or using the short -v option:
> 
> (lldb) process launch -v DYLD_LIBRARY_PATH=<path> -- arg1 arg2 arg3
> 
> r is an alias to "process launch --". Note that "process launch" can have 
> arguments and the arguments you want to pass to your program might have 
> options, so you can terminate your "process launch" arguments with "--" so 
> that you can add your program arguments:
> 
> (lldb) process launch --environment DYLD_LIBRARY_PATH=<path> -- 
> --program-option=123 --environment BAR=BAZ
> 
> Note that I actually used an option "--environment BAR=BAZ" that I am passing 
> to the program to be debugged...
> 
> It is better to use --environment because then your current LLDB or any 
> processes that LLDB spawns won't have that environment variable set. Hope 
> this helps.
> 
> Greg Clayton
> 
> 
>> On Oct 23, 2015, at 1:11 AM, Haakon Sporsheim via lldb-dev 
>>  wrote:
>> 
>> Hi.
>> 
>> I'm fairly new to lldb, been using gdb most of my life.
>> 
>> I'm currently hacking on a small library which I'm building without
>> installing. Usually I'm running tests for the library also without
>> installing, but rather using DYLD_LIBRARY_PATH in this manner:
>> DYLD_LIBRARY_PATH=<path> mytestbinary
>> 
>> On linux using gdb when I  want to debug an issue I usually just stick
>> gdb in there, which I can't do with lldb on darwin it seems:
>> DYLD_LIBRARY_PATH=<path> lldb mytestbinary
>> 
>> lldb gives me this result:
>> (lldb) target create ""
>> Current executable set to '' (x86_64).
>> (lldb) r
>> Process 4904 launched: '' (x86_64)
>> dyld: Library not loaded: /usr/local/lib/
>> Referenced from: 
>> Reason: image not found
>> Process 4904 stopped
>> * thread #1: tid = 0xfe39, 0x7fff5fc01075 dyld`dyld_fatal_error +
>> 1, stop reason = EXC_BREAKPOINT (code=EXC_I386_BPT, subcode=0x0)
>>   frame #0: 0x7fff5fc01075 dyld`dyld_fatal_error + 1
>> dyld`dyld_fatal_error:
>> ->  0x7fff5fc01075 <+1>: nop
>> 
>> dyld`dyldbootstrap::start:
>>   0x7fff5fc01076 <+0>: pushq  %rbp
>>   0x7fff5fc01077 <+1>: movq   %rsp, %rbp
>>   0x7fff5fc0107a <+4>: pushq  %r15
>> (lldb)
>> 
>> so it's not picking up the dylib from DYLD_LIBRARY_PATH.
>> 
>> I guess my question is whether this is a bug or not? Am I doing
>> anything wrong, or should I not use DYLD_LIBRARY_PATH this way? Any
>> suggestions and/or education would be appreciated!
>> To work around this issue I've used 'install_name_tool -change old new
>> ' which obviously works.
>> 
>> Thanks, best regards
>> Haakon Sporsheim
> 
