[lldb-dev] [Bug 25081] SBThread::is_stopped shows incorrect value

2015-10-07 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25081

lab...@google.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 CC||lab...@google.com
 Resolution|--- |DUPLICATE

--- Comment #1 from lab...@google.com ---


*** This bug has been marked as a duplicate of bug 15824 ***

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Bug 25086] New: lldb should use unix socket for communication with server on linux

2015-10-07 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25086

Bug ID: 25086
   Summary: lldb should use unix socket for communication with
server on linux
   Product: lldb
   Version: 3.7
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: v...@mixedrealities.no
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

As documented on the webpage, lldb on linux uses lldb-server even for local
debugging. It connects to this stub via the loopback device. I believe it should
connect over a UNIX socket instead. (On Windows, named pipes would be the
corresponding alternative.)

Explanation: For debugging a network protocol I have introduced packet loss on
the loopback device with the following command:

tc qdisc add dev lo root netem loss random 15

This introduces 15% packet loss and causes lldb to work EXTREMELY slowly
because its communication with the server stub is severely disrupted. It takes
ages to start debugging even a simple "hello world" program.
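For illustration (this is a sketch, not LLDB's actual code, and the socket path is hypothetical), a local debugger-to-server connection over a UNIX domain socket would be unaffected by qdisc-level packet loss configured on the loopback device:

```python
import socket

# Sketch: connect a client to a debug-server stub over a UNIX domain
# socket instead of TCP over loopback. AF_UNIX traffic never traverses
# the "lo" network device, so tc-netem loss rules do not apply to it.
def connect_unix(path):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    return s
```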



[lldb-dev] [Bug 25087] New: Stripped symbol handling when using dwo debug info

2015-10-07 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25087

Bug ID: 25087
   Summary: Stripped symbol handling when using dwo debug info
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: tbergham...@google.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

If we use dsym/dwarf as the debug info and a symbol is removed from the object
file during linking (because it is unused), then LLDB correctly reports an error
when the user wants to set a breakpoint on it.

In the case of dwo debug info, the symbol will still be available in the *.dwo
file, as it isn't modified by the linker when the unused function is removed.
Because of this, LLDB will set a breakpoint at address 0x0 (the address stored
at the location where the symbol address should be stored in the executable)
instead of reporting an error.

For a test case see: DeadStripTestCase.test
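A hypothetical sketch of the guard being asked for (names are invented for illustration): the .dwo file still describes the stripped function, but the executable's slot for its address was zeroed by the linker, so a resolved address of 0 should be reported as an error rather than used:

```python
def resolve_breakpoint_address(addr_from_executable):
    # Sketch: a dead-stripped symbol resolves to 0 because the linker
    # zeroed its address slot; refuse to plant a breakpoint there.
    if addr_from_executable == 0:
        raise LookupError("symbol appears to be dead-stripped; "
                          "refusing to set a breakpoint at 0x0")
    return addr_from_executable
```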



[lldb-dev] [Bug 25086] lldb should use unix socket for communication with server on linux

2015-10-07 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25086

lab...@google.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |INVALID

--- Comment #3 from lab...@google.com ---
I don't believe this is a use case we want to support. I would suggest solving
this problem externally, e.g. by limiting the simulated packet loss to your
application. I seem to recall being able to simulate packet loss using
iptables. I would recommend trying something like

iptables -A INPUT -p udp -m statistic --mode random --probability 0.15 -j DROP



[lldb-dev] [Bug 25086] lldb should use unix socket for communication with server on linux

2015-10-07 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25086

Zeljko Vrba  changed:

   What|Removed |Added

 Status|RESOLVED|REOPENED
 Resolution|INVALID |---

--- Comment #4 from Zeljko Vrba  ---
Using iptables is a non-option because dropping packets will return an EPERM
error to the application; see for example
http://www.spinics.net/lists/netfilter/msg42589.html

Besides, tc-netem can also simulate burst losses, not just random uncorrelated
drops.

Is there any reason at all for not switching to unix domain sockets?



[lldb-dev] [Bug 25092] New: Test suite is flaky if two tests have the same file name

2015-10-07 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25092

Bug ID: 25092
   Summary: Test suite is flaky if two tests have the same file name
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: tbergham...@google.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

If we have two test cases with the same file name, then the test suite will
become flaky for those tests.

In some orderings of events (most likely when the two tests with the same name
run at the same time) one of the tests will fail with the following error:

Traceback (most recent call last):
  File
"/lldb-buildbot/lldbSlave/buildWorkingDir/llvm/tools/lldb/test/dotest.py", line
2019, in <module>
resultclass=LLDBTestResult).run(suite)
  File
"/lldb-buildbot/lldbSlave/buildWorkingDir/llvm/tools/lldb/test/unittest2/runner.py",
line 162, in run
test(result)
  File
"/lldb-buildbot/lldbSlave/buildWorkingDir/llvm/tools/lldb/test/unittest2/suite.py",
line 64, in __call__
return self.run(*args, **kwds)
  File
"/lldb-buildbot/lldbSlave/buildWorkingDir/llvm/tools/lldb/test/unittest2/suite.py",
line 84, in run
self._wrapped_run(result)
  File
"/lldb-buildbot/lldbSlave/buildWorkingDir/llvm/tools/lldb/test/unittest2/suite.py",
line 114, in _wrapped_run
test._wrapped_run(result, debug)
  File
"/lldb-buildbot/lldbSlave/buildWorkingDir/llvm/tools/lldb/test/unittest2/suite.py",
line 116, in _wrapped_run
test(result)
  File
"/lldb-buildbot/lldbSlave/buildWorkingDir/llvm/tools/lldb/test/unittest2/case.py",
line 417, in __call__
return self.run(*args, **kwds)
  File
"/lldb-buildbot/lldbSlave/buildWorkingDir/llvm/tools/lldb/test/unittest2/case.py",
line 389, in run
self.dumpSessionInfo()
  File
"/lldb-buildbot/lldbSlave/buildWorkingDir/llvm/tools/lldb/test/lldbtest.py",
line 1890, in dumpSessionInfo
os.rename(src, dst)
OSError: [Errno 2] No such file or directory

We should handle the case where two tests have the same file name, or
explicitly disallow it with a check at test case loading time.



Re: [lldb-dev] [Bug 25092] New: Test suite is flaky if two tests have the same file name

2015-10-07 Thread Zachary Turner via lldb-dev
We should explicitly disallow it.  You should be able to tell by looking at
a test's filename what it does.  If two files have the same name, then you
wonder why they aren't the same test, and it leaves you with more questions
than answers.  If two tests have the same name and they actually *should*
be different tests, then that's a very good sign that one or both of them
don't have specific enough names.

On Wed, Oct 7, 2015 at 6:19 AM via lldb-dev  wrote:



[lldb-dev] [Bug 25097] New: LLDB Unit Tests cannot find the shared library directory

2015-10-07 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25097

Bug ID: 25097
   Summary: LLDB Unit Tests cannot find the shared library
directory
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Windows NT
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: ztur...@google.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

When you call HostInfo::ComputeSharedLibraryDirectory, it creates a path
relative to the path of the executable.  For unit tests, this is not the LLDB
executable, and it is not in the same place as the unit test executables
either, so this returns an incorrect path.
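The failure mode can be sketched as follows (the bin/lib layout below is an assumption for illustration, not LLDB's exact logic): deriving the library directory relative to the running executable only works when that executable sits in the installed layout, which a unit test binary does not:

```python
import os

def shared_library_dir(exe_path):
    # Sketch: compute the library directory relative to the executable,
    # assuming an installed layout where bin/ sits next to lib/. A unit
    # test binary living elsewhere yields a path that does not exist.
    return os.path.normpath(
        os.path.join(os.path.dirname(exe_path), "..", "lib"))
```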



Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Zachary Turner via lldb-dev
Jim, Greg,

Can I get some feedback on this?  I would like to start enforcing this
moving forward.  I want to make sure we're in agreement.

On Mon, Oct 5, 2015 at 12:30 PM Todd Fiala  wrote:

> IMHO that all sounds reasonable.
>
> FWIW - I wrote some tests for the test system changes I put in (for the
> pure-python impl of timeout support), and in the process, I discovered a
> race condition in using a python facility that there really is no way I
> would have found anywhere near as reasonably without having added the
> tests.  (For those of you who are test-centric, this is not a surprising
> outcome, but I'm adding this for those who may be inclined to think of it
> as an afterthought).
>
> -Todd
>
> On Mon, Oct 5, 2015 at 11:24 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> On Fri, Sep 11, 2015 at 11:42 AM Jim Ingham  wrote:
>>
>>> I have held from the beginning that the only tests that should be
>>> written using HandleCommand are those that explicitly test command
>>> behavior, and if it is possible to write a test using the SB API you should
>>> always do it that way for the very reasons you cite.  Not everybody agreed
>>> with me at first, so we ended up with a bunch of tests that do complex
>>> things using HandleCommand where they really ought not to.  I'm not sure it
>>> is worth the time to go rewrite all those tests, but we shouldn't write any
>>> new tests that way.
>>>
>>
>> I would like to revive this thread, because there doesn't seem to be
>> consensus that this is the way to go.  I've suggested on a couple of
>> reviews recently that people put new command api tests under a new
>> top-level folder under tests, and so far the responses I've gotten have not
>> indicated that people are willing to do this.
>>
>> Nobody chimed in on this thread with a disagreement, which indicates to
>> me that we are ok with moving this forward.  So I'm reviving this in hopes
>> that we can come to agreement.  With that in mind, my goal is:
>>
>> 1) Begin enforcing this on new CLs that go in.  We need to maintain a
>> consistent message and direction for the project, and if this is a "good
>> idea", then it should be applied and enforced consistently.  Command api
>> tests should be the exception, not the norm.
>>
>> 2) Begin rejecting or reverting changes that go in without tests.  I
>> understand there are some situations where tests are difficult.  Core dumps
>> and unwinding come to mind.  There are probably others.  But this is the
>> exception, and not the norm.  Almost every change should go in with tests.
>>
>> 3) If a CL cannot be tested without a command api test due to limitations
>> of the SB API, require new changes to go in *with a corresponding SB API
>> change*.  I know that people just want to get their stuff done, but I
>> don't feel that is an excuse for having a subpar testing situation.  For the
>> record, I'm not singling anyone out.  Everyone is guilty, including me.
>> I'm offering to do my part, and I would like to be able to enforce this at
>> the project level.  As with #2, there are times when an SB API isn't
>> appropriate or doesn't make sense.  We can figure that out when we come to
>> it.  But I believe a large majority of these command api tests go in the
>> way they do because there is no corresponding SB API *yet*.  And I think
>> the change should not go in without designing the appropriate SB API at the
>> same time.
>>
>> Zach
>>
>
> --
> -Todd
>
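To illustrate the distinction the thread is drawing, here is a toy stand-in (not the real lldb module; the class and its format string are invented): a HandleCommand-style test scrapes formatted text, while an SB-API-style test queries structured objects directly:

```python
class Breakpoint:
    # Toy stand-in for an SB-API-style object.
    def __init__(self, name, num_locations):
        self.name = name
        self.num_locations = num_locations

    def render(self):
        # HandleCommand-style textual output; its format may change at
        # any time, silently breaking tests that parse it.
        return "1: name = '%s', locations = %d" % (self.name,
                                                   self.num_locations)

def check_via_text(bp):
    # Fragile: depends on the exact output format.
    return "locations = 2" in bp.render()

def check_via_api(bp):
    # Robust: asks the object directly.
    return bp.num_locations == 2
```

Both checks pass today, but only the second survives a change to the output format, which is the argument for preferring SB API tests.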


Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Greg Clayton via lldb-dev


> On Oct 7, 2015, at 10:05 AM, Zachary Turner via lldb-dev 
>  wrote:
> 
> Nobody chimed in on this thread with a disagreement, which indicates to me 
> that we are ok with moving this forward.  So I'm reviving this in hopes that 
> we can come to agreement.  With that in mind, my goal is:
> 
> 1) Begin enforcing this on new CLs that go in.  We need to maintain a 
> consistent message and direction for the project, and if this is a "good 
> idea", then it should be applied and enforced consistently.  Command api 
> tests should be the exception, not the norm.

You mean API tests should be the norm right? I don't want people submitting 
command line tests like "file a.out", "run", "step". I want the API to be used. 
Did you get this reversed?
> 
> 2) Begin rejecting or reverting changes that go in without tests.  I 
> understand there are some situations where tests are difficult.  Core dumps 
> and unwinding come to mind.  There are probably others.  But this is the 
> exception, and not the norm.  Almost every change should go in with tests.

As long as it can be tested reasonably I am fine with rejecting changes going 
in that don't have tests.
> 
> 3) If a CL cannot be tested without a command api test due to limitations of 
> the SB API, require new changes to go in with a corresponding SB API change.

One issue here is I don't want stuff added to the SB API just so that it can be 
tested. The SB API must remain clean and consistent and remain an API that 
makes sense for debugging. I don't want internal goo being exposed just so we 
can test things. If we run into this a lot, we might need to make an alternate 
binary that can run internal unit tests. We could make an 
lldb_internal.so/lldb_internal.dylib/lldb_internal.dll that can be linked to by 
internal unit tests, and then those unit tests can be run as part of the testing 
process. So let's keep the SB API clean and sensible with no extra fluff, and 
find a way to test internal stuff in a different way.

>  I know that people just want to get their stuff done, but I dont' feel is an 
> excuse for having a subpar testing situation.  For the record, I'm not 
> singling anyone out.  Everyone is guilty, including me.  I'm offering to do 
> my part, and I would like to be able to enforce this at the project level.  
> As with #2, there are times when an SB API isn't appropriate or doesn't make 
> sense.  We can figure that out when we come to it.

We should use built-in unit tests, as some things already do, if the
functionality can't or shouldn't be in the SB API, as stated above.

>  But I believe a large majority of these command api tests go in the way they 
> do because there is no corresponding SB API yet.  And I think the change 
> should not go in without designing the appropriate SB API at the same time.

Only if it makes sense for the SB API, yes.

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] How to debug LLDB server?

2015-10-07 Thread Eugene Birukov via lldb-dev
Hello,
 
I am trying to see what is going on inside LLDB server 3.7.0, but there are a lot 
of timeouts scattered everywhere. Say, InferiorCallPOSIX.cpp:74 sets a hard-coded 
timeout of 500,000us, etc. These timeouts fire if I spend any time at a 
breakpoint inside the server and make the debugging experience miserable. Is there any 
way to turn them all off?
 
BTW, I am using LLDB as a C++ API, not as a standalone program, but I have a 
debugger attached to it and can alter its memory state.
 
Thanks,
Eugene
 


Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Zachary Turner via lldb-dev
On Wed, Oct 7, 2015 at 10:17 AM Greg Clayton  wrote:

> > 1) Begin enforcing this on new CLs that go in.  We need to maintain a
> consistent message and direction for the project, and if this is a "good
> idea", then it should be applied and enforced consistently.  Command api
> tests should be the exception, not the norm.
>
> You mean API tests should be the norm right? I don't want people
> submitting command line tests like "file a.out", "run", "step". I want the
> API to be used. Did you get this reversed?
>
I didn't get it reversed, but I agree my wording wasn't clear.  By "command
api", I meant HandleCommand / etc.  I *do* want the SB API to be used.


> >
> > 2) Begin rejecting or reverting changes that go in without tests.  I
> understand there are some situations where tests are difficult.  Core dumps
> and unwinding come to mind.  There are probably others.  But this is the
> exception, and not the norm.  Almost every change should go in with tests.
>
> As long as it can be tested reasonably I am fine with rejecting changes
> going in that don't have tests.
>
One of the problems is that most changes go in without review.  I understand
why this is, because Apple especially are code owners of more than 80% of
LLDB, so people adhere to post-commit review.  This is fine in
principle, but if changes go in without tests and there was no
corresponding code review, then my only option is to either keep pinging
the commit thread in hopes I'll get a response (which I sometimes don't
get), or revert the change.  Often though I get a response that says "Yea,
I'll get to adding tests eventually".  I especially want this last type of
response to go the way of the dinosaur.  I don't know how to change
people's habits, but if you could bring this up at your daily/weekly
standups or somehow make sure everyone is on the same page, perhaps that
would be a good start.  Reverting is the best way I know to handle this,
because it forces a change.  But at the same time it's disruptive, so I
really don't want to do it.


> >
> > 3) If a CL cannot be tested without a command api test due to
> limitations of the SB API, require new changes to go in with a
> corresponding SB API change.
>
> One issue here is I don't want stuff added to the SB API just so that it
> can be tested. The SB API must remain clean and consistent and remain an
> API that makes sense for debugging. I don't want internal goo being exposed
> just so we can test things. If we run into this a lot, we might need to
> make an alternate binary that can test internal unit tests. We could make a
> lldb_internal.so/lldb_internal.dylib/lldb_internal.dll that can be linked
> to by internal unit tests and then those unit tests can be run as part of
> the testing process. So lets keep the SB API clean and sensible with no
> extra fluff, and find a way to test internal stuff in a different way.

Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Zachary Turner via lldb-dev
On Wed, Oct 7, 2015 at 10:37 AM Zachary Turner  wrote:

> One more question: I mentioned earlier that we should enforce the
> distinction between HandleCommand tests and python api tests at an
> organizational level.  In other words, all HandleCommand tests go in
> lldb/test/command-api, and all new SB API tests go in
> lldb/test/command-api.
>
Sorry, ignore the part saying all new SB API tests go in
lldb/test/command-api.  I meant to say python-api, and also this doesn't
need to apply to all new tests.
below), so new SB API tests could continue to go where they do normally
(test/functionalities for example), and one day in the future when there
are no more HandleCommand tests in any of these folders, we can create the
python-api top level directory and move everything under there.


> Eventually the goal would be to only have 3 toplevel directories under
> lldb/test.  unittests, command-api, and python-api.  But this would take
> some time since it would be on a move-as-you-touch basis, rather than all
> at once.  Does this seem reasonable as well?
>


Re: [lldb-dev] How to debug LLDB server?

2015-10-07 Thread Greg Clayton via lldb-dev
Most calls in lldb-server should use the instance variable 
GDBRemoteCommunication::m_packet_timeout, which you could then modify. But the 
timeout you are talking about is the time that the expression can take when 
running. I would just bump these up temporarily while you are debugging to 
avoid the timeouts. Just don't check it in.

So for GDB Remote packets, we already bump the timeout up in the 
GDBRemoteCommunication constructor:

#ifdef LLDB_CONFIGURATION_DEBUG
m_packet_timeout (1000),
#else
m_packet_timeout (1),
#endif


Anything else is probably expression timeouts and you will need to manually 
bump those up in order to debug, or you could do the same thing as the GDB 
Remote in InferiorCallPOSIX.cpp:

#ifdef LLDB_CONFIGURATION_DEBUG
    options.SetTimeoutUsec(5000);
#else
    options.SetTimeoutUsec(500000);
#endif




Re: [lldb-dev] How to debug LLDB server?

2015-10-07 Thread Eugene Birukov via lldb-dev
Thanks! 
 
A newbie question then: how do I enable LLDB_CONFIGURATION_DEBUG when I run 
cmake? I am sure that I built a debug version, but the packet timeout is still 1 
for me.
 
(gdb) p m_packet_timeout
$1 = 1

 


Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Greg Clayton via lldb-dev
> 
> So in summary, it sounds like we agree on the following guidelines:
> 
> 1) If you're committing a CL and it is possible to test it through the SB 
> API, you should only submit an SB API test, and not a HandleCommand test.

agreed

> 2) If you're committing a CL and it's not possible to test it through the SB 
> API but it does make sense for the SB API, you should extend the SB API at 
> the same time as your CL, and then refer back to #1.

agreed

> 3) If it is not possible to test it through the SB API and it does not make 
> sense to add it to the SB API from a design perspective, you should consider 
> writing a unit test for it in C++.  This applies especially for utility 
> classes and data structures.

agreed

> 4) Finally, if none of the above are true, you can write a HandleCommand test.

agreed
> 
> One more question: I mentioned earlier that we should enforce the distinction 
> between HandleCommand tests and python api tests at an organizational level.  
> In other words, all HandleCommand tests go in lldb/test/command-api, and all 
> new SB API tests go in lldb/test/command-api.  Eventually the goal would be 
> to only have 3 toplevel directories under lldb/test.  unittests, command-api, 
> and python-api.  But this would take some time since it would be on a 
> move-as-you-touch basis, rather than all at once.  Does this seem reasonable 
> as well?

I really don't care for the "python-api" or "command-api" directories. We 
should make tests as needed without needing to place them into specific API or 
command line bins. I don't want two directories like:

test/command-api/lang/c/
test/public-api/lang/c/


I would rather us just write the tests as needed and do what is right for the 
tests. The API/command directories add no value. I am fine with having these 
directories when say we are trying to compile against the public API as a test 
in itself, but we don't need to go moving or putting tests into these 
directories.
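
To make the contrast behind guideline 1 concrete, here is a rough sketch of the
two test styles; this is illustrative code only (the executable name, source
file, and line number are invented, and the lldb import is guarded so the
sketch stands on its own outside an LLDB build):

```python
# Hedged sketch: an SB API test versus a HandleCommand test.
# The lldb module only exists inside an LLDB build tree; guard the import so
# the rest of this file is still importable without it.
try:
    import lldb
except ImportError:
    lldb = None

def step_over_via_sb_api(exe_path, src_file, line):
    """Preferred style: drive the debugger through the SB API."""
    if lldb is None:
        return None
    debugger = lldb.SBDebugger.Create()
    target = debugger.CreateTarget(exe_path)
    target.BreakpointCreateByLocation(src_file, line)
    process = target.LaunchSimple(None, None, ".")
    thread = process.GetSelectedThread()
    thread.StepOver()
    return thread.GetStopReason()        # a structured value a test can assert on

def step_over_via_handle_command(interpreter, result):
    """Discouraged style: the same scenario as raw command strings."""
    commands = ["file a.out", "breakpoint set -f main.c -l 12", "run", "next"]
    for cmd in commands:
        interpreter.HandleCommand(cmd, result)
    return commands
```

The SB API version returns objects a test can assert on directly; the
HandleCommand version can only scrape command output, which is why the thread
treats it as the exception rather than the norm.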

Greg

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Jim Ingham via lldb-dev

> On Oct 7, 2015, at 10:37 AM, Zachary Turner via lldb-dev 
>  wrote:
> 
> 
> 
> On Wed, Oct 7, 2015 at 10:17 AM Greg Clayton  wrote:
> 
> 
> > On Oct 7, 2015, at 10:05 AM, Zachary Turner via lldb-dev 
> >  wrote:
> >
> > Jim, Greg,
> >
> > Can I get some feedback on this?  I would like to start enforcing this 
> > moving forward.  I want to make sure we're in agreement.
> >
> > On Mon, Oct 5, 2015 at 12:30 PM Todd Fiala  wrote:
> > IMHO that all sounds reasonable.
> >
> > FWIW - I wrote some tests for the test system changes I put in (for the 
> > pure-python impl of timeout support), and in the process, I discovered a 
> > race condition in using a python facility that there really is no way I 
> > would have found anywhere near as reasonably without having added the 
> > tests.  (For those of you who are test-centric, this is not a surprising 
> > outcome, but I'm adding this for those who may be inclined to think of it 
> > as an afterthought).
> >
> > -Todd
> >
> > On Mon, Oct 5, 2015 at 11:24 AM, Zachary Turner via lldb-dev 
> >  wrote:
> > On Fri, Sep 11, 2015 at 11:42 AM Jim Ingham  wrote:
> > I have held from the beginning that the only tests that should be written 
> > using HandleCommand are those that explicitly test command behavior, and if 
> > it is possible to write a test using the SB API you should always do it 
> > that way for the very reasons you cite.  Not everybody agreed with me at 
> > first, so we ended up with a bunch of tests that do complex things using 
> > HandleCommand where they really ought not to.  I'm not sure it is worth the 
> > time to go rewrite all those tests, but we shouldn't write any new tests 
> > that way.
> >
> > I would like to revive this thread, because there doesn't seem to be 
> > consensus that this is the way to go.  I've suggested on a couple of 
> > reviews recently that people put new command api tests under a new 
> > top-level folder under tests, and so far the responses I've gotten have not 
> > indicated that people are willing to do this.
> >
> > Nobody chimed in on this thread with a disagreement, which indicates to me 
> > that we are ok with moving this forward.  So I'm reviving this in hopes 
> > that we can come to agreement.  With that in mind, my goal is:
> >
> > 1) Begin enforcing this on new CLs that go in.  We need to maintain a 
> > consistent message and direction for the project, and if this is a "good 
> > idea", then it should be applied and enforced consistently.  Command api 
> > tests should be the exception, not the norm.
> 
> You mean API tests should be the norm right? I don't want people submitting 
> command line tests like "file a.out", "run", "step". I want the API to be 
> used. Did you get this reversed?
> I didn't get it reversed, but I agree my wording wasn't clear.  By "command 
> api", I meant HandleCommand / etc.  I *do* want the SB API to be used.
>  
> >
> > 2) Begin rejecting or reverting changes that go in without tests.  I 
> > understand there are some situations where tests are difficult.  Core dumps 
> > and unwinding come to mind.  There are probably others.  But this is the 
> > exception, and not the norm.  Almost every change should go in with tests.
> 
> As long as it can be tested reasonably I am fine with rejecting changes going 
> in that don't have tests.
> One of the problems is that most changes go in without review.  I understand 
> why this is, because Apple especially are code owners of more than 80% of 
> LLDB, so people adhere to the post-commit review.  This is fine in principle, 
> but if changes go in without tests and there was no corresponding code 
> review, then my only option is to either keep pinging the commit thread in 
> hopes I'll get a response (which I sometimes don't get), or revert the 
> change.  Often though I get a response that says "Yea I'll get to adding 
> tests eventually".  I especially want this last type of response to go the 
> way of the dinosaur.  I don't know how to change peoples' habits, but if you 
> could bring this up at you daily/weekly standups or somehow make sure 
> everyone is on the same page, perhaps that would be a good start.  Reverting 
> is the best way I know to handle this, because it forces a change.  But at 
> the same time it's disruptive, so I really don't want to do it.

I agree that reversion is aggressive and it would be better to have some nicer 
way to enforce this.  It is also a bit rigid and sometimes people's schedules 
make delaying the test-writing seem a very persuasive option.  We want to take 
that into account.  Maybe every time a change goes in that warrants a test but 
doesn't have one we file a bug against the author to write the tests, and mark 
it some way that is easy to find.  Then if you have more than N such bugs you 
can't make new checkins till you get them below N?  That way this work can be 
batched more easily to accommodate schedules but there are still consequences if 
you put it off too long.
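
The bookkeeping sketched in the paragraph above could look roughly like this;
it is a toy illustration only (the ledger class, its API, and the limit are all
invented here, not anything proposed concretely in the thread):

```python
# Toy model of the "file a bug per missing test, cap open bugs at N" idea.
class TestDebtLedger:
    def __init__(self, limit=3):              # N is arbitrary in this sketch
        self.limit = limit
        self.open_bugs = {}                   # author -> set of open bug ids

    def file_bug(self, author, bug_id):
        """A change landed without tests: record the debt."""
        self.open_bugs.setdefault(author, set()).add(bug_id)

    def close_bug(self, author, bug_id):
        """The author wrote the missing tests: retire the debt."""
        self.open_bugs.get(author, set()).discard(bug_id)

    def may_commit(self, author):
        """New checkins are blocked once open debt exceeds the limit."""
        return len(self.open_bugs.get(author, set())) <= self.limit

ledger = TestDebtLedger(limit=2)
for bug in ("bug-100", "bug-101", "bug-102"):
    ledger.file_bug("some_author", bug)
print(ledger.may_commit("some_author"))       # False: 3 open bugs > limit of 2
```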

>  
> >
> > 3)

Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Jim Ingham via lldb-dev

> On Oct 7, 2015, at 11:16 AM, Jim Ingham  wrote:
> 
>> 
>> On Oct 7, 2015, at 10:37 AM, Zachary Turner via lldb-dev 
>>  wrote:
>> 
>> 
>> 
>> On Wed, Oct 7, 2015 at 10:17 AM Greg Clayton  wrote:
>> 
>> 
>>> On Oct 7, 2015, at 10:05 AM, Zachary Turner via lldb-dev 
>>>  wrote:
>>> 
>>> Jim, Greg,
>>> 
>>> Can I get some feedback on this?  I would like to start enforcing this 
>>> moving forward.  I want to make sure we're in agreement.
>>> 
>>> On Mon, Oct 5, 2015 at 12:30 PM Todd Fiala  wrote:
>>> IMHO that all sounds reasonable.
>>> 
>>> FWIW - I wrote some tests for the test system changes I put in (for the 
>>> pure-python impl of timeout support), and in the process, I discovered a 
>>> race condition in using a python facility that there really is no way I 
>>> would have found anywhere near as reasonably without having added the 
>>> tests.  (For those of you who are test-centric, this is not a surprising 
>>> outcome, but I'm adding this for those who may be inclined to think of it 
>>> as an afterthought).
>>> 
>>> -Todd
>>> 
>>> On Mon, Oct 5, 2015 at 11:24 AM, Zachary Turner via lldb-dev 
>>>  wrote:
>>> On Fri, Sep 11, 2015 at 11:42 AM Jim Ingham  wrote:
>>> I have held from the beginning that the only tests that should be written 
>>> using HandleCommand are those that explicitly test command behavior, and if 
>>> it is possible to write a test using the SB API you should always do it 
>>> that way for the very reasons you cite.  Not everybody agreed with me at 
>>> first, so we ended up with a bunch of tests that do complex things using 
>>> HandleCommand where they really ought not to.  I'm not sure it is worth the 
>>> time to go rewrite all those tests, but we shouldn't write any new tests 
>>> that way.
>>> 
>>> I would like to revive this thread, because there doesn't seem to be 
>>> consensus that this is the way to go.  I've suggested on a couple of 
>>> reviews recently that people put new command api tests under a new 
>>> top-level folder under tests, and so far the responses I've gotten have not 
>>> indicated that people are willing to do this.
>>> 
>>> Nobody chimed in on this thread with a disagreement, which indicates to me 
>>> that we are ok with moving this forward.  So I'm reviving this in hopes 
>>> that we can come to agreement.  With that in mind, my goal is:
>>> 
>>> 1) Begin enforcing this on new CLs that go in.  We need to maintain a 
>>> consistent message and direction for the project, and if this is a "good 
>>> idea", then it should be applied and enforced consistently. Command api 
>>> tests should be the exception, not the norm.
>> 
>> You mean API tests should be the norm right? I don't want people submitting 
>> command line tests like "file a.out", "run", "step". I want the API to be 
>> used. Did you get this reversed?
>> I didn't get it reversed, but I agree my wording wasn't clear.  By "command 
>> api", I meant HandleCommand / etc.  I *do* want the SB API to be used.
>> 
>>> 
>>> 2) Begin rejecting or reverting changes that go in without tests.  I 
>>> understand there are some situations where tests are difficult.  Core dumps 
>>> and unwinding come to mind.  There are probably others.  But this is the 
>>> exception, and not the norm.  Almost every change should go in with tests.
>> 
>> As long as it can be tested reasonably I am fine with rejecting changes 
>> going in that don't have tests.
>> One of the problems is that most changes go in without review.  I understand 
>> why this is, because Apple especially are code owners of more than 80% of 
>> LLDB, so people adhere to the post-commit review.  This is fine in 
>> principle, but if changes go in without tests and there was no corresponding 
>> code review, then my only option is to either keep pinging the commit thread 
>> in hopes I'll get a response (which I sometimes don't get), or revert the 
>> change.  Often though I get a response that says "Yea I'll get to adding 
>> tests eventually".  I especially want this last type of response to go the 
>> way of the dinosaur.  I don't know how to change people's habits, but if you 
>> could bring this up at your daily/weekly standups or somehow make sure 
>> everyone is on the same page, perhaps that would be a good start.  Reverting 
>> is the best way I know to handle this, because it forces a change.  But at 
>> the same time it's disruptive, so I really don't want to do it.
> 
> I agree that reversion is aggressive and it would be better to have some 
> nicer way to enforce this.  It is also a bit rigid and sometimes people's 
> schedules make delaying the test-writing seem a very persuasive option.  We 
> want to take that into account.  Maybe every time a change goes in that 
> warrants a test but doesn't have one we file a bug against the author to 
> write the tests, and mark it some way that is easy to find.  Then if you have 
> more than N such bugs you can't make new checkins till you get them below N?  
> That way this work can be batched more easily to accommodate schedules but 
> there are still consequences if you put it off too long.

Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Zachary Turner via lldb-dev
On Wed, Oct 7, 2015 at 11:26 AM Jim Ingham  wrote:

>
> > On Oct 7, 2015, at 11:16 AM, Jim Ingham  wrote:
> >
> >>
> >> On Oct 7, 2015, at 10:37 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>
> >>
> >>
> >> On Wed, Oct 7, 2015 at 10:17 AM Greg Clayton 
> wrote:
> >>
> >>
> >>> On Oct 7, 2015, at 10:05 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>>
> >>> Jim, Greg,
> >>>
> >>> Can I get some feedback on this?  I would like to start enforcing this
> moving forward.  I want to make sure we're in agreement.
> >>>
> >>> On Mon, Oct 5, 2015 at 12:30 PM Todd Fiala 
> wrote:
> >>> IMHO that all sounds reasonable.
> >>>
> >>> FWIW - I wrote some tests for the test system changes I put in (for
> the pure-python impl of timeout support), and in the process, I discovered
> a race condition in using a python facility that there really is no way I
> would have found anywhere near as reasonably without having added the
> tests.  (For those of you who are test-centric, this is not a surprising
> outcome, but I'm adding this for those who may be inclined to think of it
> as an afterthought).
> >>>
> >>> -Todd
> >>>
> >>> On Mon, Oct 5, 2015 at 11:24 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>> On Fri, Sep 11, 2015 at 11:42 AM Jim Ingham  wrote:
> >>> I have held from the beginning that the only tests that should be
> written using HandleCommand are those that explicitly test command
> behavior, and if it is possible to write a test using the SB API you should
> always do it that way for the very reasons you cite.  Not everybody agreed
> with me at first, so we ended up with a bunch of tests that do complex
> things using HandleCommand where they really ought not to.  I'm not sure it
> is worth the time to go rewrite all those tests, but we shouldn't write any
> new tests that way.
> >>>
> >>> I would like to revive this thread, because there doesn't seem to be
> consensus that this is the way to go.  I've suggested on a couple of
> reviews recently that people put new command api tests under a new
> top-level folder under tests, and so far the responses I've gotten have not
> indicated that people are willing to do this.
> >>>
> >>> Nobody chimed in on this thread with a disagreement, which indicates
> to me that we are ok with moving this forward.  So I'm reviving this in
> hopes that we can come to agreement.  With that in mind, my goal is:
> >>>
> >>> 1) Begin enforcing this on new CLs that go in.  We need to maintain a
> consistent message and direction for the project, and if this is a "good
> idea", then it should be applied and enforced consistently. Command api
> tests should be the exception, not the norm.
> >>
> >> You mean API tests should be the norm right? I don't want people
> submitting command line tests like "file a.out", "run", "step". I want the
> API to be used. Did you get this reversed?
> >> I didn't get it reversed, but I agree my wording wasn't clear.  By
> "command api", I meant HandleCommand / etc.  I *do* want the SB API to be
> used.
> >>
> >>>
> >>> 2) Begin rejecting or reverting changes that go in without tests.  I
> understand there are some situations where tests are difficult.  Core dumps
> and unwinding come to mind.  There are probably others.  But this is the
> exception, and not the norm.  Almost every change should go in with tests.
> >>
> >> As long as it can be tested reasonably I am fine with rejecting changes
> going in that don't have tests.
> > >> One of the problems is that most changes go in without review.  I
> understand why this is, because Apple especially are code owners of more
> than 80% of LLDB, so people adhere to the post-commit review.  This is fine
> in principle, but if changes go in without tests and there was no
> corresponding code review, then my only option is to either keep pinging
> the commit thread in hopes I'll get a response (which I sometimes don't
> get), or revert the change.  Often though I get a response that says "Yea
> I'll get to adding tests eventually".  I especially want this last type of
> response to go the way of the dinosaur.  I don't know how to change
> people's habits, but if you could bring this up at your daily/weekly
> standups or somehow make sure everyone is on the same page, perhaps that
> would be a good start.  Reverting is the best way I know to handle this,
> because it forces a change.  But at the same time it's disruptive, so I
> really don't want to do it.
> >
> > I agree that reversion is aggressive and it would be better to have some
> nicer way to enforce this.  It is also a bit rigid and sometimes people's
> schedules make delaying the test-writing seem a very persuasive option.  We
> want to take that into account.  Maybe every time a change goes in that
> warrants a test but doesn't have one we file a bug against the author to
> write the tests, and mark it some way that is easy to find.  Then if you
> have more than N such bugs you can't make new checkins till you get them
> below N?  That way this work can be batched more easily to accommodate
> schedules but there are still consequences if you put it off too long.

Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Jim Ingham via lldb-dev

> On Oct 7, 2015, at 11:40 AM, Zachary Turner  wrote:
> 
> 
> 
> On Wed, Oct 7, 2015 at 11:26 AM Jim Ingham  wrote:
> 
> > On Oct 7, 2015, at 11:16 AM, Jim Ingham  wrote:
> >
> >>
> >> On Oct 7, 2015, at 10:37 AM, Zachary Turner via lldb-dev 
> >>  wrote:
> >>
> >>
> >>
> >> On Wed, Oct 7, 2015 at 10:17 AM Greg Clayton  wrote:
> >>
> >>
> >>> On Oct 7, 2015, at 10:05 AM, Zachary Turner via lldb-dev 
> >>>  wrote:
> >>>
> >>> Jim, Greg,
> >>>
> >>> Can I get some feedback on this?  I would like to start enforcing this 
> >>> moving forward.  I want to make sure we're in agreement.
> >>>
> >>> On Mon, Oct 5, 2015 at 12:30 PM Todd Fiala  wrote:
> >>> IMHO that all sounds reasonable.
> >>>
> >>> FWIW - I wrote some tests for the test system changes I put in (for the 
> >>> pure-python impl of timeout support), and in the process, I discovered a 
> >>> race condition in using a python facility that there really is no way I 
> >>> would have found anywhere near as reasonably without having added the 
> >>> tests.  (For those of you who are test-centric, this is not a surprising 
> >>> outcome, but I'm adding this for those who may be inclined to think of it 
> >>> as an afterthought).
> >>>
> >>> -Todd
> >>>
> >>> On Mon, Oct 5, 2015 at 11:24 AM, Zachary Turner via lldb-dev 
> >>>  wrote:
> >>> On Fri, Sep 11, 2015 at 11:42 AM Jim Ingham  wrote:
> >>> I have held from the beginning that the only tests that should be written 
> >>> using HandleCommand are those that explicitly test command behavior, and 
> >>> if it is possible to write a test using the SB API you should always do 
> >>> it that way for the very reasons you cite.  Not everybody agreed with me 
> >>> at first, so we ended up with a bunch of tests that do complex things 
> >>> using HandleCommand where they really ought not to.  I'm not sure it is 
> >>> worth the time to go rewrite all those tests, but we shouldn't write any 
> >>> new tests that way.
> >>>
> >>> I would like to revive this thread, because there doesn't seem to be 
> >>> consensus that this is the way to go.  I've suggested on a couple of 
> >>> reviews recently that people put new command api tests under a new 
> >>> top-level folder under tests, and so far the responses I've gotten have 
> >>> not indicated that people are willing to do this.
> >>>
> >>> Nobody chimed in on this thread with a disagreement, which indicates to 
> >>> me that we are ok with moving this forward.  So I'm reviving this in 
> >>> hopes that we can come to agreement.  With that in mind, my goal is:
> >>>
> >>> 1) Begin enforcing this on new CLs that go in.  We need to maintain a 
> >>> consistent message and direction for the project, and if this is a "good 
> >>> idea", then it should be applied and enforced consistently. Command api 
> >>> tests should be the exception, not the norm.
> >>
> >> You mean API tests should be the norm right? I don't want people 
> >> submitting command line tests like "file a.out", "run", "step". I want the 
> >> API to be used. Did you get this reversed?
> >> I didn't get it reversed, but I agree my wording wasn't clear.  By 
> >> "command api", I meant HandleCommand / etc.  I *do* want the SB API to be 
> >> used.
> >>
> >>>
> >>> 2) Begin rejecting or reverting changes that go in without tests.  I 
> >>> understand there are some situations where tests are difficult.  Core 
> >>> dumps and unwinding come to mind.  There are probably others.  But this 
> >>> is the exception, and not the norm.  Almost every change should go in 
> >>> with tests.
> >>
> >> As long as it can be tested reasonably I am fine with rejecting changes 
> >> going in that don't have tests.
> >> One of the problems is that most changes go in without review.  I 
> >> understand why this is, because Apple especially are code owners of more 
> >> than 80% of LLDB, so people adhere to the post-commit review.  This is 
> >> fine in principle, but if changes go in without tests and there was no 
> >> corresponding code review, then my only option is to either keep pinging 
> >> the commit thread in hopes I'll get a response (which I sometimes don't 
> >> get), or revert the change.  Often though I get a response that says "Yea 
> >> I'll get to adding tests eventually".  I especially want this last type of 
> >> response to go the way of the dinosaur.  I don't know how to change 
> >> people's habits, but if you could bring this up at your daily/weekly 
> >> standups or somehow make sure everyone is on the same page, perhaps that 
> >> would be a good start.  Reverting is the best way I know to handle this, 
> >> because it forces a change.  But at the same time it's disruptive, so I 
> >> really don't want to do it.
> >
> > I agree that reversion is aggressive and it would be better to have some 
> > nicer way to enforce this.  It is also a bit rigid and sometimes people's 
> > schedules make delaying the test-writing seem a very persuasive option.  We 
> > want to take that into account.  Maybe every time a change goes in that 
> > warrants a test but doesn't have one we file a bug against the author to 
> > write the tests, and mark it some way that is easy to find.

[lldb-dev] Thread resumes with stale signal after executing InferiorCallMmap

2015-10-07 Thread Eugene Birukov via lldb-dev
Hi,
 
I am using the LLDB 3.7.0 C++ API. My program stops at a certain breakpoint,
and if I call SBFrame::EvaluateExpression() there, then when I let it go it
terminates with SIGILL on an innocent thread. I dug into this, and there seem
to be two independent problems; this mail is about the second one.
 
EvaluateExpression() calls Process::CanJIT(), which in turn executes mmap() on
the inferior. This mmap gets SIGILL because execution starts at an address 2
bytes before the very first mmap instruction. I am still looking into why the
LLDB server decided to do that; I am pretty sure the client asked to set the
program counter to the correct value. So, the thread execution terminates and
the signal is recorded in Thread::m_resume_signal. This field is not cleared
during Thread::RestoreThreadStateFromCheckpoint() and fires when I resume the
program after the breakpoint.
So, what would be the best way to deal with this situation? Should I add a
"resume signal" field to ThreadStateCheckpoint? Or would StopInfo be a better
place for it? Or something else?
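
For what it's worth, the first option (carrying the resume signal in the
checkpoint) can be sketched as a toy model. This is NOT LLDB source and no
patch is attached to the post; the names below merely mirror
Thread::m_resume_signal and ThreadStateCheckpoint, and the signal numbers are
illustrative:

```python
# Toy model of the stale-signal bug and the proposed fix.
SIGILL, NO_SIGNAL = 4, 0          # the usual Linux SIGILL value, for illustration

class ThreadStateCheckpoint:
    def __init__(self, stop_info, resume_signal):
        self.stop_info = stop_info
        self.resume_signal = resume_signal      # the proposed extra field

class Thread:
    def __init__(self):
        self.stop_info = "breakpoint"
        self.m_resume_signal = NO_SIGNAL

    def checkpoint(self):
        return ThreadStateCheckpoint(self.stop_info, self.m_resume_signal)

    def restore_from_checkpoint(self, cp, restore_signal):
        self.stop_info = cp.stop_info
        if restore_signal:                      # the suggested fix
            self.m_resume_signal = cp.resume_signal

thread = Thread()
cp = thread.checkpoint()            # taken before the expression runs
thread.m_resume_signal = SIGILL     # failed InferiorCallMmap leaves this behind
thread.restore_from_checkpoint(cp, restore_signal=False)
print(thread.m_resume_signal)       # 4: the stale signal survives the restore
thread.restore_from_checkpoint(cp, restore_signal=True)
print(thread.m_resume_signal)       # 0: cleared once the checkpoint carries it
```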
 
Thanks,
Eugene
  ___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] " Import error: No module named embedded_interpreter" on windows

2015-10-07 Thread kwadwo amankwa via lldb-dev

Hi Zachary,

Ok, so I did end up reconfiguring it and rebuilding, but that was no help.
I then realized that I was importing python27.dll as well as python27_d.dll.
When I rebuilt liblldb I started getting a single unresolved-symbol error for
imp_Py_InitModule, which I guess is the symbol for the Py_InitModule4 Python
API (actually a macro whose definition depends on a few flags). So I ended up
rebuilding python27, the unresolved error went away, and I rebuilt everything
from scratch. The good news is that when I invoke the 'script' command in the
lldb interpreter it doesn't crash anymore and I can use the Python
interpreter. I can even import the lldb module without getting the
embedded_interpreter ImportError. However, when I actually import lldb from a
Python module and run it with the standalone interpreter, I still get the
ImportError. I have checked the PYTHONPATH, which was different for the lldb
embedded interpreter, and updated the variable to contain the missing paths,
but no cigar :-(. Any suggestions?
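
One stdlib-only way to narrow this down (a hypothetical helper, not part of
lldb) is to list every search-path entry that could satisfy `import lldb`.
Python uses the first hit, so a stale build-tree lldb.py/lldb.pyc earlier on
sys.path shadows the real site-packages package and produces exactly this
embedded_interpreter ImportError:

```python
# Report every sys.path location that could resolve `import lldb`.
import os
import sys

def find_module_candidates(name, paths=None):
    """Return every search-path location that could satisfy `import name`."""
    hits = []
    for entry in (paths if paths is not None else sys.path):
        base = os.path.join(entry, name)
        if os.path.isfile(os.path.join(base, "__init__.py")):
            hits.append(base)                 # package form: <entry>/lldb/__init__.py
        else:
            for ext in (".py", ".pyc"):       # module form: <entry>/lldb.py(c)
                if os.path.isfile(base + ext):
                    hits.append(base + ext)
                    break
    return hits

print(find_module_candidates("lldb"))         # the first entry printed wins
```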


On 05/10/15 21:21, Zachary Turner wrote:
Can you try to regenerate CMake with that command line and see if that 
helps?


On Mon, Oct 5, 2015 at 1:17 PM kwadwo amankwa wrote:



No

On 05/10/15 21:15, Zachary Turner wrote:

Are you using -DCMAKE_BUILD_TYPE=Debug when you generate CMake?

On Mon, Oct 5, 2015 at 1:14 PM kwadwo amankwa <q...@lunarblack.com> wrote:

Thanks for the response ,  sorry for the delay. As a matter
of fact I actually got rid of the system python and installed
my custom version.  I do suspect it is a linking problem
though. When I build liblldb.dll it always loads python27.dll
instead of python27_d.dll. Do you happen to know where the
python27 lib is specified as an input library because the
project properties in liblldb do not specify it. However,
the linker complains if I don't specify the lib directory in
'additional directories', and when I do it always links to the
python27 lib. I grepped the whole build directory and two
files SystemInitializer.obj and LLDBWrapPython.obj seem to
contain /DEFAULTLIB:python27.lib. Do you have an idea of what
is causing the compiler to do this?


On 05/10/15 19:13, Zachary Turner wrote:

Ahh, I thought you were doing this from inside LLDB.  There
are a couple of problems:

1) You might be running with the system Python, not the
custom Python you built with VS2013.  What is the value of
`sys.executable`?
2) Even if you are running your own Python, the regular
Python appears to be in your `sys.path`.  You will need to
unset PYTHONPATH and PYTHONHOME from pointing to your system
Python.  PYTHONHOME should point to your custom Python, and
PYTHONPATH should point to the `lib\site-packages` directory
that I mentioned earlier in your build directory.



On Mon, Oct 5, 2015 at 11:06 AM kwadwo amankwa <q...@lunarblack.com> wrote:

here it is;

C:\Users\redbandit\Documents\GitHub\pygui>python
Python 2.7.10 (default, Sep 18 2015, 02:35:59) [MSC
v.1800 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for
more information.
>>> import sys
>>> sys.path
['', 'C:\\Python27\\Lib',
'C:\\Users\\redbandit\\llvm\\build\\Debug\\lib\\site-packages\\lldb',
'C:\\Users\\redbandit\\llvm\\build\\tools\\lldb\\scripts',
'C:\\Users\\redbandit\\Documents\\GitHub\\pygui',
'C:\\Python27\\python27.zip', 'C:\\Python27\\DLLs',
'C:\\Python27\\lib\\plat-win',
'C:\\Python27\\lib\\lib-tk', 'C:\\Python27',
'C:\\Python27\\lib\\site-packages']
>>> import lldb
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named embedded_interpreter
>>> lldb.__file__
'C:\\Users\\redbandit\\llvm\\build\\tools\\lldb\\scripts\\lldb.pyc'

>>> sys.path
['C:/Users/redbandit/llvm/build/Debug/lib/site-packages/lldb',
'C:/Users/redbandit/llvm/build/Debug/lib/site-packages/lib/site-packages',
'', 'C:\\Python27\\Lib',
'C:\\Users\\redbandit\\llvm\\build\\Debug\\lib\\site-packages\\lldb',
'C:\\Users\\redbandit\\llvm\\build\\tools\\lldb\\scripts',
'C:\\Users\\redbandit\\Documents\\GitHub\\pygui',
'C:\\Python27\\python27.zip', 'C:\\Python27\\DLLs',
'C:\\Python27\\lib\\plat-win',
'C:\\Python27\\lib\\lib-tk', 'C:\\Python27',
'C:\\Python27\\lib\\site-packages', '.']


On 05/10/15 18:48, Zachary Turner wrote:

Can you run the following commands and paste the

Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Zachary Turner via lldb-dev
What are the chances of someone attempting to get the existing unit test
runner working in the Xcode build?  Or at least attempting to and seeing if
there's any major blockers that prevent it from working?

On Wed, Oct 7, 2015 at 11:54 AM Jim Ingham  wrote:

>
> > On Oct 7, 2015, at 11:40 AM, Zachary Turner  wrote:
> >
> >
> >
> > On Wed, Oct 7, 2015 at 11:26 AM Jim Ingham  wrote:
> >
> > > On Oct 7, 2015, at 11:16 AM, Jim Ingham  wrote:
> > >
> > >>
> > >> On Oct 7, 2015, at 10:37 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > >>
> > >>
> > >>
> > >> On Wed, Oct 7, 2015 at 10:17 AM Greg Clayton 
> wrote:
> > >>
> > >>
> > >>> On Oct 7, 2015, at 10:05 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > >>>
> > >>> Jim, Greg,
> > >>>
> > >>> Can I get some feedback on this?  I would like to start enforcing
> this moving forward.  I want to make sure we're in agreement.
> > >>>
> > >>> On Mon, Oct 5, 2015 at 12:30 PM Todd Fiala 
> wrote:
> > >>> IMHO that all sounds reasonable.
> > >>>
> > >>> FWIW - I wrote some tests for the test system changes I put in (for
> the pure-python impl of timeout support), and in the process, I discovered
> a race condition in using a python facility that there really is no way I
> would have found anywhere near as reasonably without having added the
> tests.  (For those of you who are test-centric, this is not a surprising
> outcome, but I'm adding this for those who may be inclined to think of it
> as an afterthought).
> > >>>
> > >>> -Todd
> > >>>
> > >>> On Mon, Oct 5, 2015 at 11:24 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > >>> On Fri, Sep 11, 2015 at 11:42 AM Jim Ingham 
> wrote:
> > >>> I have held from the beginning that the only tests that should be
> written using HandleCommand are those that explicitly test command
> behavior, and if it is possible to write a test using the SB API you should
> always do it that way for the very reasons you cite.  Not everybody agreed
> with me at first, so we ended up with a bunch of tests that do complex
> things using HandleCommand where they really ought not to.  I'm not sure it
> is worth the time to go rewrite all those tests, but we shouldn't write any
> new tests that way.
> > >>>
> > >>> I would like to revive this thread, because there doesn't seem to be
> consensus that this is the way to go.  I've suggested on a couple of
> reviews recently that people put new command api tests under a new
> top-level folder under tests, and so far the responses I've gotten have not
> indicated that people are willing to do this.
> > >>>
> > >>> Nobody chimed in on this thread with a disagreement, which indicates
> to me that we are ok with moving this forward.  So I'm reviving this in
> hopes that we can come to agreement.  With that in mind, my goal is:
> > >>>
> > >>> 1) Begin enforcing this on new CLs that go in.  We need to maintain
> a consistent message and direction for the project, and if this is a "good
> idea", then it should be applied and enforced consistently. Command api
> tests should be the exception, not the norm.
> > >>
> > >> You mean API tests should be the norm right? I don't want people
> submitting command line tests like "file a.out", "run", "step". I want the
> API to be used. Did you get this reversed?
> > >> I didn't get it reversed, but I agree my wording wasn't clear.  By
> "command api", I meant HandleCommand / etc.  I *do* want the SB API to be
> used.
> > >>
> > >>>
> > >>> 2) Begin rejecting or reverting changes that go in without tests.  I
> understand there are some situations where tests are difficult.  Core dumps
> and unwinding come to mind.  There are probably others.  But this is the
> exception, and not the norm.  Almost every change should go in with tests.
> > >>
> > >> As long as it can be tested reasonably I am fine with rejecting
> changes going in that don't have tests.
> > >> One of the problems is that most changes go in without review.  I
> understand why this is, because Apple especially are code owners of more
> than 80% of LLDB, so people adhere to the post-commit review.  This is fine
> in principle, but if changes go in without tests and there was no
> corresponding code review, then my only option is to either keep pinging
> the commit thread in hopes I'll get a response (which I sometimes don't
> get), or revert the change.  Often though I get a response that says "Yea
> I'll get to adding tests eventually".  I especially want this last type of
> response to go the way of the dinosaur.  I don't know how to change
> people's habits, but if you could bring this up at your daily/weekly
> standups or somehow make sure everyone is on the same page, perhaps that
> would be a good start.  Reverting is the best way I know to handle this,
> because it forces a change.  But at the same time it's disruptive, so I
> really don't want to do it.
> > >
> > > I agree that reversion is aggressive and it would be better to have
>

Re: [lldb-dev] " Import error: No module named embedded_interpreter" on windows

2015-10-07 Thread Zachary Turner via lldb-dev
When you built LLDB, did you specify a -DPYTHON_HOME= on your CMake
command line, and also run the install_custom_python.py script?  There's a
lot of steps, so it seems like almost everybody misses at least one step
when doing this.

I'm actively working (as in, literally right now) on getting LLDB to work
with Python 3.  If all goes smoothly, hopefully all of these problems will
disappear and everything will just work without any user configuration
required at all.

On Wed, Oct 7, 2015 at 12:17 PM kwadwo amankwa  wrote:

> Hi Zachary,
>
> Ok so I did end up reconfiguring it and rebuilding but that was no help .
> I then realized that I was importing python27.dll as well as python27_d.dll
> . When I rebuilt liblldb I started getting a single unresolved error for
> imp_Py_InitModule which I guess was the symbol for the Py_InitModule4
> python api  which is actually a macro which is defined depending on a few
> flags . So I ended up rebuilding python27 and the unresolved error went
> away and built everything from scratch . The good news is that when I
> invoke the 'script' command in the lldb interpreter it doesn't crash
> anymore and I can use the python interpreter . I can even import the lldb
> module without getting the embedded interpreter Import error . However
> when I actually import lldb from a python module and run it with the
> standalone interpreter I still get the Import Error.  I have checked the
> PYTHONPATH which was different for the lldb embedded interpreter and
> updated the variable to contain the missing paths but no cigar :-(.
> Any suggestions ?
>
>
> On 05/10/15 21:21, Zachary Turner wrote:
>
> Can you try to regenerate CMake with that command line and see if that
> helps?
>
> On Mon, Oct 5, 2015 at 1:17 PM kwadwo amankwa  wrote:
>
>>
>> No
>>
>> On 05/10/15 21:15, Zachary Turner wrote:
>>
>> Are you using -DCMAKE_BUILD_TYPE=Debug when you generate CMake?
>>
>> On Mon, Oct 5, 2015 at 1:14 PM kwadwo amankwa  wrote:
>>
>>> Thanks for the response ,  sorry for the delay. As a matter of fact I
>>> actually got rid of the system python and installed my custom version.  I
>>> do suspect it is a linking problem though. When I build liblldb.dll it
>>> always loads python27.dll instead of python27_d.dll. Do you happen to know
>>> where the python27 lib is specified as an input library because the project
>>> properties in liblldb does not specify it . however the linker complains if
>>> I don't specify the lib directory in 'additional directories' and when I do
>>> it always links to the python27lib. I grepped the whole build directory and
>>> two files SystemInitializer.obj and LLDBWrapPython.obj seem to contain
>>> /DEFAULTLIB:python27.lib. Do you have an idea of what is causing the
>>> compiler to do this ?
>>>
>>>
>>> On 05/10/15 19:13, Zachary Turner wrote:
>>>
>>> Ahh, I thought you were doing this from inside LLDB.  There are a couple
>>> of problems:
>>>
>>> 1) You might be running with the system Python, not the custom Python
>>> you built with VS2013.  What is the value of `sys.executable`?
>>> 2) Even if you are running your own Python, the regular Python appears
>>> to be in your `sys.path`.  You will need to unset PYTHONPATH and PYTHONHOME
>>> from pointing to your system Python.  PYTHONHOME should point to your
>>> custom Python, and PYTHONPATH should point to the `lib\site-packages`
>>> directory that I mentioned earlier in your build directory.
>>>
>>>
>>>
>>> On Mon, Oct 5, 2015 at 11:06 AM kwadwo amankwa 
>>> wrote:
>>>
 here it is;

 C:\Users\redbandit\Documents\GitHub\pygui>python
 Python 2.7.10 (default, Sep 18 2015, 02:35:59) [MSC v.1800 64 bit
 (AMD64)] on win32
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import sys
 >>> sys.path
 ['', 'C:\\Python27\\Lib',
 'C:\\Users\\redbandit\\llvm\\build\\Debug\\lib\\site-packages\\lldb',
 'C:\\Users\\redbandit\\llvm\\build\\tools\\lldb\\scripts',
 'C:\\Users\\redbandit\\Documents\\GitHub\\pygui',
 'C:\\Python27\\python27.zip', 'C:\\Python27\\DLLs',
 'C:\\Python27\\lib\\plat-win', 'C:\\Python27\\lib\\lib-tk', 'C:\\Python27',
 'C:\\Python27\\lib\\site-packages']
 >>> import lldb
 Traceback (most recent call last):
   File "", line 1, in 
 ImportError: No module named embedded_interpreter
 >>> lldb.__file__
 'C:\\Users\\redbandit\\llvm\\build\\tools\\lldb\\scripts\\lldb.pyc'
 >>> sys.path
 ['C:/Users/redbandit/llvm/build/Debug/lib/site-packages/lldb',
 'C:/Users/redbandit/llvm/build/Debug/lib/site-packages/lib/site-packages',
 '', 'C:\\Python27\\Lib', 'C:\\Users\\redbandit\\llvm\\build\\Debug\\lib\\site-packages\\lldb',
 'C:\\Users\\redbandit\\llvm\\build\\tools\\lldb\\scripts',
 'C:\\Users\\redbandit\\Documents\\GitHub\\pygui', 'C:\\Python27\\python27.zip', 'C:\\Python27\\DLLs', 'C:\\Python27\\lib\\plat-win',
 'C:\\Python27\\lib\\lib-tk', 'C:\\Python27',
 'C:\\Python
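Zachary's two-step diagnosis above (verify `sys.executable`, then repoint `PYTHONHOME` and `PYTHONPATH`) can be sketched as a small preflight script run before `import lldb`. The two directory paths below are hypothetical stand-ins, not the actual build locations from this thread:

```python
import os
import sys

# Hypothetical paths -- substitute your own custom Python build and the
# lib\site-packages directory inside your LLDB build tree.
custom_python_home = r"C:\CustomPython27"
lldb_site_packages = r"C:\src\llvm\build\Debug\lib\site-packages"

# 1) Confirm which interpreter is actually running; it should be the
#    custom VS2013 build, not the system Python.
print(sys.executable)

# 2) Point PYTHONHOME at the custom Python and PYTHONPATH at the LLDB
#    build's site-packages directory, so child interpreters inherit them.
os.environ["PYTHONHOME"] = custom_python_home
os.environ["PYTHONPATH"] = lldb_site_packages

# 3) For the current process, put the package directory on sys.path
#    directly; changing os.environ does not affect an already-running
#    interpreter's search path.
if lldb_site_packages not in sys.path:
    sys.path.insert(0, lldb_site_packages)

# import lldb  # should now resolve lldb and its embedded_interpreter module
```

Running this in the standalone interpreter before the import makes it easy to spot which of the two variables is still pointing at the system Python.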

Re: [lldb-dev] How to debug LLDB server?

2015-10-07 Thread Greg Clayton via lldb-dev
We set this manually in the Xcode project for "Debug" and "DebugClang" build 
variants. The CMake build should be able to do the same, but I am not sure whether 
it does. Feel free to make it do so. I am not very good with CMake, so I won't be 
much help.

Greg

> On Oct 7, 2015, at 11:09 AM, Eugene Birukov  wrote:
> 
> Thanks! 
>  
> A newbie question then: how to trigger LLDB_CONFIGURATION_DEBUG when I run 
> cmake? I am sure that I built debug version, but packet timeout is still 1 to 
> me.
>  
> (gdb) p m_packet_timeout
> $1 = 1
> 
>  
> > Subject: Re: [lldb-dev] How to debug LLDB server?
> > From: gclay...@apple.com
> > Date: Wed, 7 Oct 2015 11:04:45 -0700
> > CC: lldb-dev@lists.llvm.org
> > To: eugen...@hotmail.com
> > 
> > Most calls for lldb-server should use an instance variable 
> > GDBRemoteCommunication::m_packet_timeout which you could then modify. But 
> > this timeout you are talking about is the time that the expression can take 
> > when running. I would just bump these up temporarily while you are 
> > debugging to avoid the timeouts. Just don't check it in.
> > 
> > So for GDB Remote packets, we already bump the timeout up in the 
> > GDBRemoteCommunication constructor:
> > 
> > #ifdef LLDB_CONFIGURATION_DEBUG
> > m_packet_timeout (1000),
> > #else
> > m_packet_timeout (1),
> > #endif
> > 
> > 
> > Anything else is probably expression timeouts and you will need to manually 
> > bump those up in order to debug, or you could do the same thing as the GDB 
> > Remote in InferiorCallPOSIX.cpp:
> > 
> > #ifdef LLDB_CONFIGURATION_DEBUG
> > options.SetTimeoutUsec(5000000);
> > #else
> > options.SetTimeoutUsec(500000);
> > #endif
> > 
> > 
> > > On Oct 7, 2015, at 10:33 AM, Eugene Birukov via lldb-dev 
> > >  wrote:
> > > 
> > > Hello,
> > >  
> > > I am trying to see what is going on inside LLDB server 3.7.0, but there are a 
> > > lot of timeouts scattered everywhere. Say, InferiorCallPOSIX.cpp:74 sets 
> > > a hard-coded timeout of 500,000us, etc. These timeouts fire if I spend any 
> > > time on a breakpoint inside the server and make the debugging experience miserable. 
> > > Is there any way to turn them all off?
> > >  
> > > BTW, I am using LLDB as a C++ API, not as standalone program, but I have 
> > > debugger attached to it and can alter its memory state.
> > >  
> > > Thanks,
> > > Eugene
> > >  
> > > ___
> > > lldb-dev mailing list
> > > lldb-dev@lists.llvm.org
> > > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> > 

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Testing through api vs. commands

2015-10-07 Thread Todd Fiala via lldb-dev
Hey Zachary,

> What are the chances of someone attempting to get the existing unit test
runner working in the Xcode build?  Or at least attempting to and seeing if
there's any major blockers that prevent it from working?

I fully intend to do that.  I need to do a few more pieces of
infrastructural change to attempt to support the IDE side of it, but I
expect to get to it at some point in the mid-term.

On Wed, Oct 7, 2015 at 12:37 PM, Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> What are the chances of someone attempting to get the existing unit test
> runner working in the Xcode build?  Or at least attempting to and seeing if
> there's any major blockers that prevent it from working?
>
> On Wed, Oct 7, 2015 at 11:54 AM Jim Ingham  wrote:
>
>>
>> > On Oct 7, 2015, at 11:40 AM, Zachary Turner  wrote:
>> >
>> >
>> >
>> > On Wed, Oct 7, 2015 at 11:26 AM Jim Ingham  wrote:
>> >
>> > > On Oct 7, 2015, at 11:16 AM, Jim Ingham  wrote:
>> > >
>> > >>
>> > >> On Oct 7, 2015, at 10:37 AM, Zachary Turner via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>> > >>
>> > >>
>> > >>
>> > >> On Wed, Oct 7, 2015 at 10:17 AM Greg Clayton 
>> wrote:
>> > >>
>> > >>
>> > >>> On Oct 7, 2015, at 10:05 AM, Zachary Turner via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>> > >>>
>> > >>> Jim, Greg,
>> > >>>
>> > >>> Can I get some feedback on this?  I would like to start enforcing
>> this moving forward.  I want to make sure we're in agreement.
>> > >>>
>> > >>> On Mon, Oct 5, 2015 at 12:30 PM Todd Fiala 
>> wrote:
>> > >>> IMHO that all sounds reasonable.
>> > >>>
>> > >>> FWIW - I wrote some tests for the test system changes I put in (for
>> the pure-python impl of timeout support), and in the process, I discovered
>> a race condition in using a python facility that I would never have found
>> nearly as easily without having added the
>> tests.  (For those of you who are test-centric, this is not a surprising
>> outcome, but I'm adding this for those who may be inclined to think of it
>> as an afterthought).
>> > >>>
>> > >>> -Todd
>> > >>>
>> > >>> On Mon, Oct 5, 2015 at 11:24 AM, Zachary Turner via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>> > >>> On Fri, Sep 11, 2015 at 11:42 AM Jim Ingham 
>> wrote:
>> > >>> I have held from the beginning that the only tests that should be
>> written using HandleCommand are those that explicitly test command
>> behavior, and if it is possible to write a test using the SB API you should
>> always do it that way for the very reasons you cite.  Not everybody agreed
>> with me at first, so we ended up with a bunch of tests that do complex
>> things using HandleCommand where they really ought not to.  I'm not sure it
>> is worth the time to go rewrite all those tests, but we shouldn't write any
>> new tests that way.
>> > >>>
>> > >>> I would like to revive this thread, because there doesn't seem to
>> be consensus that this is the way to go.  I've suggested on a couple of
>> reviews recently that people put new command api tests under a new
>> top-level folder under tests, and so far the responses I've gotten have not
>> indicated that people are willing to do this.
>> > >>>
>> > >>> Nobody chimed in on this thread with a disagreement, which
>> indicates to me that we are ok with moving this forward.  So I'm reviving
>> this in hopes that we can come to agreement.  With that in mind, my goal is:
>> > >>>
>> > >>> 1) Begin enforcing this on new CLs that go in.  We need to maintain
>> a consistent message and direction for the project, and if this is a "good
>> idea", then it should be applied and enforced consistently. Command api
>> tests should be the exception, not the norm.
>> > >>
>> > >> You mean API tests should be the norm right? I don't want people
>> submitting command line tests like "file a.out", "run", "step". I want the
>> API to be used. Did you get this reversed?
>> > >> I didn't get it reversed, but I agree my wording wasn't clear.  By
>> "command api", I meant HandleCommand / etc.  I *do* want the SB API to be
>> used.
>> > >>
>> > >>>
>> > >>> 2) Begin rejecting or reverting changes that go in without tests.
>> I understand there are some situations where tests are difficult.  Core
>> dumps and unwinding come to mind.  There are probably others.  But this is
>> the exception, and not the norm.  Almost every change should go in with
>> tests.
>> > >>
>> > >> As long as it can be tested reasonably I am fine with rejecting
>> changes going in that don't have tests.
>> > >> One of the problems is that most changes go in without review.  I
>> understand why this is, because Apple especially are code owners of more
>> than 80% of LLDB, so people adhere to the post-commit review.  This is fine
>> in principle, but if changes go in without tests and there was no
>> corresponding code review, then my only option is to either keep pinging
>> the commit thread in hopes I'll get a response (which I sometimes don't
>> get), or revert the

Re: [lldb-dev] " Import error: No module named embedded_interpreter" on windows

2015-10-07 Thread kwadwo amankwa via lldb-dev
I'll work my way backwards . Eventually I'll get there. Anyway if you 
need any help on adding python 3 support , I'll be more than happy to help


thanks ,
Que

On 07/10/15 20:40, Zachary Turner wrote:
When you built LLDB, did you specify a -DPYTHON_HOME= on your 
CMake command line, and also run the install_custom_python.py script?  
There's a lot of steps, so it seems like almost everybody misses at 
least one step when doing this.


I'm actively working (as in, literally right now) on getting LLDB to 
work with Python 3.  If all goes smoothly, hopefully all of these 
problems will disappear and everything will just work without any user 
configuration required at all.


On Wed, Oct 7, 2015 at 12:17 PM kwadwo amankwa wrote:


Hi Zachary,

Ok so I did end up reconfiguring it and rebuilding but that was no
help . I then realized that I was importing python27.dll as well
as python27_d.dll . When I rebuilt liblldb I started getting a
single unresolved error for
imp_Py_InitModule which I guess was the symbol for the
Py_InitModule4 python api  which is actually a macro which is
defined depending on a few flags . So I ended up rebuilding
python27 and the unresolved error went away and built everything
from scratch . The good news is that when I invoke the 'script'
command in the lldb interpreter it doesn't crash anymore and I can
use the python interpreter . I can even import the lldb module
without getting the embedded interpreter Import error . However
when I actually import lldb from a python module and run it with

the standalone interpreter I still get the Import Error.  I have
checked the PYTHONPATH which was different for the lldb embedded
interpreter and updated the variable to contain the missing paths
but no cigar :-( . Any suggestions ?


On 05/10/15 21:21, Zachary Turner wrote:

Can you try to regenerate CMake with that command line and see if
that helps?

On Mon, Oct 5, 2015 at 1:17 PM kwadwo amankwa <q...@lunarblack.com> wrote:


No

On 05/10/15 21:15, Zachary Turner wrote:

Are you using -DCMAKE_BUILD_TYPE=Debug when you generate CMake?

On Mon, Oct 5, 2015 at 1:14 PM kwadwo amankwa
<q...@lunarblack.com> wrote:

Thanks for the response ,  sorry for the delay. As a
matter of fact I actually got rid of the system python
and installed my custom version.  I do suspect it is a
linking problem though. When I build liblldb.dll it
always loads python27.dll instead of python27_d.dll. Do
you happen to know where the python27 lib is specified
as an input library because the project properties in
liblldb does not specify it . however the linker
complains if I don't specify the lib directory in
'additional directories' and when I do it always links
to the python27lib. I grepped the whole build directory
and two files SystemInitializer.obj and
LLDBWrapPython.obj seem to contain
/DEFAULTLIB:python27.lib. Do you have an idea of what is
causing the compiler to do this ?


On 05/10/15 19:13, Zachary Turner wrote:
Ahh, I thought you were doing this from inside LLDB. 
There are a couple of problems:


1) You might be running with the system Python, not the
custom Python you built with VS2013.  What is the value
of `sys.executable`?
2) Even if you are running your own Python, the regular
Python appears to be in your `sys.path`.  You will need
to unset PYTHONPATH and PYTHONHOME from pointing to
your system Python. PYTHONHOME should point to your
custom Python, and PYTHONPATH should point to the
`lib\site-packages` directory that I mentioned earlier
in your build directory.



On Mon, Oct 5, 2015 at 11:06 AM kwadwo amankwa
<q...@lunarblack.com> wrote:

here it is;

C:\Users\redbandit\Documents\GitHub\pygui>python
Python 2.7.10 (default, Sep 18 2015, 02:35:59) [MSC
v.1800 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license"
for more information.
>>> import sys
>>> sys.path
['', 'C:\\Python27\\Lib',

'C:\\Users\\redbandit\\llvm\\build\\Debug\\lib\\site-packages\\lldb',
'C:\\Users\\redbandit\\llvm\\build\\tools\\lldb\\scripts',
'C:\\Users\\redbandit\\Documents\\GitHub\\pygui',
'C:\\Python27\\python27.zip', 'C:\\Python27\\DLLs',
'C:\\Python27\\lib\\plat-win',
'C:\\Python27\\lib\\lib-tk', 'C:\\Python27',
'C:\\Python27

Re: [lldb-dev] " Import error: No module named embedded_interpreter" on windows

2015-10-07 Thread Zachary Turner via lldb-dev
I'm about 90% of the way there on the native code side.  The last part is
tricky, but still mechanical.  After I finish that the trick is going to be
getting the test suite running.  I suspect that will be nasty, just because
there's a few thousand lines of code and it's going to be a pain to find
everything.  Might need some help there, I'll let you know.

On Wed, Oct 7, 2015 at 1:27 PM kwadwo amankwa  wrote:

> I'll work my way backwards . Eventually I'll get there. Anyway if you need
> any help on adding python 3 support , I'll be more than happy to help
>
> thanks ,
> Que
>
>
> On 07/10/15 20:40, Zachary Turner wrote:
>
> When you built LLDB, did you specify a -DPYTHON_HOME= on your CMake
> command line, and also run the install_custom_python.py script?  There's a
> lot of steps, so it seems like almost everybody misses at least one step
> when doing this.
>
> I'm actively working (as in, literally right now) on getting LLDB to work
> with Python 3.  If all goes smoothly, hopefully all of these problems will
> disappear and everything will just work without any user configuration
> required at all.
>
> On Wed, Oct 7, 2015 at 12:17 PM kwadwo amankwa  wrote:
>
>> Hi Zachary,
>>
>> Ok so I did end up reconfiguring it and rebuilding but that was no help .
>> I then realized that I was importing python27.dll as well as python27_d.dll
>> . When I rebuilt liblldb I started getting a single unresolved error for
>> imp_Py_InitModule which I guess was the symbol for the Py_InitModule4
>> python api  which is actually a macro which is defined depending on a few
>> flags . So I ended up rebuilding python27 and the unresolved error went
>> away and built everything from scratch . The good news is that when I
>> invoke the 'script' command in the lldb interpreter it doesn't crash
>> anymore and I can use the python interpreter . I can even import the lldb
>> module without getting the embedded interpreter Import error . However
>> when I actually import lldb from a python module and run it with the
>> standalone interpreter I still get the Import Error.  I have checked the
>> PYTHONPATH which was different for the lldb embedded interpreter and
>> updated the variable to contain the missing paths but no cigar :-(.
>> Any suggestions ?
>>
>>
>> On 05/10/15 21:21, Zachary Turner wrote:
>>
>> Can you try to regenerate CMake with that command line and see if that
>> helps?
>>
>> On Mon, Oct 5, 2015 at 1:17 PM kwadwo amankwa  wrote:
>>
>>>
>>> No
>>>
>>> On 05/10/15 21:15, Zachary Turner wrote:
>>>
>>> Are you using -DCMAKE_BUILD_TYPE=Debug when you generate CMake?
>>>
>>> On Mon, Oct 5, 2015 at 1:14 PM kwadwo amankwa 
>>> wrote:
>>>
 Thanks for the response ,  sorry for the delay. As a matter of fact I
 actually got rid of the system python and installed my custom version.  I
 do suspect it is a linking problem though. When I build liblldb.dll it
 always loads python27.dll instead of python27_d.dll. Do you happen to know
 where the python27 lib is specified as an input library because the project
 properties in liblldb does not specify it . however the linker complains if
 I don't specify the lib directory in 'additional directories' and when I do
 it always links to the python27lib. I grepped the whole build directory and
 two files SystemInitializer.obj and LLDBWrapPython.obj seem to contain
 /DEFAULTLIB:python27.lib. Do you have an idea of what is causing the
 compiler to do this ?


 On 05/10/15 19:13, Zachary Turner wrote:

 Ahh, I thought you were doing this from inside LLDB.  There are a
 couple of problems:

 1) You might be running with the system Python, not the custom Python
 you built with VS2013.  What is the value of `sys.executable`?
 2) Even if you are running your own Python, the regular Python appears
 to be in your `sys.path`.  You will need to unset PYTHONPATH and PYTHONHOME
 from pointing to your system Python.  PYTHONHOME should point to your
 custom Python, and PYTHONPATH should point to the `lib\site-packages`
 directory that I mentioned earlier in your build directory.



 On Mon, Oct 5, 2015 at 11:06 AM kwadwo amankwa 
 wrote:

> here it is;
>
> C:\Users\redbandit\Documents\GitHub\pygui>python
> Python 2.7.10 (default, Sep 18 2015, 02:35:59) [MSC v.1800 64 bit
> (AMD64)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import sys
> >>> sys.path
> ['', 'C:\\Python27\\Lib',
> 'C:\\Users\\redbandit\\llvm\\build\\Debug\\lib\\site-packages\\lldb',
> 'C:\\Users\\redbandit\\llvm\\build\\tools\\lldb\\scripts',
> 'C:\\Users\\redbandit\\Documents\\GitHub\\pygui',
> 'C:\\Python27\\python27.zip', 'C:\\Python27\\DLLs',
> 'C:\\Python27\\lib\\plat-win', 'C:\\Python27\\lib\\lib-tk', 
> 'C:\\Python27',
> 'C:\\Python27\\lib\\site-packages']
> >>> import lldb
> Traceback (mo

Re: [lldb-dev] Thread resumes with stale signal after executing InferiorCallMmap

2015-10-07 Thread Jim Ingham via lldb-dev
Does it only happen for InferiorCallMmap, or does an expression evaluation that 
crashes in general set a bad signal on resume?  I don't see this behavior in 
either case on OS X, so it may be something in the Linux support.  It would be 
interesting to figure out why it behaves this way on Linux, so that whatever we do, 
we implement it consistently.

Jim



> On Oct 7, 2015, at 12:03 PM, Eugene Birukov via lldb-dev 
>  wrote:
> 
> Hi,
>  
> I am using LLDB 3.7.0 C++ API. My program stops at a certain breakpoint and 
> if I call SBFrame::EvaluateExpression() there, when I let it go it terminates 
> with SIG_ILL on an innocent thread. I dug up into this, and there seems to be 
> two independent problems there, this mail is about the second one.
>  
>   • EvaluateExpression() calls Process::CanJIT() which in turn executes 
> mmap() on the inferior. This mmap gets SIG_ILL because execution starts at 
> address which is 2 bytes before the very first mmap instruction. I am still 
> looking why LLDB server decided to do that - I am pretty sure that the client 
> asked to set the program counter to correct value.
>   • So, the thread execution terminates and the signal is recorded on 
> Thread::m_resume_signal. This field is not cleared during 
> Thread::RestoreThreadStateFromCheckpoint() and fires when I resume the 
> program after breakpoint.
>  
> So, what would be the best way to deal with the situation? Should I add 
> "resume signal" field to ThreadStateCheckpoint? Or would StopInfo be a better 
> place for that? Or something else?
>  
> Thanks,
> Eugene
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
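A toy model of the fix Eugene floats at the end of his message (carrying the resume signal in the thread-state checkpoint) may make the mechanics clearer. This is illustrative Python only, not LLDB source: LLDB itself is C++, and the names here merely mirror `Thread::m_resume_signal`, `ThreadStateCheckpoint`, and `RestoreThreadStateFromCheckpoint()`.

```python
# Toy model of the proposed fix: snapshot the pending resume signal in the
# checkpoint taken before an inferior function call, so a stray signal
# picked up during that call (e.g. the SIGILL from the mmap in
# Process::CanJIT) is discarded when the thread state is restored.

class Thread:
    def __init__(self):
        self.resume_signal = 0  # models Thread::m_resume_signal

    def checkpoint(self):
        # Models ThreadStateCheckpoint, extended to capture the signal too.
        return {"resume_signal": self.resume_signal}

    def restore(self, cp):
        # Models RestoreThreadStateFromCheckpoint(); without this line the
        # signal recorded during the inferior call would leak into the
        # next resume, which is the bug described above.
        self.resume_signal = cp["resume_signal"]


thread = Thread()
cp = thread.checkpoint()      # taken before evaluating the expression
thread.resume_signal = 4      # SIGILL recorded when the mmap call crashes
thread.restore(cp)            # restoring also clears the stale signal
print(thread.resume_signal)   # 0
```

Whether the snapshot lives in the checkpoint or in StopInfo is the open design question in the message above; the model only shows why the signal has to be restored somewhere.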


Re: [lldb-dev] Too many open files

2015-10-07 Thread Adrian McCarthy via lldb-dev
Adding a printing destructor to threading.Event seems to aggravate timing
problems, causing several tests to fail to make their inferiors and that
seemingly keeps us below the open file limit.  That aside, the destructor
did fire many hundreds of times, so there's not a general problem stopping
all or even most of those to be cleaned up.

The event objects that I'm seeing with the Sysinternals tools are likely
Windows Events that Python creates to facilitate the interprocess
communication.

I'm looking at the ProcessDriver lifetimes now.
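The destructor-patching trick Todd suggested can be sketched like this. In CPython 3, `threading.Event` is an ordinary class, so a `__del__` can be patched straight onto it for leak hunting (this is a debugging hack, not something to ship):

```python
import gc
import threading

destroyed = []  # ids of collected Events; a print() here is the interactive version

def _event_del(self):
    destroyed.append(id(self))

# Patch a destructor onto threading.Event so every collection is visible,
# confirming whether the Python objects behind the lingering OS event
# handles really are dying.
threading.Event.__del__ = _event_del

e = threading.Event()
eid = id(e)
del e          # drops the last reference; CPython finalizes immediately
gc.collect()   # belt and braces in case the object sat in a cycle
print(eid in destroyed)  # True
```

If the ids pile up in `destroyed` while the OS-level handle count keeps climbing, the leak is below Python (in the CRT or the OS), not in object lifetimes.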

On Tue, Oct 6, 2015 at 9:54 AM, Todd Fiala  wrote:

> Okay.
>
> A promising avenue might be to look at how Windows cleans up the
> threading.Event objects.  Chasing that thread might yield why the events
> are not going away (assuming those are the events that are lingering on
> your end).  One thing you could consider doing is patching in a replacement
> destructor for the threading.Event and print something when it fires off,
> verifying that they're really going away from the Python side.  If they're
> not, perhaps there's a retain bloat issue where we're not getting rid of
> some python objects due to some unintended references living beyond
> expectations.
>
> The dosep.py call_with_timeout method drives the child process operation
> chain.  That thing creates a ProcessDriver and collects the results from it
> when done.  Everything within the ProcessDriver (including the event)
> should be cleaned up by the time the call_with_timeout() call wraps up as
> there shouldn't be any references outstanding.  It might also be worth you
> adding a destructor to the ProcessDriver to make sure that's going away,
> one per Python test inferior executed.
>
> On Tue, Oct 6, 2015 at 9:48 AM, Adrian McCarthy 
> wrote:
>
>> Python 2.7.10 made no difference.  I'm dealing with other issues this
>> afternoon, so I'll probably return to this on Wednesday.  It's not critical
>> since there are workarounds.
>>
>> On Tue, Oct 6, 2015 at 9:41 AM, Todd Fiala  wrote:
>>
>>>
>>>
>>> On Mon, Oct 5, 2015 at 3:58 PM, Adrian McCarthy 
>>> wrote:
>>>
 Different tools are giving me different numbers.

 At the time of the error, Windbg says there are about 2000 open
 handles, most of them are Event handles, not File handles.  That's higher
 than I'd expect, but not really concerning.


>>> Ah, that's useful.  I am using events (python threading.Event).  These
>>> don't afford any clean up mechanisms on them, so I assume these go away
>>> when the Python objects that hold them go away.
>>>
>>>
 Process Explorer, however, shows ~20k open handles per Python process
 running dotest.exe.  It also says that about 2000 of those are the
 process's "own handles."  I'm researching to see what that means.  I
 suspect it means that the process has about ~18k handles to objects owned
 by another process and 2k of ones that it actually owns.

I found this Stack Overflow post, which suggests it may be an
 interaction with using Python subprocess in a loop and having those
 subprocesses work with files that are still open in the parent process, but
 I don't entirely understand the answer:


 http://stackoverflow.com/questions/16526783/python-subprocess-too-many-open-files


>>> Hmm I'll read through that.
>>>
>>>
 It might be a problem with Python subprocess that's been fixed in a
 newer version.  I'm going to try upgrading from Python 2.7.9 to 2.7.10 to
 see if that makes a difference.


>>> Okay, we're on 2.7.10 on latest OS X.  I *think* I'm using Python 2.7.6
>>> on Ubuntu 14.04.  Checking now... (yes, 2.7.6 on 14.04).  Ubuntu 15.10 beta
>>> 1 is using Python 2.7.10.
>>>
>>> Seems reasonable to check that out.  Let me know what you find out!
>>>
>>> -Todd
>>>
>>>
 On Mon, Oct 5, 2015 at 12:02 PM, Todd Fiala 
 wrote:

> It's possible.  However, I was monitoring actual open files during the
> course of the run (i.e. what the kernel thought was open for the master
> driver process, which is the only place that makes sense to see leaks
> accumulate) in both threading and threading-pool (on OS X), and I saw only
> the handful of file handles that I'd expect to  be open - pipes
> (stdout,stderr,stdin) from the main test runner to the inferior test
> runners, the shared libraries loaded as part of the test runner, and (in 
> my
> case, but probably not yours for the configuration), the tcp sockets for
> gathering the test events.  There was no growth, and I didn't see things
> hanging around longer than I'd expect.
>
> The SysInternals process viewer tool is great for this kind of thing -
> glad you're using it.  Once you find out which file handles are getting
> leaked and where they came from, we can probably figure out which part of
> the implementation is leaking it.  I don't *expect* it to be on our side
> given that it's not sh

Re: [lldb-dev] Too many open files

2015-10-07 Thread Adrian McCarthy via lldb-dev
Zach had the clue that found the problem.  Python on Windows uses the stdio
from the CRT, and the CRT's default limit for open file descriptors is 512.
When you have 40 logical cores and the parent process uses several FDs
communicating with each worker, you get real close to that number.

https://msdn.microsoft.com/en-us/library/6e3b887c.aspx

As a test, I hacked a _setmaxstdio(2048) into the main() of my local copy
of Python, and the problem went away.

I guess the general solution is to limit the number of processes on
Windows, which we already knew was a possible workaround.
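A back-of-the-envelope sketch of that workaround: with the CRT capped at 512 stdio descriptors, the worker count can be clamped to a descriptor budget instead of the raw core count. The 12-descriptors-per-worker and 32-reserved figures below are assumptions for illustration, not measured values:

```python
import multiprocessing

CRT_FD_LIMIT = 512    # default MSVCRT stdio limit (raisable via _setmaxstdio)
FDS_PER_WORKER = 12   # assumed cost: stdio pipes plus test-event sockets
RESERVED = 32         # assumed headroom for the parent's own files/sockets

def max_safe_workers(cores=None):
    """Clamp the test-runner worker count to stay under the CRT FD limit."""
    if cores is None:
        cores = multiprocessing.cpu_count()
    fd_budget = (CRT_FD_LIMIT - RESERVED) // FDS_PER_WORKER  # 480 // 12 = 40
    return min(cores, fd_budget)

print(max_safe_workers(cores=64))  # -> 40: 64 cores clamped to the FD budget
```

On a small machine the core count is the binding limit; on the 40-core box described above, the descriptor budget is what matters.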


On Wed, Oct 7, 2015 at 3:17 PM, Adrian McCarthy  wrote:

> Adding a printing destructor to threading.Event seems to aggravate timing
> problems, causing several tests to fail to make their inferiors and that
> seemingly keeps us below the open file limit.  That aside, the destructor
> did fire many hundreds of times, so there's not a general problem stopping
> all or even most of those to be cleaned up.
>
> The event objects that I'm seeing with the Sysinternals tools are likely
> Windows Events that Python creates to facilitate the interprocess
> communication.
>
> I'm looking at the ProcessDriver lifetimes now.
>
> On Tue, Oct 6, 2015 at 9:54 AM, Todd Fiala  wrote:
>
>> Okay.
>>
>> A promising avenue might be to look at how Windows cleans up the
>> threading.Event objects.  Chasing that thread might yield why the events
>> are not going away (assuming those are the events that are lingering on
>> your end).  One thing you could consider doing is patching in a replacement
>> destructor for the threading.Event and print something when it fires off,
>> verifying that they're really going away from the Python side.  If they're
>> not, perhaps there's a retain bloat issue where we're not getting rid of
>> some python objects due to some unintended references living beyond
>> expectations.
>>
>> The dosep.py call_with_timeout method drives the child process operation
>> chain.  That thing creates a ProcessDriver and collects the results from it
>> when done.  Everything within the ProcessDriver (including the event)
>> should be cleaned up by the time the call_with_timeout() call wraps up as
>> there shouldn't be any references outstanding.  It might also be worth you
>> adding a destructor to the ProcessDriver to make sure that's going away,
>> one per Python test inferior executed.
>>
>> On Tue, Oct 6, 2015 at 9:48 AM, Adrian McCarthy 
>> wrote:
>>
>>> Python 2.7.10 made no difference.  I'm dealing with other issues this
>>> afternoon, so I'll probably return to this on Wednesday.  It's not critical
>>> since there are workarounds.
>>>
>>> On Tue, Oct 6, 2015 at 9:41 AM, Todd Fiala  wrote:
>>>


 On Mon, Oct 5, 2015 at 3:58 PM, Adrian McCarthy 
 wrote:

> Different tools are giving me different numbers.
>
> At the time of the error, Windbg says there are about 2000 open
> handles, most of them are Event handles, not File handles.  That's higher
> than I'd expect, but not really concerning.
>
>
 Ah, that's useful.  I am using events (python threading.Event).  These
 don't afford any clean up mechanisms on them, so I assume these go away
 when the Python objects that hold them go away.


> Process Explorer, however, shows ~20k open handles per Python process
> running dotest.exe.  It also says that about 2000 of those are the
> process's "own handles."  I'm researching to see what that means.  I
> suspect it means that the process has about ~18k handles to objects owned
> by another process and 2k of ones that it actually owns.
>
> I found this Stack Overflow post, which suggests is may be an
> interaction with using Python subprocess in a loop and having those
> subprocesses work with files that are still open in the parent process, 
> but
> I don't entirely understand the answer:
>
>
> http://stackoverflow.com/questions/16526783/python-subprocess-too-many-open-files
>
>
 Hmm I'll read through that.


> It might be a problem with Python subprocess that's been fixed in a
> newer version.  I'm going to try upgrading from Python 2.7.9 to 2.7.10 to
> see if that makes a difference.
>
>
 Okay, we're on 2.7.10 on latest OS X.  I *think* I'm using Python 2.7.6
 on Ubuntu 14.04.  Checking now... (yes, 2.7.6 on 14.04).  Ubuntu 15.10 beta
 1 is using Python 2.7.10.

 Seems reasonable to check that out.  Let me know what you find out!

 -Todd


> On Mon, Oct 5, 2015 at 12:02 PM, Todd Fiala 
> wrote:
>
>> It's possible.  However, I was monitoring actual open files during
>> the course of the run (i.e. what the kernel thought was open for the 
>> master
>> driver process, which is the only place that makes sense to see leaks
>> accumulate) in both threading and threading-pool (on OS X), and I saw 
>> only
>> the

Re: [lldb-dev] Thread resumes with stale signal after executing InferiorCallMmap

2015-10-07 Thread Eugene Birukov via lldb-dev
Even on Linux, the call to InferiorCallMmap does not fail consistently. In many 
cases it survives. I just happened to have a 100% repro on this specific 
breakpoint in my specific problem; i.e., the burden of investigation is on me, 
since I cannot share my program. 
But I am not looking at this SIG_ILL yet. Whatever the problem is with mmap, 
the client must not carry this signal past expression evaluation. I.e., I 
believe that we could construct an arbitrary function that raises a signal, call 
it from expression evaluation, and then continue would fail. I suspect that this 
problem might be applicable to any POSIX platform.
As it turned out, my initial analysis was incorrect. m_resume_signal is 
calculated from StopInfo::m_value (now I wonder why we need two fields for 
that). And after the mmap call, m_stop_info on the thread is null. So, my 
current theory is that there is an event with SIG_ILL that is stuck in the 
broadcaster and is picked up and processed much later.
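Eugene's suggestion quoted below (adding a "resume signal" field to
ThreadStateCheckpoint) can be illustrated with a minimal sketch. The class and
field names mirror LLDB's, but this is a toy model, not the real
implementation:

```python
class Thread(object):
    """Toy stand-in for lldb_private::Thread with just the two
    fields under discussion."""
    def __init__(self):
        self.stop_info = 'breakpoint'   # why we last stopped
        self.resume_signal = 0          # signal to deliver on resume

class ThreadStateCheckpoint(object):
    """Sketch of the proposed change: snapshot the resume signal
    along with the stop info, so a SIG_ILL picked up during an
    InferiorCallMmap-style expression run cannot leak into the
    next continue."""
    def __init__(self, thread):
        self.stop_info = thread.stop_info
        self.resume_signal = thread.resume_signal

    def restore(self, thread):
        thread.stop_info = self.stop_info
        thread.resume_signal = self.resume_signal

# Expression evaluation: checkpoint, run (which may crash with a
# signal), then restore - the stale signal is discarded.
```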

> Subject: Re: [lldb-dev] Thread resumes with stale signal after executing 
> InferiorCallMmap
> From: jing...@apple.com
> Date: Wed, 7 Oct 2015 15:08:18 -0700
> CC: lldb-dev@lists.llvm.org
> To: eugen...@hotmail.com
> 
> Does it only happen for InferiorCallMmap, or does an expression evaluation 
> that crashes in general set a bad signal on resume?  I don't see this 
> behavior in either case on OS X, so it may be something in the Linux support. 
>  Be interesting to figure out why it behaves this way on Linux, so whatever 
> we do we're implementing it consistently.
> 
> Jim
> 
> 
> 
> > On Oct 7, 2015, at 12:03 PM, Eugene Birukov via lldb-dev 
> >  wrote:
> > 
> > Hi,
> >  
> > I am using LLDB 3.7.0 C++ API. My program stops at a certain breakpoint and 
> > if I call SBFrame::EvaluateExpression() there, when I let it go it 
> > terminates with SIG_ILL on an innocent thread. I dug up into this, and 
> > there seems to be two independent problems there, this mail is about the 
> > second one.
> >  
> > • EvaluateExpression() calls Process::CanJIT() which in turn executes 
> > mmap() on the inferior. This mmap gets SIG_ILL because execution starts at 
> > address which is 2 bytes before the very first mmap instruction. I am still 
> > looking why LLDB server decided to do that - I am pretty sure that the 
> > client asked to set the program counter to correct value.
> > • So, the thread execution terminates and the signal is recorded on 
> > Thread::m_resume_signal. This field is not cleared during 
> > Thread::RestoreThreadStateFromCheckpoint() and fires when I resume the 
> > program after breakpoint.
> >  
> > So, what would be the best way to deal with the situation? Should I add 
> > "resume signal" field to ThreadStateCheckpoint? Or would StopInfo be a 
> > better place for that? Or something else?
> >  
> > Thanks,
> > Eugene
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> 


Re: [lldb-dev] Thread resumes with stale signal after executing InferiorCallMmap

2015-10-07 Thread Jim Ingham via lldb-dev

> On Oct 7, 2015, at 4:06 PM, Eugene Birukov  wrote:
> 
> Even on Linux call to InferiorCallMmap does not fail consistently. In many 
> cases it survives. I just happened to have 100% repro on this specific 
> breakpoint in my specific problem. I.e. the burden of investigation is on me, 
> since I cannot share my program. 
> 
> But I am not looking at this SIG_ILL yet. Whatever the problem is with mmap - 
> the client must not carry this signal past expression evaluation. I.e. I 
> believe that we can construct any arbitrary function that causes signal, call 
> it from evaluate expression, and then continue would fail. I suspect that 
> this problem might be applicable to any POSIX platform.

It doesn't happen on OS X, though when it comes to signal handling in the 
debugger OS X is an odd fish...

> 
> As it turned out, my initial analysis was incorrect. m_resume_signal is 
> calculated from StopInfo::m_value (now I wonder why do we need two fields for 
> that?).

The signal that you stop with is not necessarily the one you are going to 
resume with.  For instance, if you use "process handle SIG_SOMESIG -p 0" to 
tell lldb not to propagate the signal, then the resume signal will be nothing, 
even though the stop signal is SIG_SOMESIG.

> And after mmap call, m_stop_info on the thread is null. So, my current theory 
> is that there is an event with SIG_ILL that is stuck in the broadcaster and 
> is picked up and processed much later.

When the expression evaluation completes, the StopInfo from the last "natural" 
stop should be put back in place in the thread.  After all, if you hit a 
breakpoint, run an expression, then ask why that thread stopped, you want to 
see "hit a breakpoint" not "ran a function call".  Sounds like that is failing 
somehow.

Jim


> 
> > Subject: Re: [lldb-dev] Thread resumes with stale signal after executing 
> > InferiorCallMmap
> > From: jing...@apple.com
> > Date: Wed, 7 Oct 2015 15:08:18 -0700
> > CC: lldb-dev@lists.llvm.org
> > To: eugen...@hotmail.com
> > 
> > Does it only happen for InferiorCallMmap, or does an expression evaluation 
> > that crashes in general set a bad signal on resume? I don't see this 
> > behavior in either case on OS X, so it may be something in the Linux 
> > support. Be interesting to figure out why it behaves this way on Linux, so 
> > whatever we do we're implementing it consistently.
> > 
> > Jim
> > 
> > 
> > 
> > > On Oct 7, 2015, at 12:03 PM, Eugene Birukov via lldb-dev 
> > >  wrote:
> > > 
> > > Hi,
> > >  
> > > I am using LLDB 3.7.0 C++ API. My program stops at a certain breakpoint 
> > > and if I call SBFrame::EvaluateExpression() there, when I let it go it 
> > > terminates with SIG_ILL on an innocent thread. I dug up into this, and 
> > > there seems to be two independent problems there, this mail is about the 
> > > second one.
> > >  
> > > • EvaluateExpression() calls Process::CanJIT() which in turn executes 
> > > mmap() on the inferior. This mmap gets SIG_ILL because execution starts 
> > > at address which is 2 bytes before the very first mmap instruction. I am 
> > > still looking why LLDB server decided to do that - I am pretty sure that 
> > > the client asked to set the program counter to correct value.
> > > • So, the thread execution terminates and the signal is recorded on 
> > > Thread::m_resume_signal. This field is not cleared during 
> > > Thread::RestoreThreadStateFromCheckpoint() and fires when I resume the 
> > > program after breakpoint.
> > >  
> > > So, what would be the best way to deal with the situation? Should I add 
> > > "resume signal" field to ThreadStateCheckpoint? Or would StopInfo be a 
> > > better place for that? Or something else?
> > >  
> > > Thanks,
> > > Eugene
> > 



Re: [lldb-dev] lldb fails to hit breakpoint when line maps to multiple addresses

2015-10-07 Thread via lldb-dev
On Mon, Oct 05, 2015 at 03:01:28PM -0700, Jim Ingham wrote:
> 
> Given that, the best lldb can do is use heuristics, and the best heuristic I 
> had was Block == basic block...  

Can you at least check for branches then?  (Yes, that would require 
disassembly).

> The motivation is that compilers in general and certainly clang in particular 
> love to put multiple line table entries in for a given line that are either 
> contiguous or interrupted by artificial book-keeping code.  So if we didn't 
> coalesce these line entries, when you set a breakpoint on such a line, 
> you'd have to hit continue some unpredictable number of times before you 
> actually get past that line.  You could figure out how many times by counting 
> the number of locations, but nobody could be expected to do that...  And if 
> you are chasing multiple hits of the breakpoint through code it was really a 
> pain since one "continue" didn't result in one pass through the 
> function containing the code. This happens very frequently and was a font of 
> bugs for lldb early on.

Understood - we get reports like this all the time, and I've also thought of
ways to work around it, but for each idea I had, I could always find a way to
break it.  So now I tell users it "works as designed", and that it's better
to hit a BP a couple times than none at all.

> Note, this doesn't affect the stepping algorithms, since when we step we 
> just look at where we land and if it has the same line number as we were 
> stepping through we keep going.  Of course, it also makes stepping over such 
> a line annoying for the same reason that it made continue annoying...

What about tail recursion?  You must at least check the stack ptr, no?  

> Note also that gdb plays the same trick with setting breakpoints on multiple 
> line table entries (or at least it did last time I looked.)  This wasn't 
> something new in lldb.

No, gdb gets this case right.

> Yours is the first report we've had where this causes trouble, whereas it 
> makes general stepping work much more nicely.

I'm quite surprised.  FWIW, the typical case we see this in is exception 
handling.

> So if you have some specific reason to need it either (a) if there's some 
> better heuristic you can come up with that detects when you should not 
> coalesce, that would be awesome 

Better: check for branches out of the block.

But this would still fail for cases where code branches around the initial block
and into the blocks belonging to that line further down.

; Example pseudo code: Set BP at line 10 in the following:
  br @lbl2  ; belongs to line 9
lbl1: insns_for_line10_part1; lldb sets BP here
lbl2: insns_for_line10_part2; BP at line 10 never hit
  if (false_cond) br @lbl1

So best would be to also check for labels that branch into the block, 
but that's unrealistic.

> or (b) if there's no way lldb can tell, you'll have to add an option.

Sounds like we'll have to go with an option then.

Thanks,
-Dawn


Re: [lldb-dev] lldb fails to hit breakpoint when line maps to multiple addresses

2015-10-07 Thread Jim Ingham via lldb-dev

> On Oct 7, 2015, at 4:39 PM, d...@burble.org wrote:
> 
> On Mon, Oct 05, 2015 at 03:01:28PM -0700, Jim Ingham wrote:
>> 
>> Given that, the best lldb can do is use heuristics, and the best heuristic I 
>> had was Block == basic block...  
> 
> Can you at least check for branches then?  (Yes, that would require 
> disassembly).

Breakpoint setting doesn't have to be blazingly fast; we look at disassembly 
and worse in other cases (e.g. resolver symbols actually have to be resolved - 
which involves a function call in the debuggee - to figure out the target of the 
resolver...)  So I'm not opposed to this in general.  But I wouldn't want (you 
;-)) to do this work if it isn't going to cover all the cases you care about, 
which it sounds from below like it wouldn't.  And you would have to be careful 
since things like calls shouldn't cause extra locations to be generated...

Another way to do this - which I thought about originally but rejected as too 
much delicate machinery for the desired effect - is to add the notion of 
"clusters" of locations to the breakpoint.  Instead of eliding all the segments 
with the same line number into one location, you'd make a location per segment 
but treat them as a cluster, where a hit on one location in the cluster would 
set a flag telling you to auto-continue the other locations in the cluster till 
"something happened to reset the cluster".  You'd have to figure out good 
heuristics for that "something happened".  You could probably get away with 
"frame changed" and "hit the location that I hit the first time I hit the 
cluster".  But I'd have to think a bit harder about this to assure myself this 
was good enough.  And you'd have to keep a side table of history for each 
breakpoint which you'd have to manage...  Nothing impossible, but it didn't 
seem worth the effort at the time.

Anyway, if you are sufficiently motivated to give this a try, it would be a 
more general solution to the problem without requiring user intervention - 
either having to press continue some undetermined number of times, or create 
the breakpoints with some special option.
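The cluster idea described above might look roughly like this. This is a toy
model, not LLDB code; the reset heuristic here is just "hit the location that
I hit the first time", with the frame-change reset left out:

```python
class LocationCluster(object):
    """All line-table segments coalesced from one source line form a
    cluster.  Stop on the first location hit in a pass; auto-continue
    the sibling locations until the cluster resets."""
    def __init__(self, location_ids):
        self.locations = set(location_ids)
        self.first_hit = None

    def should_stop(self, loc_id):
        assert loc_id in self.locations
        if self.first_hit is None:
            self.first_hit = loc_id   # first pass through the line: stop
            return True
        if loc_id == self.first_hit:
            return True               # looped back: a new pass begins
        return False                  # sibling segment of same pass: continue

# One cluster per source line; a real version would also need the
# "frame changed" reset and per-breakpoint history bookkeeping.
```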

> 
>> The motivation is that compilers in general and certainly clang in 
>> particular love to put multiple line table entries in for a given line that 
>> are either contiguous or interrupted by artificial book-keeping code.  So if 
>> we didn't coalesce these line entries, when you set a breakpoint on such a 
>> line, you'd have to hit continue some unpredictable number of times 
>> before you actually get past that line.  You could figure out how many times 
>> by counting the number of locations, but nobody could be expected to do 
>> that...  And if you are chasing multiple hits of the breakpoint through code 
>> it was really a pain since one "continue" didn't result in one pass 
>> through the function containing the code. This happens very frequently and 
>> was a font of bugs for lldb early on.
> 
> Understood - we get reports like this all the time, and I've also thought of
> ways to work around it, but for each idea I had, I could always find a way to
> break it.  So now I tell users it "works as designed", and that it's better
> to hit a BP a couple times than none at all.
> 
>> Note, this doesn't affect the stepping algorithms, since when we step we 
>> just look at where we land and if it has the same line number as we were 
>> stepping through we keep going.  Of course, it also makes stepping over such 
>> a line annoying for the same reason that it made continue annoying...
> 
> What about tail recursion?  You must at least check the stack ptr, no?

Of course, and not just for tail recursion: the current line might have called 
something that calls the current function that gets you back to the current 
line.  Stepping can't stop for that either. I wasn't giving a complete 
description of the stepping algorithm, just how it pertains to passing through 
blocks of code in the same function.

>  
> 
>> Note also that gdb plays the same trick with setting breakpoints on multiple 
>> line table entries (or at least it did last time I looked.)  This wasn't 
>> something new in lldb.
> 
> No, gdb gets this case right.

Interesting.  Maybe they've changed how they do the line coalescing, I haven't 
looked in years.

> 
>> Yours is the first report we've had where this causes trouble, whereas it 
>> makes general stepping work much more nicely.
> 
> I'm quite surprised.  FWIW, the typical case we see this in is exception 
> handling.

Maybe you have a different code generation model from clang (or swift?)

> 
>> So if you have some specific reason to need it either (a) if there's some 
>> better heuristic you can come up with that detects when you should not 
>> coalesce, that would be awesome 
> 
> Better: check for branches out of the block.
> 
> But this would still fail for cases where code branches around the initial 
> block
> and into the blocks belonging to that line further down.
> 

Re: [lldb-dev] How to debug LLDB server?

2015-10-07 Thread Bruce Mitchener via lldb-dev
In the LLDB project, you have 3 different defines:

LLDB_CONFIGURATION_DEBUG
LLDB_CONFIGURATION_RELEASE
LLDB_CONFIGURATION_BUILD_AND_INTEGRATION

I can easily arrange for these to be set for the various build types in CMake,
but I'd like to make sure we all agree on what should happen first:

CMAKE_BUILD_TYPE = Debug: Add LLDB_CONFIGURATION_DEBUG
CMAKE_BUILD_TYPE = RelWithDebinfo: Add LLDB_CONFIGURATION_RELEASE
CMAKE_BUILD_TYPE = Release: Add LLDB_CONFIGURATION_BUILD_AND_INTEGRATION

This seems right to me as there are some usages of
LLDB_CONFIGURATION_RELEASE that appear to be useful with debugging.

Does that seem to be correct?

 - Bruce
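The mapping proposed above could be sketched in CMake roughly as follows
(untested against the actual LLDB build; where in the CMakeLists.txt it
belongs is left open):

```cmake
# Map CMAKE_BUILD_TYPE onto LLDB's existing configuration defines.
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
  add_definitions(-DLLDB_CONFIGURATION_DEBUG)
elseif (CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo")
  add_definitions(-DLLDB_CONFIGURATION_RELEASE)
else()
  add_definitions(-DLLDB_CONFIGURATION_BUILD_AND_INTEGRATION)
endif()
```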


On Thu, Oct 8, 2015 at 2:44 AM, Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> We set this manually in the Xcode project for "Debug" and "DebugClang"
> build variants. The cmake should be able to do the same, but I am not sure
> if it is. Feel free to make it do so. I am not very good with cmake, so I
> won't be much help.
>
> Greg
>
> > On Oct 7, 2015, at 11:09 AM, Eugene Birukov 
> wrote:
> >
> > Thanks!
> >
> > A newbie question then: how to trigger LLDB_CONFIGURATION_DEBUG when I
> run cmake? I am sure that I built a debug version, but the packet timeout is
> still 1 for me.
> >
> > (gdb) p m_packet_timeout
> > $1 = 1
> >
> >
> > > Subject: Re: [lldb-dev] How to debug LLDB server?
> > > From: gclay...@apple.com
> > > Date: Wed, 7 Oct 2015 11:04:45 -0700
> > > CC: lldb-dev@lists.llvm.org
> > > To: eugen...@hotmail.com
> > >
> > > Most calls for lldb-server should use an instance variable
> GDBRemoteCommunication::m_packet_timeout which you could then modify. But
> this timeout you are talking about is the time that the expression can take
> when running. I would just bump these up temporarily while you are
> debugging to avoid the timeouts. Just don't check it in.
> > >
> > > So for GDB Remote packets, we already bump the timeout up in the
> GDBRemoteCommunication constructor:
> > >
> > > #ifdef LLDB_CONFIGURATION_DEBUG
> > > m_packet_timeout (1000),
> > > #else
> > > m_packet_timeout (1),
> > > #endif
> > >
> > >
> > > Anything else is probably expression timeouts and you will need to
> manually bump those up in order to debug, or you could do the same thing as
> the GDB Remote in InferiorCallPOSIX.cpp:
> > >
> > > #ifdef LLDB_CONFIGURATION_DEBUG
> > > options.SetTimeoutUsec(5000);
> > > #else
> > > options.SetTimeoutUsec(50);
> > > #endif
> > >
> > >
> > > > On Oct 7, 2015, at 10:33 AM, Eugene Birukov via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > > >
> > > > Hello,
> > > >
> > > > I am trying to see what is going inside LLDB server 3.7.0 but there
> are a lot of timeouts scattered everywhere. Say, InferiorCallPOSIX.cpp:74
> sets hard-coded timeout to 500,000us, etc. These timeouts fire if I spend
> any time on breakpoint inside server and make debugging experience
> miserable. Is there any way to turn them all off?
> > > >
> > > > BTW, I am using LLDB as a C++ API, not as standalone program, but I
> have debugger attached to it and can alter its memory state.
> > > >
> > > > Thanks,
> > > > Eugene
> > > >
> > >
>
>


[lldb-dev] How to set source line breakpoint using BreakpointCreateByLocation?

2015-10-07 Thread Jeffrey Tan via lldb-dev
Hi,

I am writing a Python script to set a source line breakpoint in ObjC on Mac
OS X, but 
self.debugger.GetSelectedTarget().BreakpointCreateByLocation("EATAnimatedView.m",
line) always fails. Any ideas?

Also, can I use a full path instead of the file basename? In lldb, I found "b
/Users/jeffreytan/fbsource/fbobjc/Apps/Internal/MPKEats/MPKEats/View/EATAnimatedView.m:21"
will fail to bind but "b EATAnimatedView.m:21" will succeed.

Traceback (most recent call last):
  File
"/Users/jeffreytan/fbsource/fbobjc/Tools/Nuclide/pkg/nuclide/debugger/lldb/scripts/chromedebugger.py",
line 69, in _generate_response
params=message.get('params', {}),
  File
"/Users/jeffreytan/fbsource/fbobjc/Tools/Nuclide/pkg/nuclide/debugger/lldb/scripts/handler.py",
line 42, in handle
return self._domains[domain_name].handle(method_name, params)
  File
"/Users/jeffreytan/fbsource/fbobjc/Tools/Nuclide/pkg/nuclide/debugger/lldb/scripts/handler.py",
line 106, in handle
return self._handlers[method](params)
  File
"/Users/jeffreytan/fbsource/fbobjc/Tools/Nuclide/pkg/nuclide/debugger/lldb/scripts/handler.py",
line 56, in _handler_wrapper
ret = func(self, params)
  File
"/Users/jeffreytan/fbsource/fbobjc/Tools/Nuclide/pkg/nuclide/debugger/lldb/scripts/debugger.py",
line 248, in setBreakpointByUrl
int(params['lineNumber']) + 1)
  File
"/Users/jeffreytan/fbsource/fbobjc/Tools/Nuclide/pkg/nuclide/debugger/lldb/scripts/debugger.py",
line 283, in _set_breakpoint_by_filespec
breakpoint =
self.debugger.GetSelectedTarget().BreakpointCreateByLocation(filespec, line)
  File
"/Applications/Xcode.app/Contents/Developer/../SharedFrameworks/LLDB.framework/Resources/Python/lldb/__init__.py",
line 8650, in BreakpointCreateByLocation
return _lldb.SBTarget_BreakpointCreateByLocation(self, *args)
NotImplementedError: Wrong number of arguments for overloaded function
'SBTarget_BreakpointCreateByLocation'.
  Possible C/C++ prototypes are:
BreakpointCreateByLocation(lldb::SBTarget *,char const *,uint32_t)
BreakpointCreateByLocation(lldb::SBTarget *,lldb::SBFileSpec const
&,uint32_t)
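The NotImplementedError from SWIG usually means the argument types did not
match any listed overload - for example, a line number arriving as a string
from JSON params rather than an int. A small defensive helper might look like
this (the function name is hypothetical; the basename fallback matches the
observed "b EATAnimatedView.m:21" behavior):

```python
import os

def make_breakpoint_args(source_path, line_number):
    """Coerce debugger-protocol params into the types the overload
    BreakpointCreateByLocation(char const *, uint32_t) accepts.
    Debug info frequently records only the file's basename, so a
    full path may fail to bind; fall back to the basename."""
    return os.path.basename(source_path), int(line_number)

# Hypothetical usage inside the Chrome-protocol handler:
#   filespec, line = make_breakpoint_args(params['url'],
#                                         int(params['lineNumber']) + 1)
#   bp = target.BreakpointCreateByLocation(filespec, line)
```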