[lldb-dev] Need your help in target.GetProcess().ReadMemory

2017-07-24 Thread Laghzaoui Mohammed via lldb-dev
Hello
 I would like to get the value at a given address, and I do this in lldb's
Python script interpreter:

addr = lldb.SBAddress("0x942604a2", target)
err = lldb.SBError()
size = 0x100
membuff = target.GetProcess().ReadMemory(addr, size, err)

when I run it I get this Error:

NotImplementedError: Wrong number of arguments for overloaded function
'new_SBAddress'.


How can I do it?
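
SBProcess.ReadMemory takes a plain integer load address (an lldb::addr_t)
rather than an SBAddress, and SBAddress itself is constructed from an integer
plus a target rather than from a string, which is what triggers the overload
error above. A minimal sketch of the integer-address form, assuming `target`
is a valid SBTarget (e.g. lldb.target inside the script interpreter) and that
the process is stopped with 0x942604a2 mapped and readable:

import lldb   # already imported inside lldb's script interpreter

addr = 0x942604a2                 # pass the load address as an int, not a string
size = 0x100
err = lldb.SBError()
process = target.GetProcess()
membuff = process.ReadMemory(addr, size, err)
if err.Success():
    print("read %d bytes" % len(membuff))
else:
    print(err)                    # e.g. address not mapped or process not stopped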


Many Thanks
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] LLDB tests

2017-07-24 Thread Steve Trotter via lldb-dev
Hi all,

I'm fairly new to LLVM and LLDB, I became interested in this project about
3 months back and I'm hoping to be able to contribute to improving LLDB in
time. I've been trying to get to grips with the code and have been looking
into the tests as a rough guide to how things work, however I have some
questions about the test suites in LLDB.

It seems to me that we essentially have tests run by the LIT runner from
LLVM core and tests run by an LLDB-specific Python script, `dotest.py`. I
notice that the test page for LLDB refers to the `dotest.py` tests run by
`ninja check-lldb`, but not to the LIT tests. I also notice that an email
titled "lldb-server tests" from Pavel Labath on 15th May 2017 suggests that
the long-term plan is to move purely to LIT-style testing. Is this correct,
or have I misunderstood? I did have a look in buildbot to see what tests are
being used, and I can only find the `dotest.py`-style tests; however, it's
possible I've misunderstood something here, as the
"lldb-x86_64-ubuntu-14.04-cmake" builder is not easy to make sense of, I'm afraid.

Also, there seems to be only one test for lldb-server in the LIT suite at
present. Is there a reason for this, possibly along the lines of still
waiting for the ability to run tests remotely using LIT, as discussed in
that email thread? I couldn't find an obvious answer as to whether a design
was agreed upon and/or the work completed; maybe it's still an open question.

Finally, I do see failures myself in both of these test suites with the
latest build. I tend to limit my build to the X86 target only, and I suspect
this may be related, or it may just be something odd with my build setup.
Obviously, in an ideal world these tests should always pass, but does anyone
else see similar failures? I assume they tend to pass for the core
developers, as it seems to be standard LLVM practice to ensure tests pass
for new pieces of work. I can send the output of the failing tests if that
would be useful.

Many thanks for your time,

Steve
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Remote debugging - unix socket and/or specific port

2017-07-24 Thread Mark Nelson via lldb-dev
Has there been any change in this since reported here :

http://lists.llvm.org/pipermail/lldb-dev/2016-June/010616.html

It is pretty clear that the remote-linux platform is trying to open
additional ports to talk to lldb-server, and if that server is in a
container we need to expose them. But which ports, how many, and how do we
specify them? All uncertain.

Looking at the source shows there are some (undocumented?) port commands in
`lldb-server platform`; I'm wondering if this is a solved problem that just
doesn't have an easy-to-search-for solution.
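
If those are the --listen and --min-gdbserver-port/--max-gdbserver-port
options on `lldb-server platform`, then pinning the extra gdb-server
connections to a fixed, exposed range might look like the sketch below. The
host name, port numbers, and exact server command line are illustrative and
worth double-checking against the lldb-server build in use:

# Assumed server side, inside the container (ports chosen arbitrarily):
#   lldb-server platform --server --listen '*:5555' \
#       --min-gdbserver-port 5556 --max-gdbserver-port 5560
# with ports 5555-5560 published out of the container.

import lldb

debugger = lldb.SBDebugger.Create()
interp = debugger.GetCommandInterpreter()
for cmd in ("platform select remote-linux",
            "platform connect connect://localhost:5555"):
    result = lldb.SBCommandReturnObject()
    interp.HandleCommand(cmd, result)
    print(result.GetOutput() if result.Succeeded() else result.GetError())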

BTW, I may be barking up the wrong tree. I am using lldb on the host and
lldb-server on the remote, so the gdb-server protocol shouldn't be in play,
at least I don't think so.

But the problem I see in this configuration sure looks to be one of ports
being firewalled.

>Hi Adrien,
>
>I think your diagnosis is correct here. LLDB does indeed create an
>additional connection to the gdb-server instance which is started by the
>platform instance when you start debugging. In case of android platforms we
>already include code to forward this port automatically, but there is no
>such thing for linux -- we just expect the server to be reachable.


--

Mark Nelson – ma...@ieee.org
 -
http://marknelson.us
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Trying to use socketpair for lldb-server fails

2017-07-24 Thread Ted Woodward via lldb-dev
This is big time overkill, but I wasn’t sure where the problem I was tracking 
down was:

 

“lldb all:linux all:gdb-remote all”

 

Ted

 

--

Qualcomm Innovation Center, Inc.

The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project

 

From: Demi Obenour [mailto:demioben...@gmail.com] 
Sent: Friday, July 21, 2017 8:54 PM
To: Ted Woodward ; lldb-dev@lists.llvm.org
Subject: Re: [lldb-dev] Trying to use socketpair for lldb-server fails

 

Sadly, that gives me nothing in the log file.  Also, 
ConnectionFileDescriptor::Connect already seems to handle this case.

 

Running strace on all child processes gives an “Operation not permitted” error
from setsid().  That seems like the culprit, which is strange.

 

Would you mind providing the value you used for LLDB_SERVER_LOG_CHANNELS?

 

Demi

 

On Fri, Jul 21, 2017 at 2:55 PM Ted Woodward <ted.woodw...@codeaurora.org> wrote:

The first thing I'd do is use the lldb logging mechanism. lldb-server closes
its own stdout and stderr, because nobody is interested in output from the
server, just from the target -- except when you're debugging the server itself,
so there is an easy way to turn on logging.

Set the following environment variables:
LLDB_DEBUGSERVER_LOG_FILE - this contains the path to the file the logs will
be written to
LLDB_SERVER_LOG_CHANNELS - this contains the channels and categories to turn
logging on for. The format is "channel category:channel category...". If you
want more than 1 category for a channel, I think "channel cat1 cat2..."
works. This is not spelled out very clearly, unfortunately.
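
As a concrete illustration of those two variables (the log-file path, the
target program, and the way lldb-server is launched below are placeholders;
the channel string is the one Ted supplies at the top of this thread):

import os
import subprocess

env = dict(os.environ)
env["LLDB_DEBUGSERVER_LOG_FILE"] = "/tmp/lldb-server.log"   # placeholder path
env["LLDB_SERVER_LOG_CHANNELS"] = "lldb all:linux all:gdb-remote all"

# Launch lldb-server with logging enabled; the gdbserver invocation is only an
# example target for the sketch, not part of the logging setup itself.
subprocess.call(["lldb-server", "gdbserver", "localhost:0", "--", "/bin/ls"],
                env=env)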


Quickly glancing at the code, it looks like you need to implement a
socketpair connection, and handling of the fd:// connection URL, starting in
ConnectionFileDescriptor::Connect. The log for this would be "lldb
connection".

Ted

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

> -----Original Message-----
> From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of Demi
> Obenour via lldb-dev
> Sent: Wednesday, July 19, 2017 7:44 PM
> To: lldb-dev@lists.llvm.org
> Subject: [lldb-dev] Trying to use socketpair for lldb-server fails
>
> To avoid a local privilege escalation, I am trying to patch LLDB not to
> use a TCP socket for local communication.
>
> The attached patch failed.  Would anyone be able to provide suggestions
> for how to debug the problem?
>
> Sincerely,
>
> Demi

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB tests

2017-07-24 Thread Sean Callanan via lldb-dev

Steve,

since you asked about failures, here are some public bots you can look 
at to get a general sense of how we are doing:


 * http://lab.llvm.org:8011/builders [various platforms]
 * http://lab.llvm.org:8080/green/view/LLDB/job/lldb_build_test/ [OS X]
 * https://ci.swift.org/view/All/job/oss-lldb-incremental-osx/ [OS X]
 * https://ci.swift.org/view/All/job/oss-lldb-incremental-linux-ubuntu-16_10/
   [Linux]

There are many more bots, as you'll discover browsing around, but these 
should give you a good idea of the health of our testsuite at any given 
time.


Sean

On 7/24/17 3:03 AM, Steve Trotter via lldb-dev wrote:

Hi all,

I'm fairly new to LLVM and LLDB, I became interested in this project 
about 3 months back and I'm hoping to be able to contribute to 
improving LLDB in time. I've been trying to get to grips with the code 
and have been looking into the tests as a rough guide to how things 
work, however I have some questions about the test suites in LLDB.


It seems to me that we essentially have tests run by the LIT runner
from LLVM core and tests run by an LLDB-specific Python script,
`dotest.py`. I notice that the test page for LLDB refers to the
`dotest.py` tests run by `ninja check-lldb`, but not to the LIT tests.
I also notice that an email titled "lldb-server tests" from Pavel
Labath on 15th May 2017 suggests that the long-term plan is to move
purely to LIT-style testing. Is this correct or have I
misunderstood? I did have a look in buildbot to see what tests are
being used and I can only find the `dotest.py`-style tests, however
it's possible I've misunderstood something here; the
"lldb-x86_64-ubuntu-14.04-cmake" builder is not easy to make sense of, I'm afraid.


Also there seems only to be one test for lldb-server in the LIT suite 
at present. Is there a reason for this at present, possibly along the 
lines of we're still waiting for the ability to run tests remotely 
using LIT as per this email thread? I couldn't find an obvious answer 
as to whether a design was agreed upon for this and/or the work 
completed, maybe it's an ongoing question still.


Finally, I do see failures myself in both of these tests from the 
latest build. I do tend to limit it to compiling only for X86 target 
and I suspect this may be related, or possibly just something odd with 
my build system anyway. Obviously in an ideal world these tests should 
always pass but does anyone else have similar problems? I assume they 
tend to pass for the core developers as it seems to be fairly LLVM 
centric to ensure passing tests for new bits of work. I can send the 
outputs of the failing tests if it's thought useful.


Many thanks for your time,

Steve




___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Bug 33875] TestWithModuleDebugging fails since llvm r308708

2017-07-24 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=33875

Adrian Prantl  changed:

     What        |Removed   |Added
 ----------------+----------+----------
     Resolution  |---       |FIXED
     Status      |ASSIGNED  |RESOLVED

--- Comment #4 from Adrian Prantl  ---
Test re-enabled and passing in LLDB r308905.

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB tests

2017-07-24 Thread Jim Ingham via lldb-dev

> On Jul 24, 2017, at 3:03 AM, Steve Trotter via lldb-dev 
>  wrote:
> 
> Hi all,
> 
> I'm fairly new to LLVM and LLDB, I became interested in this project about 3 
> months back and I'm hoping to be able to contribute to improving LLDB in 
> time. I've been trying to get to grips with the code and have been looking 
> into the tests as a rough guide to how things work, however I have some 
> questions about the test suites in LLDB.

Welcome!

> 
> It seems to me that we essentially have tests run by the LIT runner from LLVM 
> core and tests run by an LLDB-specific Python script, `dotest.py`. I notice 
> that the test page for LLDB refers to the `dotest.py` tests run by 
> `ninja check-lldb`, but not to the LIT tests. I also notice that an email titled 
> "lldb-server tests" from Pavel Labath on 15th May 2017 suggests that the 
> long-term plan is to move purely to LIT-style testing. Is this correct 
> or have I misunderstood?

The discussion in that thread was about tests for lldb-server.

There has been some discussion about using the LIT framework rather than the 
current unittest-based runner for actually running the lldb test suite tests -- 
replacing the runner but using the same test code (all the test .py files).  
The current test format has the advantage that it exercises lldb's exported 
API set extensively.  There are a number of clients of this API (in fact, given 
that it's what Xcode uses, the vast majority of lldb users drive lldb through 
the SB APIs), and it is a core part of lldb for users as well, so the more 
testing of it we get the better.  It is also a powerful API for driving lldb, 
and writing tests with it is a good way to see whether you have all the 
affordances you need in the API -- plus you get the results in a natural, 
structured form that is easy to validate.  So this style of test is going to 
stay around whatever else gets added to the lldb testing efforts.
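
For anyone new to the suite, the rough shape of such a test is sketched below;
the class name, the symbol being broken on, and the helper calls are
illustrative, and the exact helpers available in lldbsuite have shifted over
time:

import os

import lldb
from lldbsuite.test.lldbtest import TestBase


class ExampleSBAPITestCase(TestBase):

    mydir = TestBase.compute_mydir(__file__)

    def test_break_at_main(self):
        self.build()                                # build the inferior from the test's Makefile
        exe = os.path.join(os.getcwd(), "a.out")
        target = self.dbg.CreateTarget(exe)         # self.dbg is the SBDebugger for this test
        self.assertTrue(target.IsValid())
        bp = target.BreakpointCreateByName("main")
        self.assertTrue(bp.GetNumLocations() > 0)
        process = target.LaunchSimple(None, None, os.getcwd())
        self.assertEqual(process.GetState(), lldb.eStateStopped)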

We are also looking to see where in lldb it would make sense to add more 
testing of restricted components of lldb, either using the existing gtest 
framework or coming up with some other framework if that is not sufficient.

> I did have a look in buildbot to see what tests are being used and I can only 
> find the `dotest.py` style tests, however it's possible I've misunderstood 
> something here, the "lldb-x86_64-ubuntu-14.04-cmake" is not easy to make 
> sense of I'm afraid.

I think the bots are supposed to run the googletests. For instance, in the 
output here:

http://lab.llvm.org:8080/green/view/LLDB/job/lldb_coverage_xcode/141/consoleFull#-15141855949844eead-46b0-4a97-ae91-923a3407a4e3

If you scan down you'll see:

+ /Users/buildslave/jenkins/workspace/lldb_coverage_xcode/build/lldb/build/Debug/lldb-gtest
    --gtest_output=xml:/Users/buildslave/jenkins/workspace/lldb_coverage_xcode/build/lldb/build/gtest-results.xml
[==] Running 279 tests from 26 test cases.
[--] Global test environment set-up.
[--] 9 tests from GoParserTest
[ RUN  ] GoParserTest.ParseBasicLiterals
...

Those are the gtests running.

> 
> Also there seems only to be one test for lldb-server in the LIT suite at 
> present. Is there a reason for this at present, possibly along the lines of 
> we're still waiting for the ability to run tests remotely using LIT as per 
> this email thread? I couldn't find an obvious answer as to whether a design 
> was agreed upon for this and/or the work completed, maybe it's an ongoing 
> question still.

I haven't looked into the lldb-server tests much, Pavel would be better for 
that question.

> 
> Finally, I do see failures myself in both of these tests from the latest 
> build. I do tend to limit it to compiling only for X86 target and I suspect 
> this may be related, or possibly just something odd with my build system 
> anyway. Obviously in an ideal world these tests should always pass but does 
> anyone else have similar problems? I assume they tend to pass for the core 
> developers as it seems to be fairly LLVM centric to ensure passing tests for 
> new bits of work. I can send the outputs of the failing tests if it's thought 
> useful.
> 

Since we depend on lots of different moving pieces it is not uncommon for 
failures to come and go.  The clang & llvm folks don't gate their changes on a 
clean lldb test run, so this often needs to get cleaned up after the fact.  For 
instance, some recent change to how DWARF was emitted started causing failures 
in TestWithModuleDebugging.py.  Adrian's in the process of fixing that.  
There's also a test in the MI tool (TestMiVar.py) that has failed on and off 
for a while.  IIUC Sean is going to x-fail that one because it hasn't gotten 
any attention from the code owners of that area in a while.  Those are the only 
tests that I've seen failing recently.  If you are seeing other tests fail, 
please file a PR with the llvm bugzilla (bugs.llvm.org) and we'll take a look.

Jim





> Many thanks for your time,
> 
>