I guess I don't see how having a test dive into lldb-test and do a bunch of 
opaque work that I can't really annotate makes for an easier debugging 
scenario than a test where I can trivially insert code to query the state of the 
test as it goes along.  In the current testsuite, the progress of the test 
moves pretty clearly through the stages of getting the situation up to the 
point that you can test the thing you want to test.  That makes it, to my mind, 
very easy to understand and debug, and you end up dealing with the core pieces 
of lldb functionality, not whatever odd bits of business the lldb command line 
or the lldb-test utility do.  Moreover, you can pretty much type the test 
script of a dotest test directly into the script interpreter, so you can also 
interactively recreate the test patterns.  I find this makes it very convenient 
to debug using these tests.  You of course have to learn the SB API to some 
extent, but I can't see why that's a gating factor if you're going to take the 
time to do work like port lldb to a new platform.

The wider LLVM community tests a very different kind of tool than lldb, which 
leaves me less moved by the argumentum ad verecundiam than I might otherwise be.

We can have another centithread and discuss this again.  I'm not so sure we'll 
convince one another, however.

Jim


> On Jan 29, 2018, at 5:56 PM, Zachary Turner <ztur...@google.com> wrote:
> 
> Also, I can think of at least 3 different companies/people who are investing 
> in LLDB for their downstream needs (who haven't publicly announced this, so 
> this isn't widely known), which involves bringing LLDB up on currently 
> unsupported platforms.  It's easy to lose sight of what that entails when 
> you've had a supported platform for 10+ years, but suffice it to say that the 
> less a test does, the better.  For these people, when a test fails, you want 
> as close to an absolute guarantee as possible that the failure points 
> immediately to the underlying cause.  This drastically reduces 
> the amount of work people have to do.  
> 
> We can have another bi-monthly centithread about this if you want, but at the 
> end of the day if we want the test situation to improve, this is the way to 
> go and I believe there's pretty wide consensus in the larger LLVM community 
> about this.
> 
> On Mon, Jan 29, 2018 at 5:51 PM Zachary Turner <ztur...@google.com> wrote:
> We’ve had many instances of flakiness in non pexpect tests (on all 
> platforms). There’s no obvious pattern to when a test will be flaky. Whether 
> those are due to dotest or liblldb is an open question, but one good way of 
> answering those types of questions is to replace one source of 
> unknown-flakiness with a source of known-not-flakiness and seeing if the 
> flakiness goes away.
> 
> The new-and-not-tested code you’re referring to would be about 5 lines of C++ 
> that also directly calls the API, just like your dotest example. So that 
> aspect doesn’t feel like a convincing argument. 
> On Mon, Jan 29, 2018 at 5:28 PM Jim Ingham via Phabricator 
> <revi...@reviews.llvm.org> wrote:
> jingham added a comment.
> 
> lldb testcases are known not to be flaky if they don't use pexpect, which 
> these wouldn't.  The setup machinery for running a dotest-based test is 
> pretty well tested at this point.
> 
> And the lldb-test test would not just magically come into being by writing 
> the lit-form text you suggested.  You will have to write an lldb-test function 
> that marshals the input (both --complete-string and a cursor 
> position, which you'll also need to test completion inside a line).  That's new and untested code.  
> Whereas the dotest test relies on the API it is directly testing, and trusts 
> that the basic machinery of dotest is going to continue to function.
> 
> 
> https://reviews.llvm.org/D42656
> 
> 
> 

_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits