Hi Tamas,
I think you grabbed stats on failing tests for me in the past. Can you dig up
the failure rate for TestRaise.py's test_restart_bug() variants on Ubuntu
14.04 x86_64? I'd like to mark it as flaky on Linux, since it is passing
most of the time over here. But I want to see if that's valid ac
Hmm, the flakey behavior may be specific to dwo. When I test it locally
marked unconditionally flakey on Linux, the dwarf variants still fail; all
the ones I see succeed are dwo. I wouldn't expect a difference there, but
that seems to be the case.
So, the request still stands but I won't be surprised if we find that dwo
Nope, no good either when I limit the flakey marking to DWO.
So perhaps I don't understand how the flakey marking works. I thought it
meant:
* run the test.
* If it passes, it goes as a successful test. Then we're done.
* run the test again.
* If it passes, then we're done and mark it a successful test.
Thanks, Tamas.
On Mon, Oct 19, 2015 at 4:30 AM, Tamas Berghammer
wrote:
> The expected flakey marking works a bit differently than you described:
> * Run the tests
> * If it passes, it goes as a successful test and we are done
> * Run the test again
> * If it passes the 2nd time, then record it as
Okay. I think for the time being, the XFAIL makes sense. Per my previous
email, though, I think we should move away from unexpected success (XPASS)
being a "sometimes meaningful, sometimes meaningless" signal. For almost
all cases, an unexpected success is an actionable signal. I don't want it
Hi all,
I'd like unexpected successes (i.e. tests marked as unexpected failure that
in fact pass) to retain the actionable meaning that something is wrong.
The wrong part is that either (1) the test now passes consistently and the
author of the fix just missed updating the test definition (or perh
> I'd like unexpected successes (i.e. tests marked as unexpected failure
that in fact pass)
argh, that should have been "(i.e. tests marked as *expected* failure that
in fact pass)"
On Mon, Oct 19, 2015 at 12:50 PM, Todd Fiala wrote:
> Hi all,
>
> I'd like unexpected successes (i.e. tests marke
I think the older Ubuntus and the RHEL 7 line both still have a 2.7-based
python. I am not aware of any system on the Linux/OS X side where we are
seeing Python 2.6 systems anymore.
Can't speak to the BSDs.
My guess would be we don't need to worry about python < 2.7.
-Todd
On Mon, Oct 19, 2015
…override the categorization for the TestCase getCategories() mechanism.
-Todd
On Mon, Oct 19, 2015 at 1:03 PM, Zachary Turner wrote:
>
>
> On Mon, Oct 19, 2015 at 12:50 PM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hi all,
>>
>> I'd like unexpected successes (i.e. tests marked as unexpected failure
>> that in fact pass)
>> override the categorization for the TestCase getCategories() mechanism.
>>
>> -Todd
>>
>> On Mon, Oct 19, 2015 at 1:03 PM, Zachary Turner
>> wrote:
>>
>>>
>>>
>>> On Mon, Oct 19, 2015 at 12:50 PM Todd Fiala via lldb-dev <
>>> our only mechanism to add categories (1) specify a dot-file to the
>>> directory to have everything in it get tagged with a category, or (2)
>>> override the categorization for the TestCase getCategories() mechanism.
>>>
>>> -Todd
>>>
>>
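A rough sketch of the second mechanism, the getCategories() override. The
TestBase stand-in and the category names here are illustrative; lldb's real
base class lives in the test suite and does much more:

```python
# Illustrative stand-in for lldb's TestBase.
class TestBase:
    def getCategories(self):
        return ["default"]

class DwoVariantTest(TestBase):
    def getCategories(self):
        # Overriding getCategories() is one of the two tagging mechanisms
        # mentioned above; the other is a dot-file in the test directory
        # that tags everything under it.
        return super().getCategories() + ["flakey"]

print(DwoVariantTest().getCategories())
```

The test runner can then include or exclude tests by category without the
test files themselves knowing how the filtering is configured.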
' category. We
>>>>> won't do anything different with the category by default, so everyone will
>>>>> still get flakey tests running the same manner they do now. However, on
>>>>> our test runners, we will be disabling the category entirely using
Hi Ying,
Do our dotest.py lldb test results go through that lit test parser system?
I see XPASS happen frequently (and it is in fact my whole reason for
starting a thread on getting rid of flakey tests, or making them run enough
times that their output can be a useful signal rather than useless). A
I'm in favor of (b). The less user-required setup to do the right thing on
a test suite, the better IMHO. Those actively trying to make sure one or
another c++ library is getting tested will be looking for the output to
validate which std c++ lib(s) ran.
-Todd
On Wed, Oct 21, 2015 at 3:47 AM, P
Oh haha okay. :-)
Thanks for explaining, Ying!
-Todd
On Wed, Oct 21, 2015 at 10:01 AM, Ying Chen wrote:
> Yes, the output of dotest.py goes through the LitTestCommand parser.
> The parser is matching for "XPASS", but dotest output is using "UNEXPECTED
> SUCCESS". :)
>
> Thanks,
> Ying
>
> On Tue,
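The mismatch Ying describes is easy to demonstrate with a toy version of
the status-line matching. The regex below is illustrative; the real
LitTestCommand parser in the buildbot code is more involved, but it likewise
looks for lit-style tokens such as XPASS:

```python
import re

# Toy lit-style status-line matcher (illustrative pattern).
lit_status = re.compile(r'^(PASS|FAIL|XPASS|XFAIL|UNRESOLVED): (.*)$')

lit_line = "XPASS: TestFoo.test_bar"
dotest_line = "UNEXPECTED SUCCESS: TestFoo.test_bar (TestFoo.py)"

# dotest's "UNEXPECTED SUCCESS" never matches the "XPASS" token, so
# unexpected successes silently fail to show up as XPASS on the bots.
print(bool(lit_status.match(lit_line)))
print(bool(lit_status.match(dotest_line)))
```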
I'd be okay with that.
The unittest2 stuff looks like it was a vestige of being incorporated
before unittest2 was stock (unittest) on Python 2.6/2.7. Everyone should
have a unittest included that is effectively what we use as unittest2.
-Todd
On Thu, Oct 22, 2015 at 10:05 AM, Zachary Turner via
We could also then remove unittest2 from inclusion in the lldb repo.
On Thu, Oct 22, 2015 at 11:28 AM, Todd Fiala wrote:
> I'd be okay with that.
>
> The unittest2 stuff looks like it was a vestige of being incorporated
> before unittest2 was stock (unittest) on Python 2.6/2.7. Everyone should
(I was eventually going to do this at some point after I verified it was
indeed true). It should just be called unittest in a stock distribution.
On Thu, Oct 22, 2015 at 11:29 AM, Todd Fiala wrote:
> We could also then remove unittest2 from inclusion in the lldb repo.
>
>
> On Thu, Oct 22, 2015
(And side note: if you're pushing a "lambda: self.foo()" with no arguments,
the lambda is unneeded and you can just push "self.foo" --- that cleanup
hook pushed on most tests at the end of the file is a perfect example of an
unneeded level of lambda indirection).
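A minimal illustration of that point, using a plain list as a stand-in for
the test's cleanup-hook stack (the hook names here are made up):

```python
# Stand-in for a cleanup-hook stack like the one pushed at test teardown.
teardown_hooks = []

def restore_settings():
    print("settings restored")

# Redundant: a zero-argument lambda wrapping a zero-argument call.
teardown_hooks.append(lambda: restore_settings())

# Equivalent and simpler: the function object is already a callable.
teardown_hooks.append(restore_settings)

for hook in teardown_hooks:
    hook()  # both entries behave identically when invoked
```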
On Wed, Oct 21, 2015 at 12:04 PM,
Yeah, I think the biggest thing I wanted to check there was that there
wasn't any behavior in that cut of unittest2 that didn't make it into the
revamped unittest brought into the Python distributions when they upgraded
it. Then it's just a big rename exercise on replacing un
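One quick spot check that stock unittest covers the headline unittest2
additions (a sketch only, not the full audit the rename would need):

```python
import unittest

class Unittest2Coverage(unittest.TestCase):
    """Exercises APIs that were headline unittest2 additions and are
    stock in Python 2.7+ / 3.x unittest."""

    @unittest.skipUnless(True, "skip decorators were a unittest2 addition")
    def test_modern_asserts(self):
        self.assertIn(2, [1, 2, 3])
        self.assertIsNone(None)
        with self.assertRaises(ZeroDivisionError):
            1 / 0

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Unittest2Coverage)
result = unittest.TestResult()
suite.run(result)
```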
Okay, will do.
On Thu, Oct 22, 2015 at 12:56 PM, Zachary Turner wrote:
> This is going in right now. As it is a fairly large change, it wouldn't
> surprise me if someone encounters an issue. I tested this everywhere I can
> and it seems fine, so please let me know if anyone encounters anything
Hi all,
What's the proper command line invocation to run our sources through to get
proper LLVM formatting and other desired fix-ups?
Thanks!
--
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lld
Hi all,
I've taken a stab at getting the gtests in lldb/unittests to compile and
run on Xcode. I just checked this in. There's a new scheme called
lldb-gtest. If you run that in Xcode, it should build the DebugClang
variant of lldb and link against the gtest libraries that come with clang.
The
Okay this broke the cmake Linux build. I'm fixing that now...
On Sun, Oct 25, 2015 at 2:49 PM, Todd Fiala wrote:
> Hi all,
>
> I've taken a stab at getting the gtests in lldb/unittests to compile and
> run on Xcode. I just checked this in. There's a new scheme called
> lldb-gtest. If you run
This should be fixed with:
$ svn commit
unittests/Editline/CMakeLists.txt
Transmitting file data .
Committed revision 251264.
On Sun, Oct 25, 2015 at 2:55 PM, Todd Fiala wrote:
> Okay this broke the cmake Linux build. I'm fixing that now...
>
> On Sun, Oct 25, 2015 at 2:49 PM, Todd Fiala wrote
…script
> `llvm/tools/clang/tools/clang-format/git-clang-format`. You can run that
> script manually with --help to get more information about how to use it
> without git. And there may also be a way to integrate it into svn so you
> can write something like `svn clang-format`
>
Yes, they do.
On Mon, Oct 26, 2015 at 9:34 AM, Zachary Turner wrote:
> Nice! Out of curiosity, do all the unittests pass? (I expect they do, as
> they do everywhere else, just wondering)
>
> On Sun, Oct 25, 2015 at 2:57 PM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org
Hi all,
I've made a few changes to the Apple OS X buildbot today. These are mostly
minor, but the key is to make sure we all know when it's broken.
First off, it now builds the lldb-tool scheme using the Debug
configuration. (Previously it was building a BuildAndIntegration
configuration, which