On March 30, 2012 at 5:55, Stas Malyshev
<smalys...@sugarcrm.com> wrote:
> Hi!
>
>> The difference started with the 5.3.9 release, when we started to pay
>> *much more* attention to tests.
>> You can now clearly see the failing tests; it's not that huge a list,
>> so it's a big difference.
>
> Yes, and removing XFAILs would kill that advantage.
>
>> The main point I'm trying to make is that it's comfortable to live
>> with XFAILs. That's why they live on for years. They don't create any
>> pressure, and we don't have a release rule of "no failing tests", so they go
>
> You talk about "making pressure", but when the date failures were
> sitting in the tests as FAILs, they didn't create any "pressure" and
> nobody was fixing them. And if we had a rule of "no failing tests",
> we'd have had no releases for years now, because nobody is fixing those
> tests and the bugs behind them. You want to fix them? Go ahead, no
> problem. But if there's nobody to fix them - what's the use of keeping
> them as FAILs and preventing us from seeing the issues that *are* going
> to be fixed?

They didn't create any pressure because they were not regularly exposed
on the list and IRC.
What I think should be done:
1) Send *daily* notifications about failing tests to this mailing list
and IRC. This will create pressure and ensure that nobody forgets we
still have problems that need to be solved.
BTW, it's really strange that we still do not have *any* notifications
about failed builds, while the phpdoc project does. I don't think those
guys are smarter than us :)
2) Make an explicit distinction between release-blocking tests (let's
call them acceptance tests) and ordinary functional/unit tests. For
example, we create an "acceptance" folder under each "tests/" folder
and put there all the tests that should never break. If any of those
tests fail, a release cannot be made.
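Proposal 2 could boil down to a small release-gate script: run only the acceptance suites and refuse to release if anything fails. The sketch below is an assumption, not an existing tool; the `acceptance` directory layout is the convention proposed above, the real run-tests.php invocation is left commented out, and its output is simulated here so the gating logic is visible.

```shell
# Hypothetical release gate for the proposed "acceptance" folders (sketch only).
# In a real checkout this would be something like:
#   php run-tests.php $(find . -type d -name acceptance) | tee acceptance.log
# Here we simulate run-tests.php-style output for illustration.
cat > acceptance.log <<'EOF'
PASS Basic array test [tests/acceptance/array_basic.phpt]
FAIL Date DST transition [ext/date/tests/acceptance/dst.phpt]
EOF

# Block the release if any acceptance test reported FAIL.
if grep -q '^FAIL' acceptance.log; then
    echo "RELEASE BLOCKED: acceptance tests are failing"
else
    echo "release gate passed"
fi
```

The same check could run from the daily-notification bot in point 1, so the gate and the nagging share one source of truth.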


>> We have 3 failing tests and 35 XFAILs; I don't see tons of fails
>> here. Sorry if I sound like a broken record, but if we need to fix
>> those, we need to make more noise about it.
>
> OK, you made noise. Let's see how many of those 35 XFAILs get fixed
> in, let's say, a month. How many would you predict?

That's not noise. See point 1 above. If we don't set up *constant*
notifications, people won't feel any pressure.
Of course, it's easy to tune a spam filter in your mail client or ban a
bot on IRC; that's why I'm asking for agreement here, to make it part
of the development process.
Guys, I respect all of you very much. I can feed my family because of
your work. I'm really trying to help. Please don't take it personally,
and let's try to find a solution together. I assume we at least agree
that we have a problem here.

>> XFAIL means a test that is expected to fail; if it fails, then it's
>> OK. That's how I understand it. A failing test should not be OK, it's
>> an error. If you get used to not paying attention to failing tests,
>> you're in a dangerous situation. It's like the fairy tale about the
>> boy who cried
>
> Nobody *is* paying attention already, so it's not an "if", it's a fact.
> A sad fact, but still a fact. And it's not a result of the XFAILs,
> because this situation predates XFAILs and existed before we moved
> such tests to XFAILs.

See above.
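For context, an XFAIL test in the .phpt format used by run-tests.php looks roughly like this; the bug number, reason text, and expected value below are made up. When the --XFAIL-- section is present, run-tests.php reports a failure of the test as XFAIL rather than FAIL, which is exactly why it stops generating pressure.

```php
--TEST--
Bug #12345 (hypothetical date() failure around a DST transition)
--XFAIL--
Known failure; expected to fail until bug #12345 is fixed
--FILE--
<?php
// Hypothetical reproduction: print the hour for a time near a DST switch.
echo date('H', mktime(2, 0, 0, 3, 25, 2012)), "\n";
?>
--EXPECT--
02
```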
>
>> As for INCOMPLETE, it seems it doesn't quite fit here; it's for tests
>> that are not fully written or finished.
>
> If your test is not finished, do it in a fork. By the time the feature
> gets merged into main branches, it should be complete enough to run the
> tests.

Yes, that's a sane way too.

-- 
Regards,
Shein Alexey

--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
