From my phone. Apologies for brevity and typos.

On 2011/08/20, at 9:03, Rasmus Lerdorf <ras...@lerdorf.com> wrote:


> The secondary problem is that we are not doing a good job running our
> tests prior to releases. I think this is mostly because we have way too
> many tests that fail, and one more or less failing test gets lost in
> the noise.

This was a major problem when Drupal added automated testing. Our
solution was to get to a 100% pass rate through a combination of fixing
bugs and removing failing tests (moving them to patches attached to
bug reports in the issue queue, or occasionally commenting out
assertions). This means we can tell instantly when a committed change
introduces a regression in anything with test coverage; and since we
test patches in the queue, regressions are usually caught before
commit anyway.

  Tests that fail stay in the queue until they're committed along with
the accompanying bug fix. It's not ideal, but it was impossible to keep
track of them any other way.

Nat


> -Rasmus

-- 
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
