On 7/17/06, Torsten Schoenfeld <[EMAIL PROTECTED]> wrote:
> On Mon, 2006-07-17 at 11:39 +0200, demerphq wrote:
> > Test names shouldn't be optional.
>
> I disagree. I would find it cumbersome to have to come up with a
> description for each and every test.
I don't think it's that cumbersome at all. Even stuff like

  "Fnorble 1"
  "Fnorble 2"

is sufficient.
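
Something like this is enough (a minimal sketch; fnorble() is just a
placeholder for whatever is actually under test):

  use Test::More tests => 2;

  # fnorble() stands in for the code being tested
  sub fnorble { $_[0] > 0 }

  ok( fnorble(1), 'Fnorble 1' );
  ok( fnorble(2), 'Fnorble 2' );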
> > Finding a particular test in a file by its number can be quite
> > difficult, especially in test files where you don't have stuff like
> >
> > 'ok 26'.
> >
> > When ok() and is() are silently incrementing the counter and test
> > names aren't used, how is one supposed to find the failing test? As
> > you probably know it can be quite difficult.
> Well, if the test passes, there's no need to know where exactly it's
> located. If it fails, the diagnostics contain the line number:
>
>   not ok 6
>   # Failed test in t/xxx.t at line 26.
>
> I've never seen incorrect line numbers.
I have. Lots and lots and lots of times. I could do a survey, but IMO
it would be a waste of time.
Any time you need to do testing that doesn't fit neatly into the tools
the Test::Builder suite provides, you either have to design a proper
Test::Builder-style module, or you get bogus line numbers because the
wrapper routines around the tests report the wrong thing.
Determining where a test originated is basically done by heuristic
(much as Carp does its thing by heuristic). And as anybody with a
comp-sci background knows, heuristics are called that and not
algorithms because they are not provably correct: they get things
wrong.
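
Consider a minimal sketch (frob_ok() and the 'frobbed' value are made
up for illustration, but $Test::Builder::Level is the real mechanism
Test::Builder uses to decide how far up the call stack to report):

  use Test::More tests => 2;

  # Without adjusting $Test::Builder::Level, a failure in this
  # wrapper is reported at the is() line inside the sub, not at
  # the line in the test file that called it:
  sub frob_ok_broken {
      my ($got, $name) = @_;
      is( $got, 'frobbed', $name );
  }

  # Bumping the level tells Test::Builder to look one more frame
  # up the stack, so the diagnostics point at the caller's line:
  sub frob_ok {
      my ($got, $name) = @_;
      local $Test::Builder::Level = $Test::Builder::Level + 1;
      is( $got, 'frobbed', $name );
  }

  frob_ok_broken( 'raw', 'broken wrapper' );  # reported inside the sub
  frob_ok( 'raw', 'fixed wrapper' );          # reported at this line

And that fix only works when the call depth is fixed; as soon as the
wrapping varies, keeping the level straight is exactly the kind of
stack-walking heuristic that gets things wrong.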
A string in a test file is trivial to find. Open the test file in an
editor, do a search for the string, and presto, you have the failing
test.
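
Or skip the editor entirely; assuming the failing test is named
'Fnorble 2' and lives in t/xxx.t, a one-liner does it:

  perl -ne 'print "$.: $_" if /Fnorble 2/' t/xxx.t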
Yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"