Mark Glines wrote:


> Ok.  Thanks for the feedback.  Let's see what I can do with it...

> First.  Frankly, I don't care about the *style* of POD in use, just that
> there *is* some.  "Well-formedness" for me comes down to parsability,
> not style.  So Perl::Critic can GTFOMI.  :)  And there already is a POD
> well-formedness test (t/doc/pod.t), so all the codingstd test I'm
> proposing would do is make sure some POD exists, at least for
> non-generated sources in POD-capable file formats.


Ahhh, I'm starting to breathe easier (uh, "more easily" for the grammar police).
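
The check Mark describes could be pretty small, too. A rough sketch of a "some POD exists" test; the glob and the /^=[a-zA-Z]/ heuristic below are placeholders for however the real test would walk the manifest and skip generated sources:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Test::More;

    # Stand-in for the real manifest walk.
    my @files = glob 'lib/*.pm';
    plan skip_all => 'no module files found' unless @files;
    plan tests => scalar @files;

    for my $file (@files) {
        open my $fh, '<', $file or die "Cannot open $file: $!";
        my $has_pod = 0;
        while (<$fh>) {
            # Any POD directive at the start of a line counts as
            # "some POD exists"; we don't care about style here.
            if (/^=[a-zA-Z]/) { $has_pod = 1; last; }
        }
        close $fh;
        ok( $has_pod, "$file contains some POD" );
    }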

> I feel the same way you do about Test::Pod::Coverage; it's the reason I
> haven't added POD coverage tests to my own distributions.

Ahhh, so I'm not alone in this!

> It's quite possible to document a module properly without having a
> separate =head tag for each subroutine in it.

And, IIRC, that was exactly why my POD for List::Compare was judged to be of low "kwalitee."
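
(Not that I'm keen to chase kwalitee points, but for anyone who wants the coverage test anyway: Pod::Coverage has a 'trustme' escape hatch for exactly this situation. A minimal sketch; the regexes are placeholders, not List::Compare's real method names:)

    use strict;
    use warnings;
    use Test::More tests => 1;
    use Test::Pod::Coverage;

    # The second argument is passed through to Pod::Coverage->new.
    # 'trustme' means: treat subs matching these patterns as documented
    # even if they lack their own =head entry.
    pod_coverage_ok(
        'List::Compare',
        { trustme => [ qr/^get_/, qr/^new$/ ] },
        'POD coverage, modulo the trusted subs',
    );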


> Second.  How do you feel about a codingstd test for this that is not run
> during "make test" or "make smoke"?

Well, I think it should be given serious consideration. The tests I have in 'make buildtools_tests' do not run during 'make test' and, in fact, they're not intended to do so. So perhaps this is another such case.

(Coincidentally, one of the things I'm going to argue for in my talk at YAPC is a more fine-grained concept of which tests should be run when.)


> The @coding_std_tests variable in t/harness looks to me like a whitelist
> for which tests to run on non-release builds... so if this test isn't
> part of the list, then everyone is happy?  (Or have I misunderstood that
> code?)


To tell the truth, I'm not that familiar with Parrot's t/harness or with that variable in particular. I'll have to take a look.
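
My naive guess, before I've actually read t/harness, is that it behaves roughly like the sketch below; the test file names and the harness plumbing are pure assumption on my part, not Parrot's actual code:

    use strict;
    use warnings;
    use TAP::Harness;

    # Guesswork only: a whitelist of coding-standard tests for
    # non-release builds.  A new POD-existence test simply left off
    # this list would never run during ordinary "make test".
    my @coding_std_tests = (
        't/codingstd/c_code_coda.t',
        't/codingstd/trailing_space.t',
    );

    my $harness = TAP::Harness->new( { verbosity => 0 } );
    $harness->runtests(@coding_std_tests);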

Thank you very much.
kid51
