Hi Bert! Thanks for airing your concerns.

Bert Huijben wrote:
> I see added value in these tests, but can we please make this behavior
> optional before enabling for everybody all the time?

Certainly! That's one of the three TODO tasks I listed.
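
For illustration only, here is a minimal sketch of what the opt-in could
look like. The option name, the maybe_cross_check() hook, and the
cross_check() helper it calls are all hypothetical, not what the patch
actually does:

    # Hypothetical sketch: gate the dump/load cross-check behind an
    # opt-in flag, so the default test run is unaffected.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--dump-load-cross-check', action='store_true',
                        help='after each test, dump the repository, '
                             'load the dump into a fresh repository, '
                             'dump that, and compare the two dumps')
    options, _ = parser.parse_known_args()

    def maybe_cross_check(repo_dir):
        # Only pay the cost when the developer explicitly asks for it.
        if options.dump_load_cross_check:
            cross_check(repo_dir)  # the dump/load/dump check, sketched below

A developer would then opt in explicitly, with something like
'run_tests.py --dump-load-cross-check' (again, a hypothetical spelling).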

> I don't see why every test in the testsuite needs a double dump and 
> comparison in every testrun (on every test invocation)

Every test potentially generates a different repo. Every RA layer potentially 
gives different behaviour with 'svnrdump' (issue #4551 includes an example). 
Each FS type potentially behaves differently.
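
For concreteness, the "double dump and comparison" amounts to roughly
the following. This is a hedged, stand-alone sketch using plain
'svnadmin'; the patch drives the check through the test harness (and,
for the RA-layer variants, through 'svnrdump'), and a byte-for-byte
comparison of the dumps is a simplification:

    # Rough sketch of the dump/load/dump cross-check: dump the repo,
    # load the dump into a fresh repository, dump that, and fail if
    # the two dumps differ.
    import os
    import subprocess
    import tempfile

    def cross_check(repo_dir):
        dump1 = subprocess.run(['svnadmin', 'dump', '-q', repo_dir],
                               capture_output=True, check=True).stdout
        with tempfile.TemporaryDirectory() as tmp:
            repo2 = os.path.join(tmp, 'repo2')
            subprocess.run(['svnadmin', 'create', repo2], check=True)
            subprocess.run(['svnadmin', 'load', '-q', repo2],
                           input=dump1, check=True)
            dump2 = subprocess.run(['svnadmin', 'dump', '-q', repo2],
                                   capture_output=True, check=True).stdout
        if dump1 != dump2:
            raise AssertionError('dump/load/dump round trip changed '
                                 'the dump for ' + repo_dir)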

Excessive duplication of coverage in our testing regime is not a new
concern; this patch doesn't introduce that problem.

> (And then the patch appears to ignore the fact that we have tests that create 
> multiple repositories)

Ignore? No, just not implemented yet. The patch's log message says (I
sketch the first item below):

Ideas for improvement:
  - Improve the logic for finding repositories created by a test: detect
    when a test created a repository even if the sandbox is not marked as
    'built'; detect when a test created additional repositories.

  - Implement the same cross-checking for the C tests.
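
To give a feel for the first of those items, here is one possible
direction, sketched with made-up names (the real harness tracks its
sandboxes differently):

    # Hypothetical sketch: instead of trusting the sandbox's 'built'
    # flag, scan the test's working area for anything that looks like
    # a repository (a 'format' file next to a 'db' directory).
    import os

    def find_created_repos(test_area_dir):
        for dirpath, dirnames, filenames in os.walk(test_area_dir):
            if 'format' in filenames and 'db' in dirnames:
                yield dirpath
                dirnames[:] = []  # don't descend into the repository

The cross-check would then run over every path this yields, covering any
additional repositories a test created.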

> I can't see why the coverage is better this way, than running this just in a 
> single configuration...

Obviously the coverage is "better" in the sense of "more", so perhaps
you mean "better" in the sense of coverage gained in proportion to the
time taken?

> except by slowing developers down (and thereby reducing 
> the number of new bugs... just by reducing their productivity)

This extra test coverage will be optional. Don't enable it if you don't want to.

Trying to unpick what you really mean, I feel you are unhappy that the current 
set of tests that you run frequently (before each commit, perhaps) is too slow 
for your liking, and you think this addition will make it slower without a 
proportional increase in coverage. You are right about the last part -- this 
extra testing doubtless doesn't add as much coverage, in proportion to its run 
time, as adding a regression test targeted to a specific bug.

So maybe the point you are trying to make here is that this kind of "blanket" 
testing is not as "efficient", in the sense of coverage over execution time, as 
specifically targeted tests. Is that right?

Of course in another sense it is very efficient, in that it can detect a large 
class of bugs with very little human effort.

> With the same reasoning: better coverage is better, we can just as well
> remove the flag on which filesystem we test, and always run BDB and FSFS.
> Or skip the check which RA layer, and run them all.

What's your point? Of course we don't want to run all the possible test
permutations a hundred times a day during our own development workflow.
And of course we DO want to run all the possible tests sometimes, before
shipping software.

You seem to be thinking that there is exactly one set of tests, and that 
everybody has to run the same set of tests every time for every purpose.

As developers, each of us chooses what subset of all possible tests to
run, and how often, depending on our work patterns, our machine speed,
the likelihood that certain tests will catch a bug in the change we're
working on, the importance of getting the change right first time, and
other considerations.

> We separated these tests over multiple configurations for a reason, and
> I think this should behave the same way

What way do you mean?



By being so negative about it, you are sending out a signal that additional 
testing is unwelcome. It is very easy to spread a negative feeling. I feel that 
any time I commit or even propose to implement some extra testing, you'll 
likely argue against it. Why should developers bother trying to make the 
software well tested if that's the attitude?

I'm sure that's not what you mean, but that's how it comes across. Can
we please resolve this argument, so that we're clear on how extra
testing can be welcomed while its run time is kept under control?


- Julian
