Thanks for your feedback. I've been anticipating this discussion, and my 
response here is directed at everyone, not just at your particular problem.

Using specs and instrument+generative testing is very different from the 
example-based testing that has happened thus far, and should deliver 
substantially greater benefits. But the (test-time) performance profile of such 
testing is likewise different, and will require a different approach as well.
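
To make the contrast concrete, here is a minimal sketch (the fn is 
hypothetical, and the namespaces below are the spec alphas - adjust for your 
setup):

  (require '[clojure.spec.alpha :as s]
           '[clojure.spec.test.alpha :as stest]
           '[clojure.test :refer [deftest is]])

  ;; a hypothetical fn
  (defn ranged-rand
    "Returns a random int in [start, end)."
    [start end]
    (+ start (long (rand (- end start)))))

  ;; its spec - args, ret, and the relationship between them
  (s/fdef ranged-rand
    :args (s/and (s/cat :start int? :end int?)
                 #(< (:start %) (:end %)))
    :ret int?
    :fn #(and (<= (-> % :args :start) (:ret %))
              (< (:ret %) (-> % :args :end))))

  ;; example-based - checks exactly the cases someone thought of
  (deftest ranged-rand-example
    (is (<= 0 (ranged-rand 0 10) 9)))

  ;; instrumented - callers of ranged-rand get their :args checked
  ;; while the rest of the suite runs
  (stest/instrument `ranged-rand)

  ;; generative - exercises :args/:ret/:fn over many generated cases,
  ;; including extremes no example test ever tries
  (stest/check `ranged-rand)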

Let's deal with the simplest issue first - the raw perf of spec validation 
today. spec has not yet been perf-tuned, and it's quite frustrating to see e.g. 
shoot-out tables comparing its perf vs other libs. If people want to see code 
being developed (and not just dropped in their laps), then they have to be 
patient and understand the 'make it right, then make it fast' approach being 
followed. I see no reason spec's validation perf should end up much different 
from any other lib's validation perf. But it is not there yet.

That being said, even after we perf-tune spec, comparing a run of a test suite 
with instrumented code (and yes, that is a good idea) against a run of the same 
suite without it (which, as people will find, has been missing bugs) is apples 
vs oranges.

Add in switching to (or adding) generative testing, which is always going to be 
much more computationally intensive than example-based tests (just by the 
numbers, each generative test is ~100 tests), and there is no way that 
test-everything-every-time is going to be sustainable.
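
For a sense of scale, each spec'd fn you check runs a configurable number of 
generated trials (the opts key below is the one the alphas pass through to 
test.check; the default count may differ), so checking everything multiplies 
your effective test count accordingly:

  (require '[clojure.spec.test.alpha :as stest])

  ;; generatively check one spec'd fn (ranged-rand from the earlier sketch),
  ;; dialing the number of generated trials
  (stest/check `ranged-rand
               {:clojure.spec.test.check/opts {:num-tests 100}})

  ;; or check every spec'd, checkable fn currently loaded
  (stest/check)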

Should we not use generative testing because we can't run every test each time 
we save a file?

We have to look at the true nature of the problem. E.g., each time you save a 
file, do you run the test suites of every library upon which you depend? Of 
course not. Why not? *Because you know they haven't changed*. Did you just add 
a comment to a file - then why are you testing anything? Unfortunately, our 
testing tools don't have a fraction of the brains of decades-old 'make' when it 
comes to understanding change and dependency. Instead we have testing tools 
oriented around files and mutable-state programming where, yeah, potentially 
changing anything could break any and everything else, so let's test everything 
any time anything has changed.

This is just another instance of the general set of problems spec (and other 
work) is targeting - we are suffering from using tools and development 
approaches (e.g. building, testing, dependency management, et al.) whose 
granularity is mismatched with reality. Having fine-grained (function-level) 
specs provides important opportunities to do better. While tools could (but 
currently mostly don't) know when particular functions change (vs files), specs 
let us talk independently about whether the *interface* to a fn has changed, vs 
a change to its implementation. Testing could be radically better and more 
efficient if it leveraged these two things.
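
To sketch the distinction (the fn and specs are hypothetical, and no such 
change-aware runner ships today):

  (require '[clojure.spec.alpha :as s])

  ;; the interface: what callers may rely on
  (s/fdef total-cents
    :args (s/cat :prices (s/coll-of nat-int?))
    :ret nat-int?)

  ;; implementation A
  (defn total-cents [prices] (reduce + 0 prices))

  ;; implementation B - a different body behind the same fdef. A change-aware
  ;; test runner could re-check total-cents itself here, while callers' tests
  ;; would only need to re-run when the fdef above changes.
  (defn total-cents [prices] (transduce identity + 0 prices))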

I don't like to talk about undelivered things, but I'll just say that these 
issues were well known and are not a byproduct of spec but a *target* of it 
(and other work).

Rich
