On 29 May 2009, at 14:43, John Jones wrote:

> The short answer is they integrate to form a large system. It is quite
> a mixture: external services (that change from time to time so I run
> tests against them), perl modules doing DSP/FFT as well as running
> four expert systems (in three threads) processing real time data, an
> application build suite (for continuous integration of the main
> application), a web interface running on localhost fronts the
> application, a ruby REST service in our intranet fronts a ruby
> application supporting another interface protocol to third parties and
> used by the application, plus an Internet-side web site coordinates the
> running of potentially many thousands of applications (at the moment
> we are in small numbers).
The fact you're integrating so many things makes me think you need Cucumber files at the top level that drive the whole setup as a black box. When I'm integrating with external services I build a micro-version of the service that has dumb implementations of the interesting bits of the API. This lets you set the system up in a known state, and change it to see how your system responds. And if you've already got tests you can run against the live version, you know exactly how they should behave. You can couple these with tests against the live system to prove they actually join up, if you believe this is a risk.

> Some are code level and others are more like integration tests. There
> are approaching five years' worth of tests and these have evolved as
> testing has evolved in this time -- along with my own interpretation
> to suit my agenda. Fortunately they are quite granular and very
> encapsulated so I can build on them with simple step handlers/helpers.
>
> My plan is to do black box testing with webrat steps. This provides
> the interface for doing testing via the embedded web server running on
> localhost. Also, this black box approach can be used to assess changes
> to the rules in each of its four expert systems.
>
> How to test these four knowledge bases has been puzzling me for some
> time too, because the sum of the tested parts certainly does not
> guarantee its performance. For example, one of the application's
> significant features is: given the same input real time data streams,
> each separate running application will produce similar end results but
> make different decisions along the way. In fact they do this slightly
> too well at the moment (for testing purposes). A little less
> lopsidedness in their actual decision making was anticipated.

It may be that a fully-integrated test at this level is too much.
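That micro-version of the service can be as simple as an in-process fake with settable state. A rough sketch in Ruby — all the names here are invented for illustration, not taken from your system:

```ruby
# A dumb in-process fake of an external service, implementing only the
# interesting bits of the API. All names here are hypothetical.
class FakeDataFeed
  def initialize
    @readings = {}
  end

  # Test setup: put the fake into a known state.
  def stub_reading(channel, value)
    @readings[channel] = value
  end

  # The one "interesting bit" of the real API the system under test calls.
  def latest_reading(channel)
    @readings.fetch(channel) { raise "no stubbed reading for #{channel}" }
  end
end

# Driving it from a test (or a Cucumber Given step):
feed = FakeDataFeed.new
feed.stub_reading("temperature", 21.5)
feed.latest_reading("temperature")  # => 21.5
```

Because the fake responds to the same messages as the real client, the tests you already run against the live service double as a contract check that the fake's behaviour hasn't drifted.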
If you can isolate the random behaviour and replace it with something deterministic (e.g. with a mocked implementation), you can tackle one moving part at a time. (It is always easier to tackle one n-dimensional problem as n one-dimensional problems.) But your setup sounds pretty complex, so I'm not sure I can be much help here without knowing more.

I probably shouldn't use the term "black-box", really. *All* tests should be black box; it's a matter of deciding how big the box should be, and what it'll have in it.

>> What I suspect you want is a `rake predeploy` (or something) task
>> that runs all the Cucumber features and all the Perl tests together,
>> and fails unless both subcommands are successful.
>
> At the moment I see cucumber providing this layer. What are your
> thoughts on this? Are there goals the cucumber development path has
> that might make this impractical?
>
> The big attraction (for me) to taking this approach lies in using the
> feature and scenario descriptions to capture and present application
> knowledge/expertise related to the feature-scenario and test steps
> each scenario fires. Capturing this application expertise is a big
> issue in this project. If I just run the perl test harness(es) from a
> rake task, I know they all pass or fail, which is great, but to
> another engineer coming along later it is going to be more opaque than
> seeing immediately which step failed.

I think I understand you better now. At first, I imagined a scenario that just ran all the Perl tests. Are you saying that each test case would have a scenario of its own? Or something a bit coarser than that?

> For example, in the tests that
> run against one of the external services, if the date format changes
> in the data feed the feedback is instant -- no digging down and
> running the test again manually to find out what has happened.
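For what it's worth, here's the shape I imagined for a step helper that wraps an existing harness — a sketch only, and I'm guessing at `prove` as your Perl entry point:

```ruby
require "open3"

# Run one existing test script per Cucumber step, so a failure surfaces
# that script's own output immediately rather than an opaque
# "the suite failed". The helper is generic; the "prove" usage in the
# commented step below is an assumption about the Perl setup.
def run_test_command(*cmd)
  output, status = Open3.capture2e(*cmd)
  raise "tests failed:\n#{output}" unless status.success?
  output
end

# A step definition (hypothetical wording) might then be:
#
#   When /^the "([^"]*)" perl tests pass$/ do |name|
#     run_test_command("prove", "t/#{name}.t")
#   end
```

A `rake predeploy` task could call the same helper once for the Cucumber run and once per Perl harness, failing if any subcommand fails.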
> Plus, ongoing
> development and maintenance is all driven agilely from one cucumber
> layer -- even if we are going to add additional functionality:
> Test::More tests in perl, ruby or python.

Early, informative error messages are a big win! I'd actually be really interested to see the setup you're using there. Sounds like you've got a clever re-use of existing test coverage within Cucumber.

<thinking_out_loud>I'm now wondering if I could do something similar in general, to integrate RSpec runs into Cucumber. Maybe tying RSpec-only edge cases into the appropriate part of a feature. Hmmm....</thinking_out_loud>

> I felt really backed into a corner when I looked at all the testing
> needs this application demands, and while there was a way to tackle
> each part, each solution was piecemeal and lacked a top-down
> coherence and feel-good factor.
>
> Cucumber is helping me to put some order and positive 'feel good'
> into this testing process and so far it appears to work well. I can
> think top down from the business needs and build down into the
> piecemeal testing layer as needed by adding suitable step helpers.
> Up to now just three step helpers seem to provide all I need to
> interface into scripts, and I can see the need for just two more,
> though these are more nice-to-have than vital.

Just out of curiosity, what are these steps? Can you publish them here?

Ashley

--
http://www.patchspace.co.uk/
http://www.linkedin.com/in/ashleymoran
http://aviewfromafar.net/
http://twitter.com/ashleymoran

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "NWRUG" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/nwrug-members?hl=en
-~----------~----~----~----~------~----~------~--~---
