Rob Oxspring wrote:
Hi Steve,

I'm sure I remember talk of remote JUnit tests on ant-dev a while back (a year??), though I couldn't find references recently... was it you who was interested then? Anyone else? I would have thought others were interested at least! Anyway, comments / questions inline:

There is some remote JUnit stub stuff in the sandbox, from Stephane.


Steve Loughran wrote:


FYI, I am trying to add remote JUnit stuff to the smartfrog framework I work on by day;


Hmmm, smartfrog... as in http://smartfrog.sf.net? How does that compare to IBM's STAF http://staf.sf.net?

Smartfrog as in http://smartfrog.sf.net 302s you to http://smartfrog.org

Looks like STAF is test-centric. SmartFrog is really about deployment of running code, liveness probes, etc. You provide a component for something like Jetty or Axis, and it deploys it, configures it, and handles failures however you choose. It's been used in configurations of 1 to 10,000+ nodes, and I am busy trying to adapt it to the GGF Grid architecture these days. Testing is something I need to put in as a sideline to testing the framework itself, and because testing is so central to a develop/deploy process you cannot leave it out.

The key thing is probably the configuration language, which is *not* XML; a deep religious issue that is slowly being resolved. The language itself is not hard to use, and we have ant tasks to deploy stuff to running daemons right now (though I need to add in the certificate-based security).



> the idea being you could deploy code on serverA, then run tests on clients B, C and D.


I've recently been tasked with automating our test environment and am planning something along these lines myself. If you want someone to help develop / test the solution then I'm quite possibly your man (assuming smartfrog looks up to the job).

Oh, this would be great. I am only just writing the stuff now; help would be excellent, though things are mostly immature. Still, this is actually day-job work, so I am likely to actually do it.



So what's your plan regarding synchronisation? Would the build block until all tests have completed and then produce results, or would the tests be forked and the results collected later? And what about test-box selection - do you expect to simply run all tests on every box running a SmartFrog daemon?

hmm.

The way the language works is that components have attributes (or nested components ==elements/element references); special attributes are interpreted by the runtime. So "sfProcessHost" specifies the hostname to run on; the daemons distribute things amongst themselves.

I'd imagine having a deployment descriptor that declares a test listener component, which could run on a different host from any of the test runners. Test runners would run wherever; there would be nested components to describe test packages, ideally with all the tricks of the junit task (if/unless, patterns, etc.).

TestListener extends XMLTestListener {
        sfProcessHost "logServer";
        directory "/nfs/common/tests";
}

/**
        I haven't settled on a good way to map this.
        We will only have access to .class files in the jars,
        but being Java 1.4 only, I can use the built-in regexp support.
*/
FunctionalTests extends TestSuite {
        package "org.example.tests";
        pattern "*Test";
}

FunctionalTestRunner extends JunitTestRunner {
        listener extends TestListener;
        ftests extends FunctionalTests;
}



sfConfig extends {

        WinXPTests extends FunctionalTestRunner {
                sfProcessHost "windows.example.org";
        }

        DebianTests extends FunctionalTestRunner {
                sfProcessHost "debian.example.org";
        }

}
This would run one test listener and run the same set of tests on different boxes. I'm ignoring deployment of the app itself in this config, as that is a separate 'application' to deploy.
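The pattern-to-class mapping in the comment above could be sketched in plain Java along these lines. TestClassMatcher is a hypothetical helper (not a SmartFrog or Ant class); it turns a glob such as "*Test" into a java.util.regex pattern and checks fully qualified class names against the configured package:

```java
import java.util.regex.Pattern;

// Hypothetical sketch: map a shell-style pattern such as "*Test" onto the
// simple names of classes found in a jar, using the regexp support built
// into Java since 1.4.
public class TestClassMatcher {

    // Convert a simple glob ("*" = any run of characters) into an anchored
    // regular expression; everything else is escaped literally.
    static Pattern globToPattern(String glob) {
        StringBuffer regex = new StringBuffer();
        for (int i = 0; i < glob.length(); i++) {
            char c = glob.charAt(i);
            if (c == '*') {
                regex.append(".*");
            } else if (Character.isLetterOrDigit(c)) {
                regex.append(c);
            } else {
                regex.append('\\').append(c);
            }
        }
        return Pattern.compile(regex.toString());
    }

    // A class belongs to the suite if it lives in the configured package
    // and its simple name matches the pattern.
    static boolean matches(String pkg, String glob, String className) {
        if (!className.startsWith(pkg + ".")) {
            return false;
        }
        String simple = className.substring(pkg.length() + 1);
        return globToPattern(glob).matcher(simple).matches();
    }

    public static void main(String[] args) {
        System.out.println(matches("org.example.tests", "*Test",
                "org.example.tests.LoginTest"));   // true
        System.out.println(matches("org.example.tests", "*Test",
                "org.example.tests.TestHelper"));  // false
    }
}
```

Matcher.matches() anchors over the whole simple name, so "TestHelper" does not slip through a "*Test" pattern.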


If you are into dynamic selection of host boxes and the like, then stuff gets more complex, but complex is tractable. As far as ant integration goes, I hadn't thought about build synchronisation. If you do a blocking deploy, <sf-run>, then the build blocks until the deployment is finished; here, that means the tests. A non-blocking deploy with <sf-deploy> would just deploy and keep going.

I need to think more about sync. I'd guess each test runner needs to notify something (the listener?) that its test run is completed and then terminate, and then I need the test listener to know to terminate when all listened-to test runs are finished. It might be easier to have one listener per test runner. And I may need a new ant task to resync, something that blocks until a named smartfrog app has terminated.
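One way to sketch that "terminate when all listened-to runs are finished" idea, using plain java.util.concurrent rather than SmartFrog's own lifecycle events (RunCollector and its method names are illustrative, not SmartFrog APIs):

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch: the collector is told up front how many runners to
// expect, each runner reports completion once, and anything that needs to
// resync (the listener, or a blocking ant task) waits for the count to
// reach zero.
public class RunCollector {
    private final CountDownLatch pending;

    public RunCollector(int runnerCount) {
        pending = new CountDownLatch(runnerCount);
    }

    // Called once by each test runner when its run completes.
    public void runFinished(String host) {
        System.out.println("run finished on " + host);
        pending.countDown();
    }

    // How many runners have not yet reported in.
    public int remaining() {
        return (int) pending.getCount();
    }

    // Blocks until every runner has reported.
    public void awaitAll() throws InterruptedException {
        pending.await();
    }

    public static void main(String[] args) throws InterruptedException {
        final RunCollector collector = new RunCollector(2);
        new Thread(new Runnable() {
            public void run() { collector.runFinished("windows.example.org"); }
        }).start();
        new Thread(new Runnable() {
            public void run() { collector.runFinished("debian.example.org"); }
        }).start();
        collector.awaitAll(); // returns only once both runs are done
        System.out.println("all test runs complete");
    }
}
```

With one listener per test runner, each latch would simply count one runner; the shared-listener case is where the counting matters.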


> I'm just getting to grips with RMI transport of JUnit results, then comes the test runner itself, which will be slightly different from the Ant one. I'm going to generate files in the same format as the Ant ones, so they would all integrate together. I fear I may have to add some better reporting once we do start running tests on multiple machines, as you would want a summary report for each machine (including machine config information).
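For the RMI transport, one likely wrinkle is that JUnit's live TestResult object (with its attached listeners) is awkward to ship across the wire as-is; a common workaround is a small Serializable value object per run. TestRunSummary here is a hypothetical sketch of that, not an existing JUnit or SmartFrog class:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical sketch: an immutable, serializable per-run summary that can
// travel as an RMI value object, leaving the live listener machinery behind.
public class TestRunSummary implements Serializable {
    private static final long serialVersionUID = 1L;

    public final String hostname;
    public final int runs;
    public final int failures;
    public final int errors;

    public TestRunSummary(String hostname, int runs, int failures, int errors) {
        this.hostname = hostname;
        this.runs = runs;
        this.failures = failures;
        this.errors = errors;
    }

    public boolean successful() {
        return failures == 0 && errors == 0;
    }

    // Round-trip through java serialization, as RMI would do under the hood.
    static TestRunSummary roundTrip(TestRunSummary in) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bytes);
        oos.writeObject(in);
        oos.flush();
        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        return (TestRunSummary) ois.readObject();
    }

    public static void main(String[] args) throws Exception {
        TestRunSummary back = roundTrip(
                new TestRunSummary("debian.example.org", 12, 1, 0));
        System.out.println(back.hostname + ": " + back.runs + " runs, "
                + back.failures + " failures, " + back.errors + " errors");
    }
}
```

The per-machine summary report mentioned above could then be built by collecting one such object per host and folding in the machine config information.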


Amen to that - happy to muck in though.

That'd be great. There is a new point release of smartfrog (with beta of the ant tasks) out in the next two weeks; there should be little change from what is in CVS now (I hope; there is a lot more prerelease churn than I'd like). Get on the smartfrog-developer email list at sourceforge if you want to work with the tool.


-Steve




