On 8/14/12 10:56 AM, "Carol Frampton" <cfram...@adobe.com> wrote:
>
>
> On 8/14/12 12:01 PM, "Jeff Conrad" <jeff...@gmail.com> wrote:
>
>> Hi,
>>
>> I'd like to help the project get to a point to where we can run the entire
>> test suite for the sdk in 10 minutes or less. I think that's a worthy
>> goal, and I'm willing to help make that a reality.
>>
>> If we get the testing time down to being that fast, that will mean we can
>> review a lot more contributions, and it could also mean that potential
>> contributors can submit patches that they know don't fail any tests before
>> submitting the patch. It also means that if one of those tests fails, the
>> person writing the code still has the context of what they did fresh in
>> their mind and can fix it quickly.
>>
>> For reference, I ran the entire Mustella suite last night, and it took 4
>> hours, 3 minutes, and 50 seconds to run ./mini_run.sh -createImages -all
>> on a quad-core machine running Windows 7 using the Git Bash
>> (yay! no cygwin!). To make it work with the Git Bash, I changed the shell
>> variable in mustella/build.xml to just sh.exe. As a plus, shellrunner.sh
>> somewhat intelligently parallelized the compilation of all the test swfs
>> so it was compiling 4 swfs at a time, and then the ant script ran all the
>> test swfs one at a time. I know mustella is more of a functional /
>> integration test suite, so by definition it's going to run slower than a
>> suite of unit tests.
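For anyone curious, the parallel-compile step Jeff describes can be sketched
roughly like this. The compile_one stub and the test file names below are
made up for illustration; mustella's shellrunner.sh does the real work with
the actual compiler. The idea is just to fan compile jobs out 4 at a time
with xargs -P, while the runner still executes the resulting swfs serially.

```shell
# Sketch only: compile_one stands in for the real mxmlc invocation, and
# the .mxml names are hypothetical.
compile_one() {
  echo "compiled ${1%.mxml}.swf"
}
export -f compile_one   # so the bash subshells spawned by xargs can see it

# Fan out up to 4 compile jobs at once; the swfs are still run one at a time.
printf '%s\n' Alert_tests.mxml Button_tests.mxml Label_tests.mxml \
  | xargs -P 4 -I{} bash -c 'compile_one "$1"' _ {}
```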
>>
>> I also know that at least a few people on this list want to refactor the
>> sdk so it's unit-testable. I definitely support this and would like to
>> help in any way I can on that front.
>>
>> I want the entire suite: unit, integration, and functional to be able to
>> run in 10 minutes or less. I have two ideas as to how we can make this
>> happen, and I'm open to more.
>>
>> One idea would be to intelligently look at the files that a given patch /
>> changeset affects and only build and run the tests that test that
>> functionality.
I have a prototype of selecting tests from an SVN Status report ready to go.
I haven't tested it on a full set of files yet so it might need some tuning.
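A minimal sketch of the idea, assuming a naive name-based mapping from
changed classes to test directories (the mapping convention below is
hypothetical; the actual prototype may derive the test set differently):

```shell
# Hypothetical: turn an `svn status` report into a list of mustella test
# directories by matching changed .as files to same-named test folders.
svn_status='M       frameworks/projects/spark/src/spark/components/Button.as
M       frameworks/projects/spark/src/spark/components/List.as'

echo "$svn_status" \
  | awk '$1 ~ /^[AM]$/ {
      n = split($2, part, "/")        # last path segment = class file
      sub(/\.as$/, "", part[n])       # strip the .as extension
      print "mustella/tests/components/" part[n]
    }' \
  | sort -u
```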
>
> That's what we used to do at Adobe. We had what we called a cyclone
> server that we submitted a patch to and it ran the subset of tests based
> on the changed files. It used to take more than 10 minutes to get the
> results though.
I think Jeff is going to try to create a "massively" parallel system where
we have a bundle of servers in the cloud each tasked with running a subset
of the tests so they all get done in parallel. There is generally no need
for these tests to run serially.
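One way to picture the fan-out (worker count and test names below are
hypothetical): deal the full test list round-robin into one shard per
server, so each machine runs roughly total/N of the suite and the
wall-clock time drops accordingly.

```shell
# Sketch only: split a test list into one shard file per cloud worker.
WORKERS=4
rm -f shard_*.txt
n=0
for t in FlexSprite Alert Button DataGrid List Panel Label Image Tree Menu; do
  echo "$t.mxml" >> "shard_$((n % WORKERS)).txt"   # round-robin assignment
  n=$((n + 1))
done
# Each worker i would then run something like: ./mini_run.sh $(cat shard_i.txt)
wc -l shard_*.txt
```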
The only gotcha I've thought up so far is that the mustella tests use bitmap
capture from the Flash Player. That means the servers have to be running Mac
or Windows because I don't think the Linux player is on par with the other
players.
But I'm not up on cloud computing so maybe there is a way to do client-side
testing in the cloud.
> The problem with our implementation is the set of tests
> that were run was based on a database that was manually put together, not
> based on code inspection. Over time the db became less and less accurate
> because it wasn't kept up to date (and it might not ever have been totally
> accurate because it was manually created).
>
> As to your other question about "hurricane". I don't know what that is.
> Maybe Alex remembers. I was working on mustella/build.xml and mini_run.sh
> last Friday. I suspect there is a lot of dead code in there but we need
> to get everything working first before we "clean". The mustella directory
> was previously maintained by the QE group for their setups. I think there
> are code paths in there that no longer apply.
There is dead code in there that references the old QE DB. I'm not sure if
we'll need a similar DB or not. I believe "hurricane" is another internal
code word like "cyclone" that we used for qe/dev pair testing.
>
> BTW I'm not sure you can get the run down to 10 minutes, but I think there
> are an awful lot of duplicate tests, and if we could figure out which were
> duplicates we could probably just throw many of them away.
>
> Carol
>
--
Alex Harui
Flex SDK Team
Adobe Systems, Inc.
http://blogs.adobe.com/aharui