On 9/4/13 1:48 AM, "Justin Mclean" <jus...@classsoftware.com> wrote:
>Hi,
>
>> So, what do you suggest?
>
>1. Have more contributors/committers that have more time to fix and test
>bugs?
>2. Nicely ask the full time Adobe people to put a bit more resources onto
>this as they know and understand it best?

I'm willing to spend a small amount of time on making it better, but a bit
of process needs to happen first.

At Adobe, if new failures appeared in tests that could have been affected
by code you changed, you had to revert or fix immediately. Nobody else was
allowed to check in after a failure report; otherwise it could become hard
for you to revert. You then ran the failing tests locally, figured out
whether it was the test or your change, and resubmitted the code with
fixed tests if necessary. The odds that it is the test are probably 50-50.

I didn't make much noise about this for Apache Flex because Justin was the
only one committing stuff, but now I have some changes pending and maybe
others do as well. Can we agree on the above process? If so, Justin, can
you revert some of your recent check-ins in those areas and see if the
failures go away?

The ListDragDrop test might have to be deleted. It uses a timer and seems
to consistently get fewer events on FP 11.7 than on other players.

I understand your time is limited, so I'll make a deal with you. If you
can identify which change broke a test, have convinced yourself the test
is in error, and can't figure out how to debug the test, then I will look
into it. At least I will then know which of the many code changes could
have caused the problem, which makes the process more efficient for me.

Here is how I debug tests. I use FDB, although there is supposedly a way
to launch a SWF in the Flash Builder debugger without having to build a
process around it. Either way, I:

1) Build the test SWF using -failures or -caseName so it runs as few tests
as possible.
2) Run it and look at the failure output to see which step failed and make
sure it fails.
3) Examine the test steps and see if the failing step is unique somehow,
like it is the first use of DispatchMouseClickEvent or the first
AssertPropertyValue with target="foo".
4) Start the test in the debugger.
5) Set a breakpoint on the doStep() or execute() of that test step. Almost
every test step has a doStep() or execute() method.
6) Set a conditional breakpoint if necessary.
7) Run to the breakpoint, poke around with the expression window, and
figure out what is going wrong.

(There's a rough fdb transcript of steps 4-7 in the P.S. below.)

I try not to use mini_run.sh to launch the test because it will kill the
process if it has to wait too long while you poke around in the debugger.

>3. Having a way to easily debug mustella tests in the debugger would be a
>great help.

What is hard about it right now? Just the launching, or something else?

-Alex
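P.S. Here is roughly what steps 4-7 look like as an fdb session. This is
from memory and untested, so treat the class name, line number, and
breakpoint condition below as placeholders for whatever your failing step
actually is:

  $ fdb
  (fdb) run
  Waiting for Player to connect
  # now open the single-case test SWF in the standalone debug Flash Player

  # break on the step's doStep()/execute(); fdb takes file:line
  # (the file and line here are placeholders)
  (fdb) break DispatchMouseClickEvent.as:120

  # optionally make it conditional, e.g. only stop for a given target
  (fdb) condition 1 target == "foo"

  # run to the breakpoint and inspect state
  (fdb) continue
  (fdb) bt
  (fdb) print this.target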