Quoting Marek Olšák (2016-04-18 10:39:46)
> On Mon, Apr 18, 2016 at 6:45 PM, Dylan Baker <[email protected]> wrote:
[snip]
> >
> > Thanks for working on this Marek,
> >
> > This has been discussed here several times amongst the intel group, and
> > the recurring problem to solve is crashing. I don't have a strong
> > opinion on python vs catching a fail in the signal handler, except that
> > handling in the python might be more robust, but I'm not really familiar
> > with what a C signal handler can recover from, so it may not.
>
> I can catch signals like exceptions and report 'crash'. Then I can
> open a new process from the handler to run the remaining tests, wait,
> and exit.
>
> The signal catching won't work on Windows.
>
> Also, there are piglit GL framework changes that have only been tested
> with Waffle and may break other backends.
It wouldn't be difficult to handle in the python framework. I have some
half-baked patches that do exactly this sort of thing for piglit/deqp; it
shouldn't be too hard to generalize that code and handle it that way. I
think it would be better to handle it in shader_runner if we can, but as a
fallback, if we decide that having one solution that works everywhere is
better than having one for Windows and one for not-Windows, it can be done
in python.

> >
> > The one concern I have is using subtests. There are a couple of
> > limitations to them: first, we'll lose all of the per-test stdout/stderr
> > data, and that seems less than optimal. I wonder if it would be better
> > to have shader_runner print some sort of scissor to stdout and stderr
> > when it starts a test and when it finishes one, and then report results
> > as normal without the subtest. That would maintain the output of each
> > test file, which seems like what we want, otherwise the output will be
>
> That can be done easily in C.
>
> > jumbled. The other problem with subtests is that the JUnit backend
> > doesn't have a way to represent subtests at the moment. That would be
> > problematic both for us and for VMWare.
>
> I can't help with anything related to python.
>
> The goal is to make piglit faster for general regression testing.
> Other use cases can be affected negatively, but the time savings are
> worth it.

Well, we either need to not use subtests, or the JUnit subtest handling
deficiency needs to be solved before landing this, or it's going to be a
huge problem, since we rely on the CI heavily and this would hide a lot of
specifics about regressions. What we'd end up with is something like
'spec/ARB_ham_sandwich: fail', which is completely insufficient, since most
of piglit is shader_runner based.
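To make the python-framework fallback concrete, here's a minimal sketch of
how it could work: run shader_runner over a batch of test files, treat a
negative return code (process killed by a signal) as a crash of the test
that was running, and relaunch with the remaining files. This is
illustrative only -- the function name and the "PIGLIT TEST:" completion
marker are made up, not actual piglit API, and per-test pass/fail parsing
is elided.

```python
# Hypothetical sketch, not real piglit code: crash handling done entirely
# on the python side by watching the child's return code.
import subprocess

def run_shader_tests(cmd, test_files):
    """Run `cmd` (a command list) over test_files, rerunning after crashes.

    Assumes the runner prints a 'PIGLIT TEST: <name>' line as each test
    finishes (an invented marker; whatever shader_runner actually prints
    would go here).  Real status parsing is elided; completed tests are
    simply recorded as 'pass'.
    """
    results = {}
    remaining = list(test_files)
    while remaining:
        proc = subprocess.Popen(cmd + remaining,
                                stdout=subprocess.PIPE,
                                universal_newlines=True)
        out, _ = proc.communicate()
        finished = [l.split(':', 1)[1].strip()
                    for l in out.splitlines()
                    if l.startswith('PIGLIT TEST:')]
        for name in finished:
            results.setdefault(name, 'pass')
        if proc.returncode < 0 and len(finished) < len(remaining):
            # Killed by a signal: the test after the last completed one
            # was the one running when the process died.
            crashed = remaining[len(finished)]
            results[crashed] = 'crash'
            remaining = remaining[len(finished) + 1:]
        else:
            remaining = []
    return results
```

This sidesteps the Windows signal-handler problem entirely, since the
parent only inspects the child's exit status.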
Personally, I think using the scissoring approach is better anyway, since
it also allows us to link the stdout/stderr to the specific test, and with
that approach we don't need to use subtests either: the python layer can
just make a test result per scissor, and the changes wouldn't be user
visible at all (barring any bugs). There are a few changes to the python
that would need to happen to make this work, but I don't think it's going
to be more than a couple of patches.

> >
> > Looking at the last patch, the python isn't all correct there; it will
> > run in some cases and fail in others. In particular it will do something
> > odd if fast skipping is enabled, but I'm not sure exactly what. I think
> > it's worth measuring to see whether the fast skipping path is even an
> > optimization with your enhancements; if it's not, we should just disable
> > it for shader_runner or remove it entirely, which would remove a lot of
> > complexity.
>
> If the fast skipping is the only issue, I can remove it.

I'd be fine with just removing it from shader_runner for now; I could run
tests later to see if it's actually an improvement, get it working at that
point if it is advantageous, and rip it out if it isn't. I could see it
still being a win for some of the very old platforms we support, since
they tend to have slow CPUs and limited OpenGL support. The most
straightforward way to disable it would be to just remove or comment out
the "self.__find_requirements" call in ShaderTest.__init__, I think.

> >
> > I'd be more than happy to help get the python work done and running,
> > since this would be really useful for us in our CI system.
>
> What else needs to be done in python?
>
> Marek

I guess that depends on what approach you want to take. If you want to try
the scissor output approach, we'll need to write an extended
interpret_result method for ShaderTest.
I don't think it'll be that complicated, since it'll just be looking for
the scissor marks and the test name, and passing the rest up via super().
There are a few more changes that would be needed, but I don't think
they'd be too complicated. If you want to have the crash handler/rerunner
in python, we'll need to implement that; that's probably a bit more
complicated, but shouldn't be bad.

Dylan
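Roughly, the splitting part could look like this. The scissor strings
("PIGLIT-SCISSOR-BEGIN <name>" / "PIGLIT-SCISSOR-END") are invented here;
whatever shader_runner ends up printing would go in their place. A real
version would be a method on ShaderTest that hands each chunk to
super().interpret_result() -- this standalone function just shows the
scissor parsing.

```python
# Hypothetical sketch: split combined shader_runner output into one
# stdout blob per test, keyed by the name on the (made-up) scissor line.
def split_by_scissor(stdout):
    """Map each test name to the output emitted between its scissors."""
    per_test = {}
    current = None
    for line in stdout.splitlines():
        if line.startswith('PIGLIT-SCISSOR-BEGIN '):
            # Everything after the marker is the test name.
            current = line.split(' ', 1)[1]
            per_test[current] = []
        elif line.startswith('PIGLIT-SCISSOR-END'):
            current = None
        elif current is not None:
            per_test[current].append(line)
    return {name: '\n'.join(lines) for name, lines in per_test.items()}
```

The python layer would then emit one test result per entry, so each test
keeps its own stdout/stderr and nothing is user visible beyond that.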
_______________________________________________
Piglit mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/piglit
