On Thu, 09 Jun 2011 14:04:37 -0500 Anthony Liguori <aligu...@us.ibm.com> wrote:
> On 06/09/2011 01:47 PM, Luiz Capitulino wrote:
> >
> > I've started writing some tests with the glib test framework (used by
> > the qapi patches) but am facing some issues that don't seem to exist
> > with check (our current framework).
> >
> > Of course it's possible that I'm missing something, in which case
> > pointers are welcome, but I must admit that my first impression wasn't
> > positive.
> >
> > 1. Catching test abortion
> >
> > By default check runs each test in a separate process; this way it's
> > able to catch any kind of abortion (such as an invalid pointer
> > dereference) and it prints a very developer-friendly message:
> >
> >  Running suite(s): Memory module test suite
> >  0%: Checks: 1, Failures: 0, Errors: 1
> >  check-memory.c:20:E:Memory API:test_read_write_byte_simple:33: (after
> >  this point) Received signal 11 (Segmentation fault)
> >
> > The glib suite doesn't seem to do that, at least not by default, so
> > this is what you get on an invalid pointer:
> >
> >  ~/src/qmp-unstable/build (qapi-review)/ ./test-visiter2
> >  /qapi/visitor/input/int: Segmentation fault (core dumped)
> >  ~/src/qmp-unstable/build (qapi-review)/
> >
> > Is it possible to have check's functionality somehow? I read about the
> > g_test_trap_fork() function, but one would have to use it manually in
> > each test case, and this is a no-no.

> I think this is a personal preference thing. I think having fork() be
> optional is great because it makes it easier to use common state for
> multiple test cases.

Coupling test cases like this is almost always a bad thing. Test cases
have to be independent from each other so that they can be run and
debugged individually, and so that a failing test doesn't bring the
whole suite down, which would make the failure report useless.

That said, you can still do this kind of sharing without sacrificing
essential features, e.g. by disabling the fork mode altogether or by
subdividing test cases.
Anyway, if there's a non-ultra-cumbersome way to use g_test_trap_fork()
(or any other workaround) to catch segfaults and abortions, then fine.
Otherwise I consider this a blocker, as any code we're going to test in
qemu can possibly crash. This is really a very basic feature that a C
unit-test framework should offer.

> >
> > 2. Memory leaks
> >
> > If you write something as simple as:
> >
> >  int main(int argc, char **argv)
> >  {
> >      g_test_init(&argc, &argv, NULL);
> >      return g_test_run();
> >  }
> >
> > and run it under valgrind, you'll see that this leaks memory. If you
> > add test cases to it you'll see that it floods memory. This makes it
> > almost impossible to debug memory leaks.
> >
> > Is there a cleanup function I'm missing? I googled for it, but I only
> > found other people complaining about this too :(

> My version of glib/valgrind doesn't have this problem. Maybe there's a
> valgrind filter for gtester on ubuntu and not fedora?

What's the version you're using?

> >
> > Now, let me say that this will also happen with check if you run it in
> > fork mode (which is the default). However, the leak goes away when you
> > run it in non-fork mode, which is what you want if you want to do any
> > kind of debugging with check (having the bug is still not acceptable,
> > but the fact is that it won't bite you in practice).
> >
> > 3. Test output
> >
> > The default output I get when I run a gtester test is:
> >
> >  ~/src/qmp-unstable/build (qapi-review)/ ./test-visiter2
> >  /qapi/visitor/input/int: OK
> >  /qapi/visitor/input/str: OK
> >  ~/src/qmp-unstable/build (qapi-review)/
> >
> > Which is probably ok for a small number of tests.
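(Editor's note, not from the thread: part of the valgrind noise with glib programs comes from the GSlice allocator caching memory. The glib documentation recommends disabling it via environment variables when chasing leaks; something along these lines, using the test binary named in the mail. This is an environment-configuration fragment, not a runnable self-contained script.)

```shell
# G_SLICE/G_DEBUG are documented glib environment knobs that make
# allocations valgrind-friendly; ./test-visiter2 is the binary from
# the mail above.
G_SLICE=always-malloc G_DEBUG=gc-friendly \
    valgrind --leak-check=full ./test-visiter2
```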
> > However, you don't want to scan a list of 10 or more lines to see
> > whether a test failed; you want something more obvious, like what
> > check does:
> >
> >  ~/src/qmp-unstable/build (qapi-review)/ ./check-qint
> >  Running suite(s): QInt test-suite
> >  100%: Checks: 5, Failures: 0, Errors: 0
> >  ~/src/qmp-unstable/build (qapi-review)/
> >
> > Now, I read about the gtester program and gtester-report, and I can
> > understand the wonders of an xml + html report (like having it on a
> > web page, etc), but running two programs and transforming xml is not
> > what developers want to do when they're running unit tests every few
> > minutes (not to mention that I'm getting all kinds of crashes when I
> > run gtester-report on fedora).

> I actually like the way gtester does it and the html output is quite
> nice IMHO.
>
> But the main motivation behind gtester is that it's there. It can be a
> non-optional build dependency. libcheck cannot, because it's not widely
> available/used. It's also much harder to use libcheck since you have to
> create a test hierarchy programmatically.

Agreed.

> The check tests have bit-rotted over time to the point that they're
> broken in the tree.

No, that's not true. Only check-qjson has a failing test, and I did that
on purpose (ie. my fault). I fixed a few issues wrt the handling of
backslashes last year and realized that some tests were missing. I added
them, but that one didn't pass. I was sure it was a problem in the code
(and I think I talked to you about it), but I didn't know how to fix it,
so I decided to leave the test failing as a way to remind me that it had
a problem.

check-qdict is not broken, it just requires its test file to exist in
the same directory. I never bothered to fix this because I used to build
qemu in the top directory.

Both problems are long-standing because I've been avoiding touching any
QMP code since the QAPI discussions started.

> I attribute this to the fact that they aren't built
> by default.

This is true.
I doubt either problem would exist if the tests were run in every
(developer?) build.

> Regards,
>
> Anthony Liguori

> > Ah, I just found out that check also has xml support, but I've never
> > used it...