21.01.2021 04:17, Eric Blake wrote:
On 11/18/20 12:04 PM, Vladimir Sementsov-Ogievskiy wrote:
We are going to add more test cases, so use the library that supports test
cases.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
---
tests/qemu-iotests/264 | 93 ++++++++++++++++++++++----------------
tests/qemu-iotests/264.out | 20 ++------
2 files changed, 58 insertions(+), 55 deletions(-)
+++ b/tests/qemu-iotests/264.out
@@ -1,15 +1,5 @@
-Start NBD server
-{"execute": "blockdev-add", "arguments": {"driver": "raw", "file": {"driver": "nbd", "reconnect-delay": 10, "server": {"path":
"TEST_DIR/PID-nbd-sock", "type": "unix"}}, "node-name": "backup0"}}
-{"return": {}}
-{"execute": "blockdev-backup", "arguments": {"device": "drive0", "speed": 1048576, "sync": "full",
"target": "backup0"}}
-{"return": {}}
-Backup job is started
-Kill NBD server
-Backup job is still in progress
-{"execute": "block-job-set-speed", "arguments": {"device": "drive0", "speed":
0}}
-{"return": {}}
-Start NBD server
-Backup completed: 5242880
-{"execute": "blockdev-del", "arguments": {"node-name": "backup0"}}
-{"return": {}}
-Kill NBD server
+.
+----------------------------------------------------------------------
+Ran 1 tests
+
+OK
I find it a shame that the expected output no longer shows what was
executed. But the test still passes, and if it makes it easier for you
to extend the test in a later patch, I won't stand in the way (this is
more an indication that by stripping the useful output, I'm no longer in
as decent a position to help debug if the test starts failing).
Still, what is executed is clear from the test itself. And IMHO, debugging
Python unittests is simpler: you get the stack trace and immediately see what
happened, whereas with output-checking tests you first have to figure out which
statement corresponds to the wrong output. Not to mention that with unittests I
can simply run only one test case (that's the reason why I think that tests
with several test cases should be written as unittest tests). Debugging
output-checking tests with a lot of test cases inside is always a pain for me.
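Just to illustrate what I mean, here is a rough sketch in the unittest style,
using the common iotests helpers. The names and the setup are made up for the
example, it is not the actual 264 code:

    #!/usr/bin/env python3
    # Illustrative sketch only (not the real 264): a test written with the
    # unittest-based iotests helpers.  A failing assertion raises right
    # away with a stack trace pointing at the failing statement, and a
    # single test case can be run without rerunning the whole script.
    import os
    import iotests

    disk = os.path.join(iotests.test_dir, 'disk')

    class TestExample(iotests.QMPTestCase):
        def setUp(self):
            iotests.qemu_img_create('-f', iotests.imgfmt, disk, '5M')
            self.vm = iotests.VM().add_drive(disk)
            self.vm.launch()
            # scratch 5M target node for the backup test below
            result = self.vm.qmp('blockdev-add', node_name='backup0',
                                 driver='null-co', size=5 * 1024 * 1024)
            self.assert_qmp(result, 'return', {})

        def tearDown(self):
            self.vm.shutdown()
            os.remove(disk)

        def test_backup_starts(self):
            result = self.vm.qmp('blockdev-backup', device='drive0',
                                 sync='full', target='backup0')
            self.assert_qmp(result, 'return', {})

    if __name__ == '__main__':
        iotests.main(supported_fmts=['qcow2'])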
Another benefit of unittest: on failure the test case finishes immediately.
With output-checking tests, the test continues to execute and may produce more
non-matching log output, hang, or anything else.
Another drawback of output-checking tests: they often test too many unrelated
things. Sometimes that's good: you can catch some unrelated bug :) But often
it's a pain: you have to modify test outputs when adding new features or
changing behaviour that is actually unrelated to what the test wants to check.
Python unittests are more difficult to write, as you have to understand what
exactly you want (and should) check, whereas with output-checking tests you can
just log everything. But in general I'm for Python unittests.
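E.g., continuing the sketch above, a test case asserts exactly the fact it
cares about (here, that the backup copied all 5 MiB), not the whole QMP log:

        # Another test case for the TestExample sketch above: assert only
        # the relevant field of the completion event instead of matching
        # the full output textually.
        def test_backup_completes(self):
            result = self.vm.qmp('blockdev-backup', device='drive0',
                                 sync='full', target='backup0')
            self.assert_qmp(result, 'return', {})
            event = self.vm.event_wait('BLOCK_JOB_COMPLETED')
            self.assertEqual(event['data']['offset'], 5 * 1024 * 1024)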
Still, I sometimes think about supporting reference output for python-unittest
based tests (without losing the ability to execute test cases separately, maybe
an .out file per test case?); that may be a good compromise.
Reviewed-by: Eric Blake <ebl...@redhat.com>
--
Best regards,
Vladimir