On 5/14/21 4:16 AM, Emanuele Giuseppe Esposito wrote:
On 13/05/2021 20:47, John Snow wrote:
On 4/14/21 1:03 PM, Emanuele Giuseppe Esposito wrote:
As with gdbserver, valgrind delays the test execution, so
the default QMP socket timeout expires too soon.
Signed-off-by: Emanuele Giuseppe Esposito <eespo...@redhat.com>
---
python/qemu/machine.py | 2 +-
tests/qemu-iotests/iotests.py | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/python/qemu/machine.py b/python/qemu/machine.py
index d6142271c2..dce96e1858 100644
--- a/python/qemu/machine.py
+++ b/python/qemu/machine.py
@@ -410,7 +410,7 @@ def _launch(self) -> None:
shell=False,
close_fds=False)
- if 'gdbserver' in self._wrapper:
+ if 'gdbserver' in self._wrapper or 'valgrind' in self._wrapper:
This leads me to suggest that we just change __init__ to accept a
parameter that lets the caller decide what kind of timeout(s) they
find acceptable. They know more about what they're trying to run than
we do.
Certainly after launch occurs, the user is free to just grab the qmp
object and tinker around with the timeouts, but that does not allow us
to change the timeout(s) for accept itself.
D'oh.
(Spilled milk: It was probably a mistake to make the default launch
behavior here have a timeout of 15 seconds. That logic likely belongs
to the iotests implementation. The default here probably ought to
indeed be "wait forever".)
In the here and now ... would it be acceptable to change the launch()
method to add a timeout parameter? It's still a little awkward,
because conceptually it's a timeout for just QMP and not for the
actual duration of the entire launch process.
But, I guess, it's *closer* to the truth.
If you wanted to route it that way, I take back what I said about not
wanting to pass around variables to event loop hooks.
If we defined the timeout as something that applies exclusively to the
launching process, then it'd be appropriate to route that to the
launch-related functions ... and subclasses would have to be adjusted
to be made aware that they're expected to operate within those
parameters, which is good.
Sorry for my waffling back and forth on this. Let me know what the
actual requirements are if you figure out which timeouts you need /
don't need and I'll give you some review priority.
Uhm... I am getting a little bit confused about what to do too :)
SORRY, I hit send too quickly and then changed my mind. I've handed you a
giant bag of my own confusion. Very unfair of me!
So the current plan I have for _qmp_timer is:
- As Max suggested, move it in __init__ and check there for the wrapper
contents. If we need to block forever (gdb, valgrind), we set it to
None. Otherwise to 15 seconds. I think setting it always to None is not
ideal, because if you are testing something that deadlocks (see my
attempts to remove/add locks in QEMU multiqueue) and the socket is set
to block forever, you don't know if the test is super slow or it just
deadlocked.
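A rough sketch of that __init__ check (untested; the class here is a
minimal stand-in for QEMUMachine, and the attribute names are only
illustrative):

```python
class MachineSketch:
    """Minimal stand-in for QEMUMachine to illustrate the plan."""

    def __init__(self, wrapper=()):
        self._wrapper = list(wrapper)
        # Slow wrappers get no deadline; everything else keeps the
        # 15-second upper bound so a deadlocked test still fails
        # instead of hanging forever.
        if 'gdbserver' in self._wrapper or 'valgrind' in self._wrapper:
            self._qmp_timer = None
        else:
            self._qmp_timer = 15.0
```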
I agree with your concern on rational defaults, let's focus on that briefly:
Let's have QEMUMachine default to *no timeouts* moving forward, and have
the timeouts be *opt-in*. This keeps the Machine class somewhat pure and
free of opinions. The separation of mechanism and policy.
Next, instead of modifying hundreds of tests to opt-in to the timeout,
let's modify the VM class in iotests.py to opt-in to that timeout,
restoring the current "safe" behavior of iotests.
The above items can happen in a single commit, preserving behavior in
the bisect.
Finally, we can add a non-private property that individual tests can
re-override to opt BACK out of the default.
Something as simple as:
vm.qmp_timeout = None
would be just fine.
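In code, the split could look something like this (class and attribute
names are illustrative, not the real machine.py/iotests.py API):

```python
class QEMUMachineSketch:
    # Mechanism: the base class ships no opinion -- wait forever.
    qmp_timeout = None


class IotestsVMSketch(QEMUMachineSketch):
    # Policy: the iotests VM class opts back in to the old safe bound.
    qmp_timeout = 15.0


# An individual test can still opt out again, per instance:
vm = IotestsVMSketch()
vm.qmp_timeout = None
```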
Well, one can argue that in both cases this is not the expected
behavior, but I think having an upper bound on each QMP command
execution would be good.
- pass _qmp_timer directly to self._qmp.accept() in _post_launch(),
leaving _launch() intact. I think this makes sense because, as you also
mentioned, changing _post_launch() to take a parameter would also
require changing all subclasses and passing values around.
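Roughly like this (with a stub QMP object standing in for the real
accept(); nothing here is the actual machine.py code):

```python
class StubQMP:
    """Records the timeout it was handed instead of accepting a socket."""

    def __init__(self):
        self.accepted_with = 'unset'

    def accept(self, timeout):
        self.accepted_with = timeout


class MachineSketch:
    def __init__(self, qmp_timer=15.0):
        self._qmp_timer = qmp_timer
        self._qmp = StubQMP()

    def _post_launch(self):
        # The timeout rides along as instance state, so _launch() and
        # the subclass hook signatures stay untouched.
        self._qmp.accept(self._qmp_timer)
```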
Sounds OK. If we do change the defaults back to "No Timeout" in a way
that allows an override by an opinionated class, we'll already have the
public property, though, so a parameter might not be needed.
(Yes, this is the THIRD time I've changed my mind in 48 hours.)
Any opinion on this is very welcome.
Brave words!
My last thought here is that I still don't like the idea of QEMUMachine
class changing its timeout behavior based on the introspection of
wrapper args.
It feels much more like the case that a caller who is knowingly wrapping
it with a program that delays its execution should change its parameters
accordingly based on what the caller knows about what they're trying to
accomplish.
Does that make the code too messy? I understand you probably want to
ensure that adding a GDB wrapper is painless and simple, so it might not
be great to always ask a caller to remember to set some timeout value to
use it.
I figure that the right place to do this, though, is wherever the
boilerplate code gets written that knows how to set up the right gdb
args and so on, and not in machine.py. It sounds like iotests.py code to
me, maybe in the VM class.
Spoiler alert: I haven't tested these changes yet, but I am positive that
there shouldn't be any problem. (Famous last words)
Emanuele
Clear as mud?
--js