Hi

On Mon, Oct 8, 2018 at 11:15 AM Peter Xu <pet...@redhat.com> wrote:
>
> On Tue, Oct 02, 2018 at 01:13:10PM +0400, Marc-André Lureau wrote:
> > Hi Peter
> >
> > On Sat, Sep 29, 2018 at 8:05 AM Peter Xu <pet...@redhat.com> wrote:
> > >
> > > On Fri, Sep 28, 2018 at 04:06:30PM +0400, Marc-André Lureau wrote:
> > > > Hi
> > > >
> > > > On Wed, Sep 5, 2018 at 10:24 AM Peter Xu <pet...@redhat.com> wrote:
> > > > >
> > > > > Currently, when the QMP request queue is full, we won't resume the
> > > > > monitor until we have completely handled the current command.  That
> > > > > is not necessary, since the queue is already non-full even before
> > > > > the command is handled.  Move the resume logic earlier, before the
> > > > > command execution, and drop the need_resume local variable.
> > > > >
> > > > > Signed-off-by: Peter Xu <pet...@redhat.com>
> > > > > ---
> > > > >  monitor.c | 12 +++++-------
> > > > >  1 file changed, 5 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/monitor.c b/monitor.c
> > > > > index a89bb86599..c2c9853f75 100644
> > > > > --- a/monitor.c
> > > > > +++ b/monitor.c
> > > > > @@ -4130,7 +4130,6 @@ static void monitor_qmp_bh_dispatcher(void *data)
> > > > >  {
> > > > >      QMPRequest *req_obj = monitor_qmp_requests_pop_any_with_lock();
> > > > >      QDict *rsp;
> > > > > -    bool need_resume;
> > > > >      Monitor *mon;
> > > > >
> > > > >      if (!req_obj) {
> > > > > @@ -4139,8 +4138,11 @@ static void monitor_qmp_bh_dispatcher(void *data)
> > > > >
> > > > >      mon = req_obj->mon;
> > > > >      /* qmp_oob_enabled() might change after "qmp_capabilities" */
> > > > > -    need_resume = !qmp_oob_enabled(mon) ||
> > > > > -        mon->qmp.qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1;
> > > > > +    if (!qmp_oob_enabled(mon) ||
> > > > > +        mon->qmp.qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1) {
> > > > > +        /* Pairs with the monitor_suspend() in handle_qmp_command() */
> > > > > +        monitor_resume(mon);
> > > > > +    }
> > > >
> > > > With a spice chardev, this may result in a synchronous write.
> > > > If I read it right, this may re-enter handle_qmp_command() and
> > > > deadlock on qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
> > > >
> > > > So at least I would release the lock before resuming :)
> > >
> > > For sure, this I can do. :)  Though I'd like to know more context too.
> > >
> > > I just noticed that we added the qemu_chr_fe_accept_input() call to
> > > monitor_resume() a month ago, which I was completely unaware of... so
> > > resuming could be a heavyweight function now.  I'm a bit worried about
> > > whether this would affect the OOB feature, noting that we're still in
> > > the monitor iothread (which should not block for long).  Especially
> > > if, as you mentioned, we'll handle commands again, could we
> > > potentially run non-OOB command handlers in OOB context here, simply
> > > due to the call to monitor_resume()?
> >
> > monitor_resume() is only called from the main thread, AFAICT.
>
> My fault on misreading that; yes, it's only called in the main thread.
>
> > So I think the problem is rather that qemu_chr_fe_accept_input() is
> > not thread-safe (if the same chardev frontend is used in a different
> > thread, like mon_iothread).
> >
> > So instead of simply kicking the mon_iothread, we should probably
> > schedule a BH to call accept input.
>
> Hmm, could you help explain why we need to make
> qemu_chr_fe_accept_input() thread-safe?  I see it's only called in the
> main thread as well (besides the call in monitor_resume(), it's mostly
> in memory region ops), or did I misread again?
Yes, it's called from the main thread.  Still, it's not very safe for
the chardev to be called from different threads.  Let's try to keep the
chardev-related calls in mon_iothread (if it is used).  I'll send a
patch along with some other cleanups.

> Regards,
>
> --
> Peter Xu


--
Marc-André Lureau