On Thu, Dec 14, 2017 at 11:41:36AM +0000, Stefan Hajnoczi wrote:
> On Tue, Dec 05, 2017 at 01:51:52PM +0800, Peter Xu wrote:
> > Set maximum QMP request queue length to 8. If queue full, instead of
> > queue the command, we directly return a "request-dropped" event, telling
> > client that specific command is dropped.
> >
> > Note that this flow control mechanism is only valid if OOB is enabled.
> > If it's not, the effective queue length will always be 1, which strictly
> > follows original behavior of QMP command handling (which never drop
> > messages).
> >
> > Signed-off-by: Peter Xu <pet...@redhat.com>
> > ---
> >  monitor.c | 15 +++++++++++++++
> >  1 file changed, 15 insertions(+)
> >
> > diff --git a/monitor.c b/monitor.c
> > index c20e659740..f7923c4590 100644
> > --- a/monitor.c
> > +++ b/monitor.c
> > @@ -4029,6 +4029,8 @@ static void monitor_qmp_bh_dispatcher(void *data)
> >      }
> >  }
> >
> > +#define QMP_REQ_QUEUE_LEN_MAX (8)
> > +
> >  static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens,
> >                                 void *opaque)
> >  {
> > @@ -4061,6 +4063,19 @@ static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens,
> >      req_obj->id = id;
> >      req_obj->req = req;
> >
> > +    if (qmp_oob_enabled(mon)) {
> > +        /* Drop the request if queue is full. */
> > +        if (mon->qmp.qmp_requests->length >= QMP_REQ_QUEUE_LEN_MAX) {
>
> qmp_queue_lock must be held.  Perhaps it's simplest to move this check
> into the qmp_queue_lock critical section below and unlock before calling
> qapi_event_send_request_dropped()...
>
> > +            qapi_event_send_request_dropped(id,
> > +                                            REQUEST_DROP_REASON_QUEUE_FULL,
> > +                                            NULL);
> > +            qobject_decref(id);
> > +            qobject_decref(req);
> > +            g_free(req_obj);
> > +            return;
> > +        }
> > +    }
> > +
> >      /*
> >       * Put the request to the end of queue so that requests will be
> >       * handled in time order.  Ownership for req_obj, req, id,
>
> ... down here.
Yeah, can do.  I don't think it's strictly a must (IMHO the worst case isn't
that bad with a queue length of 8), but it's obviously nicer.  Thanks,

-- 
Peter Xu
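For reference, a minimal sketch of the rearrangement Stefan suggests: the
length check folded into the qmp_queue_lock critical section, with the lock
released before the "request-dropped" event is emitted.  Untested; the field
and helper names are assumed to match the monitor.c code from this series:

    /*
     * Sketch: tail of handle_qmp_command(), with the overflow check done
     * while qmp_queue_lock is held.
     */
    qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
    if (qmp_oob_enabled(mon) &&
        mon->qmp.qmp_requests->length >= QMP_REQ_QUEUE_LEN_MAX) {
        /* Queue is full: drop the request instead of queuing it. */
        qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
        qapi_event_send_request_dropped(id,
                                        REQUEST_DROP_REASON_QUEUE_FULL,
                                        NULL);
        qobject_decref(id);
        qobject_decref(req);
        g_free(req_obj);
        return;
    }
    /*
     * Put the request at the end of the queue so that requests are
     * handled in time order; ownership of req_obj, req and id moves
     * into the queue.
     */
    g_queue_push_tail(mon->qmp.qmp_requests, req_obj);
    qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);

Since the check and the push happen under the same lock, the queue length
cannot change between them, and the event is sent outside the lock to keep
the critical section short.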