On 2/6/19 2:26 PM, Petre Ovidiu PIRCALABU wrote:
> On Wed, 2018-12-19 at 20:52 +0200, Petre Pircalabu wrote:
>> This patchset is a rework of the "multi-page ring buffer" for vm_events
>> patch based on Andrew Cooper's comments.
>> For synchronous vm_events the ring waitqueue logic was unnecessary as the
>> vcpu sending the request was blocked until a response was received.
>> To simplify the request/response mechanism, an array of slotted channels
>> was created, one per vcpu. Each vcpu puts the request in the corresponding
>> slot and blocks until the response is received.
>>
>> I'm sending this patch as an RFC because, while I'm still working on a
>> way to measure the overall performance improvement, your feedback would
>> be of great assistance.
>>
> 
> Is anyone still using asynchronous vm_event requests? (the vcpu is not
> blocked and no response is expected).
> If not, I suggest that the feature should be removed as it
> (significantly) increases the complexity of the current implementation.
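
If I've understood the quoted cover letter correctly, the per-vCPU
slotted channel would work roughly like the C sketch below.  All of the
names, sizes, and helper functions here are my own illustrative guesses,
not the actual patch: the requesting vCPU writes into its own slot, marks
it pending, and blocks until the monitor application flips the state back.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Slot states for the request/response handshake (names are made up). */
#define VM_EVENT_SLOT_FREE      0  /* slot idle, ready for a new request  */
#define VM_EVENT_SLOT_PENDING   1  /* request written, monitor not done   */
#define VM_EVENT_SLOT_FINISHED  2  /* response written, vCPU may continue */

struct vm_event_slot {
    uint32_t state;                /* one of the VM_EVENT_SLOT_* values   */
    uint8_t  data[508];            /* request on the way in, response out */
};

/* One slot per vCPU in a shared memory region, indexed by vcpu_id. */
struct vm_event_slotted_channel {
    uint32_t nr_slots;             /* number of vCPUs, hence slots        */
    struct vm_event_slot slot[];
};

/* Stand-ins for the real event-channel / scheduler plumbing. */
void notify_monitor(unsigned int vcpu_id);
void wait_for_response(unsigned int vcpu_id);   /* blocks the vCPU */

/* vCPU side: synchronous send.  The vCPU stays blocked until the monitor
 * application writes its response into the same slot.  (Length checking
 * omitted for brevity.) */
static void vm_event_send_and_wait(struct vm_event_slotted_channel *ch,
                                   unsigned int vcpu_id,
                                   const void *req, size_t len)
{
    struct vm_event_slot *s = &ch->slot[vcpu_id];

    memcpy(s->data, req, len);
    s->state = VM_EVENT_SLOT_PENDING;
    notify_monitor(vcpu_id);

    while ( s->state != VM_EVENT_SLOT_FINISHED )
        wait_for_response(vcpu_id);

    s->state = VM_EVENT_SLOT_FREE;  /* response consumed, slot reusable */
}

Because each vCPU owns exactly one slot and stays blocked while it is in
use, the request path needs no locking or wrap-around handling, which (as
I read the cover letter) is the simplification over the shared ring.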

Could you describe in a bit more detail what the situation is?  What's
the current state of affairs with vm_events, what are you trying to
change, and why is async vm_events more difficult to support?

I certainly think it would be better if you could write the new vm_event
interface without having to spend a lot of effort supporting modes that
you think nobody uses.

On the other hand, getting into the habit of breaking stuff, even for
people we don't know about, will be a hindrance to community growth; a
commitment to keeping it working will be a benefit to growth.

But of course, we haven't declared the vm_event interface 'supported'
(it's not even mentioned in the SUPPORT.md document yet).

Just for the sake of discussion, would it be possible / reasonable, for
instance, to create a new interface, vm_events2, instead?  Then you
could write it to share the ioreq interface without the legacy baggage
you're not using; we could deprecate and eventually remove vm_events1,
and if anyone shouts, we can put it back.
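
Purely as a sketch of what sharing the ioreq interface might look like
(everything below is hypothetical and not an existing Xen structure), a
vm_events2 per-vCPU slot could reuse the same kind of four-state
handshake the ioreq pages already use, so the monitor side could be
driven by much the same machinery as an ioreq server:

#include <stdint.h>

/* Hypothetical vm_events2 per-vCPU slot modelled on the ioreq handshake.
 * The state names deliberately mirror the ioreq ones, but none of this
 * exists in Xen today. */
enum vm_event2_state {
    VM_EVENT2_STATE_NONE = 0,      /* slot idle                           */
    VM_EVENT2_STATE_REQ_READY,     /* hypervisor has written a request    */
    VM_EVENT2_STATE_IN_PROCESS,    /* monitor is handling the request     */
    VM_EVENT2_STATE_RESP_READY     /* monitor has written back a response */
};

struct vm_event2_slot {
    uint32_t state;                /* enum vm_event2_state                */
    uint32_t vcpu_id;              /* owning vCPU                         */
    uint8_t  payload[504];         /* request/response body               */
};

The attraction would be that the shared-page and per-vCPU event-channel
plumbing already exists for ioreq servers, so most of the new work would
be defining the payload format and the registration path.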

Thoughts?

 -George
