On 30/05/2019 07:18, Petre Pircalabu wrote:
> This patchset adds a new mechanism of sending synchronous vm_event
> requests and handling vm_event responses without using a ring.
> As each synchronous request pauses the vcpu until the corresponding
> response is handled, it can be stored in a slotted memory buffer
> (one per vcpu) shared between the hypervisor and the controlling domain.
>
> The main advantages of this approach are:
> - the ability to dynamically allocate the necessary memory used to hold
>   the requests/responses (the size of vm_event_request_t/vm_event_response_t
>   can grow unrestricted by the ring's one-page limitation)
> - the ring's waitqueue logic is unnecessary in this case because the
>   vcpu sending the request is blocked until a response is received.
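For the sake of discussion, my mental model of the shared buffer is roughly
the sketch below.  The names, the slot state machine and the sizes are my
own illustration of the idea (one fixed-size slot per paused vcpu, with a
per-vcpu event channel), not necessarily the layout the series actually
uses:

/*
 * Illustrative only -- field names, state machine and sizes here are
 * guesses for discussion purposes, not the layout from the patches.
 */
#include <stdint.h>

#define ILLUSTRATIVE_MAX_VCPUS     128  /* assumed; real code would size by d->max_vcpus */
#define ILLUSTRATIVE_SLOT_PAYLOAD  512  /* assumed upper bound on request/response size  */

enum vm_event_slot_state {
    VM_EVENT_SLOT_IDLE,       /* no event outstanding for this vcpu            */
    VM_EVENT_SLOT_SUBMITTED,  /* Xen wrote a request; the vcpu is paused       */
    VM_EVENT_SLOT_FINISHED,   /* the agent wrote a response; vcpu can resume   */
};

/*
 * One slot per vcpu; the request and the response reuse the same payload
 * area, which is safe because the vcpu stays paused until the response
 * has been handled, so at most one event is in flight per slot.
 */
struct vm_event_slot {
    uint32_t state;                                /* enum vm_event_slot_state */
    uint32_t port;                                 /* per-vcpu event channel   */
    uint8_t  payload[ILLUSTRATIVE_SLOT_PAYLOAD];   /* request, then response   */
};

/* The shared memory region is simply an array indexed by vcpu id. */
struct vm_event_slot_buffer {
    struct vm_event_slot slot[ILLUSTRATIVE_MAX_VCPUS];
};

This avoids both the one-page ring limit and the waitqueue logic, at the
cost of a second, parallel interface.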
Before I review patches 7-9 for more than stylistic things, can you briefly
describe what's next?

AFAICT, this introduces a second interface between Xen and the agent, which
is limited to synchronous events only, exclusively uses a slotted system
per vcpu, and has a per-vcpu event channel?

What (if any) are the future development plans, and what are the plans for
deprecating the use of the old interface?  (The answers to these will
affect my review of the new interface.)

~Andrew