On 13/09/18 16:02, Petre Pircalabu wrote:
> In high throughput introspection scenarios where lots of monitor
> vm_events are generated, the ring buffer can fill up before the monitor
> application gets a chance to handle all the requests, thus blocking
> other vcpus which will have to wait for a slot to become available.
>
> This patch adds support for extending the ring buffer by allocating a
> number of pages from domheap and mapping them to the monitor
> application's domain using the foreignmemory_map_resource interface.
> Unlike the current implementation, the ring buffer pages are not part of
> the introspected DomU, so they will not be reclaimed when the monitor is
> disabled.
>
> Signed-off-by: Petre Pircalabu <ppircal...@bitdefender.com>
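[Editor's note: the blocking behaviour described above can be modelled with a toy ring. This is NOT the Xen vm_event ring (which lives in shared memory and uses event channels); it is only a minimal sketch of why a producer, here standing in for a vCPU, must wait once every slot is occupied.]

```c
#include <stdbool.h>

/* Toy fixed-capacity ring with free-running producer/consumer counters.
 * All names here are illustrative, not Xen's. */
#define RING_SLOTS 4

struct ring {
    unsigned int prod, cons;   /* free-running; difference = items queued */
    int req[RING_SLOTS];
};

static bool ring_full(const struct ring *r)
{
    return r->prod - r->cons == RING_SLOTS;
}

/* Returns false when full: the "vCPU" must pause until the monitor
 * (the consumer) frees a slot. */
static bool ring_put(struct ring *r, int req)
{
    if ( ring_full(r) )
        return false;
    r->req[r->prod++ % RING_SLOTS] = req;
    return true;
}

static bool ring_get(struct ring *r, int *req)
{
    if ( r->prod == r->cons )
        return false;          /* empty */
    *req = r->req[r->cons++ % RING_SLOTS];
    return true;
}
```

With RING_SLOTS events outstanding, the next ring_put() fails until a ring_get() runs; enlarging the ring, as this patch does, only raises that threshold rather than removing it.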
What about the slotted format for the synchronous events? While this is
fine for the async bits, I don't think we want to end up changing the
mapping API twice. Simply increasing the size of the ring puts more
pressure on the

> diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> index 0d23e52..2a9cbf3 100644
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -331,10 +331,9 @@ void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr)
>  {
>      mfn_t mfn[nr];
>      int i;
> -    struct page_info *cur_pg = (struct page_info *)&pg[0];
>
>      for (i = 0; i < nr; i++)
> -        mfn[i] = page_to_mfn(cur_pg++);
> +        mfn[i] = page_to_mfn(pg++);

This hunk looks like it should be in the previous patch?  That said...

>
>      return map_domain_pages_global(mfn, nr);
>  }
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 4793aac..faece3c 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -39,16 +39,66 @@
>  #define vm_event_ring_lock(_ved)       spin_lock(&(_ved)->ring_lock)
>  #define vm_event_ring_unlock(_ved)     spin_unlock(&(_ved)->ring_lock)
>
> +#define XEN_VM_EVENT_ALLOC_FROM_DOMHEAP 0xFFFFFFFF
> +
> +static int vm_event_alloc_ring(struct domain *d, struct vm_event_domain *ved)
> +{
> +    struct page_info *page;
> +    void *va = NULL;
> +    int i, rc = -ENOMEM;
> +
> +    page = alloc_domheap_pages(d, ved->ring_order, MEMF_no_refcount);
> +    if ( !page )
> +        return -ENOMEM;

... what is wrong with vzalloc()?  You don't want to be making a
ring_order allocation, especially as the order grows.  All you need are
some mappings which are virtually contiguous, not physically contiguous.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel