> -----Original Message-----
> From: Jan Beulich <jbeul...@suse.com>
> Sent: 27 November 2019 09:44
> To: Durrant, Paul <pdurr...@amazon.com>; Grall, Julien <jgr...@amazon.com>
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper <andrew.coop...@citrix.com>;
> Roger Pau Monné <roger....@citrix.com>; Wei Liu <w...@xen.org>
> Subject: Re: [PATCH] xen/x86: vpmu: Unmap per-vCPU PMU page when the
> domain is destroyed
>
> On 26.11.2019 18:17, Paul Durrant wrote:
> > From: Julien Grall <jgr...@amazon.com>
> >
> > A guest will set up a shared page with the hypervisor for each vCPU via
> > XENPMU_init. The page will then get mapped in the hypervisor and only
> > released when XENPMU_finish is called.
> >
> > This means that if the guest is not shut down gracefully (such as via xl
> > destroy), the page will stay mapped in the hypervisor.
>
> Isn't this still too weak a description? It's not the tool stack
> invoking XENPMU_finish, but the guest itself afaics. I.e. a
> misbehaving guest could prevent proper cleanup even with graceful
> shutdown.
>
Ok, how about 'if the guest fails to invoke XENPMU_finish, e.g. if it is destroyed rather than cleanly shut down'?

> > @@ -2224,6 +2221,9 @@ int domain_relinquish_resources(struct domain *d)
> >      if ( is_hvm_domain(d) )
> >          hvm_domain_relinquish_resources(d);
> >
> > +    for_each_vcpu ( d, v )
> > +        vpmu_destroy(v);
> > +
> >      return 0;
> >  }
>
> I think simple things which may allow shrinking the page lists
> should be done early in the function. As vpmu_destroy() looks
> to be idempotent, how about leveraging the very first
> for_each_vcpu() loop in the function (there are too many of them
> there anyway, at least for my taste)?
>

Ok. I did wonder where in the sequence was best... Leaving it to the end obviously puts it closer to where it was previously called, but I can't see any harm in moving it earlier.

  Paul

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel