On 29/06/15 15:17, Lengyel, Tamas wrote:
>
> On Mon, Jun 29, 2015 at 10:09 AM, Lengyel, Tamas <tleng...@novetta.com
> <mailto:tleng...@novetta.com>> wrote:
>
>     On Mon, Jun 29, 2015 at 9:55 AM, Andrew Cooper
>     <andrew.coop...@citrix.com <mailto:andrew.coop...@citrix.com>> wrote:
>
>         On 29/06/15 14:42, Lengyel, Tamas wrote:
>>
>>         > --- a/xen/arch/x86/hvm/hvm.c
>>         > +++ b/xen/arch/x86/hvm/hvm.c
>>         > @@ -6431,6 +6431,14 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
>>         >      return rc;
>>         >  }
>>         >
>>         > +void hvm_toggle_singlestep(struct vcpu *v)
>>         > +{
>>         > +    if ( !cpu_has_monitor_trap_flag )
>>
>>         monitor_trap_flag is a VMX feature.  This will never be true on
>>         AMD systems.  (its use in hvm_debug_op() is also dubious)
>>
>>     Yes, this feature is only for Intel cpus. Reworking the use of the
>>     flag though is a bit out-of-scope for this patch itself.
>
>     In which case you must make it very clear in the commit message that
>     this is for Intel only and needs fixing up for AMD.  Best also to
>     have a comment to the same effect in this function.
>
> Sure.
>
>>         > +        return;
>>         > +
>>         > +    v->arch.hvm_vcpu.single_step = !v->arch.hvm_vcpu.single_step;
>>         > +}
>>         > +
>>         >  int nhvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
>>         >  {
>>         >      if (hvm_funcs.nhvm_vcpu_hostrestore)
>>         > diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
>>         > new file mode 100644
>>         > index 0000000..95b30ad
>>         > --- /dev/null
>>         > +++ b/xen/arch/x86/vm_event.c
>>         > @@ -0,0 +1,41 @@
>>         > +/*
>>         > + * arch/x86/vm_event.c
>>         > + *
>>         > + * Architecture-specific vm_event handling routines
>>         > + *
>>         > + * Copyright (c) 2015 Tamas K Lengyel (ta...@tklengyel.com
>>         > + * <mailto:ta...@tklengyel.com>)
>>         > + *
>>         > + * This program is free software; you can redistribute it and/or
>>         > + * modify it under the terms of the GNU General Public
>>         > + * License v2 as published by the Free Software Foundation.
>>         > + *
>>         > + * This program is distributed in the hope that it will be useful,
>>         > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>         > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>>         > + * General Public License for more details.
>>         > + *
>>         > + * You should have received a copy of the GNU General Public
>>         > + * License along with this program; if not, write to the
>>         > + * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
>>         > + * Boston, MA 021110-1307, USA.
>>         > + */
>>         > +
>>         > +#include <xen/sched.h>
>>         > +#include <asm/hvm/hvm.h>
>>         > +
>>         > +void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
>>         > +{
>>         > +    if ( (v == current) || !is_hvm_domain(d) )
>>
>>         Why is 'current' excluded?
>>
>>     Only to be consistent with the sanity check applied for
>>     XEN_DOMCTL_debug_op.
>
>     XEN_DOMCTL_debug_op needs the check because of vcpu_pause().  It is
>     always run in the context of the hypercaller domain, and must never
>     end up calling singlestep on itself.
>
>     However in this case, we can only legitimately use this on an
>     already-paused vcpu, which guarantees that Xen is not currently in
>     the context of the target vcpu.
>
>     It would probably be better to have a check against the vcpu pause
>     count here (with early return), and hvm_toggle_singlestep() assert so.
>
> I guess that's a fair sanity check to add in case the vm_event listener
> is buggy.
>
> I was thinking ASSERT(v->vm_event_pause_count.counter); but that locks
> the function to only be usable through the vm_event response method.
> What do you think?
You want to use atomic_read() instead of peeking into .counter.

vm_event_toggle_singlestep() can legitimately gate on
v->vm_event_pause_count, whereas hvm_toggle_singlestep() can get away
with looking at v->pause_count.

See vmx_domain_enable_pml() as a similar example.

~Andrew
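[For readers following along, here is a minimal, self-contained sketch of the pause-count gating being proposed. It stands in for Xen's real struct vcpu and atomic_t with toy definitions so it compiles on its own; the field names vm_event_pause_count and pause_count come from the thread, but everything else is illustrative, not Xen's actual implementation.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for Xen's atomic_t; only atomic_read() is modelled. */
typedef struct { int counter; } atomic_t;
static int atomic_read(const atomic_t *a) { return a->counter; }

/* Toy stand-in for struct vcpu, keeping just the fields under discussion. */
struct vcpu {
    atomic_t pause_count;           /* total pause references on the vcpu */
    atomic_t vm_event_pause_count;  /* pauses taken by the vm_event subsystem */
    bool single_step;               /* stand-in for v->arch.hvm_vcpu.single_step */
};

/*
 * hvm_toggle_singlestep() asserts only that the vcpu is paused by someone,
 * so it remains usable outside the vm_event response path.
 */
static void hvm_toggle_singlestep(struct vcpu *v)
{
    assert(atomic_read(&v->pause_count));
    v->single_step = !v->single_step;
}

/*
 * vm_event_toggle_singlestep() gates (with early return) on the
 * vm_event-specific pause count, so a buggy vm_event listener issuing the
 * request against an unpaused vcpu is silently ignored.
 */
static void vm_event_toggle_singlestep(struct vcpu *v)
{
    if ( !atomic_read(&v->vm_event_pause_count) )
        return;

    hvm_toggle_singlestep(v);
}
```

This keeps the strict check (an ASSERT) in the inner helper while the vm_event entry point degrades gracefully, matching the split Andrew describes between v->pause_count and v->vm_event_pause_count.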
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel