Any thoughts appreciated.
On Fri, 2017-10-06 at 13:02 +0300, Alexandru Isaila wrote:
> This patch adds the hvm_save_one_cpu_ctxt() function.
> It optimizes the HVMSR_PER_VCPU save callbacks by pausing only the
> vcpu in question when data for a single vcpu is required.
>
> Signed-off-by: Alexandru Isaila
> I'd be fine taking care of all the comments while committing (and
> then adding my R-b), provided you (and ideally also Andrew)
> agree, and of course assuming Paul would ack the patch, plus
> no-one else finds yet another problem which once again I may
> have overlooked.
>
Hi Jan,
Thank you fo
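For reference, here is a rough standalone sketch of the single-vCPU save idea described in the patch above: keep the per-vCPU fill-in as a separate helper so a caller that wants one vCPU's record can pause just that vCPU rather than the whole domain. All toy_* names and types below are invented for illustration; this is not the Xen code.

#include <stdint.h>
#include <stdio.h>

struct toy_cpu_ctxt { uint64_t rip, rsp; };
struct toy_vcpu     { int id; int paused; struct toy_cpu_ctxt state; };

static void toy_pause(struct toy_vcpu *v)   { v->paused = 1; }
static void toy_unpause(struct toy_vcpu *v) { v->paused = 0; }

/* Per-vCPU fill-in, shared by the one-vCPU and the all-vCPU save paths. */
static void toy_fill_ctxt(const struct toy_vcpu *v, struct toy_cpu_ctxt *c)
{
    *c = v->state;
}

/* Single-vCPU save: only the vCPU being saved is paused. */
static void toy_save_one(struct toy_vcpu *v, struct toy_cpu_ctxt *c)
{
    toy_pause(v);
    toy_fill_ctxt(v, c);
    toy_unpause(v);
}

int main(void)
{
    struct toy_vcpu v = { .id = 2, .state = { .rip = 0x1000, .rsp = 0x2000 } };
    struct toy_cpu_ctxt c;

    toy_save_one(&v, &c);
    printf("vcpu%d rip=%#llx\n", v.id, (unsigned long long)c.rip);
    return 0;
}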
On Wed, 2017-09-27 at 09:38 +0100, Andrew Cooper wrote:
> On 27/09/2017 09:04, Alexandru Isaila wrote:
> >
> > From: Andrew Cooper
> >
> >
> > -        return X86EMUL_EXCEPTION;
> > -    case HVMTRANS_bad_gfn_to_mfn:
> > -        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec, hvmem
On Wed, 2017-09-20 at 14:37 +, Paul Durrant wrote:
> >
> > -Original Message-
> > From: Jan Beulich [mailto:jbeul...@suse.com]
> > Sent: 20 September 2017 13:24
> > To: Alexandru Isaila
> > Cc: suravee.suthikulpa...@amd.com; Andrew Cooper; Paul Durrant; Wei Liu; George D
On Wed, 2017-09-20 at 06:24 -0600, Jan Beulich wrote:
> > > > On 20.09.17 at 11:22, wrote:
> > +static void *hvmemul_map_linear_addr(
> > +    unsigned long linear, unsigned int bytes, uint32_t pfec,
> > +    struct hvm_emulate_ctxt *hvmemul_ctxt)
> > +{
> > +    struct vcpu *curr
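The quoted hunk is cut short, but the fiddly part of a helper like this is that the linear buffer may straddle a page boundary, so each covered page has to be translated separately and a single failing translation must abort the whole mapping. Below is a toy, compilable sketch of just that loop; the toy_* names and the "odd pages fault" rule are invented for illustration and are not the real implementation.

#include <stdio.h>

#define TOY_PAGE_SHIFT 12
#define TOY_PAGE_SIZE  (1UL << TOY_PAGE_SHIFT)

/* Pretend per-page translation: even page numbers translate, odd ones fault. */
static int toy_translate(unsigned long pfn) { return (pfn & 1) == 0; }

static int toy_map_linear(unsigned long linear, unsigned int bytes)
{
    unsigned long first_pfn = linear >> TOY_PAGE_SHIFT;
    unsigned long last_pfn  = (linear + bytes - 1) >> TOY_PAGE_SHIFT;

    /* Keep the toy to at most one page-boundary crossing. */
    if ( bytes == 0 || bytes > TOY_PAGE_SIZE )
        return -1;

    for ( unsigned long pfn = first_pfn; pfn <= last_pfn; pfn++ )
        if ( !toy_translate(pfn) )
            return -1;          /* one faulting page aborts the whole map */

    return 0;
}

int main(void)
{
    /* 8 bytes straddling the boundary between page 1 (odd -> faults) and page 2. */
    printf("%d\n", toy_map_linear(2 * TOY_PAGE_SIZE - 4, 8));
    return 0;
}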
On Tue, 2017-09-19 at 00:11 -0600, Jan Beulich wrote:
> > > > Razvan Cojocaru 09/18/17 7:05 PM >>>
> > On 09/18/2017 06:35 PM, Jan Beulich wrote:
> > > > > > On 12.09.17 at 15:53, wrote:
> > > > --- a/xen/arch/x86/domctl.c
> > > > +++ b
On Mon, 2017-09-18 at 07:43 -0600, Jan Beulich wrote:
> > > > On 08.09.17 at 18:05, wrote:
> > Changes since V1:
> > - Moved ASSERT to the beginning of the loop
> > - Corrected the decrement on mfn in the while statement
> > - Modified the comment to PAGE_SIZE+1
> While several of
> >
> > -static int vm_event_disable(struct domain *d, struct vm_event_domain *ved)
> > +static int vm_event_disable(struct domain *d, struct vm_event_domain **ved)
> > {
> A lot of the code churn here and above could be avoided by changing ved
> in parameter list to something else (ved
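Presumably the point of taking struct vm_event_domain ** here is that the disable path can free the per-ring structure and clear the owner's pointer in one place, so later "is this ring enabled?" checks become simple NULL tests. A minimal standalone sketch of that pattern (toy_* names only, not the Xen code):

#include <stdio.h>
#include <stdlib.h>

struct toy_event_domain { int port; };

/* Tear down one vm_event ring and clear the owner's pointer to it. */
static int toy_disable(struct toy_event_domain **ved)
{
    if ( *ved == NULL )
        return -1;              /* nothing to tear down */

    free(*ved);                 /* release the per-ring state ...          */
    *ved = NULL;                /* ... and leave NULL behind, so "is this  */
                                /* enabled?" checks stay a simple NULL test */
    return 0;
}

int main(void)
{
    struct toy_event_domain *monitor = calloc(1, sizeof(*monitor));

    toy_disable(&monitor);
    printf("monitor ring enabled: %s\n", monitor ? "yes" : "no");
    return 0;
}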
On Tue, 2017-08-29 at 09:14 -0600, Tamas K Lengyel wrote:
> On Tue, Aug 29, 2017 at 8:17 AM, Alexandru Isaila wrote:
> >
> > The patch splits the vm_event into three structures: vm_event_share,
> > vm_event_paging, vm_event_monitor. The allocation for the
> > structure is moved to vm_event_enable
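A rough sketch of what "allocation moved to vm_event_enable" amounts to: the domain keeps three independent pointers (share/paging/monitor) that start out NULL and only get memory when the corresponding ring is actually enabled. All names below are illustrative stand-ins, not the Xen code.

#include <stdlib.h>

struct toy_ring { int port; };

/* One pointer per vm_event feature; all start out NULL (disabled). */
struct toy_domain {
    struct toy_ring *vm_event_share;
    struct toy_ring *vm_event_paging;
    struct toy_ring *vm_event_monitor;
};

/* Allocate a ring only when it is actually enabled. */
static int toy_enable(struct toy_ring **ved, int port)
{
    if ( *ved != NULL )
        return -1;                      /* already enabled */

    *ved = calloc(1, sizeof(**ved));
    if ( *ved == NULL )
        return -1;

    (*ved)->port = port;
    return 0;
}

int main(void)
{
    struct toy_domain d = { 0 };

    /* Only the monitor ring gets memory; share and paging stay NULL. */
    return toy_enable(&d.vm_event_monitor, 42) ? 1 : 0;
}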
On Fri, 2017-08-25 at 06:13 -0600, Jan Beulich wrote:
> > > > On 17.08.17 at 13:50, wrote:
> > --- a/xen/common/monitor.c
> > +++ b/xen/common/monitor.c
> > @@ -75,6 +75,7 @@ int monitor_domctl(struct domain *d, struct xen_domctl_monitor_op *mop)
> > domain_pause(d);
On Thu, 2017-08-24 at 07:24 -0600, Jan Beulich wrote:
> > > > On 24.08.17 at 13:48, wrote:
> > The patch splits the vm_event into three structures: vm_event_share,
> > vm_event_paging, vm_event_monitor. The allocation for the
> > structure is moved to vm_event_enable so that it
On Tue, 2017-08-08 at 12:27 +0100, Wei Liu wrote:
> On Tue, Aug 08, 2017 at 11:27:35AM +0300, Alexandru Isaila wrote:
> >
> > In some introspection use cases, an in-guest agent needs to communicate
> > with the external introspection agent. An existing mechanism is
> > HVMOP_guest_request_vm_ev
On Fri, 2017-08-04 at 19:32 -0600, Tamas K Lengyel wrote:
On Fri, Aug 4, 2017 at 5:32 AM, Alexandru Isaila <aisa...@bitdefender.com> wrote:
In some introspection use cases, an in-guest agent needs to communicate
with the external introspection agent. An existing mechanism is
HVMOP_guest_re
From: Tamas K Lengyel
Sent: Saturday, August 5, 2017 4:32 AM
To: Alexandru Stefan ISAILA
Cc: Xen-devel; wei.l...@citrix.com; Tim Deegan; Stefano Stabellini; Konrad Rzeszutek Wilk; Jan Beulich; Ian Jackson; George Dunlap; Andrew Cooper; Razvan Cojocaru
Subject
I'm sure we can do this and use a monitor op together with the
HVMOP_guest_request_vm_event event. We have discussed this
and have a good idea on how to do it.
~Alex
From: Andrew Cooper
Sent: Tuesday, August 1, 2017 1:30 PM
To: Alexandru Stefan I