On 17/11/2014 13:14, Wanpeng Li wrote:
>>
>>> Sorry, maybe I didn't state my question clearly. As Avi mentioned above
>>> "In VMX we have VPIDs, so we only need to flush if EFER changed between
>>> two invocations of the same VPID", so there is only one VPID if the
>>> guest is UP, my question is
Hi Paolo,
On 11/17/14, 8:04 PM, Paolo Bonzini wrote:
On 17/11/2014 13:00, Wanpeng Li wrote:
Sorry, maybe I didn't state my question clearly. As Avi mentioned above
"In VMX we have VPIDs, so we only need to flush if EFER changed between
two invocations of the same VPID", so there is only one VPID if the guest is UP
On 17/11/2014 13:00, Wanpeng Li wrote:
> Sorry, maybe I didn't state my question clearly. As Avi mentioned above
> "In VMX we have VPIDs, so we only need to flush if EFER changed between
> two invocations of the same VPID", so there is only one VPID if the
> guest is UP, my question is if there n
Hi Paolo,
On 11/17/14, 7:18 PM, Paolo Bonzini wrote:
On 17/11/2014 12:17, Wanpeng Li wrote:
It's not surprising [1]. Since the meaning of some PTE bits change [2],
the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
if EFER changed between two invocations of the same VPID, which isn't the case.
On 17/11/2014 12:17, Wanpeng Li wrote:
>>
>>> It's not surprising [1]. Since the meaning of some PTE bits change [2],
>>> the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
>>> if EFER changed between two invocations of the same VPID, which isn't
>>> the case.
>
> If the
Hi Paolo,
On 11/11/14, 1:28 AM, Paolo Bonzini wrote:
On 10/11/2014 15:23, Avi Kivity wrote:
It's not surprising [1]. Since the meaning of some PTE bits change [2],
the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
if EFER changed between two invocations of the same VPID
> Assuming you're running both of my patches (LOAD_EFER regardless of
> nx, but skip LOAD_EFER of guest == host), then some of the speedup may
> be just less code running. I haven't figured out exactly when
> vmx_save_host_state runs, but my patches avoid a call to
> kvm_set_shared_msr, which is w
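For readers skimming the archive, the policy described in the quote above (use the LOAD_IA32_EFER VMCS controls whenever they exist, but skip the switch entirely when guest and host EFER match) can be sketched as one small standalone function. This is only an illustration of the trade-off being benchmarked; the function name, the boolean parameters and the local EFER_NX define are invented for the sketch and are not the actual arch/x86/kvm/vmx.c code.

#include <stdbool.h>
#include <stdint.h>

#define EFER_NX (1ULL << 11)   /* IA32_EFER.NX (no-execute enable) */

/* Illustrative sketch, not KVM source: decide whether EFER must be
 * switched atomically via the VM_ENTRY_LOAD_IA32_EFER /
 * VM_EXIT_LOAD_IA32_EFER controls or can be left to the lazy
 * shared-MSR path. */
bool must_switch_efer_atomically(uint64_t guest_efer, uint64_t host_efer,
                                 bool have_load_efer_ctls, bool using_ept)
{
    /* Identical values: no atomic switch, and the user-return
     * notifier has nothing to restore either. */
    if (guest_efer == host_efer)
        return false;

    /* With EPT, a guest/host difference in EFER.NX cannot be hidden
     * by shadow paging, so it has to be switched at entry/exit. */
    if (using_ept && ((guest_efer ^ host_efer) & EFER_NX))
        return true;

    /* The change being benchmarked: whenever the hardware has the
     * LOAD_IA32_EFER controls, use them for any remaining difference
     * rather than deferring to the shared-MSR machinery. */
    return have_load_efer_ctls;
}

The guest_efer == host_efer early-out is the case the quoted message describes: it removes both the atomic switch and the later kvm_set_shared_msr work when the two values coincide.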
On Wed, Nov 12, 2014 at 7:51 AM, Paolo Bonzini wrote:
>
>
> On 12/11/2014 16:32, Gleb Natapov wrote:
>> > > > userspace exit, urn 17560 17726 17628 17572 17417
>> > > > lightweight exit, urn 3316 3342 3342 3319 3328
>> > > > userspace exit, LOAD_EFER, guest!=host 12200 11772 12130 12164 12327
On 12/11/2014 16:32, Gleb Natapov wrote:
> > > > userspace exit, urn 17560 17726 17628 17572 17417
> > > > lightweight exit, urn 3316 3342 3342 3319 3328
> > > > userspace exit, LOAD_EFER, guest!=host 12200 11772 12130 12164 12327
> > > > lightweight
On Wed, Nov 12, 2014 at 04:26:29PM +0100, Paolo Bonzini wrote:
>
>
> On 12/11/2014 16:22, Gleb Natapov wrote:
> >> > Nehalem results:
> >> >
> >> > userspace exit, urn 17560 17726 17628 17572 17417
> >> > lightweight exit, urn 3316 3342 3342 3319 3328
On 12/11/2014 16:22, Gleb Natapov wrote:
>> > Nehalem results:
>> >
>> > userspace exit, urn 17560 17726 17628 17572 17417
>> > lightweight exit, urn 3316 3342 3342 3319 3328
>> > userspace exit, LOAD_EFER, guest!=host 12200 11772 12130 12164 12327
On Wed, Nov 12, 2014 at 12:33:32PM +0100, Paolo Bonzini wrote:
>
>
> On 10/11/2014 18:38, Gleb Natapov wrote:
> > On Mon, Nov 10, 2014 at 06:28:25PM +0100, Paolo Bonzini wrote:
> >> On 10/11/2014 15:23, Avi Kivity wrote:
> >>> It's not surprising [1]. Since the meaning of some PTE bits change [2],
On 10/11/2014 18:38, Gleb Natapov wrote:
> On Mon, Nov 10, 2014 at 06:28:25PM +0100, Paolo Bonzini wrote:
>> On 10/11/2014 15:23, Avi Kivity wrote:
>>> It's not surprising [1]. Since the meaning of some PTE bits change [2],
>>> the TLB has to be flushed. In VMX we have VPIDs, so we only need to
On 10/11/2014 13:15, Paolo Bonzini wrote:
>
>
> On 10/11/2014 11:45, Gleb Natapov wrote:
>>> I tried making also the other shared MSRs the same between guest and
>>> host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier
>>> has nothing to do. That saves about 4-500 cycles on inl_from_qemu.
On Mon, Nov 10, 2014 at 2:45 AM, Gleb Natapov wrote:
> On Mon, Nov 10, 2014 at 11:03:35AM +0100, Paolo Bonzini wrote:
>>
>>
>> On 09/11/2014 17:36, Andy Lutomirski wrote:
>> >> The purpose of vmexit test is to show us various overheads, so why not
>> >> measure EFER switch overhead by having two tests one with equal EFER
>> >> another with different EFER, instead of hiding it.
On Mon, Nov 10, 2014 at 06:28:25PM +0100, Paolo Bonzini wrote:
> On 10/11/2014 15:23, Avi Kivity wrote:
> > It's not surprising [1]. Since the meaning of some PTE bits change [2],
> > the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
> > if EFER changed between two invocations of the same VPID, which isn't the case.
On 10/11/2014 15:23, Avi Kivity wrote:
> It's not surprising [1]. Since the meaning of some PTE bits change [2],
> the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
> if EFER changed between two invocations of the same VPID, which isn't the
> case.
>
> [1] after the fact
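Spelled out, the rule quoted above is that with VPID-tagged TLB entries a flush is only needed when EFER changes between two runs of the same VPID, because the cached translations were created under the old meaning of the NX page-table bit. Below is a toy illustration of that condition only; it is purely expository, not KVM code, and the per-VPID bookkeeping table is an assumption of the sketch.

#include <stdbool.h>
#include <stdint.h>

#define NR_VPIDS 65536           /* VPIDs are 16-bit tags */

static uint64_t last_efer[NR_VPIDS];
static bool     seen[NR_VPIDS];

/* Return true when the TLB entries tagged with this VPID were built
 * under a different EFER and therefore must be flushed. */
bool need_tlb_flush(uint16_t vpid, uint64_t efer_at_entry)
{
    bool flush = seen[vpid] && last_efer[vpid] != efer_at_entry;

    last_efer[vpid] = efer_at_entry;
    seen[vpid] = true;
    return flush;
}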
On 11/10/2014 02:15 PM, Paolo Bonzini wrote:
On 10/11/2014 11:45, Gleb Natapov wrote:
I tried making also the other shared MSRs the same between guest and
host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier
has nothing to do. That saves about 4-500 cycles on inl_from_qemu.
On 10/11/2014 11:45, Gleb Natapov wrote:
> > I tried making also the other shared MSRs the same between guest and
> > host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier
> > has nothing to do. That saves about 4-500 cycles on inl_from_qemu. I
> > do want to dig out my old
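The "user return notifier" remark above refers to KVM's shared-MSR handling: guest values for MSRs such as STAR, LSTAR, CSTAR and SYSCALL_MASK are written only when they differ from what the CPU already holds, and the host values are restored lazily when the thread next returns to host userspace rather than on every vmexit. The following is a simplified sketch of that idea with invented names and types; it is not the kernel's kvm_set_shared_msr / user-return-notifier code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct shared_msr {
    uint32_t index;        /* MSR number, e.g. the one for LSTAR */
    uint64_t host_value;   /* value the host kernel expects */
    uint64_t curr_value;   /* value currently loaded in the CPU */
    bool     dirty;        /* restore needed before running host userspace */
};

/* Stand-in for the privileged WRMSR instruction. */
static void wrmsr_stub(uint32_t index, uint64_t value)
{
    printf("wrmsr 0x%x <- 0x%llx\n", index, (unsigned long long)value);
}

/* Switch an MSR to its guest value before entering the guest. */
void set_shared_msr(struct shared_msr *m, uint64_t guest_value)
{
    if (guest_value == m->curr_value)
        return;            /* guest == host: nothing written, nothing to undo */
    wrmsr_stub(m->index, guest_value);
    m->curr_value = guest_value;
    m->dirty = true;       /* restored by the notifier, not on the next vmexit */
}

/* Body of the user-return notifier: runs only when the CPU is really
 * about to execute host userspace again. */
void restore_shared_msr(struct shared_msr *m)
{
    if (!m->dirty)
        return;
    wrmsr_stub(m->index, m->host_value);
    m->curr_value = m->host_value;
    m->dirty = false;
}

When guest and host values are made identical, as in the experiment quoted above, set_shared_msr() returns immediately and the notifier body never fires, which is where the quoted 4-500 cycle saving on inl_from_qemu comes from.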
On Mon, Nov 10, 2014 at 11:03:35AM +0100, Paolo Bonzini wrote:
>
>
> On 09/11/2014 17:36, Andy Lutomirski wrote:
> >> The purpose of vmexit test is to show us various overheads, so why not
> >> measure EFER switch overhead by having two tests one with equal EFER
> >> another with different EFER, instead of hiding it.
On 09/11/2014 17:36, Andy Lutomirski wrote:
>> The purpose of vmexit test is to show us various overheads, so why not
>> measure EFER switch overhead by having two tests one with equal EFER
>> another with different EFER, instead of hiding it.
>
> I'll try this. We might need three tests, though
On Sun, Nov 9, 2014 at 12:52 AM, Gleb Natapov wrote:
> On Sat, Nov 08, 2014 at 08:44:42AM -0800, Andy Lutomirski wrote:
>> On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski wrote:
>> > On Nov 8, 2014 4:01 AM, "Gleb Natapov" wrote:
>> >>
>> >> On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
On Sat, Nov 08, 2014 at 08:44:42AM -0800, Andy Lutomirski wrote:
> On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski wrote:
> > On Nov 8, 2014 4:01 AM, "Gleb Natapov" wrote:
> >>
> >> On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
> >> > On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini wrote:
On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski wrote:
> On Nov 8, 2014 4:01 AM, "Gleb Natapov" wrote:
>>
>> On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
>> > On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini wrote:
>> > >
>> > >
>> > > On 07/11/2014 07:27, Andy Lutomirski wrote:
On Nov 8, 2014 4:01 AM, "Gleb Natapov" wrote:
>
> On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
> > On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini wrote:
> > >
> > >
> > > On 07/11/2014 07:27, Andy Lutomirski wrote:
> > >> Is there an easy benchmark that's sensitive to the time
On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
> On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini wrote:
> >
> >
> > On 07/11/2014 07:27, Andy Lutomirski wrote:
> >> Is there an easy benchmark that's sensitive to the time it takes to
> >> round-trip from userspace to guest and back
On Fri, Nov 7, 2014 at 9:59 AM, Andy Lutomirski wrote:
> On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini wrote:
>>
>>
>> On 07/11/2014 07:27, Andy Lutomirski wrote:
>>> Is there an easy benchmark that's sensitive to the time it takes to
>>> round-trip from userspace to guest and back to userspace?
On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini wrote:
>
>
> On 07/11/2014 07:27, Andy Lutomirski wrote:
>> Is there an easy benchmark that's sensitive to the time it takes to
>> round-trip from userspace to guest and back to userspace? I think I
>> may have a big speedup.
>
> The simplest is vmexit.flat
On 07/11/2014 07:27, Andy Lutomirski wrote:
> Is there an easy benchmark that's sensitive to the time it takes to
> round-trip from userspace to guest and back to userspace? I think I
> may have a big speedup.
The simplest is vmexit.flat from
git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests
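For anyone who wants the shape of such a measurement without reading the unit test: each case in vmexit.flat essentially times a loop of one exit-causing operation with the TSC and reports cycles per round trip. The fragment below is a rough standalone sketch of that pattern, not the vmexit.flat source; the port number and iteration count are arbitrary choices for the sketch, and on a Linux guest it needs root for ioperm().

#include <stdint.h>
#include <stdio.h>
#include <sys/io.h>      /* ioperm(), inl(); x86 Linux only */

#define PORT  0xf4       /* arbitrary port assumed to be handled in userspace */
#define ITERS 100000

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    if (ioperm(PORT, 4, 1)) {
        perror("ioperm");            /* requires root inside the guest */
        return 1;
    }

    uint64_t start = rdtsc();
    for (int i = 0; i < ITERS; i++)
        (void)inl(PORT);             /* each port read forces a guest exit */

    printf("%llu cycles per round trip\n",
           (unsigned long long)((rdtsc() - start) / ITERS));
    return 0;
}

Comparing a port-I/O loop like this, which must be completed in QEMU, against an exit the kernel can handle by itself (a "lightweight exit", e.g. CPUID) is what separates the "userspace exit" and "lightweight exit" rows in the numbers earlier in the thread.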
Is there an easy benchmark that's sensitive to the time it takes to
round-trip from userspace to guest and back to userspace? I think I
may have a big speedup.
--Andy
--
Andy Lutomirski
AMA Capital Management, LLC