On 2011-03-23 18:38, Alex Williamson wrote:
> Commit 1a836445 moved pci.o from a target object to a generic hardware
> object, which drops CONFIG_KVM_DEVICE_ASSIGNMENT. This results in
> the device assignment kludge that updates INTx vectors on interrupt
> routing changes never getting called, which
On 2011-03-24 07:17, shiv chauhan wrote:
> Hi all,
>
> While remote debugging a kernel (not written in C and no debugging
> symbols) using qemu-kvm with gdb, strangely it does not stop at
> the breakpoint though qemu otherwise works well:
>
> Commands:
> # qemu-kvm -s -S -hda hd.img &
# gdb
> (gdb) target remo
Hi,
this is version 2 of the TSC scaling patch-set. The main difference from
version 1 is that the scaling factor is not switched in the lightweight
exit path anymore. This is possible because production hardware will
apply the scaling factor only when the CPU is in guest mode.
Besides that change
This patch changes the kvm_guest_time_update function to use
the TSC frequency the guest actually has for updating its clock.
Signed-off-by: Joerg Roedel
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/svm.c |8
arch/x86/kvm/vmx.c |6 ++
arc
This patch implements two new vm-ioctls to get and set the
virtual_tsc_khz if the machine supports tsc-scaling. Setting
the tsc-frequency is only possible before userspace creates
any vcpu.
Signed-off-by: Joerg Roedel
---
Documentation/kvm/api.txt | 22 +++
arch/x86/include/a
This patch enhances the kvm_amd module with functions to
support the TSC_RATE_MSR which can be used to set a given
tsc frequency for the guest vcpu.
Signed-off-by: Joerg Roedel
---
arch/x86/include/asm/msr-index.h | 1 +
arch/x86/kvm/svm.c | 44
This patch implements the propagation of the VM
virtual_tsc_khz into each vcpu data-structure to enable the
tsc-scaling feature.
Signed-off-by: Joerg Roedel
---
arch/x86/kvm/svm.c | 33 +
1 files changed, 33 insertions(+), 0 deletions(-)
diff --git a/arch/x86/k
With TSC scaling in SVM the tsc-offset needs to be
calculated differently. This patch propagates this
calculation into the architecture specific modules so that
this complexity can be handled there.
Signed-off-by: Joerg Roedel
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm.c
The calculation of the tsc_delta value to ensure a
forward-going tsc for the guest is a function of the
host-tsc. This works as long as the guest's tsc_khz is equal
to the host's tsc_khz. With tsc-scaling hardware support this
is no longer true and the tsc_delta needs to be calculated
using guest_ts
using guest_ts
From: Jan Kiszka
We use boot_cpu_has now.
Signed-off-by: Jan Kiszka
---
arch/x86/kvm/svm.c | 3 ---
1 files changed, 0 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 2a19322..cb43e98 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -376,
On 03/24/2011 09:40 AM, Joerg Roedel wrote:
This patch enhances the kvm_amd module with functions to
support the TSC_RATE_MSR which can be used to set a given
tsc frequency for the guest vcpu.
@@ -1141,6 +1175,9 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
for (i = 0; i
On 03/24/2011 09:40 AM, Joerg Roedel wrote:
This patch changes the kvm_guest_time_update function to use
the TSC frequency the guest actually has for updating its clock.
+ bool (*use_virtual_tsc_khz)(struct kvm_vcpu *vcpu);
Just put virtual_tsc_khz into vcpu->arch. If nonzero, consider it
On Thu, Mar 24, 2011 at 11:51:59AM +0200, Avi Kivity wrote:
> On 03/24/2011 09:40 AM, Joerg Roedel wrote:
>> This patch enhances the kvm_amd module with functions to
>> support the TSC_RATE_MSR which can be used to set a given
>> tsc frequency for the guest vcpu.
>>
>>
>> @@ -1141,6 +1175,9 @@ stat
On 03/24/2011 09:40 AM, Joerg Roedel wrote:
This patch implements the propagation of the VM
virtual_tsc_khz into each vcpu data-structure to enable the
tsc-scaling feature.
static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -1
On 03/24/2011 09:40 AM, Joerg Roedel wrote:
This patch implements two new vm-ioctls to get and set the
virtual_tsc_khz if the machine supports tsc-scaling. Setting
the tsc-frequency is only possible before userspace creates
any vcpu.
+4.54 KVM_SET_TSC_KHZ
+
+Capability: KVM_CAP_TSC_CONTROL
+Arc
On 03/24/2011 10:45 AM, Jan Kiszka wrote:
From: Jan Kiszka
We use boot_cpu_has now.
Applied, thanks.
--
error compiling committee.c: too many arguments to function
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More maj
On 03/24/2011 04:21 AM, Bhushan Bharat-R65777 wrote:
> Tracepoints are wonderful for debugging problems in the field, from my
> experience. Removing the special timing code (or rather, moving it to
> userspace) is just a bonus.
>
I think that we can always switch to tracepoints mechanism som
On Thu, Mar 24, 2011 at 12:04:13PM +0200, Avi Kivity wrote:
> On 03/24/2011 09:40 AM, Joerg Roedel wrote:
>> This patch implements the propagation of the VM
>> virtual_tsc_khz into each vcpu data-structure to enable the
>> tsc-scaling feature.
>> static void svm_write_tsc_offset(struct kvm_vcpu *
On 03/24/2011 02:37 AM, Xupeng Yun wrote:
Hi,
I deployed KVM on my Gentoo servers for performance testing, but I am now
having high latency problems with the VirtIO NIC:
Try vhost-net, that should have much better latencies.
On 03/18/2011 12:42 AM, Glauber Costa wrote:
According to Avi's comments over my last submission, I decided to take a
different, and more correct direction - we hope.
This patch is now using the features provided by KVM_GET_SUPPORTED_CPUID
directly to mask out features from guest-visible cpuid.
On 03/24/2011 12:21 PM, Joerg Roedel wrote:
On Thu, Mar 24, 2011 at 12:04:13PM +0200, Avi Kivity wrote:
> On 03/24/2011 09:40 AM, Joerg Roedel wrote:
>> This patch implements the propagation of the VM
>> virtual_tsc_khz into each vcpu data-structure to enable the
>> tsc-scaling feature.
>>
On 03/18/2011 12:42 AM, Glauber Costa wrote:
This patch is a follow up to an earlier one that aims to enable
kvmclock newer msr set. This time I'm doing it through a more sane
mechanism of consulting the kernel about the supported msr set.
Thanks, applied.
On 03/24/2011 12:37 PM, Avi Kivity wrote:
On 03/18/2011 12:42 AM, Glauber Costa wrote:
This patch is a follow up to an earlier one that aims to enable
kvmclock newer msr set. This time I'm doing it through a more sane
mechanism of consulting the kernel about the supported msr set.
Thanks, appl
On Thu, Mar 24, 2011 at 12:14:36PM +0200, Avi Kivity wrote:
> On 03/24/2011 09:40 AM, Joerg Roedel wrote:
>> This patch implements two new vm-ioctls to get and set the
>> virtual_tsc_khz if the machine supports tsc-scaling. Setting
>> the tsc-frequency is only possible before userspace creates
>> a
On 03/23/2011 06:40 PM, Glauber Costa wrote:
As Avi recently mentioned, the new standard mechanism for exposing features
is KVM_GET_SUPPORTED_CPUID, not spanning CAPs. For some reason async pf missed
that.
So expose async_pf here.
Applied, thanks.
On 03/24/2011 12:41 PM, Joerg Roedel wrote:
>>
>> +4.54 KVM_SET_TSC_KHZ
>> +
>> +Capability: KVM_CAP_TSC_CONTROL
>> +Architectures: x86
>> +Type: vm ioctl
>> +Parameters: __u32 (in)
>> +Returns: 0 on success, -1 on error
>> +
>> +Specifies the tsc frequency for the virtual machine. This
On Thu, Mar 24, 2011 at 12:44:59PM +0200, Avi Kivity wrote:
> On 03/24/2011 12:41 PM, Joerg Roedel wrote:
>> Okay, I'll change that. But I would prefer to keep this as a vm ioctl. A
>> vcpu ioctl might be more flexible but I doubt anybody has a use-case for
>> different tsc_khz values in one VM.
>
Hi,
I'm trying to write a vioblk driver for Solaris. I've gotten it to the point
where the devices are visible to Solaris, and I can create an FDISK partition
table and label it.
However, when I try and use newfs to create a filesystem, the VM crashes with the
following in the log
*** glibc detecte
On Thu, Mar 24, 2011 at 11:55:06AM +, Conor Murphy wrote:
> Hi,
>
> I'm trying to write a vioblk driver for Solaris. I've gotten it to the point
> where the devices are visible to Solaris and can create an FDISK partition
> table and label it.
>
> However, when I try and use newfs to create a
Forgot to mention that when I attached gdb to the qemu-kvm process before
running newfs in the guest, the crash does not happen.
Some sort of race condition?
Thanks,
Conor
Since "Fix race between nmi injection and enabling nmi window", pending NMI
can be represented in KVM_REQ_NMI vcpu->requests bit.
When setting vcpu state via SET_VCPU_EVENTS, for example during reset,
the REQ_NMI bit should be cleared, otherwise the pending NMI is transferred
to nmi_pending upon vc
On 03/24/2011 02:47 PM, Marcelo Tosatti wrote:
Since "Fix race between nmi injection and enabling nmi window", pending NMI
can be represented in KVM_REQ_NMI vcpu->requests bit.
When setting vcpu state via SET_VCPU_EVENTS, for example during reset,
the REQ_NMI bit should be cleared otherwise pend
On Thu, Mar 24, 2011 at 09:47:00AM -0300, Marcelo Tosatti wrote:
>
> Since "Fix race between nmi injection and enabling nmi window", pending NMI
> can be represented in KVM_REQ_NMI vcpu->requests bit.
>
> When setting vcpu state via SET_VCPU_EVENTS, for example during reset,
> the REQ_NMI bit s
On 03/24/2011 03:27 PM, Gleb Natapov wrote:
On Thu, Mar 24, 2011 at 09:47:00AM -0300, Marcelo Tosatti wrote:
>
> Since "Fix race between nmi injection and enabling nmi window", pending NMI
> can be represented in KVM_REQ_NMI vcpu->requests bit.
>
> When setting vcpu state via SET_VCPU_EVENTS,
On Thu, Mar 24, 2011 at 03:33:35PM +0200, Avi Kivity wrote:
> On 03/24/2011 03:27 PM, Gleb Natapov wrote:
> >On Thu, Mar 24, 2011 at 09:47:00AM -0300, Marcelo Tosatti wrote:
> >>
> >> Since "Fix race between nmi injection and enabling nmi window", pending
> >> NMI
> >> can be represented in KVM_
Built with --enable-debug
Running under gdb gives
(gdb) where
#0 0x003d6da330c5 in raise (sig=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1 0x003d6da34a76 in abort () at abort.c:92
#2 0x003d6da6fcfb in __libc_message (do_abort=2,
fmt=0x3d6db5ea98 "***
glibc detected *** %s:
On Thursday, March 24, 2011 at 6:22 PM, Avi Kivity wrote:
Try vhost-net, that should have much better latencies.
> Indeed, the latency is much better after switching to vhost-net, it drops
> from 3+ms to ~1.5ms on average
when there is no network load on the KVM guest, but the latencies become b
On Thu, Mar 24, 2011 at 21:45, Xupeng Yun wrote:
>
> Indeed, the latency is much better after switching to vhost-net, it drops
> from 3+ms to ~1.5ms on average
Typo, should be "it drops from 0.3+ms to ~0.15ms on average"
--
I like Linux & Python
http://blog.xupeng.me
On 03/24/2011 03:45 PM, Xupeng Yun wrote:
On Thursday, March 24, 2011 at 6:22 PM, Avi Kivity wrote:
Try vhost-net, that should have much better latencies.
> Indeed, the latency is much better after switching to vhost-net, it drops
from 3+ms to ~1.5ms on average
when there is no network load on
On Thu, Mar 24, 2011 at 22:08, Avi Kivity wrote:
>> 64 bytes from app211 (192.168.1.211): icmp_seq=3582 ttl=64 time=3.08 ms
>> 64 bytes from app211 (192.168.1.211): icmp_seq=3583 ttl=64 time=1.10 ms
>> 64 bytes from app211 (192.168.1.211): icmp_seq=3584 ttl=64 time=1.03 ms
>> 64 bytes from app211
On 03/24/2011 04:24 PM, Xupeng Yun wrote:
On Thu, Mar 24, 2011 at 22:08, Avi Kivity wrote:
>> 64 bytes from app211 (192.168.1.211): icmp_seq=3582 ttl=64 time=3.08 ms
>> 64 bytes from app211 (192.168.1.211): icmp_seq=3583 ttl=64 time=1.10 ms
>> 64 bytes from app211 (192.168.1.211): icmp_seq=35
On Thu, Mar 24, 2011 at 11:00:53AM +1030, Rusty Russell wrote:
> > With simply removing the notify here, it does help the case when TX
> > overrun hits too often, for example for 1K message size, the single
> > TCP_STREAM performance improved from 2.xGb/s to 4.xGb/s.
>
> OK, we'll be getting rid o
On Thu, Mar 24, 2011 at 22:27, Avi Kivity wrote:
> I don't know if this is exactly the point at which latency should rise; what
> I'm saying is that it can't remain low at all load levels.
Agree with you, it's reasonable. What puzzled me is that I don't know
if there is something wrong with my co
On Thu, Mar 24, 2011 at 03:27:16PM +0200, Gleb Natapov wrote:
> On Thu, Mar 24, 2011 at 09:47:00AM -0300, Marcelo Tosatti wrote:
> >
> > Since "Fix race between nmi injection and enabling nmi window", pending NMI
> > can be represented in KVM_REQ_NMI vcpu->requests bit.
> >
> > When setting vcp
On Thu, Mar 24, 2011 at 11:59:11AM -0300, Marcelo Tosatti wrote:
> On Thu, Mar 24, 2011 at 03:27:16PM +0200, Gleb Natapov wrote:
> > On Thu, Mar 24, 2011 at 09:47:00AM -0300, Marcelo Tosatti wrote:
> > >
> > > Since "Fix race between nmi injection and enabling nmi window", pending
> > > NMI
> >
On 03/24/2011 05:19 PM, Gleb Natapov wrote:
> Two patches one to revert REQ_NMI then another to fix the original problem
> makes backporting easier.
If we agree this is the way to go will do that.
Not sure, want to think about it for a bit.
On Thu, 2011-03-24 at 16:28 +0200, Michael S. Tsirkin wrote:
> On Thu, Mar 24, 2011 at 11:00:53AM +1030, Rusty Russell wrote:
> > > With simply removing the notify here, it does help the case when TX
> > > overrun hits too often, for example for 1K message size, the single
> > > TCP_STREAM performa
On Thu, Mar 24, 2011 at 10:46:49AM -0700, Shirley Ma wrote:
> On Thu, 2011-03-24 at 16:28 +0200, Michael S. Tsirkin wrote:
> > On Thu, Mar 24, 2011 at 11:00:53AM +1030, Rusty Russell wrote:
> > > > With simply removing the notify here, it does help the case when TX
> > > > overrun hits too often, f
Commit 1a836445 moved pci.o from a target object to a generic hardware
object, which drops CONFIG_KVM_DEVICE_ASSIGNMENT. This results in
the device assignment kludge that updates INTx vectors on interrupt
routing changes never getting called, which means device assignment
level triggered interrupts d
librbd stacks on top of librados to provide access
to rbd images.
Using librbd simplifies the qemu code, and allows
qemu to use new versions of the rbd format
with few (if any) changes.
Signed-off-by: Josh Durgin
Signed-off-by: Yehuda Sadeh
---
block/rbd.c | 784 ++--
The new format is rbd:pool/image[@snapshot][:option1=value1[:option2=value2...]]
Each option is used to configure rados, and may be any Ceph option, or "conf".
The "conf" option specifies a Ceph configuration file to read.
This allows rbd volumes from more than one Ceph cluster to be used by
speci
> -----Original Message-----
> From: Avi Kivity [mailto:a...@redhat.com]
> Sent: Thursday, March 24, 2011 3:51 PM
> To: Bhushan Bharat-R65777
> Cc: Alexander Graf; kvm@vger.kernel.org; bharatb.ya...@gmail.com
> Subject: Re: [PATCH] KVM:PPC Issue in exit timing clearance
>
> On 03/24/2011 04:21 A
The following dump is observed on the host when clearing the exit timing counters:
[root@p1021mds kvm]# echo -n 'c' > vm1200_vcpu0_timing
INFO: task echo:1276 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
echo D 0ff5bf94 0 1276