On CentOS 6.4 x86_64, with libvirt-0.10.2-18.el6.x86_64, I am trying to
set "memory.limit_in_bytes" for all qemu processes.
I changed "cgconfig.conf":
group mygroup {
    perm {
        admin {
            uid = root;
            gid = root;
        }
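For reference, a complete stanza would close the perm block and add a
memory controller section; a minimal sketch (the task section and the
limit value are illustrative assumptions, not from the original post):

group mygroup {
    perm {
        admin {
            uid = root;
            gid = root;
        }
        task {
            uid = root;
            gid = root;
        }
    }
    memory {
        # 4 GiB, expressed in bytes; pick a limit suited to your guests
        memory.limit_in_bytes = 4294967296;
    }
}

The qemu processes then still have to be placed into the group, e.g. via
cgrules.conf or libvirt's own cgroup integration.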
On 14/12/14 14:14, Christoffer Dall wrote:
> On Sun, Dec 14, 2014 at 11:33:04AM +, Marc Zyngier wrote:
>> On Sat, Dec 13 2014 at 11:17:29 AM, Christoffer Dall
>> wrote:
>>> It is currently possible to run a VM with architected timers support
>>> without creating an in-kernel VGIC, which will r
On 15/12/2014 02:52, Zhang Haoyu wrote:
> Yes, I found it,
> but what's the relationship between
> https://git.kernel.org/cgit/virt/kvm/kvm.git/log/?h=next
This is branch "next".
> and
> https://git.kernel.org/cgit/virt/kvm/kvm.git/log/ ?
This is branch "master".
Paolo
Hi Paolo, Yang
What's the status of this problem?
Thanks,
Zhang Haoyu
On 2014-12-04 00:13:49, Paolo Bonzini wrote:
>
>
>On 28/11/2014 12:59, Zhang, Yang Z wrote:
>>
>> According to the feedback from Haoyu on my test patch, which skips the
>> interrupt injection if the irq line is active (See another
On 15/12/2014 10:39, Zhang Haoyu wrote:
> Hi Paolo, Yang
> What's the status of this problem?
I will look at Yang's patch after the end of the merge window.
Paolo
On 15/12/2014 01:09, Samuel Thibault wrote:
> Hello,
>
> Just FTR, it seems that the overhead is due to gnumach sometimes using
> the PIC quite a lot. It used not to be too much of a concern with just
> kvm, but kvm on kvm becomes too expensive for that. I've fixed gnumach
> into being a lot more
It is currently possible to run a VM with architected timers support
without creating an in-kernel VGIC, which will result in interrupts from
the virtual timer going nowhere.
To address this issue, move the architected timers initialization to the
time when we run a VCPU for the first time, and the
On 15/12/14 10:20, Christoffer Dall wrote:
> It is currently possible to run a VM with architected timers support
> without creating an in-kernel VGIC, which will result in interrupts from
> the virtual timer going nowhere.
>
> To address this issue, move the architected timers initialization to th
On 12/12/2014 12:06 PM, Christoffer Dall wrote:
> On Thu, Dec 11, 2014 at 01:38:16PM +0100, Eric Auger wrote:
>> On 12/11/2014 01:01 PM, Christoffer Dall wrote:
>>> On Wed, Dec 10, 2014 at 01:45:50PM +0100, Eric Auger wrote:
On 12/09/2014 04:44 PM, Christoffer Dall wrote:
> Userspace assum
On Mon, Dec 15, 2014 at 10:39:23AM +, Marc Zyngier wrote:
> On 15/12/14 10:20, Christoffer Dall wrote:
> > It is currently possible to run a VM with architected timers support
> > without creating an in-kernel VGIC, which will result in interrupts from
> > the virtual timer going nowhere.
> >
>
Hi Paolo,
Here's the second pull request for KVM for arm/arm64 for v3.19, which fixes
reboot problems, clarifies VCPU init, and fixes a regression concerning the
VGIC init flow.
The diffstat includes the previous pull request's patches, because the
previous pull request is not in kvm/next yet I p
When the vgic initializes its internal state it does so based on the
number of VCPUs available at the time. If we allow KVM to create more
VCPUs after the VGIC has been initialized, we are likely to error out in
unfortunate ways later, perform buffer overflows, etc.
Acked-by: Marc Zyngier
Reviewe
Userspace assumes that it can wire up IRQ injections after having
created all VCPUs and after having created the VGIC, but potentially
before starting the first VCPU. This can currently lead to lost IRQs
because the state of that IRQ injection is not stored anywhere and we
don't return an error to
Some code paths will need to check to see if the internal state of the
vgic has been initialized (such as when creating new VCPUs), so
introduce such a macro that checks the nr_cpus field, which is set when
the vgic has been initialized.
Also set nr_cpus = 0 in kvm_vgic_destroy, because the error p
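A minimal sketch of the macro this describes (the exact field path is an
assumption based on the text above):

/* True once vgic_init() has run; nr_cpus is set at init time and
 * reset to 0 in kvm_vgic_destroy(), so this also covers teardown. */
#define vgic_initialized(kvm)	((kvm)->arch.vgic.nr_cpus > 0)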
If a VCPU was originally started with power off (typically to be brought
up by PSCI in SMP configurations), there is no need to clear the
POWER_OFF flag in the kernel, as this flag is only tested during the
init ioctl itself.
Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
arch/arm/k
The implementation of KVM_ARM_VCPU_INIT is currently not doing what
userspace expects, namely making sure that a vcpu which may have been
turned off using PSCI is returned to its initial state, which would be
powered on if userspace does not set the KVM_ARM_VCPU_POWER_OFF flag.
Implement the expec
It is currently possible to run a VM with architected timers support
without creating an in-kernel VGIC, which will result in interrupts from
the virtual timer going nowhere.
To address this issue, move the architected timers initialization to the
time when we run a VCPU for the first time, and the
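A minimal sketch of the first-run check this implies (function and helper
names are assumptions, not necessarily the upstream code):

static int kvm_timer_enable(struct kvm_vcpu *vcpu)
{
	/*
	 * The virtual timer delivers its interrupt through the VGIC,
	 * so refuse to enable it until an in-kernel irqchip exists
	 * and has been initialized.
	 */
	if (!irqchip_in_kernel(vcpu->kvm) || !vgic_initialized(vcpu->kvm))
		return -ENODEV;

	/* ... map the timer IRQ and enable the timer for this VCPU ... */
	return 0;
}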
Introduce a new function to unmap user RAM regions in the stage2 page
tables. This is needed on reboot (or when the guest turns off the MMU)
to ensure we fault in pages again and make the dcache, RAM, and icache
coherent.
Using unmap_stage2_range for the whole guest physical range does not
work,
When a vcpu calls SYSTEM_OFF or SYSTEM_RESET with PSCI v0.2, the vcpus
should really be turned off for the VM, adhering to the suggestions in
the PSCI spec, and it's the sane thing to do.
Also, clarify the behavior and expectations for exits to user space with
the KVM_EXIT_SYSTEM_EVENT case.
Acked
It is not clear that this ioctl can be called multiple times for a given
vcpu. Userspace already does this, so clarify the ABI.
Also specify that userspace is expected to always make secondary and
subsequent calls to the ioctl with the same parameters for the VCPU as
the initial call (which users
When userspace resets the vcpu using KVM_ARM_VCPU_INIT, we should also
reset the HCR, because we now modify the HCR dynamically to
enable/disable trapping of guest accesses to the VM registers.
This is crucial for VM reboot to work, since otherwise we will not be
doing the necessary cache maint
From: Peter Maydell
VGIC initialization currently happens in three phases:
(1) kvm_vgic_create() (triggered by userspace GIC creation)
(2) vgic_init_maps() (triggered by userspace GIC register read/write
requests, or from kvm_vgic_init() if not already run)
(3) kvm_vgic_init() (triggered
The vgic_initialized() macro currently returns the state of the
vgic->ready flag, which indicates if the vgic is ready to be used when
running a VM, not specifically if its internal state has been
initialized.
Rename the macro accordingly in preparation for a more nuanced
initialization flow.
Ack
Hello Luis,
On 09/12/14 23:35, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez"
>
> This lets you build a kernel which can support xen dom0
> or xen guests by just using:
>
>make xenconfig
>
> on both x86 and arm64 kernels. This also splits out the
> options which are available current
On 15/12/2014 12:41, Christoffer Dall wrote:
> Hi Paolo,
>
> Here's the second pull request for KVM for arm/arm64 for v3.19, which fixes
> reboot problems, clarifies VCPU init, and fixes a regression concerning the
> VGIC init flow.
>
> The diffstat includes the previous pull request's patches,
On 14/12/2014 02:17, Eugene Korenevsky wrote:
> Hi there,
>
> Please DO NOT take the v3 version of the patchset into account. It
> contains a bug (a missing check for the MSR load/store area size in
> `nested_vmx_check_msr_switch`). This bug has been fixed in the v4
> version of the patchset.
The diff is just
diff --gi
> The diff is just
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index d6fe958a0403..09ccf6c09435 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -8305,6 +8305,8 @@ static int nested_vmx_check_msr_switch(struct kvm_vcpu
> *vcpu,
> WARN_ON(1);
>
On 15/12/2014 14:59, Eugene Korenevsky wrote:
>> The diff is just
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index d6fe958a0403..09ccf6c09435 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -8305,6 +8305,8 @@ static int nested_vmx_check_msr_switch(struct kv
On Tue, 9 Dec 2014, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez"
>
> This lets you build a kernel which can support xen dom0
> or xen guests by just using:
>
>make xenconfig
>
> on both x86 and arm64 kernels. This also splits out the
> options which are available currently to be bui
On 12/12/2014 20:54, Eugene Korenevsky wrote:
> Remove unused variable to get rid of compiler warning.
> And remove commented out code (it can always be restored
> from git logs).
This is also specifying that the behavior is incorrect. It should be
changed to an XFAIL instead.
Paolo
> Signed-
On 12/12/2014 17:06, Andrew Jones wrote:
> Signed-off-by: Andrew Jones
> ---
> x86/pmu.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/x86/pmu.c b/x86/pmu.c
> index 5c85146810cb1..f116bafebf424 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -228,14 +228,12 @@ sta
On 12/12/2014 17:19, Andrew Jones wrote:
>> I'm afraid I didn't test all the changes, as not all unit tests
>> could run on my test machine.
>> svm - didn't run
>> xsave - didn't run 'have xsave' tests
>> asyncpf - is commented out of unittests.cfg,
>>
Since the advent of VGIC dynamic initialization, the latter is
initialized quite late, on the first vcpu run or "on demand" when
injecting an IRQ or when the guest sets its registers.
This series now allows the user space to explicitly request the VGIC init,
when the dimensioning parameters have
To be more explicit on vgic initialization failure, -ENODEV is
returned by vgic_init when no online vcpus can be found at init.
Signed-off-by: Eric Auger
---
v2 -> v3: vgic_init_maps was renamed into vgic_init
---
virt/kvm/arm/vgic.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
Since the advent of VGIC dynamic initialization, the latter is
initialized quite late, on the first vcpu run or "on demand" when
injecting an IRQ or when the guest sets its registers.
This initialization could be initiated explicitly much earlier
by userspace, as soon as it has provided the
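Concretely, userspace could request the init through the device-control
API; a sketch assuming the KVM_DEV_ARM_VGIC_GRP_CTRL /
KVM_DEV_ARM_VGIC_CTRL_INIT attribute this series is about:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* vgic_fd: the fd returned by KVM_CREATE_DEVICE for the VGIC device. */
static int vgic_request_init(int vgic_fd)
{
	struct kvm_device_attr attr = {
		.group = KVM_DEV_ARM_VGIC_GRP_CTRL,
		.attr  = KVM_DEV_ARM_VGIC_CTRL_INIT,
	};

	/* Explicitly initialize the VGIC before the first vcpu run. */
	return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
}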
When generating a #PF VM-exit, check the equality
(PFEC & PFEC_MASK) == PFEC_MATCH
If the two sides are equal, bit 14 of the exception bitmap is used to decide
whether to generate a #PF VM-exit; if they differ, the inverted bit 14 is used.
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 15
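A minimal sketch of that decision rule (hypothetical helper; the real
logic lives in the nested VMX code in arch/x86/kvm/vmx.c):

static bool nested_wants_pf_exit(u32 exception_bitmap,
				 u32 pfec, u32 pfec_mask, u32 pfec_match)
{
	bool bit14 = exception_bitmap & (1u << 14);	/* #PF is vector 14 */

	if ((pfec & pfec_mask) == pfec_match)
		return bit14;	/* match: use bit 14 as-is */
	return !bit14;		/* mismatch: use the inverted bit */
}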
Signed-off-by: Michael S. Tsirkin
---
include/linux/vringh.h | 33 ++
drivers/vhost/vringh.c | 121 ++---
2 files changed, 107 insertions(+), 47 deletions(-)
diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index f696dd0..a3fa5
Pass u64 everywhere.
Signed-off-by: Michael S. Tsirkin
---
include/linux/vringh.h | 4 ++--
drivers/vhost/vringh.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index 749cde2..f696dd0 100644
--- a/include/linux/vringh.h
+
See patches for details.
v2:
- fix email address.
v3:
- use module parameter for configuration of value (Paolo/Radim)
v4:
- fix check for tscdeadline mode while waiting for expiration (Paolo)
- use proper delay function (Radim)
- fix LVTT tscdeadline mode check in hrtimer interrupt handler (Radi
Add tracepoint to wait_lapic_expire.
Signed-off-by: Marcelo Tosatti
Index: kvm/arch/x86/kvm/lapic.c
===
--- kvm.orig/arch/x86/kvm/lapic.c
+++ kvm/arch/x86/kvm/lapic.c
@@ -1121,6 +1121,7 @@ void wait_lapic_expire(struct kvm_vcpu *
{
For the hrtimer which emulates the tscdeadline timer in the guest,
add an option to advance expiration, and busy spin on VM-entry waiting
for the actual expiration time to elapse.
This allows achieving low latencies in cyclictest (or any scenario
which requires strict timing regarding timer expir
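A minimal sketch of the busy-wait this describes (field and helper names
are assumptions, not the exact upstream code):

void wait_lapic_expire(struct kvm_vcpu *vcpu)
{
	u64 guest_tsc, tsc_deadline;

	/* Set when the hrtimer was armed lapic_timer_advance_ns early. */
	tsc_deadline = vcpu->arch.apic->lapic_timer.expired_tscdeadline;
	if (!tsc_deadline)
		return;

	guest_tsc = kvm_x86_ops->read_l1_tsc(vcpu, native_read_tsc());

	/* Busy-spin until the guest TSC reaches the real deadline. */
	while (guest_tsc < tsc_deadline) {
		cpu_relax();
		guest_tsc = kvm_x86_ops->read_l1_tsc(vcpu, native_read_tsc());
	}
}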
kvm_x86_ops->test_posted_interrupt() returns true/false depending on
whether 'vector' is set.
Next patch makes use of this interface.
Signed-off-by: Marcelo Tosatti
Index: kvm/arch/x86/include/asm/kvm_host.h
===
--- kvm.orig/arch/x86/
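A sketch of how a VMX implementation might look (struct and function
names are assumed; only the PIR test itself is implied by the text above):

static bool vmx_test_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	/* Is 'vector' pending in the posted-interrupt request bitmap? */
	return test_bit(vector, (unsigned long *)vmx->pi_desc.pir);
}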
On Sat, Dec 13, 2014 at 1:08 AM, Paolo Bonzini wrote:
>
>
> On 12/12/2014 22:39, Andy Lutomirski wrote:
>> KVM internal error. Suberror: 3
>> extra data[0]: 8202
>> extra data[1]: 31
>> EAX=8be4df61 EBX=8be4df61 ECX=3ff6002c EDX=11d293ca
>> ESI=3f08e408 EDI=3e82df7c EBP=3e82deb8 ESP=3e82de7c
>
Hi,
I am new to kvm/qemu; perhaps you can explain something to me.
I pass through a SATA controller with fakeraid. It works, but I am getting
strange results during performance testing:
A. Host Windows, 6 cores (no HT, turbo boost off): 6:23 (+- 10 secs)
B. Host Windows, 1 CPU core (other are tu