[PATCH v4] Fixes related to processing of qemu's -numa option

2012-07-16 Thread Chegu Vinod
node 3 size: 65536 MB node 4 cpus: 40 41 42 43 44 45 46 47 48 49 node 4 size: 65536 MB node 5 cpus: 50 51 52 53 54 55 56 57 58 59 node 5 size: 65536 MB node 6 cpus: 60 61 62 63 64 65 66 67 68 69 node 6 size: 65536 MB node 7 cpus: 70 71 72 73 74 75 76 77 78 79 Signed-off-by: Chegu Vinod
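
For readers trying to reproduce the topology shown above (8 nodes, 10 vcpus and 64 GB each), here is a minimal sketch of the kind of invocation implied. The option spelling follows qemu's -numa node[,mem=size][,cpus=cpu[-cpu]][,nodeid=node] form of this era; all values are illustrative rather than taken from the patch:

    # Hypothetical 80-vcpu / 512 GB guest split into 8 fake NUMA nodes
    # (-m and mem= values are in MB for this qemu vintage):
    qemu-system-x86_64 -smp 80 -m 524288 \
        -numa node,nodeid=0,cpus=0-9,mem=65536 \
        -numa node,nodeid=1,cpus=10-19,mem=65536 \
        -numa node,nodeid=2,cpus=20-29,mem=65536 \
        -numa node,nodeid=3,cpus=30-39,mem=65536 \
        -numa node,nodeid=4,cpus=40-49,mem=65536 \
        -numa node,nodeid=5,cpus=50-59,mem=65536 \
        -numa node,nodeid=6,cpus=60-69,mem=65536 \
        -numa node,nodeid=7,cpus=70-79,mem=65536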

How to determine the backing host physical memory for a given guest?

2012-05-09 Thread Chegu Vinod
Hello, On an 8 socket Westmere host I am attempting to run a single guest and characterize the virtualization overhead for a system intensive workload (AIM7-high_systime) as the size of the guest scales (10way/64G, 20way/128G, ... 80way/512G). To do some comparisons between the native vs. gu
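
Not from the thread itself, but the usual host-side way to answer the question in the subject, assuming the numactl package is installed:

    # Per-NUMA-node resident memory of the qemu process:
    numastat -p $(pidof qemu-system-x86_64)

    # Raw per-mapping placement; hugetlb mappings are marked "huge"
    # and per-node page counts appear as N<node>=<pages>:
    cat /proc/$(pidof qemu-system-x86_64)/numa_maps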

Re: How to determine the backing host physical memory for a given guest?

2012-05-09 Thread Chegu Vinod
On 5/9/2012 6:46 AM, Avi Kivity wrote: On 05/09/2012 04:05 PM, Chegu Vinod wrote: Hello, On an 8 socket Westmere host I am attempting to run a single guest and characterize the virtualization overhead for a system intensive workload (AIM7-high_systime) as the size of the guest scales (10way

Re: How to determine the backing host physical memory for a given guest?

2012-05-10 Thread Chegu Vinod
Andrew Theurer linux.vnet.ibm.com> writes: > > On 05/09/2012 08:46 AM, Avi Kivity wrote: > > On 05/09/2012 04:05 PM, Chegu Vinod wrote: > >> Hello, > >> > >> On an 8 socket Westmere host I am attempting to run a single guest and > >> ch

Re: How to determine the backing host physical memory for a given guest?

2012-05-11 Thread Chegu Vinod
Chegu Vinod hp.com> writes: > > Andrew Theurer linux.vnet.ibm.com> writes: > > Regarding the -numa option : > > I had earlier (about a ~month ago) tried the -numa option. The layout I > specified didn't match the layout the guest saw. Haven't yet looke

Re: Fwd: [Qemu-devel] [PATCH v2 00/41] postcopy live migration

2012-06-04 Thread Chegu Vinod
On 6/4/2012 6:13 AM, Isaku Yamahata wrote: On Mon, Jun 04, 2012 at 05:01:30AM -0700, Chegu Vinod wrote: Hello Isaku Yamahata, Hi. I just saw your patches. Would it be possible to email me a tar bundle of these patches (makes it easier to apply the patches to a copy of the upstream qemu.git

Large sized guest taking forever to boot...

2012-06-08 Thread Chegu Vinod
Hello, I picked up a recent version of qemu (1.0.92 with some fixes) and tried it on an x86_64 server (with the host and the guest running a 3.4.1 kernel). While trying to boot a large guest (80 vcpus + 512GB) I observed that the guest took forever to boot up... ~1 hr or even more. [This wasn't

Re: Large sized guest taking forever to boot...

2012-06-08 Thread Chegu Vinod
On 6/8/2012 9:46 AM, Alex Williamson wrote: On Fri, 2012-06-08 at 16:29 +0000, Chegu Vinod wrote: Hello, I picked up a recent version of qemu (1.0.92 with some fixes) and tried it on an x86_64 server (with the host and the guest running a 3.4.1 kernel). BTW, I observe the same thing if I were to

Re: Large sized guest taking forever to boot...

2012-06-08 Thread Chegu Vinod
On 6/8/2012 10:10 AM, Chegu Vinod wrote: On 6/8/2012 9:46 AM, Alex Williamson wrote: On Fri, 2012-06-08 at 16:29 +0000, Chegu Vinod wrote: Hello, I picked up a recent version of qemu (1.0.92 with some fixes) and tried it on an x86_64 server (with the host and the guest running a 3.4.1 kernel

Re: Large sized guest taking forever to boot...

2012-06-08 Thread Chegu Vinod
On 6/8/2012 10:42 AM, Alex Williamson wrote: On Fri, 2012-06-08 at 10:10 -0700, Chegu Vinod wrote: On 6/8/2012 9:46 AM, Alex Williamson wrote: On Fri, 2012-06-08 at 16:29 +0000, Chegu Vinod wrote: Hello, I picked up a recent version of qemu (1.0.92 with some fixes) and tried it on an x86_64

Re: Large sized guest taking forever to boot...

2012-06-08 Thread Chegu Vinod
On 6/8/2012 11:08 AM, Jan Kiszka wrote: [CC'ing qemu as this discusses its code base] On 2012-06-08 19:57, Chegu Vinod wrote: On 6/8/2012 10:42 AM, Alex Williamson wrote: On Fri, 2012-06-08 at 10:10 -0700, Chegu Vinod wrote: On 6/8/2012 9:46 AM, Alex Williamson wrote: On Fri, 2012-06-

Re: Large sized guest taking forever to boot...

2012-06-10 Thread Chegu Vinod
On 6/10/2012 2:30 AM, Gleb Natapov wrote: On Fri, Jun 08, 2012 at 11:20:53AM -0700, Chegu Vinod wrote: On 6/8/2012 11:08 AM, Jan Kiszka wrote: BTW, another data point... if I try to boot the RHEL6.3 kernel in the guest (with the latest qemu.git and the 3.4.1 on the host) it boots just fine

Re: Large sized guest taking forever to boot...

2012-06-12 Thread Chegu Vinod
On 6/8/2012 11:37 AM, Jan Kiszka wrote: On 2012-06-08 20:20, Chegu Vinod wrote: On 6/8/2012 11:08 AM, Jan Kiszka wrote: [CC'ing qemu as this discusses its code base] On 2012-06-08 19:57, Chegu Vinod wrote: On 6/8/2012 10:42 AM, Alex Williamson wrote: On Fri, 2012-06-08 at 10:10 -0700,

Re: Large sized guest taking forever to boot...

2012-06-12 Thread Chegu Vinod
On 6/12/2012 8:39 AM, Gleb Natapov wrote: On Tue, Jun 12, 2012 at 08:33:59AM -0700, Chegu Vinod wrote: I rebuilt the 3.4.1 kernel in the guest from scratch and retried my experiments and measured the boot times... a) Host: RHEL6.3 RC1 + qemu-kvm (that came with it) & Guest: RHEL6.3

[PATCH] Fixes related to processing of qemu's -numa option

2012-06-17 Thread Chegu Vinod
node 3 size: 65536 MB node 4 cpus: 40 41 42 43 44 45 46 47 48 49 node 4 size: 65536 MB node 5 cpus: 50 51 52 53 54 55 56 57 58 59 node 5 size: 65536 MB node 6 cpus: 60 61 62 63 64 65 66 67 68 69 node 6 size: 65536 MB node 7 cpus: 70 71 72 73 74 75 76 77 78 79 node 7 size: 65536 MB Signed-off-by: Chegu Vinod

Re: [Qemu-devel] [PATCH] Fixes related to processing of qemu's -numa option

2012-06-18 Thread Chegu Vinod
On 6/18/2012 1:29 PM, Eduardo Habkost wrote: On Sun, Jun 17, 2012 at 01:12:31PM -0700, Chegu Vinod wrote: The -numa option to qemu is used to create [fake] numa nodes and expose them to the guest OS instance. There are a couple of issues with the -numa option: a) Max VCPUs that c

Re: [Qemu-devel] [PATCH] Fixes related to processing of qemu's -numa option

2012-06-18 Thread Chegu Vinod
On 6/18/2012 3:11 PM, Eric Blake wrote: On 06/18/2012 04:05 PM, Andreas Färber wrote: Am 17.06.2012 22:12, schrieb Chegu Vinod: diff --git a/vl.c b/vl.c index 204d85b..1906412 100644 --- a/vl.c +++ b/vl.c @@ -28,6 +28,7 @@ #include #include #include +#include Did you check whether this

Re: [Qemu-devel] KVM call agenda for Tuesday, June 19th

2012-06-19 Thread Chegu Vinod
Hello, Wanted to share some preliminary data from live migration experiments on a setup that is perhaps one of the larger ones. We used Juan's "huge_memory" patches (without the separate migration thread) and measured the total migration time and the time taken for stage 3 ("downtime"). No

[PATCH v2] Fixes related to processing of qemu's -numa option

2012-06-19 Thread Chegu Vinod
node 5 cpus: 50 51 52 53 54 55 56 57 58 59 node 5 size: 65536 MB node 6 cpus: 60 61 62 63 64 65 66 67 68 69 node 6 size: 65536 MB node 7 cpus: 70 71 72 73 74 75 76 77 78 79 Signed-off-by: Chegu Vinod, Jim Hull, Craig Hada --- cpus.c | 3 ++- hw/pc.c | 4 +++- sysemu.h | 3 ++- vl.c

[PATCH v3] Fixes related to processing of qemu's -numa option

2012-07-05 Thread Chegu Vinod
node 3 cpus: 30 31 32 33 34 35 36 37 38 39 node 3 size: 65536 MB node 4 cpus: 40 41 42 43 44 45 46 47 48 49 node 4 size: 65536 MB node 5 cpus: 50 51 52 53 54 55 56 57 58 59 node 5 size: 65536 MB node 6 cpus: 60 61 62 63 64 65 66 67 68 69 node 6 size: 65536 MB node 7 cpus: 70 71 72 73 74 75 76

Performance of 40-way guest running 2.6.32-220 (RHEL6.2) vs. 3.3.1 OS

2012-04-11 Thread Chegu Vinod
Hello, While running an AIM7 (workfile.high_systime) in a single 40-way (or 60-way) KVM guest I noticed pretty bad performance when the guest was booted with the 3.3.1 kernel compared to the same guest booted with the 2.6.32-220 (RHEL6.2) kernel. I am still trying to dig more into the de

Re: Performance of 40-way guest running 2.6.32-220 (RHEL6.2) vs. 3.3.1 OS

2012-04-15 Thread Chegu Vinod
Rik van Riel redhat.com> writes: > > On 04/11/2012 01:21 PM, Chegu Vinod wrote: > > > > Hello, > > > > While running an AIM7 (workfile.high_systime) in a single 40-way (or a single > > 60-way KVM guest) I noticed pretty bad performance when the gues

Re: Performance of 40-way guest running 2.6.32-220 (RHEL6.2) vs. 3.3.1 OS

2012-04-16 Thread Chegu Vinod
On 4/16/2012 5:18 AM, Gleb Natapov wrote: On Thu, Apr 12, 2012 at 02:21:06PM -0400, Rik van Riel wrote: On 04/11/2012 01:21 PM, Chegu Vinod wrote: Hello, While running an AIM7 (workfile.high_systime) in a single 40-way (or a single 60-way KVM guest) I noticed pretty bad performance when the

Re: Performance of 40-way guest running 2.6.32-220 (RHEL6.2) vs. 3.3.1 OS

2012-04-17 Thread Chegu Vinod
On 4/17/2012 2:49 AM, Gleb Natapov wrote: On Mon, Apr 16, 2012 at 07:44:39AM -0700, Chegu Vinod wrote: On 4/16/2012 5:18 AM, Gleb Natapov wrote: On Thu, Apr 12, 2012 at 02:21:06PM -0400, Rik van Riel wrote: On 04/11/2012 01:21 PM, Chegu Vinod wrote: Hello, While running an AIM7

Networking performance on a KVM Host (with no guests)

2012-04-18 Thread Chegu Vinod
Hello, Perhaps this query was answered in the past. If yes, kindly point me to it. We noticed differences in networking performance (measured via netperf over a 10G NIC) on an X86_64 server between the following two configurations: 1) Server run as a KVM Host (but with no KVM guests c

Re: Performance of 40-way guest running 2.6.32-220 (RHEL6.2) vs. 3.3.1 OS

2012-04-18 Thread Chegu Vinod
On 4/17/2012 6:25 AM, Chegu Vinod wrote: On 4/17/2012 2:49 AM, Gleb Natapov wrote: On Mon, Apr 16, 2012 at 07:44:39AM -0700, Chegu Vinod wrote: On 4/16/2012 5:18 AM, Gleb Natapov wrote: On Thu, Apr 12, 2012 at 02:21:06PM -0400, Rik van Riel wrote: On 04/11/2012 01:21 PM, Chegu Vinod wrote

Re: Networking performance on a KVM Host (with no guests)

2012-04-20 Thread Chegu Vinod
On 4/18/2012 10:43 PM, Gleb Natapov wrote: On Thu, Apr 19, 2012 at 03:53:39AM +, Chegu Vinod wrote: Hello, Perhaps this query was answered in the past. If yes kindly point me to the same. We noticed differences in networking performance (measured via netperf over a 10G NIC) on an X86_64

Re: Networking performance on a KVM Host (with no guests)

2012-04-20 Thread Chegu Vinod
Nadav Har'El math.technion.ac.il> writes: > > On Fri, Apr 20, 2012, Chegu Vinod wrote about "Re: Networking performance on > a KVM Host (with no guests)": > > Removing the "intel_iommu=on" boot time parameter in the Config 1 > > case seeme
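
The configuration delta this thread converged on is the intel_iommu kernel boot parameter. A quick way to confirm what the host booted with (the grub path below is an assumption for a RHEL 6-style host):

    # Was the host booted with the IOMMU enabled?
    grep -o 'intel_iommu=[a-z]*' /proc/cmdline

    # To test without it, delete "intel_iommu=on" from the kernel line
    # in /boot/grub/grub.conf and reboot.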

Re: [PATCH RFC 1/2] kvm: Record the preemption status of vcpus using preempt notifiers

2013-03-05 Thread Chegu Vinod
+ if (current->state == TASK_RUNNING) + vcpu->preempted = true; kvm_arch_vcpu_put(vcpu); } . Reviewed-by: Chegu Vinod
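
Reconstructed from the quoted fragment, the hunk under review has roughly this shape (a sketch of the RFC as posted, not necessarily the merged code):

    /* virt/kvm/kvm_main.c -- preempt-notifier sched-out callback (sketch) */
    static void kvm_sched_out(struct preempt_notifier *pn,
                              struct task_struct *next)
    {
            struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

            /* Remember that this vcpu was preempted while still runnable,
             * so directed-yield heuristics can prefer it as a target. */
            if (current->state == TASK_RUNNING)
                    vcpu->preempted = true;
            kvm_arch_vcpu_put(vcpu);
    }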

Re: [PATCH RFC 2/2] kvm: Iterate over only vcpus that are preempted

2013-03-05 Thread Chegu Vinod
continue; if (vcpu == me) continue; if (waitqueue_active(&vcpu->wq)) . Reviewed-by: Chegu Vinod
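
The companion patch narrows the directed-yield candidate loop in kvm_vcpu_on_spin(); a sketch built around the quoted fragment (the check ordering is illustrative):

    /* virt/kvm/kvm_main.c -- inside kvm_vcpu_on_spin()'s loop (sketch) */
    kvm_for_each_vcpu(i, vcpu, kvm) {
            /* Only vcpus preempted while runnable are useful yield_to
             * targets; skip the rest instead of burning CPU on them. */
            if (!ACCESS_ONCE(vcpu->preempted))
                    continue;
            if (vcpu == me)
                    continue;
            if (waitqueue_active(&vcpu->wq))
                    continue;
            /* ... attempt to yield to this vcpu ... */
    }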

Re: Preemptable Ticket Spinlock

2013-04-22 Thread Chegu Vinod
On 4/22/2013 1:50 PM, Jiannan Ouyang wrote: On Mon, Apr 22, 2013 at 4:44 PM, Peter Zijlstra wrote: On Mon, 2013-04-22 at 16:32 -0400, Rik van Riel wrote: IIRC one of the reasons was that the performance improvement wasn't as obvious. Rescheduling VCPUs takes a fair amount of time, quite proba

[PATCH] KVM: x86: Increase the "hard" max VCPU limit

2013-04-27 Thread Chegu Vinod
Natapov http://article.gmane.org/gmane.comp.emulators.kvm.devel/99713 ) Signed-off-by: Chegu Vinod --- arch/x86/include/asm/kvm_host.h | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 4979778..bc57
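
The patch itself is a one-line change to the hard limit in kvm_host.h; the proposed value is cut off in the preview, so the number below is illustrative only:

    --- a/arch/x86/include/asm/kvm_host.h
    +++ b/arch/x86/include/asm/kvm_host.h
    -#define KVM_MAX_VCPUS 254
    +#define KVM_MAX_VCPUS 1024   /* illustrative; see the patch for the real value */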

Re: [PATCH 3/2] vfio: Provide module option to disable vfio_iommu_type1 hugepage support

2013-05-30 Thread Chegu Vinod
} + if (unlikely(disable_hugepages)) { + vfio_lock_acct(1); + return 1; + } + /* Lock all the consecutive pages from pfn_base */ for (i = 1, vaddr += PAGE_SIZE; i < npage; i++, vaddr += PAGE_SIZE) { unsigned long pfn = 0; . Te
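
The quoted hunk adds a disable_hugepages module parameter to vfio_iommu_type1, forcing page-at-a-time pinning and accounting. Usage would look like this (the sysfs toggle assumes the parameter was declared writable):

    # Load the type1 IOMMU backend with hugepage batching disabled:
    modprobe vfio_iommu_type1 disable_hugepages=1

    # Or, if the parameter is writable, flip it at runtime:
    echo 1 > /sys/module/vfio_iommu_type1/parameters/disable_hugepages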

kvm_intel: Could not allocate 42 bytes percpu data

2013-06-24 Thread Chegu Vinod
Hello, Lots (~700+) of the following messages are showing up in the dmesg of a 3.10-rc1 based kernel (Host OS is running on a large socket count box with HT-on). [ 82.270682] PERCPU: allocation failed, size=42 align=16, alloc from reserved chunk failed [ 82.272633] kvm_intel: Could not

Re: kvm_intel: Could not allocate 42 bytes percpu data

2013-07-01 Thread Chegu Vinod
On 6/30/2013 11:22 PM, Rusty Russell wrote: Chegu Vinod writes: Hello, Lots (~700+) of the following messages are showing up in the dmesg of a 3.10-rc1 based kernel (Host OS is running on a large socket count box with HT-on). [ 82.270682] PERCPU: allocation failed, size=42 align=16, alloc

Re: kvm_intel: Could not allocate 42 bytes percpu data

2013-07-02 Thread Chegu Vinod
On 7/1/2013 10:49 PM, Rusty Russell wrote: Chegu Vinod writes: On 6/30/2013 11:22 PM, Rusty Russell wrote: Chegu Vinod writes: Hello, Lots (~700+) of the following messages are showing up in the dmesg of a 3.10-rc1 based kernel (Host OS is running on a large socket count box with HT-on

Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

2012-09-21 Thread Chegu Vinod
On 9/21/2012 4:59 AM, Raghavendra K T wrote: In some special scenarios like #vcpu <= #pcpu, PLE handler may prove very costly. Yes, because there is no need to iterate over vcpus and do unsuccessful yield_to, burning CPU. An idea to solve this is: 1) As Avi had proposed we can modify hardwar

KVM_MAX_VCPUS

2012-10-13 Thread Chegu Vinod
Hello, Wanted to get a clarification about KVM_MAX_VCPUS (currently set to 254) in the kvm_host.h file. The kvm_vcpu *vcpus array is sized based on KVM_MAX_VCPUS (i.e. a max of 254 elements in the array). An 8-bit APIC ID should allow for 256 IDs. Reserving one for broadcast should leave 255 IDs
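
For readers following the arithmetic, the relevant definitions look roughly like this (a sketch of the 3.x-era layout; the limit lives in the x86 header, the array in the generic one):

    /* arch/x86/include/asm/kvm_host.h */
    #define KVM_MAX_VCPUS 254

    /* include/linux/kvm_host.h */
    struct kvm {
            /* ... */
            struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
            /* ... */
    };

    /* 8-bit APIC ID => 2^8 = 256 IDs; 0xFF is broadcast => 255 usable.
     * Why the hard limit is 254 rather than 255 is the question here. */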

Re: [PATCH] KVM: apic: fix LDR calculation in x2apic mode

2012-10-14 Thread Chegu Vinod
); kvm_apic_set_ldr(apic, ldr); } apic->base_address = apic->vcpu->arch.apic_base & -- Gleb. . Reviewed-by: Chegu Vinod Tested-by: Chegu Vinod
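
For context (from the x2APIC architecture rather than the patch text): in x2apic mode the LDR is a fixed function of the APIC ID, which is what the fix computes.

    /* x2APIC logical ID: bits 31:16 hold the cluster (id >> 4) and
     * bits 15:0 hold a one-hot bit for the member (id & 0xf). */
    static u32 x2apic_ldr_from_id(u32 id)
    {
            return ((id >> 4) << 16) | (1u << (id & 0xf));
    }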

Re: KVM_MAX_VCPUS

2012-10-14 Thread Chegu Vinod
On 10/14/2012 2:08 AM, Gleb Natapov wrote: On Sat, Oct 13, 2012 at 10:32:13PM -0400, Sasha Levin wrote: On 10/13/2012 06:29 PM, Chegu Vinod wrote: Hello, Wanted to get a clarification about KVM_MAX_VCPUS(currently set to 254) in kvm_host.h file. The kvm_vcpu *vcpus array is sized based on

Re: [PATCH V3 RFC 1/2] sched: Bail out of yield_to when source and target runqueue has one task

2012-11-27 Thread Chegu Vinod
On 11/27/2012 2:30 AM, Raghavendra K T wrote: On 11/26/2012 07:05 PM, Andrew Jones wrote: On Mon, Nov 26, 2012 at 05:37:54PM +0530, Raghavendra K T wrote: From: Peter Zijlstra In case of undercommitted scenarios, especially in large guests, yield_to overhead is significantly high. When run queu
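
The sched-side change under review short-circuits yield_to() when neither runqueue has a second task; roughly (a sketch of the RFC):

    /* kernel/sched/core.c -- early bail-out in yield_to() (sketch) */
    /* With one task on each runqueue there is nobody to yield to,
     * so skip the expensive double-runqueue-lock yield path. */
    if (rq->nr_running == 1 && p_rq->nr_running == 1) {
            yielded = -ESRCH;
            goto out_irq;
    }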

Re: [PATCH V3 RFC 0/2] kvm: Improving undercommit scenarios

2012-11-28 Thread Chegu Vinod
deletions(-). Tested-by: Chegu Vinod

Re: [PATCH V3 RFC 1/2] sched: Bail out of yield_to when source and target runqueue has one task

2012-11-28 Thread Chegu Vinod
On 11/28/2012 5:09 PM, Chegu Vinod wrote: On 11/27/2012 6:23 AM, Chegu Vinod wrote: On 11/27/2012 2:30 AM, Raghavendra K T wrote: On 11/26/2012 07:05 PM, Andrew Jones wrote: On Mon, Nov 26, 2012 at 05:37:54PM +0530, Raghavendra K T wrote: From: Peter Zijlstra In case of undercommitted

vhost-net thread getting stuck?

2013-01-09 Thread Chegu Vinod
Hello, I am running into an issue with the latest bits. [ Pl. see below. The vhost thread seems to be getting stuck while trying to memcopy... perhaps a bad address? ] Wondering if this is a known issue or some recent regression? I am using the latest qemu (from qemu.git) and the latest kvm.

Re: [Qemu-devel] vhost-net thread getting stuck?

2013-01-09 Thread Chegu Vinod
On 1/9/2013 8:35 PM, Jason Wang wrote: On 01/10/2013 04:25 AM, Chegu Vinod wrote: Hello, I am running into an issue with the latest bits. [ Pl. see below. The vhost thread seems to be getting stuck while trying to memcopy... perhaps a bad address? ] Wondering if this is a known issue or

Re: [PATCH v12 0/3] x86, apicv: Add APIC virtualization support

2013-01-24 Thread Chegu Vinod
Zhang, Yang Z intel.com> writes: > > Marcelo Tosatti wrote on 2013-01-24: > > On Wed, Jan 23, 2013 at 10:47:23PM +0800, Yang Zhang wrote: > >> From: Yang Zhang Intel.com> > >> > >> APIC virtualization is a new feature which can eliminate most VM exits > >> when a vcpu handles an interrupt: > >>