node 3 size: 65536 MB
node 4 cpus: 40 41 42 43 44 45 46 47 48 49
node 4 size: 65536 MB
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
Signed-off-by: Chegu
Hello,
On an 8 socket Westmere host I am attempting to run a single guest and
characterize the virtualization overhead for a system intensive
workload (AIM7-high_systime) as the size of the guest scales (10way/64G,
20way/128G, ... 80way/512G).
To do some comparisons between the native vs. gu
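For reference, a guest at the top of that size range (80way/512G) would be launched with qemu options along the following lines. This is only an illustrative sketch: the binary name, disk path and topology values below are assumptions, not the exact command used in these runs.

qemu-system-x86_64 -enable-kvm \
    -smp 80,sockets=8,cores=10,threads=1 \
    -m 512G \
    -drive file=/path/to/guest.img,if=virtio   # disk path is a placeholder

The smaller configurations (10way/64G, 20way/128G, ...) just scale the -smp and -m values down accordingly.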
On 5/9/2012 6:46 AM, Avi Kivity wrote:
On 05/09/2012 04:05 PM, Chegu Vinod wrote:
Hello,
On an 8 socket Westmere host I am attempting to run a single guest and
characterize the virtualization overhead for a system intensive
workload (AIM7-high_systime) as the size of the guest scales (10way
Andrew Theurer linux.vnet.ibm.com> writes:
>
> On 05/09/2012 08:46 AM, Avi Kivity wrote:
> > On 05/09/2012 04:05 PM, Chegu Vinod wrote:
> >> Hello,
> >>
> >> On an 8 socket Westmere host I am attempting to run a single guest and
> >> ch
Chegu Vinod hp.com> writes:
>
> Andrew Theurer linux.vnet.ibm.com> writes:
>
> Regarding the -numa option :
>
I had earlier (about a month ago) tried the -numa option. The layout I
> specified didn't match the layout the guest saw. Haven't yet looke
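For context, the kind of -numa layout being described would look roughly like this on the qemu command line of that era (a sketch only; the node count, cpu ranges and per-node memory below are assumptions):

qemu-system-x86_64 -enable-kvm -smp 20,sockets=2,cores=10 -m 128G \
    -numa node,nodeid=0,cpus=0-9,mem=64G \
    -numa node,nodeid=1,cpus=10-19,mem=64G

The guest-visible layout can then be compared against the intended one with "numactl --hardware" inside the guest, which prints the per-node cpu and size lines quoted elsewhere in this thread.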
On 6/4/2012 6:13 AM, Isaku Yamahata wrote:
On Mon, Jun 04, 2012 at 05:01:30AM -0700, Chegu Vinod wrote:
Hello Isaku Yamahata,
Hi.
I just saw your patches. Would it be possible to email me a tar bundle of these
patches? (It makes it easier to apply the patches to a copy of the upstream
qemu.git
Hello,
I picked up a recent version of the qemu (1.0.92 with some fixes) and tried it
on x86_64 server (with host and the guest running 3.4.1 kernel).
While trying to boot a large guest (80 vcpus + 512GB) I observed that the guest
took forever to boot up... ~1 hr or even more. [This wasn't
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 16:29 +, Chegu Vinod wrote:
Hello,
I picked up a recent version of the qemu (1.0.92 with some fixes) and tried it
on x86_64 server (with host and the guest running 3.4.1 kernel).
BTW, I observe the same thing if I were to
On 6/8/2012 10:10 AM, Chegu Vinod wrote:
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 16:29 +, Chegu Vinod wrote:
Hello,
I picked up a recent version of the qemu (1.0.92 with some fixes)
and tried it
on x86_64 server (with host and the guest running 3.4.1 kernel
On 6/8/2012 10:42 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 10:10 -0700, Chegu Vinod wrote:
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 16:29 +, Chegu Vinod wrote:
Hello,
I picked up a recent version of the qemu (1.0.92 with some fixes) and tried it
on x86_64
On 6/8/2012 11:08 AM, Jan Kiszka wrote:
[CC'ing qemu as this discusses its code base]
On 2012-06-08 19:57, Chegu Vinod wrote:
On 6/8/2012 10:42 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 10:10 -0700, Chegu Vinod wrote:
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-
On 6/10/2012 2:30 AM, Gleb Natapov wrote:
On Fri, Jun 08, 2012 at 11:20:53AM -0700, Chegu Vinod wrote:
On 6/8/2012 11:08 AM, Jan Kiszka wrote:
BTW, another data point... if I try to boot the RHEL6.3 kernel in
the guest (with the latest qemu.git and the 3.4.1 on the host) it
boots just fine
On 6/8/2012 11:37 AM, Jan Kiszka wrote:
On 2012-06-08 20:20, Chegu Vinod wrote:
On 6/8/2012 11:08 AM, Jan Kiszka wrote:
[CC'ing qemu as this discusses its code base]
On 2012-06-08 19:57, Chegu Vinod wrote:
On 6/8/2012 10:42 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 10:10 -0700,
On 6/12/2012 8:39 AM, Gleb Natapov wrote:
On Tue, Jun 12, 2012 at 08:33:59AM -0700, Chegu Vinod wrote:
I rebuilt the 3.4.1 kernel in the guest from scratch and retried my
experiments and measured
the boot times...
a) Host: RHEL6.3 RC1 + qemu-kvm (that came with it) & Guest:
RHEL6.3
node 3 size: 65536 MB
node 4 cpus: 40 41 42 43 44 45 46 47 48 49
node 4 size: 65536 MB
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
node 7 size: 65536 MB
Signed-off-by: Chegu Vin
On 6/18/2012 1:29 PM, Eduardo Habkost wrote:
On Sun, Jun 17, 2012 at 01:12:31PM -0700, Chegu Vinod wrote:
The -numa option to qemu is used to create [fake] numa nodes
and expose them to the guest OS instance.
There are a couple of issues with the -numa option:
a) Max VCPUs that c
On 6/18/2012 3:11 PM, Eric Blake wrote:
On 06/18/2012 04:05 PM, Andreas Färber wrote:
Am 17.06.2012 22:12, schrieb Chegu Vinod:
diff --git a/vl.c b/vl.c
index 204d85b..1906412 100644
--- a/vl.c
+++ b/vl.c
@@ -28,6 +28,7 @@
#include
#include
#include
+#include
Did you check whether this
Hello,
Wanted to share some preliminary data from live migration experiments on a setup
that is perhaps one of the larger ones.
We used Juan's "huge_memory" patches (without the separate migration thread) and
measured the total migration time and the time taken for stage 3 ("downtime").
No
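For anyone repeating this kind of measurement, the monitor sequence is roughly as follows (a sketch; the destination host/port are placeholders, and the exact fields that "info migrate" reports depend on the qemu version and patches in use):

(qemu) migrate_set_speed 10g
(qemu) migrate_set_downtime 2
(qemu) migrate -d tcp:<dest-host>:4444
(qemu) info migrate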
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
Signed-off-by: Chegu Vinod , Jim Hull ,
Craig Hada
---
cpus.c |3 ++-
hw/pc.c |4 +++-
sysemu.h |3 ++-
vl.
node 3 cpus: 30 31 32 33 34 35 36 37 38 39
node 3 size: 65536 MB
node 4 cpus: 40 41 42 43 44 45 46 47 48 49
node 4 size: 65536 MB
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76
Hello,
While running an AIM7 (workfile.high_systime) in a single 40-way (or a single
60-way KVM guest) I noticed pretty bad performance when the guest was booted
with a 3.3.1 kernel compared to the same guest booted with a 2.6.32-220
(RHEL6.2) kernel.
I'm still trying to dig more into the de
Rik van Riel redhat.com> writes:
>
> On 04/11/2012 01:21 PM, Chegu Vinod wrote:
> >
> > Hello,
> >
> > While running an AIM7 (workfile.high_systime) in a single 40-way (or a
> > single 60-way KVM guest) I noticed pretty bad performance when the gues
On 4/16/2012 5:18 AM, Gleb Natapov wrote:
On Thu, Apr 12, 2012 at 02:21:06PM -0400, Rik van Riel wrote:
On 04/11/2012 01:21 PM, Chegu Vinod wrote:
Hello,
While running an AIM7 (workfile.high_systime) in a single 40-way (or a single
60-way KVM guest) I noticed pretty bad performance when the
On 4/17/2012 2:49 AM, Gleb Natapov wrote:
On Mon, Apr 16, 2012 at 07:44:39AM -0700, Chegu Vinod wrote:
On 4/16/2012 5:18 AM, Gleb Natapov wrote:
On Thu, Apr 12, 2012 at 02:21:06PM -0400, Rik van Riel wrote:
On 04/11/2012 01:21 PM, Chegu Vinod wrote:
Hello,
While running an AIM7
Hello,
Perhaps this query was answered in the past. If so, kindly point me to it.
We noticed differences in networking performance (measured via netperf
over a 10G NIC) on an X86_64 server between the following two
configurations:
1) Server running as a KVM Host (but with no KVM guests c
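For reference, a typical netperf run for this kind of comparison looks like the following (a sketch; the peer address, run length and test types shown are assumptions, not the exact arguments used):

netperf -H 10.0.0.2 -l 60 -t TCP_STREAM
netperf -H 10.0.0.2 -l 60 -t TCP_RR

The same invocation is then repeated in each of the two configurations so that only the host-side setup differs.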
On 4/17/2012 6:25 AM, Chegu Vinod wrote:
On 4/17/2012 2:49 AM, Gleb Natapov wrote:
On Mon, Apr 16, 2012 at 07:44:39AM -0700, Chegu Vinod wrote:
On 4/16/2012 5:18 AM, Gleb Natapov wrote:
On Thu, Apr 12, 2012 at 02:21:06PM -0400, Rik van Riel wrote:
On 04/11/2012 01:21 PM, Chegu Vinod wrote
On 4/18/2012 10:43 PM, Gleb Natapov wrote:
On Thu, Apr 19, 2012 at 03:53:39AM +, Chegu Vinod wrote:
Hello,
Perhaps this query was answered in the past. If so, kindly point me to it.
We noticed differences in networking performance (measured via netperf
over a 10G NIC) on an X86_64
Nadav Har'El math.technion.ac.il> writes:
>
> On Fri, Apr 20, 2012, Chegu Vinod wrote about "Re: Networking performance on
> a KVM Host (with no guests)":
> > Removing the "intel_iommu=on" boot time parameter in the Config 1
> > case seeme
+ if (current->state == TASK_RUNNING)
+ vcpu->preempted = true;
kvm_arch_vcpu_put(vcpu);
}
.
Reviewed-by: Chegu Vinod
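The hunk above appears to be from the preempt-notifier sched-out callback; for context, a reconstructed sketch of that function (not the exact patch text) looks like:

static void kvm_sched_out(struct preempt_notifier *pn,
                          struct task_struct *next)
{
        struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

        /* Only a task that was still runnable counts as preempted. */
        if (current->state == TASK_RUNNING)
                vcpu->preempted = true;
        kvm_arch_vcpu_put(vcpu);
}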
continue;
if (vcpu == me)
continue;
if (waitqueue_active(&vcpu->wq))
.
Reviewed-by: Chegu Vinod
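The filter above is part of the directed-yield candidate loop in kvm_vcpu_on_spin(); a trimmed sketch of that pattern (simplified, omitting the last_boosted_vcpu bookkeeping of the real loop) is:

kvm_for_each_vcpu(i, vcpu, kvm) {
        if (!ACCESS_ONCE(vcpu->preempted))
                continue;       /* still running, unlikely to need a boost */
        if (vcpu == me)
                continue;       /* never yield to ourselves */
        if (waitqueue_active(&vcpu->wq))
                continue;       /* halted vcpu, nothing to boost */
        if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
                continue;
        if (kvm_vcpu_yield_to(vcpu) > 0)
                break;          /* boosted a likely lock holder */
}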
On 4/22/2013 1:50 PM, Jiannan Ouyang wrote:
On Mon, Apr 22, 2013 at 4:44 PM, Peter Zijlstra wrote:
On Mon, 2013-04-22 at 16:32 -0400, Rik van Riel wrote:
IIRC one of the reasons was that the performance improvement wasn't
as obvious. Rescheduling VCPUs takes a fair amount of time, quite
proba
tapov
http://article.gmane.org/gmane.comp.emulators.kvm.devel/99713 )
Signed-off-by: Chegu Vinod
---
arch/x86/include/asm/kvm_host.h |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4979778..bc57
}
+ if (unlikely(disable_hugepages)) {
+ vfio_lock_acct(1);
+ return 1;
+ }
+
/* Lock all the consecutive pages from pfn_base */
for (i = 1, vaddr += PAGE_SIZE; i < npage; i++, vaddr += PAGE_SIZE) {
unsigned long pfn = 0;
.
Te
Hello,
Lots (~700+) of the following messages are showing up in the dmesg of a
3.10-rc1 based kernel (Host OS is running on a large socket count box
with HT-on).
[ 82.270682] PERCPU: allocation failed, size=42 align=16, alloc from
reserved chunk failed
[ 82.272633] kvm_intel: Could not
On 6/30/2013 11:22 PM, Rusty Russell wrote:
Chegu Vinod writes:
Hello,
Lots (~700+) of the following messages are showing up in the dmesg of a
3.10-rc1 based kernel (Host OS is running on a large socket count box
with HT-on).
[ 82.270682] PERCPU: allocation failed, size=42 align=16, alloc
On 7/1/2013 10:49 PM, Rusty Russell wrote:
Chegu Vinod writes:
On 6/30/2013 11:22 PM, Rusty Russell wrote:
Chegu Vinod writes:
Hello,
Lots (~700+) of the following messages are showing up in the dmesg of a
3.10-rc1 based kernel (Host OS is running on a large socket count box
with HT-on
On 9/21/2012 4:59 AM, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly,
Yes.
because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
An idea to solve this is:
1) As Avi had proposed we can modify hardwar
Hello,
Wanted to get a clarification about KVM_MAX_VCPUS (currently set to 254)
in the kvm_host.h file. The kvm_vcpu *vcpus array is sized based on KVM_MAX_VCPUS
(i.e. a max of 254 elements in the array).
An 8-bit APIC ID should allow for 256 IDs. Reserving one for broadcast should
leave 255 IDs
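A minimal sketch of the pieces being asked about (values as quoted above; the real structures of course carry many more fields):

/* arch/x86/include/asm/kvm_host.h */
#define KVM_MAX_VCPUS 254

/* include/linux/kvm_host.h */
struct kvm {
        /* ... other fields elided ... */
        struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];  /* one slot per possible vcpu */
};

/*
 * An 8-bit APIC ID gives 256 values; reserving 0xFF for broadcast leaves
 * 255 usable IDs, hence the question of why the limit is 254 rather than 255.
 */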
);
kvm_apic_set_ldr(apic, ldr);
}
apic->base_address = apic->vcpu->arch.apic_base &
--
Gleb.
.
Reviewed-by: Chegu Vinod
Tested-by: Chegu Vinod
On 10/14/2012 2:08 AM, Gleb Natapov wrote:
On Sat, Oct 13, 2012 at 10:32:13PM -0400, Sasha Levin wrote:
On 10/13/2012 06:29 PM, Chegu Vinod wrote:
Hello,
Wanted to get a clarification about KVM_MAX_VCPUS (currently set to 254)
in the kvm_host.h file. The kvm_vcpu *vcpus array is sized based on
On 11/27/2012 2:30 AM, Raghavendra K T wrote:
On 11/26/2012 07:05 PM, Andrew Jones wrote:
On Mon, Nov 26, 2012 at 05:37:54PM +0530, Raghavendra K T wrote:
From: Peter Zijlstra
In case of undercommitted scenarios, especially in large guests,
yield_to overhead is significantly high. When run queu
ons(-)
.
Tested-by: Chegu Vinod
On 11/28/2012 5:09 PM, Chegu Vinod wrote:
On 11/27/2012 6:23 AM, Chegu Vinod wrote:
On 11/27/2012 2:30 AM, Raghavendra K T wrote:
On 11/26/2012 07:05 PM, Andrew Jones wrote:
On Mon, Nov 26, 2012 at 05:37:54PM +0530, Raghavendra K T wrote:
From: Peter Zijlstra
In case of undercommitted
Hello,
I'm running into an issue with the latest bits. [Please see below. The
vhost thread seems to be getting stuck while trying to memcpy... perhaps
a bad address?] Wondering if this is a known issue or some recent regression?
I'm using the latest qemu (from qemu.git) and the latest kvm.
On 1/9/2013 8:35 PM, Jason Wang wrote:
On 01/10/2013 04:25 AM, Chegu Vinod wrote:
Hello,
I'm running into an issue with the latest bits. [Please see below. The
vhost thread seems to be getting stuck while trying to memcpy... perhaps
a bad address?] Wondering if this is a known issue or
Zhang, Yang Z intel.com> writes:
>
> Marcelo Tosatti wrote on 2013-01-24:
> > On Wed, Jan 23, 2013 at 10:47:23PM +0800, Yang Zhang wrote:
> >> From: Yang Zhang Intel.com>
> >>
> >> APIC virtualization is a new feature which can eliminate most of the VM exits
> >> when a vcpu handles an interrupt:
> >>