Re: [PATCH 4/8] KVM: MMU: drop unsync_child_bitmap

2011-12-18 Thread Avi Kivity
On 12/16/2011 12:16 PM, Xiao Guangrong wrote:
> unsync_child_bitmap is used to record which sptes have an unsync page or
> unsync children; we can set a free bit in the spte instead.
>

unsync_child_bitmap takes one cacheline; the shadow page table takes
64.  This will make unsync/resync much more expensive.

I suggest putting unsync_child_bitmap at the end (together with
unsync_children) and simply not allocating it when unneeded (have two
kmem_caches for the two cases).


-- 
error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 8/8] KVM: MMU: remove PT64_SECOND_AVAIL_BITS_SHIFT

2011-12-18 Thread Avi Kivity
On 12/16/2011 12:18 PM, Xiao Guangrong wrote:
> It is not used, remove it
>
> Signed-off-by: Xiao Guangrong 
> ---
>  arch/x86/kvm/mmu.c |1 -
>  1 files changed, 0 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 5d0f0e3..234a32e 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -91,7 +91,6 @@ module_param(dbg, bool, 0644);
>  #define PTE_PREFETCH_NUM 8
>
>  #define PT_FIRST_AVAIL_BITS_SHIFT 9
> -#define PT64_SECOND_AVAIL_BITS_SHIFT 52
>
>

I don't see a reason to drop it; we may use it some day.

-- 
error compiling committee.c: too many arguments to function



[PATCH] kvm tools: Define __compiletime_error helper

2011-12-18 Thread Cyrill Gorcunov
To eliminate compile errors like

 |  CC   builtin-run.o
 | In file included from ../../arch/x86/include/asm/system.h:7:0,
 | from include/kvm/barrier.h:13,
 | from builtin-run.c:16:
 | ../../arch/x86/include/asm/cmpxchg.h:11:13: error: no previous prototype for ‘__xchg_wrong_size’ [-Werror=missing-prototypes]
 | ../../arch/x86/include/asm/cmpxchg.h: In function ‘__xchg_wrong_size’:
 | ../../arch/x86/include/asm/cmpxchg.h:12:2: error: expected declaration specifiers before ‘__compiletime_error’

Signed-off-by: Cyrill Gorcunov 
---

Not sure if it's just me; I've had no such problems before.

 tools/kvm/include/kvm/compiler.h |4 
 1 file changed, 4 insertions(+)

Index: linux-2.6.git/tools/kvm/include/kvm/compiler.h
===
--- linux-2.6.git.orig/tools/kvm/include/kvm/compiler.h
+++ linux-2.6.git/tools/kvm/include/kvm/compiler.h
@@ -1,6 +1,10 @@
 #ifndef KVM_COMPILER_H_
 #define KVM_COMPILER_H_
 
+#ifndef __compiletime_error
+# define __compiletime_error(message)
+#endif
+
 #define notrace __attribute__((no_instrument_function))
 
 #endif /* KVM_COMPILER_H_ */


Re: [PATCH 2/2] kvm tools: Submit multiple virtio-blk requests in parallel

2011-12-18 Thread Sasha Levin
On Fri, 2011-12-16 at 22:57 +0800, Asias He wrote:
> On 12/15/2011 08:15 PM, Sasha Levin wrote:
> > When using AIO, submit all requests which exist in the vring in a single
> > io_submit instead of one io_submit for each descriptor.
> > 
> > Benchmarks:
> > 
> > Short version: 15%+ increase in IOPS, small increase in BW.
> > 
> > Read IOPS:
> > Before:
> >   vda: ios=291792/0, merge=0/0, ticks=35229/0, in_queue=31025, util=61.30%
> 
> I guess you are reading the wrong IOPS number, the 'ios' is the number
> of ios performed by all groups, not the IOPS result. Find the
> 'iops' ;-)
> 
> So, here are the numbers without/with this patch.
> 
> (seq-read, seq-write, rand-read, rand-write)
> 
> Before:
>   read : io=98304KB, bw=63015KB/s, iops=15753, runt=  1560msec
>   write: io=98304KB, bw=56823KB/s, iops=14205, runt=  1730msec
>   read : io=98304KB, bw=62139KB/s, iops=15534, runt=  1582msec
>   write: io=98304KB, bw=53836KB/s, iops=13458, runt=  1826msec
> 
> After:
>   read : io=98304KB, bw=63096KB/s, iops=15774, runt=  1558msec
>   write: io=98304KB, bw=55823KB/s, iops=13955, runt=  1761msec
>   read : io=98304KB, bw=59148KB/s, iops=14787, runt=  1662msec
>   write: io=98304KB, bw=55072KB/s, iops=13768, runt=  1785msec
> 
> Submitting more io requests at one time is not supposed to increase the
> iops or bw so dramatically.
> 
> I even tried to submit all read/write ops in one io_submit which still
> ends up with very little iops or bw improvement. 

Did you test it vs /dev/shm?

These are the results I see from the same test:
Before:
read : io=1157.7MB, bw=118110KB/s, iops=29527 , runt= 10037msec
write: io=1083.6MB, bw=110867KB/s, iops=27716 , runt= 10008msec
After:
read : io=1304.4MB, bw=17KB/s, iops=4 , runt= 10017msec
write: io=1292.4MB, bw=132087KB/s, iops=33021 , runt= 10019msec

-- 

Sasha.



[PATCH] kvm tools: Rename pr_error to pr_err to follow kernel convention

2011-12-18 Thread Cyrill Gorcunov
The kernel already has the pr_err helper; let's do the same.

Signed-off-by: Cyrill Gorcunov 
---
 tools/kvm/builtin-stat.c   |2 +-
 tools/kvm/disk/core.c  |2 +-
 tools/kvm/include/kvm/util.h   |2 +-
 tools/kvm/kvm.c|2 +-
 tools/kvm/util/parse-options.c |   16 
 tools/kvm/util/util.c  |2 +-
 6 files changed, 13 insertions(+), 13 deletions(-)

Index: linux-2.6.git/tools/kvm/builtin-stat.c
===
--- linux-2.6.git.orig/tools/kvm/builtin-stat.c
+++ linux-2.6.git/tools/kvm/builtin-stat.c
@@ -68,7 +68,7 @@ static int do_memstat(const char *name,
 
r = select(1, &fdset, NULL, NULL, &t);
if (r < 0) {
-   pr_error("Could not retrieve mem stats from %s", name);
+   pr_err("Could not retrieve mem stats from %s", name);
return r;
}
r = read(sock, &stats, sizeof(stats));
Index: linux-2.6.git/tools/kvm/disk/core.c
===
--- linux-2.6.git.orig/tools/kvm/disk/core.c
+++ linux-2.6.git/tools/kvm/disk/core.c
@@ -118,7 +118,7 @@ struct disk_image **disk_image__open_all
 
disks[i] = disk_image__open(filenames[i], readonly[i]);
if (!disks[i]) {
-   pr_error("Loading disk image '%s' failed", filenames[i]);
+   pr_err("Loading disk image '%s' failed", filenames[i]);
goto error;
}
}
Index: linux-2.6.git/tools/kvm/include/kvm/util.h
===
--- linux-2.6.git.orig/tools/kvm/include/kvm/util.h
+++ linux-2.6.git/tools/kvm/include/kvm/util.h
@@ -38,7 +38,7 @@ extern bool do_debug_print;
 
 extern void die(const char *err, ...) NORETURN __attribute__((format (printf, 1, 2)));
 extern void die_perror(const char *s) NORETURN;
-extern int pr_error(const char *err, ...) __attribute__((format (printf, 1, 2)));
+extern int pr_err(const char *err, ...) __attribute__((format (printf, 1, 2)));
 extern void pr_warning(const char *err, ...) __attribute__((format (printf, 1, 2)));
 extern void pr_info(const char *err, ...) __attribute__((format (printf, 1, 2)));
 extern void set_die_routine(void (*routine)(const char *err, va_list params) NORETURN);
Index: linux-2.6.git/tools/kvm/kvm.c
===
--- linux-2.6.git.orig/tools/kvm/kvm.c
+++ linux-2.6.git/tools/kvm/kvm.c
@@ -109,7 +109,7 @@ static int kvm__check_extensions(struct
if (!kvm_req_ext[i].name)
break;
if (!kvm__supports_extension(kvm, kvm_req_ext[i].code)) {
-   pr_error("Unsuppored KVM extension detected: %s",
+   pr_err("Unsuppored KVM extension detected: %s",
kvm_req_ext[i].name);
return (int)-i;
}
Index: linux-2.6.git/tools/kvm/util/parse-options.c
===
--- linux-2.6.git.orig/tools/kvm/util/parse-options.c
+++ linux-2.6.git/tools/kvm/util/parse-options.c
@@ -17,10 +17,10 @@
 static int opterror(const struct option *opt, const char *reason, int flags)
 {
if (flags & OPT_SHORT)
-   return pr_error("switch `%c' %s", opt->short_name, reason);
+   return pr_err("switch `%c' %s", opt->short_name, reason);
if (flags & OPT_UNSET)
-   return pr_error("option `no-%s' %s", opt->long_name, reason);
-   return pr_error("option `%s' %s", opt->long_name, reason);
+   return pr_err("option `no-%s' %s", opt->long_name, reason);
+   return pr_err("option `%s' %s", opt->long_name, reason);
 }
 
 static int get_arg(struct parse_opt_ctx_t *p, const struct option *opt,
@@ -324,7 +324,7 @@ static void check_typos(const char *arg,
return;
 
if (!prefixcmp(arg, "no-")) {
-   pr_error ("did you mean `--%s` (with two dashes ?)", arg);
+   pr_err("did you mean `--%s` (with two dashes ?)", arg);
exit(129);
}
 
@@ -332,7 +332,7 @@ static void check_typos(const char *arg,
if (!options->long_name)
continue;
if (!prefixcmp(options->long_name, arg)) {
-   pr_error ("did you mean `--%s` (with two dashes ?)", arg);
+   pr_err("did you mean `--%s` (with two dashes ?)", arg);
exit(129);
}
}
@@ -430,7 +430,7 @@ is_abbreviated:
}
 
if (ambiguous_option)
-   return pr_error("Ambiguous option: %s "
+   return pr_err("Ambiguous option: %s "
"(could be --%s%s or --%s%s)",
arg,
(ambiguous_fla

[PATCH] kvm tools: sdl -- Fix array size for keymap

2011-12-18 Thread Cyrill Gorcunov
The index is a u8 value, so the array size should be 256.

Signed-off-by: Cyrill Gorcunov 
---
 tools/kvm/ui/sdl.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6.git/tools/kvm/ui/sdl.c
===
--- linux-2.6.git.orig/tools/kvm/ui/sdl.c
+++ linux-2.6.git/tools/kvm/ui/sdl.c
@@ -34,7 +34,7 @@ struct set2_scancode {
.type = SCANCODE_ESCAPED,\
 }
 
-static const struct set2_scancode const keymap[255] = {
+static const struct set2_scancode const keymap[256] = {
[9] = DEFINE_SC(0x76),  /*  */
[10]= DEFINE_SC(0x16),  /* 1 */
[11]= DEFINE_SC(0x1e),  /* 2 */


[RFC] kvm tools: Make kvm__arch_setup_firmware to return error code

2011-12-18 Thread Cyrill Gorcunov
If some of the subsequent calls fail, we had better return an error
code instead of dying with a message. This is a first step toward
getting rid of the many die() calls we have in the code.

Signed-off-by: Cyrill Gorcunov 
---
 tools/kvm/builtin-run.c |5 -
 tools/kvm/include/kvm/kvm.h |2 +-
 tools/kvm/powerpc/kvm.c |4 +++-
 tools/kvm/x86/include/kvm/mptable.h |2 +-
 tools/kvm/x86/kvm.c |4 ++--
 tools/kvm/x86/mptable.c |   10 +++---
 6 files changed, 18 insertions(+), 9 deletions(-)

Index: linux-2.6.git/tools/kvm/builtin-run.c
===
--- linux-2.6.git.orig/tools/kvm/builtin-run.c
+++ linux-2.6.git/tools/kvm/builtin-run.c
@@ -1121,7 +1121,9 @@ int kvm_cmd_run(int argc, const char **a
 
kvm__start_timer(kvm);
 
-   kvm__arch_setup_firmware(kvm);
+   exit_code = kvm__arch_setup_firmware(kvm);
+   if (exit_code)
+   goto err;
 
for (i = 0; i < nrcpus; i++) {
kvm_cpus[i] = kvm_cpu__init(kvm, i);
@@ -1151,6 +1153,7 @@ int kvm_cmd_run(int argc, const char **a
exit_code = 1;
}
 
+err:
compat__print_all_messages();
 
fb__stop();
Index: linux-2.6.git/tools/kvm/include/kvm/kvm.h
===
--- linux-2.6.git.orig/tools/kvm/include/kvm/kvm.h
+++ linux-2.6.git/tools/kvm/include/kvm/kvm.h
@@ -55,7 +55,7 @@ void kvm__remove_socket(const char *name
 
 void kvm__arch_set_cmdline(char *cmdline, bool video);
 void kvm__arch_init(struct kvm *kvm, const char *kvm_dev, const char *hugetlbfs_path, u64 ram_size, const char *name);
-void kvm__arch_setup_firmware(struct kvm *kvm);
+int kvm__arch_setup_firmware(struct kvm *kvm);
 bool kvm__arch_cpu_supports_vm(void);
 void kvm__arch_periodic_poll(struct kvm *kvm);
 
Index: linux-2.6.git/tools/kvm/powerpc/kvm.c
===
--- linux-2.6.git.orig/tools/kvm/powerpc/kvm.c
+++ linux-2.6.git/tools/kvm/powerpc/kvm.c
@@ -176,7 +176,7 @@ static void setup_fdt(struct kvm *kvm)
 /**
  * kvm__arch_setup_firmware
  */
-void kvm__arch_setup_firmware(struct kvm *kvm)
+int kvm__arch_setup_firmware(struct kvm *kvm)
 {
/* Load RTAS */
 
@@ -184,4 +184,6 @@ void kvm__arch_setup_firmware(struct kvm
 
/* Init FDT */
setup_fdt(kvm);
+
+   return 0;
 }
Index: linux-2.6.git/tools/kvm/x86/include/kvm/mptable.h
===
--- linux-2.6.git.orig/tools/kvm/x86/include/kvm/mptable.h
+++ linux-2.6.git/tools/kvm/x86/include/kvm/mptable.h
@@ -3,6 +3,6 @@
 
 struct kvm;
 
-void mptable_setup(struct kvm *kvm, unsigned int ncpus);
+int mptable_setup(struct kvm *kvm, unsigned int ncpus);
 
 #endif /* KVM_MPTABLE_H_ */
Index: linux-2.6.git/tools/kvm/x86/kvm.c
===
--- linux-2.6.git.orig/tools/kvm/x86/kvm.c
+++ linux-2.6.git/tools/kvm/x86/kvm.c
@@ -349,7 +349,7 @@ bool load_bzimage(struct kvm *kvm, int f
  * This function is a main routine where we poke guest memory
  * and install BIOS there.
  */
-void kvm__arch_setup_firmware(struct kvm *kvm)
+int kvm__arch_setup_firmware(struct kvm *kvm)
 {
/* standart minimal configuration */
setup_bios(kvm);
@@ -357,7 +357,7 @@ void kvm__arch_setup_firmware(struct kvm
/* FIXME: SMP, ACPI and friends here */
 
/* MP table */
-   mptable_setup(kvm, kvm->nrcpus);
+   return mptable_setup(kvm, kvm->nrcpus);
 }
 
 void kvm__arch_periodic_poll(struct kvm *kvm)
Index: linux-2.6.git/tools/kvm/x86/mptable.c
===
--- linux-2.6.git.orig/tools/kvm/x86/mptable.c
+++ linux-2.6.git/tools/kvm/x86/mptable.c
@@ -71,7 +71,7 @@ static void mptable_add_irq_src(struct m
 /**
  * mptable_setup - create mptable and fill guest memory with it
  */
-void mptable_setup(struct kvm *kvm, unsigned int ncpus)
+int mptable_setup(struct kvm *kvm, unsigned int ncpus)
 {
unsigned long real_mpc_table, real_mpf_intel, size;
struct mpf_intel *mpf_intel;
@@ -264,8 +264,11 @@ void mptable_setup(struct kvm *kvm, unsi
 */
 
if (size > (unsigned long)(MB_BIOS_END - bios_rom_size) ||
-   size > MPTABLE_MAX_SIZE)
-   die("MP table is too big");
+   size > MPTABLE_MAX_SIZE) {
+   free(mpc_table);
+   pr_err("MP table is too big");
+   return -1;
+   }
 
/*
 * OK, it is time to move it to guest memory.
@@ -273,4 +276,5 @@ void mptable_setup(struct kvm *kvm, unsi
memcpy(guest_flat_to_host(kvm, real_mpc_table), mpc_table, size);
 
free(mpc_table);
+   return 0;
 }

[PATCH RFC v3 0/2] Initial support for Microsoft Hyper-V.

2011-12-18 Thread Vadim Rozenfeld
With the following series of patches we are starting to implement
some basic Microsoft Hyper-V Enlightenment functionality. This series
is mostly about adding support for relaxed timing, spinlock,
and virtual apic.

For more Hyper-V related information please see:
"Hypervisor Functional Specification v2.0: For Windows Server 2008 R2" at
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=18673

Changelog:
 v2->v3
  - fix indention,
  - drop reading HV MSRs inside of kvm_get_msrs routine.
 v1->v2
  - remove KVM_CAP_IRQCHIP ifdef,
  - remove CONFIG_HYPERV config option,
  - move KVM leaves to new location (0x4100),
  - cosmetic changes.
 v0->v1
  - move hyper-v parameters under cpu category,
  - move hyper-v stuff to target-i386 directory,
  - make CONFIG_HYPERV enabled by default for
i386-softmmu and x86_64-softmmu configurations,
  - rearrange the patches from v0,
  - set HV_X64_MSR_HYPERCALL, HV_X64_MSR_GUEST_OS_ID,
and HV_X64_MSR_APIC_ASSIST_PAGE to 0 on system reset.
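With the series applied, the enlightenments are requested through -cpu feature strings; hv_relaxed, hv_vapic and hv_spinlocks are the names parsed by this series, while the rest of the command line is only an illustrative guess:

```shell
qemu-system-x86_64 -enable-kvm -m 1024 \
    -cpu kvm64,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
    -drive file=winsrv2008r2.img,if=virtio
```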


Vadim Rozenfeld (2):
  hyper-v: introduce Hyper-V support infrastructure.
  hyper-v: initialize Hyper-V CPUID leaves.

 Makefile.target  |2 +
 target-i386/cpuid.c  |   14 ++
 target-i386/hyperv.c |   65 ++
 target-i386/hyperv.h |   37 
 target-i386/kvm.c|   65 -
 5 files changed, 181 insertions(+), 2 deletions(-)
 create mode 100644 target-i386/hyperv.c
 create mode 100644 target-i386/hyperv.h

-- 
1.7.4.4



[PATCH RFC v3 1/2] hyper-v: introduce Hyper-V support infrastructure.

2011-12-18 Thread Vadim Rozenfeld
---
 Makefile.target  |2 +
 target-i386/cpuid.c  |   14 ++
 target-i386/hyperv.c |   65 ++
 target-i386/hyperv.h |   37 
 4 files changed, 118 insertions(+), 0 deletions(-)
 create mode 100644 target-i386/hyperv.c
 create mode 100644 target-i386/hyperv.h

diff --git a/Makefile.target b/Makefile.target
index 6e742c2..6245796 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -209,6 +209,8 @@ obj-$(CONFIG_NO_KVM) += kvm-stub.o
 obj-y += memory.o
 LIBS+=-lz
 
+obj-i386-y +=hyperv.o
+
 QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
 QEMU_CFLAGS += $(VNC_SASL_CFLAGS)
 QEMU_CFLAGS += $(VNC_JPEG_CFLAGS)
diff --git a/target-i386/cpuid.c b/target-i386/cpuid.c
index 1e8bcff..4193df1 100644
--- a/target-i386/cpuid.c
+++ b/target-i386/cpuid.c
@@ -27,6 +27,8 @@
 #include "qemu-option.h"
 #include "qemu-config.h"
 
+#include "hyperv.h"
+
 /* feature flags taken from "Intel Processor Identification and the CPUID
  * Instruction" and AMD's "CPUID Specification".  In cases of disagreement
  * between feature naming conventions, aliases may be added.
@@ -716,6 +718,14 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
 goto error;
 }
 x86_cpu_def->tsc_khz = tsc_freq / 1000;
+} else if (!strcmp(featurestr, "hv_spinlocks")) {
+char *err;
+numvalue = strtoul(val, &err, 0);
+if (!*val || *err) {
+fprintf(stderr, "bad numerical value %s\n", val);
+goto error;
+}
+hyperv_set_spinlock_retries(numvalue);
 } else {
 fprintf(stderr, "unrecognized feature %s\n", featurestr);
 goto error;
@@ -724,6 +734,10 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
 check_cpuid = 1;
 } else if (!strcmp(featurestr, "enforce")) {
 check_cpuid = enforce_cpuid = 1;
+} else if (!strcmp(featurestr, "hv_relaxed")) {
+hyperv_enable_relaxed_timing(true);
+} else if (!strcmp(featurestr, "hv_vapic")) {
+hyperv_enable_vapic_recommended(true);
 } else {
fprintf(stderr, "feature string `%s' not in format (+feature|-feature|feature=xyz)\n", featurestr);
 goto error;
diff --git a/target-i386/hyperv.c b/target-i386/hyperv.c
new file mode 100644
index 000..b2e57ad
--- /dev/null
+++ b/target-i386/hyperv.c
@@ -0,0 +1,65 @@
+/*
+ * QEMU Hyper-V support
+ *
+ * Copyright Red Hat, Inc. 2011
+ *
+ * Author: Vadim Rozenfeld 
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "hyperv.h"
+
+static bool hyperv_vapic;
+static bool hyperv_relaxed_timing;
+static int hyperv_spinlock_attempts = HYPERV_SPINLOCK_NEVER_RETRY;
+
+void hyperv_enable_vapic_recommended(bool val)
+{
+hyperv_vapic = val;
+}
+
+void hyperv_enable_relaxed_timing(bool val)
+{
+hyperv_relaxed_timing = val;
+}
+
+void hyperv_set_spinlock_retries(int val)
+{
+hyperv_spinlock_attempts = val;
+if (hyperv_spinlock_attempts < 0xFFF) {
+hyperv_spinlock_attempts = 0xFFF;
+}
+}
+
+bool hyperv_enabled(void)
+{
+return hyperv_hypercall_available() || hyperv_relaxed_timing_enabled();
+}
+
+bool hyperv_hypercall_available(void)
+{
+if (hyperv_vapic ||
+(hyperv_spinlock_attempts != HYPERV_SPINLOCK_NEVER_RETRY)) {
+  return true;
+}
+return false;
+}
+
+bool hyperv_vapic_recommended(void)
+{
+return hyperv_vapic;
+}
+
+bool hyperv_relaxed_timing_enabled(void)
+{
+return hyperv_relaxed_timing;
+}
+
+int hyperv_get_spinlock_retries(void)
+{
+return hyperv_spinlock_attempts;
+}
+
diff --git a/target-i386/hyperv.h b/target-i386/hyperv.h
new file mode 100644
index 000..0d742f8
--- /dev/null
+++ b/target-i386/hyperv.h
@@ -0,0 +1,37 @@
+/*
+ * QEMU Hyper-V support
+ *
+ * Copyright Red Hat, Inc. 2011
+ *
+ * Author: Vadim Rozenfeld 
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef QEMU_HW_HYPERV_H
+#define QEMU_HW_HYPERV_H 1
+
+#include "qemu-common.h"
+#include 
+
+#ifndef HYPERV_SPINLOCK_NEVER_RETRY
+#define HYPERV_SPINLOCK_NEVER_RETRY 0x
+#endif
+
+#ifndef KVM_CPUID_SIGNATURE_NEXT
+#define KVM_CPUID_SIGNATURE_NEXT0x4100
+#endif
+
+void hyperv_enable_vapic_recommended(bool val);
+void hyperv_enable_relaxed_timing(bool val);
+void hyperv_set_spinlock_retries(int val);
+
+bool hyperv_enabled(void);
+bool hyperv_hypercall_available(void);
+bool hyperv_vapic_recommended(void);
+bool hyperv_relaxed_timing_enabled(void);
+int hyperv_get_spinlock_retries(void);
+
+#endif /* QEMU_HW_HYPERV_H */
-- 
1.7.4.4


[PATCH RFC v3 2/2] hyper-v: initialize Hyper-V CPUID leaves.

2011-12-18 Thread Vadim Rozenfeld
---
 target-i386/kvm.c |   65 +++-
 1 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 9080996..731cc8d 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -29,6 +29,7 @@
 #include "hw/pc.h"
 #include "hw/apic.h"
 #include "ioport.h"
+#include "hyperv.h"
 
 //#define DEBUG_KVM
 
@@ -381,11 +382,16 @@ int kvm_arch_init_vcpu(CPUState *env)
 cpuid_i = 0;
 
 /* Paravirtualization CPUIDs */
-memcpy(signature, "KVMKVMKVM\0\0\0", 12);
 c = &cpuid_data.entries[cpuid_i++];
 memset(c, 0, sizeof(*c));
 c->function = KVM_CPUID_SIGNATURE;
-c->eax = 0;
+if (!hyperv_enabled()) {
+memcpy(signature, "KVMKVMKVM\0\0\0", 12);
+c->eax = 0;
+} else {
+memcpy(signature, "Microsoft Hv", 12);
+c->eax = HYPERV_CPUID_MIN;
+}
 c->ebx = signature[0];
 c->ecx = signature[1];
 c->edx = signature[2];
@@ -396,6 +402,54 @@ int kvm_arch_init_vcpu(CPUState *env)
 c->eax = env->cpuid_kvm_features &
 kvm_arch_get_supported_cpuid(s, KVM_CPUID_FEATURES, 0, R_EAX);
 
+if (hyperv_enabled()) {
+memcpy(signature, "Hv#1\0\0\0\0\0\0\0\0", 12);
+c->eax = signature[0];
+
+c = &cpuid_data.entries[cpuid_i++];
+memset(c, 0, sizeof(*c));
+c->function = HYPERV_CPUID_VERSION;
+c->eax = 0x1bbc;
+c->ebx = 0x00060001;
+
+c = &cpuid_data.entries[cpuid_i++];
+memset(c, 0, sizeof(*c));
+c->function = HYPERV_CPUID_FEATURES;
+if (hyperv_relaxed_timing_enabled()) {
+c->eax |= HV_X64_MSR_HYPERCALL_AVAILABLE;
+}
+if (hyperv_vapic_recommended()) {
+c->eax |= HV_X64_MSR_HYPERCALL_AVAILABLE;
+c->eax |= HV_X64_MSR_APIC_ACCESS_AVAILABLE;
+}
+
+c = &cpuid_data.entries[cpuid_i++];
+memset(c, 0, sizeof(*c));
+c->function = HYPERV_CPUID_ENLIGHTMENT_INFO;
+if (hyperv_relaxed_timing_enabled()) {
+c->eax |= HV_X64_RELAXED_TIMING_RECOMMENDED;
+}
+if (hyperv_vapic_recommended()) {
+c->eax |= HV_X64_APIC_ACCESS_RECOMMENDED;
+}
+c->ebx = hyperv_get_spinlock_retries();
+
+c = &cpuid_data.entries[cpuid_i++];
+memset(c, 0, sizeof(*c));
+c->function = HYPERV_CPUID_IMPLEMENT_LIMITS;
+c->eax = 0x40;
+c->ebx = 0x40;
+
+c = &cpuid_data.entries[cpuid_i++];
+memset(c, 0, sizeof(*c));
+c->function = KVM_CPUID_SIGNATURE_NEXT;
+memcpy(signature, "KVMKVMKVM\0\0\0", 12);
+c->eax = 0;
+c->ebx = signature[0];
+c->ecx = signature[1];
+c->edx = signature[2];
+}
+
 has_msr_async_pf_en = c->eax & (1 << KVM_FEATURE_ASYNC_PF);
 
 cpu_x86_cpuid(env, 0, 0, &limit, &unused, &unused, &unused);
@@ -962,6 +1016,13 @@ static int kvm_put_msrs(CPUState *env, int level)
 kvm_msr_entry_set(&msrs[n++], MSR_KVM_ASYNC_PF_EN,
   env->async_pf_en_msr);
 }
+if (hyperv_hypercall_available()) {
+kvm_msr_entry_set(&msrs[n++], HV_X64_MSR_GUEST_OS_ID, 0);
+kvm_msr_entry_set(&msrs[n++], HV_X64_MSR_HYPERCALL, 0);
+}
+if (hyperv_vapic_recommended()) {
+kvm_msr_entry_set(&msrs[n++], HV_X64_MSR_APIC_ASSIST_PAGE, 0);
+}
 }
 if (env->mcg_cap) {
 int i;
-- 
1.7.4.4



Re: [RFT PATCH] blkio: alloc per cpu data from worker thread context( Re: kvm deadlock)

2011-12-18 Thread Nate Custer

On Dec 16, 2011, at 2:29 PM, Vivek Goyal wrote:
> Thanks for testing it Nate. I did some debugging and found out that patch
> is doing double free on per cpu pointer hence the crash you are running
> into. I could reproduce this problem on my box. It is just a matter of
> doing rmdir on the blkio cgroup.
> 
> I understood the cmpxchg() semantics wrong. I have fixed it now and
> no crashes on directory removal. Can you please give this version a
> try.
> 
> Thanks
> Vivek

After 24 hours of stress testing the machine remains up and working without 
issue. I will continue to test it, but am reasonably confident that this patch 
resolves my issue.

Nate Custer


[PATCH] kvm tools: Use assert() helper to check a variable value

2011-12-18 Thread Cyrill Gorcunov
BUILD_BUG_ON is unable to catch errors in expressions which can't be
evaluated at compile time.

Signed-off-by: Cyrill Gorcunov 
---
 tools/kvm/x86/bios.c |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Index: linux-2.6.git/tools/kvm/x86/bios.c
===
--- linux-2.6.git.orig/tools/kvm/x86/bios.c
+++ linux-2.6.git/tools/kvm/x86/bios.c
@@ -5,6 +5,7 @@
 #include "kvm/util.h"
 
 #include 
+#include 
 #include 
 
 #include "bios/bios-rom.h"
@@ -98,7 +99,7 @@ static void e820_setup(struct kvm *kvm)
};
}
 
-   BUILD_BUG_ON(i > E820_X_MAX);
+   assert(i <= E820_X_MAX);
 
e820->nr_map = i;
 }


[PATCH] KVM: Don't mistreat edge-triggered INIT IPI as INIT de-assert. (LAPIC)

2011-12-18 Thread Julian Stecklina
If the guest programs an IPI with level=0 (de-assert) and trig_mode=0 (edge),
it is erroneously treated as INIT de-assert and ignored, but to quote the
spec: "For this delivery mode [INIT de-assert], the level flag must be set to
0 and trigger mode flag to 1."

Signed-off-by: Julian Stecklina 
---
 arch/x86/kvm/lapic.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index a7f3e65..260770d 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -433,7 +433,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
break;
 
case APIC_DM_INIT:
-   if (level) {
+   if (!trig_mode || level) {
result = 1;
vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
kvm_make_request(KVM_REQ_EVENT, vcpu);
-- 
1.7.7.4



Re: [PATCH v4] kvm: make vcpu life cycle separated from kvm instance

2011-12-18 Thread Takuya Yoshikawa

Liu ping fan wrote:

Suppose the following scenario:
First, we create 10 kvm_vcpus so the guest can take advantage of
multi-core. Later, we reclaim some of the kvm_vcpus to limit the
guest's cpu usage. Then what about the unused kvm_vcpus? Currently
they just sit idle in the kernel, but with this patch we can remove them.


Then why not write it in the changelog?


+void kvm_arch_vcpu_zap(struct work_struct *work)
+{
+ struct kvm_vcpu *vcpu = container_of(work, struct kvm_vcpu,
+ zap_work);
+ struct kvm *kvm = vcpu->kvm;

- atomic_set(&kvm->online_vcpus, 0);
- mutex_unlock(&kvm->lock);
+ kvm_clear_async_pf_completion_queue(vcpu);
+ kvm_unload_vcpu_mmu(vcpu);
+ kvm_arch_vcpu_free(vcpu);
+ kvm_put_kvm(kvm);
   }


zap is really a good name for this?


zap = destroy, so I think it is OK.


Stronger than that.
My dictionary says "to destroy sth suddenly and with force."

In the case of shadow pages, I see what the author wanted to mean by "zap".

In your case, does the host really destroy a VCPU suddenly?
The guest has to unplug it first, I guess.

If you just mean "destroy", why not use it?


+#define kvm_for_each_vcpu(vcpu, kvm) \
+ list_for_each_entry_rcu(vcpu,&kvm->vcpus, list)


Is this macro really worth it?
_rcu shows readers important information, I think.


I guess kvm_for_each_vcpu is designed to hide the details of the
internal implementation; currently it is implemented as an array, and my
patch will change it to a linked list, so IMO we can still hide the
details.


Then why are you doing
list_add_rcu(&vcpu->list, &kvm->vcpus);
without introducing kvm_add_vcpu()?

You are just hiding part of the interface.
I believe this kind of incomplete abstraction should not be added.

The original code was complex enough to introduce a macro, but
list_for_each_entry_rcu(vcpu, &kvm->vcpus, list)
is simple and shows clear meaning by itself.

Takuya


Re: [RFC] virtio: use mandatory barriers for remote processor vdevs

2011-12-18 Thread Amos Kong

On 12/12/11 13:12, Rusty Russell wrote:

On Mon, 12 Dec 2011 11:06:53 +0800, Amos Kong  wrote:

On 12/12/11 06:27, Benjamin Herrenschmidt wrote:

On Sun, 2011-12-11 at 14:25 +0200, Michael S. Tsirkin wrote:


Forwarding some results by Amos, who run multiple netperf streams in
parallel, from an external box to the guest.  TCP_STREAM results were
noisy.  This could be due to buffering done by TCP, where packet size
varies even as message size is constant.

TCP_RR results were consistent. In this benchmark, after switching
to mandatory barriers, CPU utilization increased by up to 35% while
throughput went down by up to 14%. the normalized throughput/cpu
regressed consistently, between 7 and 35%

The "fix" applied was simply this:


What machine & processor was this?


pined guest memory to numa node 1


Please try this patch.  How much does the branch cost us?

(Compiles, untested).

Thanks,
Rusty.

From: Rusty Russell
Subject: virtio: harsher barriers for virtio-mmio.

We were cheating with our barriers; using the smp ones rather than the
real device ones.  That was fine, until virtio-mmio came along, which
could be talking to a real device (a non-SMP CPU).

Unfortunately, just putting back the real barriers (reverting
d57ed95d) causes a performance regression on virtio-pci.  In
particular, Amos reports netbench's TCP_RR over virtio_net CPU
utilization increased up to 35% while throughput went down by up to
14%.

By comparison, this branch costs us???

Reference: https://lkml.org/lkml/2011/12/11/22

Signed-off-by: Rusty Russell
---
  drivers/lguest/lguest_device.c |   10 ++
  drivers/s390/kvm/kvm_virtio.c  |2 +-
  drivers/virtio/virtio_mmio.c   |7 ---
  drivers/virtio/virtio_pci.c|4 ++--
  drivers/virtio/virtio_ring.c   |   34 +-
  include/linux/virtio_ring.h|1 +
  tools/virtio/linux/virtio.h|1 +
  tools/virtio/virtio_test.c |3 ++-
  8 files changed, 38 insertions(+), 24 deletions(-)


Hi all,

I tested with the same environment and scenarios; each scenario was run
three times and the average computed for more precision.


Thanks, Amos

- compare results ---
Mon Dec 19 09:51:09 2011

1 - avg-old.netperf.exhost_guest.txt
2 - avg-fixed.netperf.exhost_guest.txt

==
TCP_STREAM
  sessions| size|throughput|   cpu| normalize|  #tx-pkts| #rx-pkts|  #tx-byts|  #rx-byts| #re-trans|  #tx-intr|  #rx-intr| #io_exit|  #irq_inj|#tpkt/#exit| #rpkt/#irq
1  1|   64| 1073.54| 10.50| 102| 0| 31| 0| 1612| 0| 16| 487641| 489753| 504764| 0.00| 0.00
2  1|   64| 1079.44| 10.29| 104| 0| 30| 0| 1594| 0| 17| 487156| 488828| 504411| 0.00| 0.00
%   |  0.0|   +0.5|  -2.0| +2.0| 0| -3.2| 0| -1.1| 0| +6.2| -0.1| -0.2| -0.1| 0| 0
1  2|   64| 2141.12| 15.72| 136| 0| 33| 0| 1744| 0| 34| 873777| 972303| 928926| 0.00| 0.00
2  2|   64| 2140.88| 15.64| 137| 0| 33| 0| 1744| 0| 34| 926588| 942841| 974095| 0.00| 0.00
%   |  0.0|   -0.0|  -0.5| +0.7| 0|  0.0| 0|  0.0| 0|  0.0| +6.0| -3.0| +4.9| 0| 0
1  4|   64| 4076.80| 19.82| 205| 0| 30| 0| 1577| 0| 67| 1422282| 1166425| 1539219| 0.00| 0.00
2  4|   64| 4094.32| 20.70| 197| 0| 31| 0| 1612| 0| 68| 1704330| 1314077| 1833394| 0.00| 0.00
%   |  0.0|   +0.4|  +4.4| -3.9| 0| +3.3| 0| +2.2| 0| +1.5| +19.8| +12.7| +19.1| 0| 0
1  1|  256| 2867.48| 13.44| 213| 0| 32| 0| 1726| 0| 14| 666430| 694922| 690730| 0.00| 0.00
2  1|  256| 2874.20| 12.71| 226| 0| 32| 0| 1709| 0| 14| 697960| 740407| 721807| 0.00| 0.00
%   |  0.0|   +0.2|  -5.4| +6.1| 0|  0.0| 0| -1.0| 0|  0.0| +4.7| +6.5| +4.5| 0| 0
1  2|  256| 5642.82| 17.61| 320| 0| 30| 0| 1594| 0| 30| 1226861| 1236081| 1268562| 0.00| 0.00
2  2|  256| 5661.06| 17.41| 326| 0| 30| 0| 1594| 0| 29| 1175696| 1143490| 1221528| 0.00| 0.00
%   |  0.0|   +0.3|  -1.1| +1.9| 0|  0.0| 0|  0.0|

Re: [PATCH 1/8] KVM: MMU: combine unsync and unsync_children

2011-12-18 Thread Takuya Yoshikawa

About naming issues in the kvm mmu code.

Not restricted to your patch series, so please take as a suggestion
for the future.

(2011/12/16 19:13), Xiao Guangrong wrote:

+static bool sp_is_unsync(struct kvm_mmu_page *sp)
+{
+   return sp->role.level == PT_PAGE_TABLE_LEVEL && sp->unsync;
+}


is_unsync_sp() is more consistent with others?
e.g. is_large_pte(), is_writable_pte(), is_last_spte()


Takuya


+
+static unsigned int sp_unsync_children_num(struct kvm_mmu_page *sp)
+{
+   unsigned int num = 0;
+
+   if (sp->role.level != PT_PAGE_TABLE_LEVEL)
+   num = sp->unsync_children;
+
+   return num;
+}
+



Re: [PATCH 3/8] KVM: MMU: do not add a nonpresent spte to rmaps of its child

2011-12-18 Thread Takuya Yoshikawa

(2011/12/16 19:15), Xiao Guangrong wrote:


-static void mmu_page_add_parent_pte(struct kvm_vcpu *vcpu,
-   struct kvm_mmu_page *sp, u64 *parent_pte)
+static void mmu_page_add_set_parent_pte(struct kvm_vcpu *vcpu,
+   struct kvm_mmu_page *sp,
+   u64 *parent_pte)
  {
if (!parent_pte)
return;

+   mmu_spte_set(parent_pte, __pa(sp->spt) | SHADOW_PAGE_TABLE);
	pte_list_add(vcpu, parent_pte, &sp->parent_ptes);
  }


There are a few prefixes in the kvm mmu code.

e.g. mmu_page_, kvm_mmu_, kvm_mmu_page_, ...

Sometimes we also use "sp".

How about deciding a consistent way from now on?

E.g.
if the function is static and for local use only, such a prefix
can be eliminated,

if it is used outside of mmu.c, kvm_mmu_ is needed,

we use sp for kvm_mmu_page,
...

(just an example)

Takuya


Re: [PATCH] kvm tools: Make the whole guest memory mergeable

2011-12-18 Thread Zang Hongyong

On Fri, 2011/12/16 17:46, Sasha Levin wrote:

On Fri, 2011-12-16 at 17:33 +0800, Zang Hongyong wrote:

Do you see an issue with increasing kvm->ram_size?


Yes, simply increasing kvm->ram_size will cause some problems.
For example:
In the kvm__init_ram() code we use kvm->ram_size to calculate the size of the
second RAM range from 4GB to the end of RAM (phys_size = kvm->ram_size -
phys_size;), so after increasing kvm->ram_size, it goes wrong.
This problem also happens in the e820_setup() and load_bzimage() code.

Yup, but fixing it is much easier than having two different sizes of the same 
thing.

For example, the fix for the problem in kvm__init_ram() (and e820_setup()) 
would be:

@@ -112,7 +112,7 @@ void kvm__init_ram(struct kvm *kvm)
 /* Second RAM range from 4GB to the end of RAM: */

        phys_start = 0x100000000ULL;
-   phys_size  = kvm->ram_size - phys_size;
+   phys_size  = kvm->ram_size - phys_start;
 host_mem   = kvm->ram_start + phys_start;

 kvm__register_mem(kvm, phys_start, phys_size, host_mem);

I basically want one memory map with one size which includes *everything*, even 
if that memory map includes a gap in the middle I still want the total size to 
include that gap.
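To make that single gap-inclusive memory map concrete, here is a small userspace sketch (not kvm tool code; the constants are assumed to match tools/kvm's x86 definitions, and the helper name ram_ranges is made up for illustration) of splitting one total size into the two usable RAM ranges:

```c
#include <assert.h>
#include <stdint.h>

/* assumed to mirror tools/kvm/x86: a 768MB MMIO hole just below 4GB */
#define KVM_32BIT_GAP_SIZE	(768ULL << 20)
#define KVM_32BIT_GAP_START	((1ULL << 32) - KVM_32BIT_GAP_SIZE)

struct ram_range {
	uint64_t start;
	uint64_t size;
};

/*
 * Split one gap-inclusive ram_size into the usable RAM ranges, the way
 * kvm__init_ram() does after the fix: the second range runs from 4GB to
 * the end, so its size is ram_size - phys_start (not ram_size - phys_size).
 */
static int ram_ranges(uint64_t ram_size, struct ram_range out[2])
{
	if (ram_size <= KVM_32BIT_GAP_START) {
		out[0] = (struct ram_range){ .start = 0, .size = ram_size };
		return 1;
	}
	out[0] = (struct ram_range){ .start = 0, .size = KVM_32BIT_GAP_START };
	out[1] = (struct ram_range){ .start = 1ULL << 32,
				     .size  = ram_size - (1ULL << 32) };
	return 2;
}
```

With a gap-inclusive size of 4GB + 768MB this yields [0, 4GB-768MB) and [4GB, 4GB+768MB), i.e. exactly 4GB of usable RAM plus the gap.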

btw, what problem do you see in load_bzimage()?


I see what you mean now.
And there's nothing wrong in load_bzimage(); it was my misunderstanding.



Re: [RFC] virtio: use mandatory barriers for remote processor vdevs

2011-12-18 Thread Benjamin Herrenschmidt
On Mon, 2011-12-19 at 10:19 +0800, Amos Kong wrote:

> I tested with the same environment and scenarios.
> I tested each scenario three times and computed the average for more
> precision.
> 
> Thanks, Amos
> 
> - compare results ---
> Mon Dec 19 09:51:09 2011
> 
> 1 - avg-old.netperf.exhost_guest.txt
> 2 - avg-fixed.netperf.exhost_guest.txt

The output is word wrapped and generally unreadable. Any chance you can
provide us with a summary of the outcome ?

Cheers,
Ben.

> ==
> TCP_STREAM
> sessions| size| throughput| cpu| normalize| #tx-pkts| #rx-pkts| #tx-byts| #rx-byts| #re-trans| #tx-intr| #rx-intr| #io_exit| #irq_inj| #tpkt/#exit| #rpkt/#irq
> 1  1| 64| 1073.54| 10.50| 102| 0| 31| 0| 1612| 0| 16| 487641| 489753| 504764| 0.00| 0.00
> 2  1| 64| 1079.44| 10.29| 104| 0| 30| 0| 1594| 0| 17| 487156| 488828| 504411| 0.00| 0.00
> %   | 0.0| +0.5| -2.0| +2.0| 0| -3.2| 0| -1.1| 0| +6.2| -0.1| -0.2| -0.1| 0| 0
> 1  2| 64| 2141.12| 15.72| 136| 0| 33| 0| 1744| 0| 34| 873777| 972303| 928926| 0.00| 0.00
> 2  2| 64| 2140.88| 15.64| 137| 0| 33| 0| 1744| 0| 34| 926588| 942841| 974095| 0.00| 0.00
> %   | 0.0| -0.0| -0.5| +0.7| 0| 0.0| 0| 0.0| 0| 0.0| +6.0| -3.0| +4.9| 0| 0
> 1  4| 64| 4076.80| 19.82| 205| 0| 30| 0| 1577| 0| 67| 1422282| 1166425| 1539219| 0.00| 0.00
> 2  4| 64| 4094.32| 20.70| 197| 0| 31| 0| 1612| 0| 68| 1704330| 1314077| 1833394| 0.00| 0.00
> %   | 0.0| +0.4| +4.4| -3.9| 0| +3.3| 0| +2.2| 0| +1.5| +19.8| +12.7| +19.1| 0| 0
> 1  1| 256| 2867.48| 13.44| 213| 0| 32| 0| 1726| 0| 14| 666430| 694922| 690730| 0.00| 0.00
> 2  1| 256| 2874.20| 12.71| 226| 0| 32| 0| 1709| 0| 14| 697960| 740407| 721807| 0.00| 0.00
> %   | 0.0| +0.2| -5.4| +6.1| 0| 0.0| 0| -1.0| 0| 0.0| +4.7| +6.5| +4.5| 0| 0
> 1  2| 256| 5642.82| 17.61| 320| 0| 30| 0| 1594| 0| 30| 1226861| 1236081| 1268562| 0.00| 0.00
> 2  2| 256| 5661.06| 17.41| 326| 0| 30| 0| 1594| 0| 29| 1175696| 1143490| 1221528| 0.00| 0.00
> %   | 0.0| +0.3| -1.1| +1.9| 0| 0.0| 0| 0.0| 0| -3.3| -4.2| -7.5| -3.7| 0| 0
> 1  4| 256| 9404.27| 23.55| 399| 0| 33| 0| 1744| 0| 37| 1692245| 659975| 1765103| 0.00| 0.00
> 2  4| 256| 8761.11| 23.18| 376| 0| 32| 0| 1726| 0| 36| 1699382| 418992| 1870804| 0.00| 0.00
> %   | 0.0| -6.8| -1.6| -5.8| 0| -3.0| 0| -1.0| 0| -2.7| +0.4| -36.5| +6.0| 0| 0
> 1  1| 512| 3803.66| 14.20| 267| 0| 30| 0| 1594| 0| 14| 693992| 750078| 721107| 0.00| 0.00
> 2  1| 512| 3838.02| 15.47| 248| 0| 31| 0| 1612| 0| 15| 811709| 773505| 838788| 0.00| 0.00
> %   | 0.0| +0.9| +8.9| -7.1| 0| +3.3| 0| +1.1| 0| +7.1| +17.0| +3.1| +16.3| 0| 0
> 1  2| 512| 8606.11| 19.34| 444| 0| 32| 0| 1709| 0| 29| 1264624| 647652| 1309740| 0.00| 0.00
> 2  2| 512| 8127.80| 18.93| 428| 0| 32| 0| 1726| 0| 28| 1216606| 1179269| 1266260| 0.00| 0.00
> %   | 0.0| -5.6| -2.1| -3.6| 0| 0.0| 0| +1.0| 0| -3.4| -3.8| +82.1| -3.3| 0| 0
> 1

Re: [RFC] virtio: use mandatory barriers for remote processor vdevs

2011-12-18 Thread Amos Kong

On 19/12/11 10:19, Amos Kong wrote:

On 12/12/11 13:12, Rusty Russell wrote:

On Mon, 12 Dec 2011 11:06:53 +0800, Amos Kong wrote:

On 12/12/11 06:27, Benjamin Herrenschmidt wrote:

On Sun, 2011-12-11 at 14:25 +0200, Michael S. Tsirkin wrote:


Forwarding some results by Amos, who run multiple netperf streams in
parallel, from an external box to the guest. TCP_STREAM results were
noisy. This could be due to buffering done by TCP, where packet size
varies even as message size is constant.

TCP_RR results were consistent. In this benchmark, after switching
to mandatory barriers, CPU utilization increased by up to 35% while
throughput went down by up to 14%. the normalized throughput/cpu
regressed consistently, between 7 and 35%

The "fix" applied was simply this:


What machine & processor was this?


pined guest memory to numa node 1


Please try this patch. How much does the branch cost us?

(Compiles, untested).

Thanks,
Rusty.

From: Rusty Russell
Subject: virtio: harsher barriers for virtio-mmio.

We were cheating with our barriers; using the smp ones rather than the
real device ones. That was fine, until virtio-mmio came along, which
could be talking to a real device (a non-SMP CPU).

Unfortunately, just putting back the real barriers (reverting
d57ed95d) causes a performance regression on virtio-pci. In
particular, Amos reports netbench's TCP_RR over virtio_net CPU
utilization increased up to 35% while throughput went down by up to
14%.

By comparison, this branch costs us???

Reference: https://lkml.org/lkml/2011/12/11/22

Signed-off-by: Rusty Russell
---
drivers/lguest/lguest_device.c | 10 ++
drivers/s390/kvm/kvm_virtio.c | 2 +-
drivers/virtio/virtio_mmio.c | 7 ---
drivers/virtio/virtio_pci.c | 4 ++--
drivers/virtio/virtio_ring.c | 34 +-
include/linux/virtio_ring.h | 1 +
tools/virtio/linux/virtio.h | 1 +
tools/virtio/virtio_test.c | 3 ++-
8 files changed, 38 insertions(+), 24 deletions(-)


Hi all,

I tested with the same environment and scenarios.
I tested each scenario three times and computed the average for more
precision.

Thanks, Amos

- compare results ---
Mon Dec 19 09:51:09 2011

1 - avg-old.netperf.exhost_guest.txt
2 - avg-fixed.netperf.exhost_guest.txt

==
TCP_STREAM
sessions| size| throughput| cpu| normalize| #tx-pkts| #rx-pkts| #tx-byts| #rx-byts| #re-trans| #tx-intr| #rx-intr| #io_exit| #irq_inj| #tpkt/#exit| #rpkt/#irq
1  1| 64| 1073.54| 10.50| 102| 0| 31| 0| 1612| 0| 16| 487641| 489753| 504764| 0.00| 0.00
2  1| 64| 1079.44| 10.29| 104| 0| 30| 0| 1594| 0| 17| 487156| 488828| 504411| 0.00| 0.00
%   | 0.0| +0.5| -2.0| +2.0| 0| -3.2| 0| -1.1| 0| +6.2| -0.1| -0.2| -0.1| 0| 0


The format is broken in the webpage, so I've attached the result file.
It's also available here: http://amosk.info/download/rusty-fix-perf.txt
- compare results ---
Mon Dec 19 09:51:09 2011

1 - avg-old.netperf.exhost_guest.txt
2 - avg-fixed.netperf.exhost_guest.txt

==
TCP_STREAM
sessions| size| throughput| cpu| normalize| #tx-pkts| #rx-pkts| #tx-byts| #rx-byts| #re-trans| #tx-intr| #rx-intr| #io_exit| #irq_inj| #tpkt/#exit| #rpkt/#irq
1  1| 64| 1073.54| 10.50| 102| 0| 31| 0| 1612| 0| 16| 487641| 489753| 504764| 0.00| 0.00
2  1| 64| 1079.44| 10.29| 104| 0| 30| 0| 1594| 0| 17| 487156| 488828| 504411| 0.00| 0.00
%   | 0.0| +0.5| -2.0| +2.0| 0| -3.2| 0| -1.1| 0| +6.2| -0.1| -0.2| -0.1| 0| 0
1  2| 64| 2141.12| 15.72| 136| 0| 33| 0| 1744| 0| 34| 873777| 972303| 928926| 0.00| 0.00
2  2| 64| 2140.88| 15.64| 137| 0| 33| 0| 1744| 0| 34| 926588| 942841| 974095| 0.00| 0.00
%   | 0.0| -0.0| -0.5| +0.7| 0| 0.0| 0| 0.0| 0| 0.0| +6.0| -3.0| +4.9| 0| 0
1  4| 64| 4076.80| 19.82| 205| 0| 30| 0| 1577| 0| 67| 1422282| 1166425| 1539219| 0.00| 0.00
2  4| 64| 4094.32| 20.70| 197| 0| 31| 0| 1612| 0| 68| 1704330| 1314077| 1833394| 0.00| 0.00
%   | 0.0| +0.4| +4.4| -3.9| 0| +3.3| 0| +2.2| 0| +1.5| +19.8| +12.7| +19.1| 0| 0
1  1| 256| 2867.48| 13.44| 213| 0| 32| 0| 1726| 0| 14| 666430| 694922| 690730| 0.00| 0.00
2  1| 256| 2874.20| 12.71| 226| 0| 32| 0| 1709| 0| 14| 697960| 740407| 721807| 0.00| 0.00

[PATCH v2] kvm tool: Change kvm->ram_size to real mapped size.

2011-12-18 Thread zanghongyong
From: Hongyong Zang 

If a guest's ram_size exceeds KVM_32BIT_GAP_START, the corresponding kvm tool's
virtual address size should be (ram_size + KVM_32BIT_GAP_SIZE), rather than 
ram_size.

Signed-off-by: Hongyong Zang 
---
 tools/kvm/x86/bios.c |2 +-
 tools/kvm/x86/kvm.c  |   12 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/kvm/x86/bios.c b/tools/kvm/x86/bios.c
index ded0717..06ec708 100644
--- a/tools/kvm/x86/bios.c
+++ b/tools/kvm/x86/bios.c
@@ -93,7 +93,7 @@ static void e820_setup(struct kvm *kvm)
};
	mem_map[i++]	= (struct e820entry) {
		.addr	= 0x100000000ULL,
-		.size	= kvm->ram_size - KVM_32BIT_GAP_START,
+		.size	= kvm->ram_size - 0x100000000ULL,
.type   = E820_RAM,
};
}
diff --git a/tools/kvm/x86/kvm.c b/tools/kvm/x86/kvm.c
index d2fbbe2..11a726e 100644
--- a/tools/kvm/x86/kvm.c
+++ b/tools/kvm/x86/kvm.c
@@ -111,7 +111,7 @@ void kvm__init_ram(struct kvm *kvm)
/* Second RAM range from 4GB to the end of RAM: */
 
	phys_start = 0x100000000ULL;
-   phys_size  = kvm->ram_size - phys_size;
+   phys_size  = kvm->ram_size - phys_start;
host_mem   = kvm->ram_start + phys_start;
 
kvm__register_mem(kvm, phys_start, phys_size, host_mem);
@@ -156,12 +156,12 @@ void kvm__arch_init(struct kvm *kvm, const char *kvm_dev, const char *hugetlbfs_path
	if (ret < 0)
		die_perror("KVM_CREATE_PIT2 ioctl");

-	kvm->ram_size = ram_size;
-
-	if (kvm->ram_size < KVM_32BIT_GAP_START) {
-		kvm->ram_start = mmap_anon_or_hugetlbfs(hugetlbfs_path, ram_size);
+	if (ram_size < KVM_32BIT_GAP_START) {
+		kvm->ram_size = ram_size;
+		kvm->ram_start = mmap_anon_or_hugetlbfs(hugetlbfs_path, kvm->ram_size);
	} else {
-		kvm->ram_start = mmap_anon_or_hugetlbfs(hugetlbfs_path, ram_size + KVM_32BIT_GAP_SIZE);
+		kvm->ram_size = ram_size + KVM_32BIT_GAP_SIZE;
+		kvm->ram_start = mmap_anon_or_hugetlbfs(hugetlbfs_path, kvm->ram_size);
		if (kvm->ram_start != MAP_FAILED)
			/*
			 * We mprotect the gap (see kvm__init_ram() for details) PROT_NONE so that
-- 
1.7.1



Re: [RFC] virtio: use mandatory barriers for remote processor vdevs

2011-12-18 Thread Rusty Russell
On Tue, 13 Dec 2011 07:56:36 +0800, Amos Kong  wrote:
> On 12/12/2011 01:12 PM, Rusty Russell wrote:
> > On Mon, 12 Dec 2011 11:06:53 +0800, Amos Kong  wrote:
> >> On 12/12/11 06:27, Benjamin Herrenschmidt wrote:
> >>> On Sun, 2011-12-11 at 14:25 +0200, Michael S. Tsirkin wrote:
> >>>
>  Forwarding some results by Amos, who run multiple netperf streams in
>  parallel, from an external box to the guest.  TCP_STREAM results were
>  noisy.  This could be due to buffering done by TCP, where packet size
>  varies even as message size is constant.
> 
>  TCP_RR results were consistent. In this benchmark, after switching
>  to mandatory barriers, CPU utilization increased by up to 35% while
>  throughput went down by up to 14%. the normalized throughput/cpu
>  regressed consistently, between 7 and 35%
> 
>  The "fix" applied was simply this:
> >>>
> >>> What machine & processor was this?
> >>
> >> pined guest memory to numa node 1
> > 
> > Please try this patch.  How much does the branch cost us?
> 
> Ok, I will provide the result later.

Any news?  We're cutting it very fine.  I've CC'd Linus so he knows this
is coming...

From: Rusty Russell 
Subject: virtio: harsher barriers for virtio-mmio.

We were cheating with our barriers; using the smp ones rather than the
real device ones.  That was fine, until virtio-mmio came along, which
could be talking to a real device (a non-SMP CPU).

Unfortunately, just putting back the real barriers (reverting
d57ed95d) causes a performance regression on virtio-pci.  In
particular, Amos reports netbench's TCP_RR over virtio_net CPU
utilization increased up to 35% while throughput went down by up to
14%.

By comparison, this branch costs us???

Reference: https://lkml.org/lkml/2011/12/11/22

Signed-off-by: Rusty Russell 
---
 drivers/lguest/lguest_device.c |   10 ++
 drivers/s390/kvm/kvm_virtio.c  |2 +-
 drivers/virtio/virtio_mmio.c   |7 ---
 drivers/virtio/virtio_pci.c|4 ++--
 drivers/virtio/virtio_ring.c   |   34 +-
 include/linux/virtio_ring.h|1 +
 tools/virtio/linux/virtio.h|1 +
 tools/virtio/virtio_test.c |3 ++-
 8 files changed, 38 insertions(+), 24 deletions(-)

diff --git a/drivers/lguest/lguest_device.c b/drivers/lguest/lguest_device.c
--- a/drivers/lguest/lguest_device.c
+++ b/drivers/lguest/lguest_device.c
@@ -291,11 +291,13 @@ static struct virtqueue *lg_find_vq(stru
}
 
/*
-* OK, tell virtio_ring.c to set up a virtqueue now we know its size
-* and we've got a pointer to its pages.
+* OK, tell virtio_ring.c to set up a virtqueue now we know its size
+* and we've got a pointer to its pages.  Note that we set weak_barriers
+* to 'true': the host is just a(nother) SMP CPU, so we only need inter-cpu
+* barriers.
 */
-   vq = vring_new_virtqueue(lvq->config.num, LGUEST_VRING_ALIGN,
-vdev, lvq->pages, lg_notify, callback, name);
+   vq = vring_new_virtqueue(lvq->config.num, LGUEST_VRING_ALIGN, vdev,
+true, lvq->pages, lg_notify, callback, name);
if (!vq) {
err = -ENOMEM;
goto unmap;
diff --git a/drivers/s390/kvm/kvm_virtio.c b/drivers/s390/kvm/kvm_virtio.c
--- a/drivers/s390/kvm/kvm_virtio.c
+++ b/drivers/s390/kvm/kvm_virtio.c
@@ -198,7 +198,7 @@ static struct virtqueue *kvm_find_vq(str
goto out;
 
vq = vring_new_virtqueue(config->num, KVM_S390_VIRTIO_RING_ALIGN,
-vdev, (void *) config->address,
+vdev, true, (void *) config->address,
 kvm_notify, callback, name);
if (!vq) {
err = -ENOMEM;
diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -309,9 +309,10 @@ static struct virtqueue *vm_setup_vq(str
writel(virt_to_phys(info->queue) >> PAGE_SHIFT,
vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
 
-   /* Create the vring */
-   vq = vring_new_virtqueue(info->num, VIRTIO_MMIO_VRING_ALIGN,
-vdev, info->queue, vm_notify, callback, name);
+   /* Create the vring: no weak barriers, the other side could
+* be an independent "device". */
+   vq = vring_new_virtqueue(info->num, VIRTIO_MMIO_VRING_ALIGN, vdev,
+false, info->queue, vm_notify, callback, name);
if (!vq) {
err = -ENOMEM;
goto error_new_virtqueue;
diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c
--- a/drivers/virtio/virtio_pci.c
+++ b/drivers/virtio/virtio_pci.c
@@ -414,8 +414,8 @@ static struct virtqueue *setup_vq(struct
  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 
/* create the vring */
-  
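The patch above threads a weak_barriers flag through vring_new_virtqueue() so the barrier strength is picked at run time. The branch it introduces can be modeled in a few lines (a userspace sketch with made-up names; the barrier macros are stand-ins for the kernel's mb()/smp_mb(), and the counters exist only so the selection logic can be checked):

```c
#include <assert.h>
#include <stdbool.h>

/* stand-ins for the kernel primitives; both compile to a full fence here,
 * the point of the sketch is only which one gets selected */
#define mb()		__sync_synchronize()
#define smp_mb()	__sync_synchronize()

struct vq_model {
	bool weak_barriers;	/* true: host is just another CPU (virtio-pci) */
	int mb_calls;		/* instrumentation for the sketch */
	int smp_mb_calls;
};

/* mirrors the runtime branch the patch adds to virtio_ring.c */
static void virtio_mb(struct vq_model *vq)
{
	if (vq->weak_barriers) {
		smp_mb();	/* SMP-only ordering suffices between CPUs */
		vq->smp_mb_calls++;
	} else {
		mb();		/* a real device (virtio-mmio) needs a full fence */
		vq->mb_calls++;
	}
}
```

The open question in the thread is exactly what this per-call conditional costs on the virtio-pci fast path compared with compiling the weak barrier in unconditionally.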

Re: [Qemu-devel] [PATCH] virtio-serial: Allow one MSI-X vector per virtqueue

2011-12-18 Thread Zang Hongyong

On Fri, 2011/12/16 17:39, Amit Shah wrote:

On (Fri) 16 Dec 2011 [09:14:26], zanghongy...@huawei.com wrote:

From: Hongyong Zang

In pci_enable_msix(), the guest's virtio-serial driver tries to set up MSI-X
with one vector per queue. But it fails, and eventually all virtio-serial
ports share one MSI-X vector. Because every virtio-serial port has *two*
virtqueues, virtio-serial needs (port+1)*2 vectors rather than (port+1).

Ouch, good catch.

One comment below:


This patch allows every virtqueue to have its own MSI-X vector.
(When the MSI-X vectors needed are more than MSIX_MAX_ENTRIES defined in
qemu: msix.c, all the queues still share one MSI-X vector as before.)

Signed-off-by: Hongyong Zang
---
  hw/virtio-pci.c |5 -
  1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index 77b75bc..2c9c6fb 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -718,8 +718,11 @@ static int virtio_serial_init_pci(PCIDevice *pci_dev)
  return -1;
  }
  vdev->nvectors = proxy->nvectors == DEV_NVECTORS_UNSPECIFIED
-                     ? proxy->serial.max_virtserial_ports + 1
+                     ? (proxy->serial.max_virtserial_ports + 1) * 2
                      : proxy->nvectors;
+    /* msix.c: #define MSIX_MAX_ENTRIES 32 */
+    if (vdev->nvectors > 32)
+        vdev->nvectors = 32;

This change isn't needed: if the proxy->nvectors value exceeds the max
allowed, virtio_init_pci() will end up using a shared vector instead
of separate ones.

Thanks,

Amit

.


Hi Amit,
If nvectors exceeds the max, msix_init() will return -EINVAL in QEMU,
and the front-end driver in the guest will use regular interrupts instead of
MSI-X.


Hongyong
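For reference, the vector-count arithmetic being debated is just this (a sketch; serial_nvectors is a made-up helper name, and the "one extra virtqueue pair" is the control pair described in the thread):

```c
#include <assert.h>

/*
 * Each virtio-serial port has an (rx, tx) virtqueue pair, and there is one
 * additional pair for the control queues, so assigning one MSI-X vector per
 * virtqueue needs (max_ports + 1) * 2 vectors, not (max_ports + 1).
 */
static int serial_nvectors(int max_virtserial_ports)
{
	return (max_virtserial_ports + 1) * 2;
}
```

With the default 31 ports this asks for 64 vectors, which is why the count collides with QEMU's 32-entry MSI-X table in the first place.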



Re: [PATCH v5 11/13] ARM: KVM: Support SMP hosts

2011-12-18 Thread Antonios Motakis

On 12/11/2011 11:25 AM, Christoffer Dall wrote:

WARNING: This code is in development and guests do not fully boot on SMP
hosts yet.

Hello,

What would still be needed to fully boot on SMP hosts? For example, are there
identified critical sections and structures that need to be worked on,
or are there parts that still need to be reviewed to find those? Or is
it only a matter of fixing up the existing locking/syncing introduced in
this patch?


I'd like to throw some cycles on this, so I'll start by looking in this 
patch again more carefully (and guest SMP as well).


Antonios



Re: [PATCH] kvm tools: Use assert() helper to check a variable value

2011-12-18 Thread Pekka Enberg

On Mon, 19 Dec 2011, Cyrill Gorcunov wrote:

BUILD_BUG_ON is unable to catch errors in expressions which
can't be evaluated at compile time.

Signed-off-by: Cyrill Gorcunov 
---
tools/kvm/x86/bios.c |3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

Index: linux-2.6.git/tools/kvm/x86/bios.c
===
--- linux-2.6.git.orig/tools/kvm/x86/bios.c
+++ linux-2.6.git/tools/kvm/x86/bios.c
@@ -5,6 +5,7 @@
#include "kvm/util.h"

#include 
+#include <assert.h>
#include 

#include "bios/bios-rom.h"
@@ -98,7 +99,7 @@ static void e820_setup(struct kvm *kvm)
};
}

-   BUILD_BUG_ON(i > E820_X_MAX);
+   assert(i <= E820_X_MAX);


We should use BUG_ON() like tools/perf does.


Re: [RFC] virtio: use mandatory barriers for remote processor vdevs

2011-12-18 Thread Amos Kong

On 19/12/11 10:41, Benjamin Herrenschmidt wrote:

On Mon, 2011-12-19 at 10:19 +0800, Amos Kong wrote:


I tested with the same environment and scenarios.
I tested each scenario three times and computed the average for more
precision.

Thanks, Amos

- compare results ---
Mon Dec 19 09:51:09 2011

1 - avg-old.netperf.exhost_guest.txt
2 - avg-fixed.netperf.exhost_guest.txt


The output is word wrapped and generally unreadable. Any chance you can
provide us with a summary of the outcome ?

Cheers,
Ben.


Hi Ben,

The change in TCP_RR throughput is very small.
External host -> guest: some TCP_STREAM and TCP_MAERTS throughput figures
dropped a little.
Local host -> guest: some TCP_STREAM and TCP_MAERTS throughput figures
increased a little.



About compare result format:
---

1 - avg-old.netperf.exhost_guest.txt


average result (tested 3 times) file of test 1

2 - avg-fixed.netperf.exhost_guest.txt


average result file of test 2


==
TCP_STREAM


^^^ protocol


  sessions| size| throughput| cpu| normalize| #tx-pkts| #rx-pkts| #tx-byts| #rx-byts| #re-trans| #tx-intr| #rx-intr| #io_exit| #irq_inj| #tpkt/#exit| #rpkt/#irq
11|   64|   1073.54| 10.50|   102|  


^^^ average result of old kernel, start netserver in guest, start 
netperf client(s) in external host



21|   64|   1079.44| 10.29|   104|  


^^^ average result of fixed kernel


% |  0.0|  +0.5|  -2.0|  +2.0|  


^^^ augment rate between test1 and test2


sessions: netperf clients number
size: request/response sizes
#rx-pkts: received packets number
#rx-byts: received bytes number
#rx-intr: interrupt number for receive
#io_exit: io exit number
#irq_inj: injected irq number


Thanks, Amos.



Re: [Qemu-devel] [PATCH] virtio-serial: Allow one MSI-X vector per virtqueue

2011-12-18 Thread Amit Shah
On (Mon) 19 Dec 2011 [14:09:43], Zang Hongyong wrote:
> On Fri, 2011/12/16 17:39, Amit Shah wrote:
> >On (Fri) 16 Dec 2011 [09:14:26], zanghongy...@huawei.com wrote:
> >>From: Hongyong Zang
> >>
> >>In pci_enable_msix(), the guest's virtio-serial driver tries to set msi-x
> >>with one vector per queue. But it fails and eventually all virtio-serial
> >>ports share one MSI-X vector. Because every virtio-serial port has *two*
> >>virtqueues, virtio-serial needs (port+1)*2 vectors rather than (port+1).
> >Ouch, good catch.
> >
> >One comment below:
> >
> >>This patch allows every virtqueue to have its own MSI-X vector.
> >>(When the MSI-X vectors needed are more than MSIX_MAX_ENTRIES defined in
> >>qemu: msix.c, all the queues still share one MSI-X vector as before.)
> >>
> >>Signed-off-by: Hongyong Zang
> >>---
> >>  hw/virtio-pci.c |5 -
> >>  1 files changed, 4 insertions(+), 1 deletions(-)
> >>
> >>diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> >>index 77b75bc..2c9c6fb 100644
> >>--- a/hw/virtio-pci.c
> >>+++ b/hw/virtio-pci.c
> >>@@ -718,8 +718,11 @@ static int virtio_serial_init_pci(PCIDevice *pci_dev)
> >>  return -1;
> >>  }
> >>  vdev->nvectors = proxy->nvectors == DEV_NVECTORS_UNSPECIFIED
> >>-                     ? proxy->serial.max_virtserial_ports + 1
> >>+                     ? (proxy->serial.max_virtserial_ports + 1) * 2
> >>                      : proxy->nvectors;
> >>+    /* msix.c: #define MSIX_MAX_ENTRIES 32 */
> >>+    if (vdev->nvectors > 32)
> >>+        vdev->nvectors = 32;
> >This change isn't needed: if the proxy->nvectors value exceeds the max
> >allowed, virtio_init_pci() will end up using a shared vector instead
> >of separate ones.
> >
> Hi Amit,
> If the nvectors exceeds the max, msix_init() will return -EINVAL in QEMU,
> and the front-end driver in the guest will use regular interrupts instead
> of MSI-X.

In that case, I believe msix_init() should be changed to attempt to
share interrupts instead of drivers doing this by themselves.

Amit


Re: [PATCH] kvm tools: Use assert() helper to check a variable value

2011-12-18 Thread Cyrill Gorcunov
On Mon, Dec 19, 2011 at 09:13:28AM +0200, Pekka Enberg wrote:
> >
> >-BUILD_BUG_ON(i > E820_X_MAX);
> >+assert(i <= E820_X_MAX);
> 
> We should use BUG_ON() like tools/perf does.
> 

We don't have it yet. So I'll introduce this helper later,
but note that we will have to convert _all_ assert() calls then,
so it's better to do in a separate patch. Meanwhile such a fix
is better than a bug ;)

Cyrill
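The limitation under discussion can be seen with the classic negative-array-size definition of BUILD_BUG_ON (a sketch only: the macro and the entry count below are illustrative stand-ins, not kvm tool code):

```c
#include <assert.h>

/* classic definition: breaks the build via a negative array size,
 * but only when cond is an integer constant expression */
#define BUILD_BUG_ON(cond)	((void)sizeof(char[1 - 2 * !!(cond)]))

#define E820_X_MAX	128

/* models e820_setup(): the entry count depends on run-time state */
static int e820_nr_entries(int has_mem_above_4g)
{
	int i = 3 + !!has_mem_above_4g;

	BUILD_BUG_ON(E820_X_MAX < 4);	/* fine: constant expression */
	/*
	 * BUILD_BUG_ON(i > E820_X_MAX) cannot work here: with GCC the
	 * sizeof operand silently becomes a variable-length array instead
	 * of failing the build, because i is only known at run time.
	 */
	assert(i <= E820_X_MAX);	/* hence the run-time check */
	return i;
}
```

That is the whole argument: the original BUILD_BUG_ON(i > E820_X_MAX) never checked anything, while assert() (or a future BUG_ON() helper) actually fires if the table overflows.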