On Mon, 22 Jun 2020 18:47:57 +0200
Paolo Bonzini wrote:
> On 22/06/20 18:08, Igor Mammedov wrote:
> > Guest fails to online hotplugged CPU with error
> > smpboot: do_boot_cpu failed(-1) to wakeup CPU#4
> >
> > It's caused by the fact that kvm_apic
lugged CPU.
Fix the issue by forcing an unconditional update from kvm_apic_set_state(),
like it used to be.
1)
Fixes: 4abaffce4d25a ("KVM: LAPIC: Recalculate apic map in batch")
Signed-off-by: Igor Mammedov
---
PS:
it's an alternative to the full revert of [1] I've posted earlier
https://www.mail-arch
rts commit 4abaffce4d25aa41392d2e81835592726d757857.
Signed-off-by: Igor Mammedov
---
arch/x86/include/asm/kvm_host.h | 1 -
arch/x86/kvm/lapic.h | 1 -
arch/x86/kvm/lapic.c | 46 +++--
arch/x86/kvm/x86.c | 1 -
4 files changed, 10 insertions(+), 39 deletions(-)
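To illustrate the idea (a sketch only, not the hunk actually posted; kvm_recalculate_apic_map() is the helper introduced by the commit named in the Fixes tag, and the surrounding context is simplified):

@@ arch/x86/kvm/lapic.c: kvm_apic_set_state() @@
 	/* ... restore LAPIC registers from the userspace-provided state ... */
+
+	/*
+	 * Rebuild the APIC map right away instead of relying on the
+	 * deferred "dirty map" recalculation, so a hotplugged vCPU is
+	 * visible when INIT/SIPI is delivered to it.
+	 */
+	kvm_recalculate_apic_map(vcpu->kvm);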
On Fri, 19 Jun 2020 16:10:43 +0200
Paolo Bonzini wrote:
> On 19/06/20 14:36, Igor Mammedov wrote:
> > qemu-kvm -m 2G -smp 4,maxcpus=8 -monitor stdio
> > (qemu) device_add qemu64-x86_64-cpu,socket-id=4,core-id=0,thread-id=0
> >
> > in guest fails with:
> >
>
On Wed, 26 Feb 2020 10:41:02 +0800
Wanpeng Li wrote:
> From: Wanpeng Li
>
> In the vCPU reset and set APIC_BASE MSR path, the apic map will be
> recalculated
> several times, each time it will consume 10+ us observed by ftrace in my
> non-overcommit environment since the expensive memory all
ep is turned off.
Last Breaking-Event-Address:
[<00dbaf60>] __memset+0xc/0xa0
due to ms->dirty_bitmap being NULL, which might crash the host.
Make sure that ms->dirty_bitmap is set before using it or
return -EINVAL otherwise.
Fixes: afdad61615cc ("KVM: s390: Fix storage attributes migration with memory slots")
Signed-off-by: Igor Mammedov
---
Cc: stable@vger.kernel.org # v4.19+
v2:
- drop WARN()
arch/s390/kvm/kvm-s390.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
i
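The guard itself is tiny; a sketch matching the two-insertion diffstat above (the function and field names come from the thread, the exact hunk may differ):

	/* in kvm_s390_vm_start_migration(): dirty tracking must be enabled
	 * on the memslot before its dirty_bitmap can be walked */
	if (!ms->dirty_bitmap)
		return -EINVAL;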
On Mon, 9 Sep 2019 17:47:37 +0200
David Hildenbrand wrote:
> On 09.09.19 16:55, Igor Mammedov wrote:
> > If userspace doesn't set KVM_MEM_LOG_DIRTY_PAGES on memslot before calling
> > kvm_s390_vm_start_migration(), kernel will oops with:
> >
>
> We usually
ep is turned off.
Last Breaking-Event-Address:
[<00dbaf60>] __memset+0xc/0xa0
due to ms->dirty_bitmap being NULL, which might crash the host.
Make sure that ms->dirty_bitmap is set before using it or
print a warning and return -EINVAL otherwise.
Signed-off-by: Igor Mammedov
On Mon, 28 Jan 2019 06:52:52 -0600
Josh Poimboeuf wrote:
> On Mon, Jan 28, 2019 at 11:13:04AM +0100, Igor Mammedov wrote:
> > On Fri, 25 Jan 2019 11:02:03 -0600
> > Josh Poimboeuf wrote:
> >
> > > On Fri, Jan 25, 2019 at 10:36:57AM -0600, Josh Poimboeuf wrote:
On Fri, 25 Jan 2019 11:02:03 -0600
Josh Poimboeuf wrote:
> On Fri, Jan 25, 2019 at 10:36:57AM -0600, Josh Poimboeuf wrote:
> > How about this patch? It's just a revert of 73d5e2b47264 and
> > bc2d8d262cba, plus the 1-line vmx_vm_init() change. If it looks ok to
> > you, I can clean it up and su
In case the guest is booted with one CPU present and then later
a sibling CPU is hotplugged [1], it stays offline since SMT
is disabled.
Bisects to
73d5e2b47264 ("cpu/hotplug: detect SMT disabled by BIOS")
which used __max_smt_threads to decide whether SMT is disabled, and in
case [1] only the primary CPU thread is
On Fri, 1 Sep 2017 17:58:55 +0800
gengdongjiu wrote:
> Hi Igor,
>
> On 2017/8/29 18:20, Igor Mammedov wrote:
> > On Fri, 18 Aug 2017 22:23:43 +0800
> > Dongjiu Geng wrote:
[...]
> >
> >> +void ghes_build_acpi(GArray
On Fri, 18 Aug 2017 22:23:43 +0800
Dongjiu Geng wrote:
> This implements APEI GHES Table by passing the error CPER info
> to the guest via a fw_cfg_blob. After a CPER info is recorded, an
> SEA(Synchronous External Abort)/SEI(SError Interrupt) exception
> will be injected into the guest OS.
it's
On Mon, 31 Jul 2017 19:58:30 +0200
Gerald Schaefer wrote:
> On Mon, 31 Jul 2017 17:53:50 +0200
> Michal Hocko wrote:
>
> > On Mon 31-07-17 17:04:59, Gerald Schaefer wrote:
> > > On Mon, 31 Jul 2017 14:53:19 +0200
> > > Michal Hocko wrote:
> > >
> > > > On Mon 31-07-17 14:35:21, Gerald Sch
On Wed, 24 May 2017 14:24:11 +0200
Michal Hocko wrote:
[...]
> index facc20a3f962..ec7d6ae01c96 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2246,8 +2246,11 @@
[...]
> + movable. This means that th
On Tue, 11 Apr 2017 11:23:07 +0200
Michal Hocko wrote:
> On Tue 11-04-17 08:38:34, Igor Mammedov wrote:
> > for issue2:
> > -enable-kvm -m 2G,slots=4,maxmem=4G -smp 4 -numa node -numa node \
> > -drive if=virtio,file=disk.img -kernel bzImage -append 'root=/dev/vda1
On Tue, 11 Apr 2017 10:41:42 +0200
Michal Hocko wrote:
> On Tue 11-04-17 10:01:52, Igor Mammedov wrote:
> > On Mon, 10 Apr 2017 16:56:39 +0200
> > Michal Hocko wrote:
> [...]
> > > > #echo online_kernel > memory32/state
> > > > write error:
On Mon, 10 Apr 2017 16:56:39 +0200
Michal Hocko wrote:
> On Mon 10-04-17 16:27:49, Igor Mammedov wrote:
> [...]
> > Hi Michal,
> >
> > I've given series some dumb testing, see below for unexpected changes I've
> > noticed.
> >
> > Using the
On Mon, 10 Apr 2017 18:09:41 +0200
Michal Hocko wrote:
> On Mon 10-04-17 16:27:49, Igor Mammedov wrote:
> [...]
> > -object memory-backend-ram,id=mem1,size=256M -object
> > memory-backend-ram,id=mem0,size=256M \
> > -device pc-dimm,id=dimm1,memdev=mem1,slot=1,node=0
On Mon, 10 Apr 2017 13:03:42 +0200
Michal Hocko wrote:
> Hi,
> The last version of this series has been posted here [1]. It has seen
> some more serious testing (thanks to Reza Arbab) and fixes for the found
> issues. I have also decided to drop patch 1 [2] because it turned out to
> be more comp
On Wed, 5 Apr 2017 10:14:00 +0200
Michal Hocko wrote:
> On Fri 31-03-17 09:39:54, Michal Hocko wrote:
> > Fixed screw ups during the initial patch split up as per Hillf
> > ---
> > From 8be6c5e47de66210e47710c80e72e8abd899017b Mon Sep 17 00:00:00 2001
> > From: Michal Hocko
> > Date: Wed, 29 Mar
On Mon, 3 Apr 2017 13:55:46 +0200
Michal Hocko wrote:
> On Thu 30-03-17 13:54:48, Michal Hocko wrote:
> [...]
> > Any thoughts, complains, suggestions?
>
> Anyting? I would really appreciate a feedback from IBM and Futjitsu guys
> who have shaped this code last few years. Also Igor and Vitaly
On Thu, 16 Mar 2017 20:01:25 +0100
Andrea Arcangeli wrote:
[...]
> If we can make zone overlap work with a 100% overlap across the whole
> node that would be a fine alternative, the zoneinfo.py output will
> look weird, but if that's the only downside it's no big deal. With
> sticky movable pageb
On Mon, 13 Mar 2017 13:28:25 +0100
Michal Hocko wrote:
> On Mon 13-03-17 11:55:54, Igor Mammedov wrote:
> > On Thu, 9 Mar 2017 13:54:00 +0100
> > Michal Hocko wrote:
> >
> > [...]
> > > > It's major regression if you remove auto online in kerne
On Mon, 13 Mar 2017 11:43:02 +0100
Michal Hocko wrote:
> On Mon 13-03-17 11:31:10, Igor Mammedov wrote:
> > On Fri, 10 Mar 2017 14:58:07 +0100
> [...]
> > > [0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x-0x0009]
> > > [0.00] ACPI: SRAT
On Thu, 9 Mar 2017 13:54:00 +0100
Michal Hocko wrote:
[...]
> > It's major regression if you remove auto online in kernels that
> > run on top of x86 kvm/vmware hypervisors, making API cleanups
> > while breaking useful functionality doesn't make sense.
> >
> > I would ACK config option removal
last blocks. More below.
>
> On Thu 09-03-17 13:54:00, Michal Hocko wrote:
> > On Tue 07-03-17 13:40:04, Igor Mammedov wrote:
> > > On Mon, 6 Mar 2017 15:54:17 +0100
> > > Michal Hocko wrote:
> > >
> > > > On Fri 03-03-17 18:34:22, Igor Mammedov w
On Mon, 6 Mar 2017 15:54:17 +0100
Michal Hocko wrote:
> On Fri 03-03-17 18:34:22, Igor Mammedov wrote:
> > On Fri, 3 Mar 2017 09:27:23 +0100
> > Michal Hocko wrote:
> >
> > > On Thu 02-03-17 18:03:15, Igor Mammedov wrote:
> > > > On Thu, 2 Ma
On Fri, 3 Mar 2017 09:27:23 +0100
Michal Hocko wrote:
> On Thu 02-03-17 18:03:15, Igor Mammedov wrote:
> > On Thu, 2 Mar 2017 15:28:16 +0100
> > Michal Hocko wrote:
> >
> > > On Thu 02-03-17 14:53:48, Igor Mammedov wrote:
> > > [...]
> > >
On Thu, 2 Mar 2017 15:28:16 +0100
Michal Hocko wrote:
> On Thu 02-03-17 14:53:48, Igor Mammedov wrote:
> [...]
> > When trying to support memory unplug on guest side in RHEL7,
> > experience shows otherwise. Simplistic udev rule which onlines
> > added block doesn
On Mon 27-02-17 16:43:04, Michal Hocko wrote:
> On Mon 27-02-17 12:25:10, Heiko Carstens wrote:
> > On Mon, Feb 27, 2017 at 11:02:09AM +0100, Vitaly Kuznetsov wrote:
> > > A couple of other thoughts:
> > > 1) Having all newly added memory online ASAP is probably what people
> > > want for all vir
On Tue, 18 Oct 2016 16:02:07 +0200
Mike Galbraith wrote:
> On Tue, 2016-10-18 at 15:40 +0200, Igor Mammedov wrote:
> > kernel crashes at runtime due to a null pointer dereference at
> > select_idle_sibling()
> > -> select_idle_cpu()
> > ...
>
kernel crashes at runtime due to a null pointer dereference at
select_idle_sibling()
-> select_idle_cpu()
...
u64 avg_cost = this_sd->avg_scan_cost;
regression bisects to:
commit 10e2f1acd0106c05229f94c70a344ce3a2c8008b
Author: Peter Zijlstra
sched/core: Rewrite and imp
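A sketch of the kind of guard this calls for: split the quoted initialization so the sched-domain pointer can be checked before it is dereferenced (illustrative only; the exact upstream fix may be shaped differently):

	u64 avg_cost;

	if (!this_sd)		/* LLC sched domain not attached yet, e.g. early in boot */
		return -1;
	avg_cost = this_sd->avg_scan_cost;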
Fixes crash at boot for me.
Small nit wrt subj
s/def/deref/
laim only 14e4:4365 PCI Dell card with
> SoftMAC BCM43142")
> Reported-by: Igor Mammedov
> Signed-off-by: Rafał Miłecki
> Cc: Stable [4.6+]
Tested-by: Igor Mammedov
> ---
> drivers/bcma/host_pci.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/bcma/h
On Fri, 22 Apr 2016 11:25:38 +0200
Greg Kurz wrote:
> Hi Radim !
>
> On Thu, 21 Apr 2016 19:36:11 +0200
> Radim Krčmář wrote:
>
> > 2016-04-21 18:45+0200, Greg Kurz:
> > > On Thu, 21 Apr 2016 18:00:19 +0200
> > > Radim Krčmář wrote:
> > >> 2016-04-21 16:20+0200, Greg Kurz:
[...]
> >
On Mon, 04 Jan 2016 11:47:12 +0100
Vitaly Kuznetsov wrote:
> Andrew Morton writes:
>
> > On Tue, 22 Dec 2015 17:32:30 +0100 Vitaly Kuznetsov
> > wrote:
> >
> >> Currently, all newly added memory blocks remain in 'offline' state unless
> >> someone onlines them, some linux distributions carr
> Cc: Andrew Morton
> Cc: Tang Chen
> Cc: Naoya Horiguchi
> Cc: Xishi Qiu
> Cc: Sheng Yong
> Cc: David Rientjes
> Cc: Zhu Guihua
> Cc: Dan Williams
> Cc: David Vrabel
> Cc: Igor Mammedov
> Signed-off-by: Vitaly Kuznetsov
> ---
> mm/memory_hotplug.c | 17
Commit-ID: ec941c5ffede4d788b9fc008f9eeca75b9e964f5
Gitweb: http://git.kernel.org/tip/ec941c5ffede4d788b9fc008f9eeca75b9e964f5
Author: Igor Mammedov
AuthorDate: Fri, 4 Dec 2015 14:07:06 +0100
Committer: Ingo Molnar
CommitDate: Sun, 6 Dec 2015 12:46:31 +0100
x86/mm/64: Enable SWIOTLB
Commit-ID: 8dd3303001976aa8583bf20f6b93590c74114308
Gitweb: http://git.kernel.org/tip/8dd3303001976aa8583bf20f6b93590c74114308
Author: Igor Mammedov
AuthorDate: Fri, 4 Dec 2015 14:07:05 +0100
Committer: Ingo Molnar
CommitDate: Sun, 6 Dec 2015 12:46:31 +0100
x86/mm: Introduce
ml.org/lkml/2015/12/4/151
ref to v1:
https://lkml.org/lkml/2015/11/30/594
Igor Mammedov (2):
x86: introduce max_possible_pfn
x86_64: enable SWIOTLB if system has SRAT memory regions above
MAX_DMA32_PFN
arch/x86/kernel/pci-swiotlb.c | 2 +-
arch/x86/kernel/setup.c | 2 ++
arch/x86/mm
table
if any present.
Signed-off-by: Igor Mammedov
---
v3:
- make 'max_possible_pfn' 64-bit
- simplify condition to oneliner as suggested by Ingo
---
arch/x86/kernel/setup.c | 2 ++
arch/x86/mm/srat.c | 2 ++
include/linux/bootmem.h | 4
mm/bootmem.c | 1 +
f there is hotpluggable memory
regions beyond MAX_DMA32_PFN.
It fixes KVM guests when they use emulated devices
(reproduces with ata_piix, e1000 and usb devices,
RHBZ: 1275941, 1275977, 1271527)
It also fixes HyperV and VMware guests with emulated devices,
which are affected by this issue as well.
Signed-of
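A sketch of the two pieces the series describes (identifiers taken from the patch titles and diffstat; the exact hunks may differ): record the highest PFN the system may ever have while parsing SRAT, then key SWIOTLB setup off that value rather than the RAM currently present.

/* arch/x86/mm/srat.c (sketch): account hotpluggable ranges too */
max_possible_pfn = max(max_possible_pfn, PFN_UP(end));

/* arch/x86/kernel/pci-swiotlb.c (sketch): bounce buffers may be needed
 * once memory above 4G is hot-added, even if none is present at boot */
if (max_possible_pfn > MAX_DMA32_PFN)
	swiotlb = 1;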
On Fri, 4 Dec 2015 12:49:49 +0100
Ingo Molnar wrote:
>
> * Igor Mammedov wrote:
>
> > when memory hotplug enabled system is booted with less
> > than 4GB of RAM and then later more RAM is hotplugged
> > 32-bit devices stop functioning with following error:
> >
SRAT table
if any present.
Signed-off-by: Igor Mammedov
---
arch/x86/kernel/setup.c | 2 ++
include/linux/bootmem.h | 4
mm/bootmem.c | 1 +
mm/nobootmem.c | 1 +
4 files changed, 8 insertions(+)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 29
the HyperV, VMWare with emulated devices
which are affected by this issue as well.
ref to v1:
https://lkml.org/lkml/2015/11/30/594
Igor Mammedov (2):
x86: introduce max_possible_pfn
x86_64: enable SWIOTLB if system has SRAT memory regions above
MAX_DMA32_PFN
arch/x86/kernel/pci-s
On Fri, 4 Dec 2015 09:20:50 +0100
Ingo Molnar wrote:
>
> * Igor Mammedov wrote:
>
> > diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
> > index 94c18eb..53d7951 100644
> > --- a/arch/x86/include/asm/acpi.h
> > +++ b/arch/x86/includ
e RAM less than 4GB and do not use
memory hotplug but still have hotplug regions in SRAT
(i.e. broken BIOS that can't disable mem hotplug)
can disable memory hotplug with 'acpi_no_memhotplug = 1'
to avoid automatic SWIOTLB initialization.
Tested on QEMU/KVM and HyperV.
Sig
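For illustration, roughly how such a boot-time opt-out is wired up (a sketch of the usual __setup() pattern; only the 'acpi_no_memhotplug' parameter name comes from the text above, the handler name here is made up):

static bool acpi_no_memhotplug;

/* honour "acpi_no_memhotplug" on the kernel command line */
static int __init disable_acpi_memory_hotplug(char *str)
{
	acpi_no_memhotplug = true;
	return 1;
}
__setup("acpi_no_memhotplug", disable_acpi_memory_hotplug);

Per the description above, booting with that flag keeps the bogus SRAT hotplug regions from triggering the automatic SWIOTLB initialization.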
oing on at the
> > time? A special graphics driver being loaded? That could cause issues.
> >
>
> It seems that the problem was fixed by Igor, right?
> https://lkml.org/lkml/2014/3/6/257
That might help.
"stuck" CPU14 means that master CPU has given up on the a
t_vmcb() time.
--
* AMD64 Architecture Programmer’s Manual,
Volume 2: System Programming, rev: 3.25
15.19 Paged Real Mode
** Opteron 1216
Signed-off-by: Igor Mammedov
---
arch/x86/kvm/svm.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index fdb8cb6..
On Thu, 30 Jul 2015 09:33:57 +0300
"Michael S. Tsirkin" wrote:
> On Thu, Jul 30, 2015 at 08:26:03AM +0200, Igor Mammedov wrote:
> > On Wed, 29 Jul 2015 18:28:26 +0300
> > "Michael S. Tsirkin" wrote:
> >
> > > On Wed, Jul 29, 2015 at 04:29:23P
On Wed, 29 Jul 2015 18:28:26 +0300
"Michael S. Tsirkin" wrote:
> On Wed, Jul 29, 2015 at 04:29:23PM +0200, Igor Mammedov wrote:
> > although now there is vhost module max_mem_regions option
> > to set custom limit it doesn't help for default setups,
> > sinc
On Wed, 29 Jul 2015 17:43:17 +0300
"Michael S. Tsirkin" wrote:
> On Wed, Jul 29, 2015 at 04:29:22PM +0200, Igor Mammedov wrote:
> > From: "Michael S. Tsirkin"
> >
> > Userspace currently simply tries to give vhost as many regions
> > as it h
s max),
so that default deployments would work out of the box.
Signed-off-by: Igor Mammedov
---
PS:
Users that would want to lock down vhost could still
use the max_mem_regions option to set a lower limit, but
I expect they would be a minority.
---
include/uapi/linux/vhost.h | 2 +-
1 file changed, 1 insertion(
value's left unused, let's make that mean that the current userspace
behaviour (trial and error) is required, just in case we want it back.
Signed-off-by: Michael S. Tsirkin
Signed-off-by: Igor Mammedov
---
drivers/vhost/vhost.c | 7 ++-
include/uapi/linux/vhost.h | 17 ++
Igor Mammedov (1):
vhost: increase default limit of nregions from 64 to 509
Michael S. Tsirkin (1):
vhost: add ioctl to query nregions upper limit
drivers/vhost/vhost.c | 7 ++-
include/uapi/linux/vhost.h | 17 -
2 files changed, 22 insertions(+), 2 deletions(-)
On Thu, 2 Jul 2015 15:08:11 +0200
Igor Mammedov wrote:
> it became possible to use a bigger amount of memory
> slots, which is used by memory hotplug for
> registering hotplugged memory.
> However QEMU crashes if it's used with more than ~60
> pc-dimm devices and vhost-ne
: Dan Carpenter
Suggested-by: Julia Lawall
Signed-off-by: Igor Mammedov
---
drivers/vhost/vhost.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index a9fe859..3702487 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost
On Mon, 13 Jul 2015 23:19:59 +0300
"Michael S. Tsirkin" wrote:
> On Mon, Jul 13, 2015 at 08:15:30PM +0200, Igor Mammedov wrote:
> > while on x86 target vmalloc.h is included indirectly through
> > other headers, it's not included on SPARC.
> > Fix issue
While on the x86 target vmalloc.h is included indirectly through
other headers, it's not included on SPARC.
Fix the issue by including vmalloc.h directly from vhost.c,
like it's done in vhost/net.c.
Signed-off-by: Igor Mammedov
---
drivers/vhost/vhost.c | 1 +
1 file changed, 1 insertion(+)
di
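The fix itself is the one-line include below (a sketch matching the one-insertion diffstat above):

#include <linux/vmalloc.h>	/* not pulled in indirectly on SPARC builds */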
On Thu, 2 Jul 2015 15:08:09 +0200
Igor Mammedov wrote:
> changes since v3:
> * rebased on top of vhost-next branch
> changes since v2:
> * drop cache patches for now as suggested
> * add max_mem_regions module parameter instead of unconditionally
> increasing limit
h older QEMU's which could use large amount of memory
regions.
Signed-off-by: Igor Mammedov
---
drivers/vhost/vhost.c | 20
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 71bb468..6488011 100644
--- a/dr
emory regions.
Allow tweaking the limit via the max_mem_regions module parameter,
with the default value set to 64 slots.
Signed-off-by: Igor Mammedov
---
drivers/vhost/vhost.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 64
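A sketch of the tunable being described (the parameter name comes from the text above; the type, permissions and description string are assumptions):

/* default keeps the old limit of 64 regions; can be raised at load time */
static ushort max_mem_regions __read_mostly = 64;
module_param(max_mem_regions, ushort, 0444);
MODULE_PARM_DESC(max_mem_regions,
	"Maximum number of memory regions in memory map. (default: 64)");

Loading vhost with, say, max_mem_regions=509 then lifts the limit without rebuilding the module.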
2 slots in default QEMU configuration.
Igor Mammedov (2):
vhost: extend memory regions allocation to vmalloc
vhost: add max_mem_regions module parameter
drivers/vhost/vhost.c | 28 ++--
1 file changed, 22 insertions(+), 6 deletions(-)
--
1.8.3.1
emory regions.
Allow tweaking the limit via the max_mem_regions module parameter,
with the default value set to 64 slots.
Signed-off-by: Igor Mammedov
---
drivers/vhost/vhost.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 99
http://www.spinics.net/lists/kvm/msg117654.html
The series allows tweaking vhost's memory regions count limit.
It fixes VM crashing on memory hotplug due to vhost refusing
to accept more than 64 memory regions with max_mem_regions
set to more than 262 slots in the default QEMU configuration.
Igor Mammedov (2
h older QEMU's which could use large amount of memory
regions.
Signed-off-by: Igor Mammedov
---
drivers/vhost/vhost.c | 22 +-
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index f1e07b8..99931a0 100644
--
On Wed, 24 Jun 2015 17:08:56 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 24, 2015 at 04:52:29PM +0200, Igor Mammedov wrote:
> > On Wed, 24 Jun 2015 16:17:46 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > On Wed, Jun 24, 2015 at 04:07:27PM
On Wed, 24 Jun 2015 16:17:46 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 24, 2015 at 04:07:27PM +0200, Igor Mammedov wrote:
> > On Wed, 24 Jun 2015 15:49:27 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > Userspace currently simply tries to
ning on an
> old kernel, you get -1 and you can assume at least 64 slots. Since 0
> value's left unused, let's make that mean that the current userspace
> behaviour (trial and error) is required, just in case we want it back.
>
> Signed-off-by: Michael S. Tsirkin
>
On Fri, 19 Jun 2015 18:33:39 +0200
"Michael S. Tsirkin" wrote:
> On Fri, Jun 19, 2015 at 06:26:27PM +0200, Paolo Bonzini wrote:
> >
> >
> > On 19/06/2015 18:20, Michael S. Tsirkin wrote:
> > > > We could, but I/O is just an example. It can be I/O, a network ring,
> > > > whatever. We cannot a
e:
> > >>
> > >>
> > >> On 18/06/2015 13:41, Michael S. Tsirkin wrote:
> > >>> On Thu, Jun 18, 2015 at 01:39:12PM +0200, Igor Mammedov wrote:
> > >>>> Lets leave decision upto users instead of making them live with
> > &g
On Thu, 18 Jun 2015 13:41:22 +0200
"Michael S. Tsirkin" wrote:
> On Thu, Jun 18, 2015 at 01:39:12PM +0200, Igor Mammedov wrote:
> > Lets leave decision upto users instead of making them live with
> > crashing guests.
>
> Come on, let's fix it in userspace.
I
On Thu, 18 Jun 2015 11:50:22 +0200
"Michael S. Tsirkin" wrote:
> On Thu, Jun 18, 2015 at 11:12:24AM +0200, Igor Mammedov wrote:
> > On Wed, 17 Jun 2015 18:30:02 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > On Wed, Jun 17, 2015 at 06:09:21PM
On Wed, 17 Jun 2015 18:30:02 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 17, 2015 at 06:09:21PM +0200, Igor Mammedov wrote:
> > On Wed, 17 Jun 2015 17:38:40 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > On Wed, Jun 17, 2015 at 05:12:57PM
On Wed, 17 Jun 2015 18:47:18 +0200
Paolo Bonzini wrote:
>
>
> On 17/06/2015 18:41, Michael S. Tsirkin wrote:
> > On Wed, Jun 17, 2015 at 06:38:25PM +0200, Paolo Bonzini wrote:
> >>
> >>
> >> On 17/06/2015 18:34, Michael S. Tsirkin wrote:
> >>> On Wed, Jun 17, 2015 at 06:31:32PM +0200, Paolo Bon
On Wed, 17 Jun 2015 17:38:40 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 17, 2015 at 05:12:57PM +0200, Igor Mammedov wrote:
> > On Wed, 17 Jun 2015 16:32:02 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > On Wed, Jun
On Wed, 17 Jun 2015 16:32:02 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 17, 2015 at 03:20:44PM +0200, Paolo Bonzini wrote:
> >
> >
> > On 17/06/2015 15:13, Michael S. Tsirkin wrote:
> > > > > Considering userspace can be malicious, I guess yes.
> > > > I don't think it's a valid concern in
-
upstream    | 0.3% | -    | 3.5%
this series | 0.2% | 0.5% | 0.7%
where the "non cached" column reflects a thrashing workload
with constant cache misses. More details on timing are in
the respective patches.
Igor Mammedov (6):
vhost: use binary search instead of linear in find_region()
vhost: extend m
that brings down translate_desc() cost to around 210ns
if accessed descriptors are from the same memory region.
Signed-off-by: Igor Mammedov
---
that's what netperf/iperf workloads were during testing.
---
drivers/vhost/vhost.c | 16 +---
drivers/vhost/vhost.h | 1 +
2
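A sketch of the caching idea (the helper below is illustrative, not the posted code; the struct fields are the ones from the vhost uapi):

#include <linux/types.h>
#include <linux/vhost.h>	/* struct vhost_memory_region */

/* true if the guest range [addr, addr+len) lies inside this region */
static bool region_covers(const struct vhost_memory_region *reg,
			  __u64 addr, __u32 len)
{
	return addr >= reg->guest_phys_addr &&
	       addr + len <= reg->guest_phys_addr + reg->memory_size;
}

translate_desc() can probe the region that served the previous descriptor with such a check before falling back to the full search, which is where the ~210ns same-region figure above comes from.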
allowed number
of slots is increased to 509 like it has been done in KVM.
Signed-off-by: Igor Mammedov
---
v2:
move kvfree() to 2/2 where it belongs
---
drivers/vhost/vhost.c | 36 +++-
1 file changed, 27 insertions(+), 9 deletions(-)
diff --git a/drivers/vhost
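A sketch of the allocation fallback this patch is about (the helper name and flags are as I recall them from that era and should be treated as assumptions; __GFP_REPEAT has since been renamed in mainline):

/* try kmalloc first, fall back to vmalloc for large region arrays */
static void *vhost_kvzalloc(unsigned long size)
{
	void *n = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);

	if (!n)
		n = vzalloc(size);
	return n;
}

Either allocation is released with kvfree(), which is why the v2 note above moves the kvfree() change into this patch.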
el
in module vhost-net refuses to accept more than 64
memory regions.
Increase the VHOST_MEMORY_MAX_NREGIONS limit from 64 to 509
to match KVM_USER_MEM_SLOTS, fixing the issue for vhost-net
and current QEMU versions.
Signed-off-by: Igor Mammedov
---
drivers/vhost/vhost.c | 2 +-
1 file changed, 1 inse
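Matching the one-line diffstat, the change amounts to bumping the constant (sketch, not the literal posted hunk):

-#define VHOST_MEMORY_MAX_NREGIONS	64
+#define VHOST_MEMORY_MAX_NREGIONS	509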
branches
with a single remaining length check and execute
the next iov steps only when needed.
It saves a tiny 2% of translate_desc() execution time.
Signed-off-by: Igor Mammedov
---
PS:
I'm not sure if iov_size > 0 is always true; if it's not,
then it's better to drop this patch.
---
with caching enabled for sequential workload
doesn't seem to be affected much vs version without static key switch,
i.e. still the same 0.2% of total time with key(NOPs) consuming
5ms on 5min workload.
Signed-off-by: Igor Mammedov
---
I don't have a test case for thrashing workload thoug
On Wed, 17 Jun 2015 13:51:56 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 17, 2015 at 01:48:03PM +0200, Igor Mammedov wrote:
> > > > So far it's kernel limitation and this patch fixes crashes
> > > > that users see now, with the rest of patches e
On Wed, 17 Jun 2015 12:46:09 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 17, 2015 at 12:37:42PM +0200, Igor Mammedov wrote:
> > On Wed, 17 Jun 2015 12:11:09 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > On Wed, Jun 17, 2015 at 10:54:21AM
On Wed, 17 Jun 2015 12:11:09 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 17, 2015 at 10:54:21AM +0200, Igor Mammedov wrote:
> > On Wed, 17 Jun 2015 09:39:06 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > On Wed, Jun 17, 2015 at 09:28:02AM
On Wed, 17 Jun 2015 09:39:06 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 17, 2015 at 09:28:02AM +0200, Igor Mammedov wrote:
> > On Wed, 17 Jun 2015 08:34:26 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > On Wed, Jun 17, 2015 at 12:00:56AM
On Wed, 17 Jun 2015 08:31:23 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 17, 2015 at 12:19:15AM +0200, Igor Mammedov wrote:
> > On Tue, 16 Jun 2015 23:16:07 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > On Tue, Jun 16, 2015 at 06:33:34PM
On Wed, 17 Jun 2015 08:34:26 +0200
"Michael S. Tsirkin" wrote:
> On Wed, Jun 17, 2015 at 12:00:56AM +0200, Igor Mammedov wrote:
> > On Tue, 16 Jun 2015 23:14:20 +0200
> > "Michael S. Tsirkin" wrote:
> >
> > > On Tue, Jun 16, 2015 at 06:33:37P
On Tue, 16 Jun 2015 23:16:07 +0200
"Michael S. Tsirkin" wrote:
> On Tue, Jun 16, 2015 at 06:33:34PM +0200, Igor Mammedov wrote:
> > Series extends vhost to support up to 509 memory regions,
> > and adds some vhost:translate_desc() performance improvements
> > so it
On Tue, 16 Jun 2015 23:14:20 +0200
"Michael S. Tsirkin" wrote:
> On Tue, Jun 16, 2015 at 06:33:37PM +0200, Igor Mammedov wrote:
> > since commit
> > 1d4e7e3 kvm: x86: increase user memory slots to 509
> >
> > it became possible to use a bigger amount