Re: [PATCH] kvm: lapic: fix broken vcpu hotplug

2020-06-23 Thread Igor Mammedov
On Mon, 22 Jun 2020 18:47:57 +0200 Paolo Bonzini wrote: > On 22/06/20 18:08, Igor Mammedov wrote: > > Guest fails to online hotplugged CPU with error > > smpboot: do_boot_cpu failed(-1) to wakeup CPU#4 > > > > It's caused by the fact that kvm_apic

[PATCH] kvm: lapic: fix broken vcpu hotplug

2020-06-22 Thread Igor Mammedov
lugged CPU. Fix the issue by forcing an unconditional update from kvm_apic_set_state(), like it used to be. Fixes: 4abaffce4d25a ("KVM: LAPIC: Recalculate apic map in batch") Signed-off-by: Igor Mammedov --- PS: it's an alternative to the full revert of [1] I've posted earlier https://www.mail-arch

[PATCH] Revert "KVM: LAPIC: Recalculate apic map in batch"

2020-06-22 Thread Igor Mammedov
rts commit 4abaffce4d25aa41392d2e81835592726d757857. Signed-off-by: Igor Mammedov --- arch/x86/include/asm/kvm_host.h | 1 - arch/x86/kvm/lapic.h| 1 - arch/x86/kvm/lapic.c| 46 +++-- arch/x86/kvm/x86.c | 1 - 4 files changed, 10 insertions(+), 39 deletion

Re: [PATCH v3] KVM: LAPIC: Recalculate apic map in batch

2020-06-21 Thread Igor Mammedov
On Fri, 19 Jun 2020 16:10:43 +0200 Paolo Bonzini wrote: > On 19/06/20 14:36, Igor Mammedov wrote: > > qemu-kvm -m 2G -smp 4,maxcpus=8 -monitor stdio > > (qemu) device_add qemu64-x86_64-cpu,socket-id=4,core-id=0,thread-id=0 > > > > in guest fails with:

Re: [PATCH v3] KVM: LAPIC: Recalculate apic map in batch

2020-06-19 Thread Igor Mammedov
On Fri, 19 Jun 2020 16:10:43 +0200 Paolo Bonzini wrote: > On 19/06/20 14:36, Igor Mammedov wrote: > > qemu-kvm -m 2G -smp 4,maxcpus=8 -monitor stdio > > (qemu) device_add qemu64-x86_64-cpu,socket-id=4,core-id=0,thread-id=0 > > > > in guest fails with:

Re: [PATCH v3] KVM: LAPIC: Recalculate apic map in batch

2020-06-19 Thread Igor Mammedov
On Wed, 26 Feb 2020 10:41:02 +0800 Wanpeng Li wrote: > From: Wanpeng Li > > In the vCPU reset and set APIC_BASE MSR path, the apic map will be > recalculated > several times, each time it will consume 10+ us observed by ftrace in my > non-overcommit environment since the expensive memory all

[PATCH v2] KVM: s390: kvm_s390_vm_start_migration: check dirty_bitmap before using it as target for memset()

2019-09-11 Thread Igor Mammedov
ep is turned off. Last Breaking-Event-Address: [<00dbaf60>] __memset+0xc/0xa0 due to ms->dirty_bitmap being NULL, which might crash the host. Make sure that ms->dirty_bitmap is set before using it or return -EINVAL otherwise. Fixes: afdad61615cc ("KVM: s390: Fix stora
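
The guard described in this patch is small; here is a minimal C sketch of the idea, where `mem_slot` and `start_migration_check` are illustrative stand-ins for the kernel's `kvm_memory_slot` and the actual migration-start path:

```c
#include <stddef.h>
#include <errno.h>

/* Hypothetical, simplified memslot: only the field relevant here. */
struct mem_slot {
    unsigned long *dirty_bitmap;
};

/* Sketch of the fix: bail out with -EINVAL instead of handing a NULL
 * bitmap to memset(), which would oops the host kernel. */
int start_migration_check(const struct mem_slot *ms)
{
    if (!ms->dirty_bitmap)
        return -EINVAL; /* userspace never set KVM_MEM_LOG_DIRTY_PAGES */
    return 0;
}
```

The actual fix adds the equivalent check in kvm_s390_vm_start_migration() before the memset() over the bitmap.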

[PATCH] KVM: s390: kvm_s390_vm_start_migration: check dirty_bitmap before using it as target for memset()

2019-09-10 Thread Igor Mammedov
"KVM: s390: Fix storage attributes migration with memory slots") Signed-off-by: Igor Mammedov --- Cc: sta...@vger.kernel.org # v4.19+ v2: - drop WARN() arch/s390/kvm/kvm-s390.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c i

Re: [PATCH] kvm_s390_vm_start_migration: check dirty_bitmap before using it as target for memset()

2019-09-09 Thread Igor Mammedov
On Mon, 9 Sep 2019 17:47:37 +0200 David Hildenbrand wrote: > On 09.09.19 16:55, Igor Mammedov wrote: > > If userspace doesn't set KVM_MEM_LOG_DIRTY_PAGES on memslot before calling > > kvm_s390_vm_start_migration(), kernel will oops with: > > > > We usually

[PATCH] kvm_s390_vm_start_migration: check dirty_bitmap before using it as target for memset()

2019-09-09 Thread Igor Mammedov
ep is turned off. Last Breaking-Event-Address: [<00dbaf60>] __memset+0xc/0xa0 due to ms->dirty_bitmap being NULL, which might crash the host. Make sure that ms->dirty_bitmap is set before using it or print a warning and return -EINVAL otherwise. Signed-off-by: Igor Mammedov

Re: cpu/hotplug: broken sibling thread hotplug

2019-01-28 Thread Igor Mammedov
On Mon, 28 Jan 2019 06:52:52 -0600 Josh Poimboeuf wrote: > On Mon, Jan 28, 2019 at 11:13:04AM +0100, Igor Mammedov wrote: > > On Fri, 25 Jan 2019 11:02:03 -0600 > > Josh Poimboeuf wrote: > > > > > On Fri, Jan 25, 2019 at 10:36:57AM -0600, Josh Poimboeuf wrote:

Re: cpu/hotplug: broken sibling thread hotplug

2019-01-28 Thread Igor Mammedov
On Fri, 25 Jan 2019 11:02:03 -0600 Josh Poimboeuf wrote: > On Fri, Jan 25, 2019 at 10:36:57AM -0600, Josh Poimboeuf wrote: > > How about this patch? It's just a revert of 73d5e2b47264 and > > bc2d8d262cba, plus the 1-line vmx_vm_init() change. If it looks ok to > > you, I can clean it up and su

cpu/hotplug: broken sibling thread hotplug

2019-01-24 Thread Igor Mammedov
In case a guest is booted with one CPU present and later a sibling CPU is hotplugged [1], it stays offline since SMT is disabled. Bisects to 73d5e2b47264 ("cpu/hotplug: detect SMT disabled by BIOS") which used __max_smt_threads to decide whether to disable SMT, and in case [1] only the primary CPU thread is

Re: [Qemu-devel] [PATCH v11 2/6] ACPI: Add APEI GHES Table Generation support

2017-09-01 Thread Igor Mammedov
On Fri, 1 Sep 2017 17:58:55 +0800 gengdongjiu wrote: > Hi Igor, > > On 2017/8/29 18:20, Igor Mammedov wrote: > > On Fri, 18 Aug 2017 22:23:43 +0800 > > Dongjiu Geng wrote: [...] > > > >> +void ghes_build_acpi(GArray

Re: [Qemu-devel] [PATCH v11 2/6] ACPI: Add APEI GHES Table Generation support

2017-08-29 Thread Igor Mammedov
On Fri, 18 Aug 2017 22:23:43 +0800 Dongjiu Geng wrote: > This implements APEI GHES Table by passing the error CPER info > to the guest via a fw_cfg_blob. After a CPER info is recorded, an > SEA(Synchronous External Abort)/SEI(SError Interrupt) exception > will be injected into the guest OS. it's

Re: [RFC PATCH 0/5] mm, memory_hotplug: allocate memmap from hotadded memory

2017-08-01 Thread Igor Mammedov
On Mon, 31 Jul 2017 19:58:30 +0200 Gerald Schaefer wrote: > On Mon, 31 Jul 2017 17:53:50 +0200 > Michal Hocko wrote: > > > On Mon 31-07-17 17:04:59, Gerald Schaefer wrote: > > > On Mon, 31 Jul 2017 14:53:19 +0200 > > > Michal Hocko wrote: > > > > > > > On Mon 31-07-17 14:35:21, Gerald Sch

Re: [RFC PATCH 2/2] mm, memory_hotplug: drop CONFIG_MOVABLE_NODE

2017-05-24 Thread Igor Mammedov
On Wed, 24 May 2017 14:24:11 +0200 Michal Hocko wrote: [...] > index facc20a3f962..ec7d6ae01c96 100644 > --- a/Documentation/admin-guide/kernel-parameters.txt > +++ b/Documentation/admin-guide/kernel-parameters.txt > @@ -2246,8 +2246,11 @@ [...] > + movable. This means that th

Re: [PATCH -v2 0/9] mm: make movable onlining suck less

2017-04-11 Thread Igor Mammedov
On Tue, 11 Apr 2017 11:23:07 +0200 Michal Hocko wrote: > On Tue 11-04-17 08:38:34, Igor Mammedov wrote: > > for issue2: > > -enable-kvm -m 2G,slots=4,maxmem=4G -smp 4 -numa node -numa node \ > > -drive if=virtio,file=disk.img -kernel bzImage -append 'root=/dev/vda1

Re: [PATCH -v2 0/9] mm: make movable onlining suck less

2017-04-11 Thread Igor Mammedov
On Tue, 11 Apr 2017 10:41:42 +0200 Michal Hocko wrote: > On Tue 11-04-17 10:01:52, Igor Mammedov wrote: > > On Mon, 10 Apr 2017 16:56:39 +0200 > > Michal Hocko wrote: > [...] > > > > #echo online_kernel > memory32/state > > > > write error:

Re: [PATCH -v2 0/9] mm: make movable onlining suck less

2017-04-11 Thread Igor Mammedov
On Mon, 10 Apr 2017 16:56:39 +0200 Michal Hocko wrote: > On Mon 10-04-17 16:27:49, Igor Mammedov wrote: > [...] > > Hi Michal, > > > > I've given series some dumb testing, see below for unexpected changes I've > > noticed. > > > > Using the

Re: [PATCH -v2 0/9] mm: make movable onlining suck less

2017-04-10 Thread Igor Mammedov
On Mon, 10 Apr 2017 18:09:41 +0200 Michal Hocko wrote: > On Mon 10-04-17 16:27:49, Igor Mammedov wrote: > [...] > > -object memory-backend-ram,id=mem1,size=256M -object > > memory-backend-ram,id=mem0,size=256M \ > > -device pc-dimm,id=dimm1,memdev=mem1,slot=1,node=0

Re: [PATCH -v2 0/9] mm: make movable onlining suck less

2017-04-10 Thread Igor Mammedov
On Mon, 10 Apr 2017 13:03:42 +0200 Michal Hocko wrote: > Hi, > The last version of this series has been posted here [1]. It has seen > some more serious testing (thanks to Reza Arbab) and fixes for the found > issues. I have also decided to drop patch 1 [2] because it turned out to > be more comp

Re: [PATCH v1 1/6] mm: get rid of zone_is_initialized

2017-04-05 Thread Igor Mammedov
On Wed, 5 Apr 2017 10:14:00 +0200 Michal Hocko wrote: > On Fri 31-03-17 09:39:54, Michal Hocko wrote: > > Fixed screw ups during the initial patch split up as per Hillf > > --- > > From 8be6c5e47de66210e47710c80e72e8abd899017b Mon Sep 17 00:00:00 2001 > > From: Michal Hocko > > Date: Wed, 29 Mar

Re: [PATCH 0/6] mm: make movable onlining suck less

2017-04-03 Thread Igor Mammedov
On Mon, 3 Apr 2017 13:55:46 +0200 Michal Hocko wrote: > On Thu 30-03-17 13:54:48, Michal Hocko wrote: > [...] > > Any thoughts, complains, suggestions? > > Anyting? I would really appreciate a feedback from IBM and Futjitsu guys > who have shaped this code last few years. Also Igor and Vitaly

Re: ZONE_NORMAL vs. ZONE_MOVABLE

2017-03-17 Thread Igor Mammedov
On Thu, 16 Mar 2017 20:01:25 +0100 Andrea Arcangeli wrote: [...] > If we can make zone overlap work with a 100% overlap across the whole > node that would be a fine alternative, the zoneinfo.py output will > look weird, but if that's the only downside it's no big deal. With > sticky movable pageb

Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks

2017-03-14 Thread Igor Mammedov
On Mon, 13 Mar 2017 13:28:25 +0100 Michal Hocko wrote: > On Mon 13-03-17 11:55:54, Igor Mammedov wrote: > > On Thu, 9 Mar 2017 13:54:00 +0100 > > Michal Hocko wrote: > > > > [...] > > > > It's major regression if you remove auto online in kerne

Re: WTH is going on with memory hotplug sysf interface (was: Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks)

2017-03-13 Thread Igor Mammedov
On Mon, 13 Mar 2017 11:43:02 +0100 Michal Hocko wrote: > On Mon 13-03-17 11:31:10, Igor Mammedov wrote: > > On Fri, 10 Mar 2017 14:58:07 +0100 > [...] > > > [0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x-0x0009] > > > [0.00] ACPI: SRAT

Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks

2017-03-13 Thread Igor Mammedov
On Thu, 9 Mar 2017 13:54:00 +0100 Michal Hocko wrote: [...] > > It's major regression if you remove auto online in kernels that > > run on top of x86 kvm/vmware hypervisors, making API cleanups > > while breaking useful functionality doesn't make sense. > > > > I would ACK config option removal

Re: WTH is going on with memory hotplug sysf interface (was: Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks)

2017-03-13 Thread Igor Mammedov
last blocks. More below. > > On Thu 09-03-17 13:54:00, Michal Hocko wrote: > > On Tue 07-03-17 13:40:04, Igor Mammedov wrote: > > > On Mon, 6 Mar 2017 15:54:17 +0100 > > > Michal Hocko wrote: > > > > > > > On Fri 03-03-17 18:34:22, Igor Mammedov w

Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks

2017-03-07 Thread Igor Mammedov
On Mon, 6 Mar 2017 15:54:17 +0100 Michal Hocko wrote: > On Fri 03-03-17 18:34:22, Igor Mammedov wrote: > > On Fri, 3 Mar 2017 09:27:23 +0100 > > Michal Hocko wrote: > > > > > On Thu 02-03-17 18:03:15, Igor Mammedov wrote: > > > > On Thu, 2 Ma

Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks

2017-03-03 Thread Igor Mammedov
On Fri, 3 Mar 2017 09:27:23 +0100 Michal Hocko wrote: > On Thu 02-03-17 18:03:15, Igor Mammedov wrote: > > On Thu, 2 Mar 2017 15:28:16 +0100 > > Michal Hocko wrote: > > > > > On Thu 02-03-17 14:53:48, Igor Mammedov wrote: > > > [...] > > >

Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks

2017-03-02 Thread Igor Mammedov
On Thu, 2 Mar 2017 15:28:16 +0100 Michal Hocko wrote: > On Thu 02-03-17 14:53:48, Igor Mammedov wrote: > [...] > > When trying to support memory unplug on guest side in RHEL7, > > experience shows otherwise. Simplistic udev rule which onlines > > added block doesn

Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks

2017-03-02 Thread Igor Mammedov
On Mon 27-02-17 16:43:04, Michal Hocko wrote: > On Mon 27-02-17 12:25:10, Heiko Carstens wrote: > > On Mon, Feb 27, 2017 at 11:02:09AM +0100, Vitaly Kuznetsov wrote: > > > A couple of other thoughts: > > > 1) Having all newly added memory online ASAP is probably what people > > > want for all vir

Re: regression since 4.8 and newer in select_idle_siblings()

2016-10-18 Thread Igor Mammedov
On Tue, 18 Oct 2016 16:02:07 +0200 Mike Galbraith wrote: > On Tue, 2016-10-18 at 15:40 +0200, Igor Mammedov wrote: > > kernel crashes at runtime due null pointer dereference at > > select_idle_sibling() > > -> select_idle_cpu() > > ... >

regression since 4.8 and newer in select_idle_siblings()

2016-10-18 Thread Igor Mammedov
kernel crashes at runtime due to a null pointer dereference at select_idle_sibling() -> select_idle_cpu() ... u64 avg_cost = this_sd->avg_scan_cost; regression bisects to: commit 10e2f1acd0106c05229f94c70a344ce3a2c8008b Author: Peter Zijlstra sched/core: Rewrite and imp
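
The crash pattern here is dereferencing a scheduler-domain pointer that can legitimately be NULL (e.g. while domains are being rebuilt during CPU hotplug). A hedged sketch of the usual defensive fix — the struct and function names below are illustrative, not the actual scheduler code:

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for the LLC sched_domain with the field
 * whose unconditional read caused the oops. */
struct sched_domain {
    uint64_t avg_scan_cost;
};

/* Return the scan-cost estimate, or 0 when no LLC domain exists yet;
 * the regression read this_sd->avg_scan_cost without this check. */
uint64_t scan_cost_or_zero(const struct sched_domain *this_sd)
{
    if (!this_sd) /* can be NULL in early-boot / hotplug windows */
        return 0;
    return this_sd->avg_scan_cost;
}
```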

[PATCH 1/2] x86/x2apic: fix NULL pointer def during boot

2016-08-10 Thread Igor Mammedov
Fixes crash at boot for me. Small nit wrt subj s/def/deref/

Re: [PATCH FIX 4.6+] bcma: add PCI ID for Foxconn's BCM43142 device

2016-07-12 Thread Igor Mammedov
laim only 14e4:4365 PCI Dell card with > SoftMAC BCM43142") Reported-by: Igor Mammedov > Signed-off-by: Rafał Miłecki > Cc: Stable [4.6+] Tested-by: Igor Mammedov > --- > drivers/bcma/host_pci.c | 1 + > 1 file changed, 1 insertion(+) > > diff --git a/drivers/bcma/h

Re: [PATCH v4 2/2] KVM: move vcpu id checking to archs

2016-04-22 Thread Igor Mammedov
On Fri, 22 Apr 2016 11:25:38 +0200 Greg Kurz wrote: > Hi Radim ! > > On Thu, 21 Apr 2016 19:36:11 +0200 > Radim Krčmář wrote: > > > 2016-04-21 18:45+0200, Greg Kurz: > > > On Thu, 21 Apr 2016 18:00:19 +0200 > > > Radim Krčmář wrote: > > >> 2016-04-21 16:20+0200, Greg Kurz: [...] > >

Re: [PATCH v2] memory-hotplug: add automatic onlining policy for the newly added memory

2016-01-04 Thread Igor Mammedov
On Mon, 04 Jan 2016 11:47:12 +0100 Vitaly Kuznetsov wrote: > Andrew Morton writes: > > > On Tue, 22 Dec 2015 17:32:30 +0100 Vitaly Kuznetsov > > wrote: > > > >> Currently, all newly added memory blocks remain in 'offline' state unless > >> someone onlines them, some linux distributions carr

Re: [PATCH] memory-hotplug: don't BUG() in register_memory_resource()

2015-12-18 Thread Igor Mammedov
t; Cc: Andrew Morton > Cc: Tang Chen > Cc: Naoya Horiguchi > Cc: Xishi Qiu > Cc: Sheng Yong > Cc: David Rientjes > Cc: Zhu Guihua > Cc: Dan Williams > Cc: David Vrabel > Cc: Igor Mammedov > Signed-off-by: Vitaly Kuznetsov > --- > mm/memory_hotplug.c | 17

[tip:x86/mm] x86/mm/64: Enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN

2015-12-06 Thread tip-bot for Igor Mammedov
Commit-ID: ec941c5ffede4d788b9fc008f9eeca75b9e964f5 Gitweb: http://git.kernel.org/tip/ec941c5ffede4d788b9fc008f9eeca75b9e964f5 Author: Igor Mammedov AuthorDate: Fri, 4 Dec 2015 14:07:06 +0100 Committer: Ingo Molnar CommitDate: Sun, 6 Dec 2015 12:46:31 +0100 x86/mm/64: Enable SWIOTLB

[tip:x86/mm] x86/mm: Introduce max_possible_pfn

2015-12-06 Thread tip-bot for Igor Mammedov
Commit-ID: 8dd3303001976aa8583bf20f6b93590c74114308 Gitweb: http://git.kernel.org/tip/8dd3303001976aa8583bf20f6b93590c74114308 Author: Igor Mammedov AuthorDate: Fri, 4 Dec 2015 14:07:05 +0100 Committer: Ingo Molnar CommitDate: Sun, 6 Dec 2015 12:46:31 +0100 x86/mm: Introduce

[PATCH v3 0/2] x86: enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN

2015-12-04 Thread Igor Mammedov
ml.org/lkml/2015/12/4/151 ref to v1: https://lkml.org/lkml/2015/11/30/594 Igor Mammedov (2): x86: introduce max_possible_pfn x86_64: enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN arch/x86/kernel/pci-swiotlb.c | 2 +- arch/x86/kernel/setup.c | 2 ++ arch/x86/mm

[PATCH v3 1/2] x86: introduce max_possible_pfn

2015-12-04 Thread Igor Mammedov
table if any present. Signed-off-by: Igor Mammedov --- v3: - make 'max_possible_pfn' 64-bit - simplify condition to oneliner as suggested by Ingo --- arch/x86/kernel/setup.c | 2 ++ arch/x86/mm/srat.c | 2 ++ include/linux/bootmem.h | 4 mm/bootmem.c| 1 +

[PATCH v3 2/2] x86_64: enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN

2015-12-04 Thread Igor Mammedov
f there is hotpluggable memory regions beyond MAX_DMA32_PFN. It fixes KVM guests when they use emulated devices (reproduces with ata_piix, e1000 and usb devices, RHBZ: 1275941, 1275977, 1271527) It also fixes the HyperV, VMWare with emulated devices which are affected by this issue as well. Signed-of
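
The decision the series implements boils down to a single comparison against the highest page frame the system may ever have, as reported by SRAT. A simplified sketch (assuming a 4 KiB page size, so `MAX_DMA32_PFN` is the first page frame above 4 GiB; the helper name is illustrative):

```c
#include <stdint.h>
#include <stdbool.h>

/* First PFN above the 32-bit DMA boundary: 4 GiB / 4 KiB pages. */
#define MAX_DMA32_PFN (1ULL << (32 - 12))

/* Sketch: if SRAT says (hotpluggable) memory can appear above 4 GiB,
 * 32-bit DMA-limited devices need SWIOTLB bounce buffers even when
 * boot-time RAM is entirely below 4 GiB. */
bool need_swiotlb(uint64_t max_possible_pfn)
{
    return max_possible_pfn > MAX_DMA32_PFN;
}
```

This is why a guest booted with 2 GB but with hotplug regions above 4 GB must initialize SWIOTLB at boot rather than after the first hotplug event.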

Re: [PATCH v2 2/2] x86_64: enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN

2015-12-04 Thread Igor Mammedov
On Fri, 4 Dec 2015 12:49:49 +0100 Ingo Molnar wrote: > > * Igor Mammedov wrote: > > > when memory hotplug enabled system is booted with less > > than 4GB of RAM and then later more RAM is hotplugged > > 32-bit devices stop functioning with following error: > >

[PATCH v2 2/2] x86_64: enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN

2015-12-04 Thread Igor Mammedov
f there is hotpluggable memory regions beyond MAX_DMA32_PFN. It fixes KVM guests when they use emulated devices (reproduces with ata_piix, e1000 and usb devices, RHBZ: 1275941, 1275977, 1271527) It also fixes the HyperV, VMWare with emulated devices which are affected by this issue as well. Signed-of

[PATCH v2 1/2] x86: introduce max_possible_pfn

2015-12-04 Thread Igor Mammedov
SRAT table if any present. Signed-off-by: Igor Mammedov --- arch/x86/kernel/setup.c | 2 ++ include/linux/bootmem.h | 4 mm/bootmem.c| 1 + mm/nobootmem.c | 1 + 4 files changed, 8 insertions(+) diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index 29

[PATCH v2 0/2] x86: enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN

2015-12-04 Thread Igor Mammedov
the HyperV, VMWare with emulated devices which are affected by this issue as well. ref to v1: https://lkml.org/lkml/2015/11/30/594 Igor Mammedov (2): x86: introduce max_possible_pfn x86_64: enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN arch/x86/kernel/pci-s

Re: [PATCH] x86_64: enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN

2015-12-04 Thread Igor Mammedov
On Fri, 4 Dec 2015 09:20:50 +0100 Ingo Molnar wrote: > > * Igor Mammedov wrote: > > > diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h > > index 94c18eb..53d7951 100644 > > --- a/arch/x86/include/asm/acpi.h > > +++ b/arch/x86/includ

[PATCH] x86_64: enable SWIOTLB if system has SRAT memory regions above MAX_DMA32_PFN

2015-11-30 Thread Igor Mammedov
e RAM less than 4GB and do not use memory hotplug but still have hotplug regions in SRAT (i.e. broken BIOS that can't disable mem hotplug) can disable memory hotplug with 'acpi_no_memhotplug = 1' to avoid automatic SWIOTLB initialization. Tested on QEMU/KVM and HyperV. Sig

Re: [BUG?] kernel OOPS at kmem_cache_alloc_node() because of smp_processor_id()

2015-10-16 Thread Igor Mammedov
oing on at the > > time? A special graphics driver being loaded? That could cause issues. > > > > It seems that the problem was fixed by Igor, right? > https://lkml.org/lkml/2014/3/6/257 That might help. "stuck" CPU14 means that master CPU has given up on the a

[PATCH] kvm: svm: reset mmu on VCPU reset

2015-09-18 Thread Igor Mammedov
t_vmcb() time. -- * AMD64 Architecture Programmer’s Manual, Volume 2: System Programming, rev: 3.25 15.19 Paged Real Mode ** Opteron 1216 Signed-off-by: Igor Mammedov --- arch/x86/kvm/svm.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index fdb8cb6..

Re: [PATCH 2/2] vhost: increase default limit of nregions from 64 to 509

2015-07-30 Thread Igor Mammedov
On Thu, 30 Jul 2015 09:33:57 +0300 "Michael S. Tsirkin" wrote: > On Thu, Jul 30, 2015 at 08:26:03AM +0200, Igor Mammedov wrote: > > On Wed, 29 Jul 2015 18:28:26 +0300 > > "Michael S. Tsirkin" wrote: > > > > > On Wed, Jul 29, 2015 at 04:29:23P

Re: [PATCH 2/2] vhost: increase default limit of nregions from 64 to 509

2015-07-29 Thread Igor Mammedov
On Wed, 29 Jul 2015 18:28:26 +0300 "Michael S. Tsirkin" wrote: > On Wed, Jul 29, 2015 at 04:29:23PM +0200, Igor Mammedov wrote: > > although now there is vhost module max_mem_regions option > > to set custom limit it doesn't help for default setups, > > sinc

Re: [PATCH 1/2] vhost: add ioctl to query nregions upper limit

2015-07-29 Thread Igor Mammedov
On Wed, 29 Jul 2015 17:43:17 +0300 "Michael S. Tsirkin" wrote: > On Wed, Jul 29, 2015 at 04:29:22PM +0200, Igor Mammedov wrote: > > From: "Michael S. Tsirkin" > > > > Userspace currently simply tries to give vhost as many regions > > as it h

[PATCH 2/2] vhost: increase default limit of nregions from 64 to 509

2015-07-29 Thread Igor Mammedov
s max), so that default deployments would work out of the box. Signed-off-by: Igor Mammedov --- PS: Users that would want to lock down vhost could still use the max_mem_regions option to set a lower limit, but I expect it would be a minority. --- include/uapi/linux/vhost.h | 2 +- 1 file changed, 1 insertion(

[PATCH 1/2] vhost: add ioctl to query nregions upper limit

2015-07-29 Thread Igor Mammedov
's left unused, let's make that mean that the current userspace behaviour (trial and error) is required, just in case we want it back. Signed-off-by: Michael S. Tsirkin Signed-off-by: Igor Mammedov --- drivers/vhost/vhost.c | 7 ++- include/uapi/linux/vhost.h | 17 ++

[PATCH 0/2] vhost: add ioctl to query nregions limit and rise default limit

2015-07-29 Thread Igor Mammedov
Igor Mammedov (1): vhost: increase default limit of nregions from 64 to 509 Michael S. Tsirkin (1): vhost: add ioctl to query nregions upper limit drivers/vhost/vhost.c | 7 ++- include/uapi/linux/vhost.h | 17 - 2 files changed, 22 insertions(+), 2 deletions

Re: [PATCH v4 2/2] vhost: add max_mem_regions module parameter

2015-07-16 Thread Igor Mammedov
On Thu, 2 Jul 2015 15:08:11 +0200 Igor Mammedov wrote: > it became possible to use a bigger amount of memory > slots, which is used by memory hotplug for > registering hotplugged memory. > However QEMU crashes if it's used with more than ~60 > pc-dimm devices and vhost-ne

[PATCH] fixup! vhost: extend memory regions allocation to vmalloc

2015-07-15 Thread Igor Mammedov
: Dan Carpenter Suggested-by: Julia Lawall Signed-off-by: Igor Mammedov --- drivers/vhost/vhost.c | 5 + 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index a9fe859..3702487 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost

Re: [PATCH] vhost: fix build failure on SPARC

2015-07-13 Thread Igor Mammedov
On Mon, 13 Jul 2015 23:19:59 +0300 "Michael S. Tsirkin" wrote: > On Mon, Jul 13, 2015 at 08:15:30PM +0200, Igor Mammedov wrote: > > while on x86 target vmalloc.h is included indirectly through > > other heaedrs, it's not included on SPARC. > > Fix issue

[PATCH] vhost: fix build failure on SPARC

2015-07-13 Thread Igor Mammedov
while on x86 targets vmalloc.h is included indirectly through other headers, it's not included on SPARC. Fix the issue by including vmalloc.h directly from vhost.c, like it's done in vhost/net.c. Signed-off-by: Igor Mammedov --- drivers/vhost/vhost.c | 1 + 1 file changed, 1 insertion(+) di

Re: [PATCH v4 0/2] vhost: support more than 64 memory regions

2015-07-08 Thread Igor Mammedov
On Thu, 2 Jul 2015 15:08:09 +0200 Igor Mammedov wrote: > changes since v3: > * rebased on top of vhost-next branch > changes since v2: > * drop cache patches for now as suggested > * add max_mem_regions module parameter instead of unconditionally > increasing limit

[PATCH v4 1/2] vhost: extend memory regions allocation to vmalloc

2015-07-02 Thread Igor Mammedov
h older QEMU's which could use large amount of memory regions. Signed-off-by: Igor Mammedov --- drivers/vhost/vhost.c | 20 1 file changed, 16 insertions(+), 4 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 71bb468..6488011 100644 --- a/dr

[PATCH v4 2/2] vhost: add max_mem_regions module parameter

2015-07-02 Thread Igor Mammedov
emory regions. Allow tweaking the limit via the max_mem_regions module parameter, with the default value set to 64 slots. Signed-off-by: Igor Mammedov --- drivers/vhost/vhost.c | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 64

[PATCH v4 0/2] vhost: support more than 64 memory regions

2015-07-02 Thread Igor Mammedov
2 slots in default QEMU configuration. Igor Mammedov (2): vhost: extend memory regions allocation to vmalloc vhost: add max_mem_regions module parameter drivers/vhost/vhost.c | 28 ++-- 1 file changed, 22 insertions(+), 6 deletions(-) -- 1.8.3.1 -- To unsubscribe from

[PATCH v3 2/2] vhost: add max_mem_regions module parameter

2015-07-01 Thread Igor Mammedov
emory regions. Allow tweaking the limit via the max_mem_regions module parameter, with the default value set to 64 slots. Signed-off-by: Igor Mammedov --- drivers/vhost/vhost.c | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 99

[PATCH v3 0/2] vhost: support more than 64 memory regions

2015-07-01 Thread Igor Mammedov
http://www.spinics.net/lists/kvm/msg117654.html Series allows to tweak vhost's memory regions count limit. It fixes VM crashing on memory hotplug due to vhost refusing accepting more than 64 memory regions with max_mem_regions set to more than 262 slots in default QEMU configuration. Igor Mammedov (2

[PATCH v3 1/2] vhost: extend memory regions allocation to vmalloc

2015-07-01 Thread Igor Mammedov
h older QEMU's which could use large amount of memory regions. Signed-off-by: Igor Mammedov --- drivers/vhost/vhost.c | 22 +- 1 file changed, 17 insertions(+), 5 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index f1e07b8..99931a0 100644 --

Re: [PATCH RFC] vhost: add ioctl to query nregions upper limit

2015-06-25 Thread Igor Mammedov
On Wed, 24 Jun 2015 17:08:56 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 24, 2015 at 04:52:29PM +0200, Igor Mammedov wrote: > > On Wed, 24 Jun 2015 16:17:46 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Wed, Jun 24, 2015 at 04:07:27PM

Re: [PATCH RFC] vhost: add ioctl to query nregions upper limit

2015-06-24 Thread Igor Mammedov
On Wed, 24 Jun 2015 16:17:46 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 24, 2015 at 04:07:27PM +0200, Igor Mammedov wrote: > > On Wed, 24 Jun 2015 15:49:27 +0200 > > "Michael S. Tsirkin" wrote: > > > > > Userspace currently simply tries to

Re: [PATCH RFC] vhost: add ioctl to query nregions upper limit

2015-06-24 Thread Igor Mammedov
ning on an > old kernel, you get -1 and you can assume at least 64 slots. Since 0 > value's left unused, let's make that mean that the current userspace > behaviour (trial and error) is required, just in case we want it back. > > Signed-off-by: Michael S. Tsirkin

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-22 Thread Igor Mammedov
On Fri, 19 Jun 2015 18:33:39 +0200 "Michael S. Tsirkin" wrote: > On Fri, Jun 19, 2015 at 06:26:27PM +0200, Paolo Bonzini wrote: > > > > > > On 19/06/2015 18:20, Michael S. Tsirkin wrote: > > > > We could, but I/O is just an example. It can be I/O, a network ring, > > > > whatever. We cannot a

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-18 Thread Igor Mammedov
e: > > >> > > >> > > >> On 18/06/2015 13:41, Michael S. Tsirkin wrote: > > >>> On Thu, Jun 18, 2015 at 01:39:12PM +0200, Igor Mammedov wrote: > > >>>> Lets leave decision upto users instead of making them live with

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-18 Thread Igor Mammedov
On Thu, 18 Jun 2015 13:41:22 +0200 "Michael S. Tsirkin" wrote: > On Thu, Jun 18, 2015 at 01:39:12PM +0200, Igor Mammedov wrote: > > Lets leave decision upto users instead of making them live with > > crashing guests. > > Come on, let's fix it in userspace. I

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-18 Thread Igor Mammedov
On Thu, 18 Jun 2015 11:50:22 +0200 "Michael S. Tsirkin" wrote: > On Thu, Jun 18, 2015 at 11:12:24AM +0200, Igor Mammedov wrote: > > On Wed, 17 Jun 2015 18:30:02 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Wed, Jun 17, 2015 at 06:09:21PM

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-18 Thread Igor Mammedov
On Wed, 17 Jun 2015 18:30:02 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 06:09:21PM +0200, Igor Mammedov wrote: > > On Wed, 17 Jun 2015 17:38:40 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Wed, Jun 17, 2015 at 05:12:57PM

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 18:47:18 +0200 Paolo Bonzini wrote: > > > On 17/06/2015 18:41, Michael S. Tsirkin wrote: > > On Wed, Jun 17, 2015 at 06:38:25PM +0200, Paolo Bonzini wrote: > >> > >> > >> On 17/06/2015 18:34, Michael S. Tsirkin wrote: > >>> On Wed, Jun 17, 2015 at 06:31:32PM +0200, Paolo Bon

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 18:30:02 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 06:09:21PM +0200, Igor Mammedov wrote: > > On Wed, 17 Jun 2015 17:38:40 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Wed, Jun 17, 2015 at 05:12:57PM

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 17:38:40 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 05:12:57PM +0200, Igor Mammedov wrote: > > On Wed, 17 Jun 2015 16:32:02 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Wed, Jun

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 16:32:02 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 03:20:44PM +0200, Paolo Bonzini wrote: > > > > > > On 17/06/2015 15:13, Michael S. Tsirkin wrote: > > > > > Considering userspace can be malicious, I guess yes. > > > > I don't think it's a valid concern in

[PATCH v2 0/6] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
- upstream| 0.3% | - | 3.5% this series | 0.2% | 0.5% | 0.7% where the "non cached" column reflects a thrashing workload with constant cache misses. More details on timing in respective patches. Igor Mammedov (6): vhost: use binary search instead of linear in find_region() vhost: extend m

[PATCH v2 3/6] vhost: add per VQ memory region caching

2015-06-17 Thread Igor Mammedov
that brings down translate_desc() cost to around 210ns if accessed descriptors are from the same memory region. Signed-off-by: Igor Mammedov --- that's what netperf/iperf workloads were during testing. --- drivers/vhost/vhost.c | 16 +--- drivers/vhost/vhost.h | 1 + 2
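
The per-VQ caching idea is that in sequential workloads most descriptors land in the same memory region as the previous one, so checking a single remembered region avoids the full search. A hedged sketch under that assumption — `vq_cache` and `lookup_cached` are hypothetical names, not the vhost API:

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for struct vhost_memory_region. */
struct region {
    uint64_t gpa;  /* guest-physical start */
    uint64_t size;
};

/* One cached pointer per virtqueue. */
struct vq_cache {
    const struct region *last;
};

/* Consult the per-VQ cache first; fall back to a plain scan on a miss
 * and remember the region that hit for next time. */
const struct region *lookup_cached(struct vq_cache *c,
                                   const struct region *regs, size_t n,
                                   uint64_t addr)
{
    const struct region *r = c->last;
    if (r && addr >= r->gpa && addr < r->gpa + r->size)
        return r; /* cache hit: no search at all */
    for (size_t i = 0; i < n; i++) {
        r = &regs[i];
        if (addr >= r->gpa && addr < r->gpa + r->size) {
            c->last = r;
            return r;
        }
    }
    return NULL; /* address not covered by any region */
}
```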

[PATCH v2 2/6] vhost: extend memory regions allocation to vmalloc

2015-06-17 Thread Igor Mammedov
h older QEMU's which could use large amount of memory regions. Signed-off-by: Igor Mammedov --- drivers/vhost/vhost.c | 22 +- 1 file changed, 17 insertions(+), 5 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index f1e07b8..99931a0 100644 --

[PATCH v2 1/6] vhost: use binary search instead of linear in find_region()

2015-06-17 Thread Igor Mammedov
allowed number of slots is increased to 509 like it has been done in KVM. Signed-off-by: Igor Mammedov --- v2: move kvfree() to 2/2 where it belongs --- drivers/vhost/vhost.c | 36 +++- 1 file changed, 27 insertions(+), 9 deletions(-) diff --git a/drivers/vhost

[PATCH v2 6/6] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
el in module vhost-net refuses to accept more than 64 memory regions. Increase VHOST_MEMORY_MAX_NREGIONS limit from 64 to 509 to match KVM_USER_MEM_SLOTS to fix issue for vhost-net and current QEMU versions. Signed-off-by: Igor Mammedov --- drivers/vhost/vhost.c | 2 +- 1 file changed, 1 inse

[PATCH v2 4/6] vhost: translate_desc: optimization for desc.len < region size

2015-06-17 Thread Igor Mammedov
branches with a single remaining length check and execute next iov steps only when needed. It saves a tiny 2% of translate_desc() execution time. Signed-off-by: Igor Mammedov --- PS: I'm not sure if iov_size > 0 is always true, if it's not then better to drop this patch. ---

[PATCH v2 5/6] vhost: add 'translation_cache' module parameter

2015-06-17 Thread Igor Mammedov
with caching enabled for a sequential workload doesn't seem to be affected much vs the version without the static key switch, i.e. still the same 0.2% of total time with key (NOPs) consuming 5ms on a 5min workload. Signed-off-by: Igor Mammedov --- I don't have a test case for a thrashing workload thoug

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 13:51:56 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 01:48:03PM +0200, Igor Mammedov wrote: > > > > So far it's kernel limitation and this patch fixes crashes > > > > that users see now, with the rest of patches e

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 12:46:09 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 12:37:42PM +0200, Igor Mammedov wrote: > > On Wed, 17 Jun 2015 12:11:09 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Wed, Jun 17, 2015 at 10:54:21AM

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 12:11:09 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 10:54:21AM +0200, Igor Mammedov wrote: > > On Wed, 17 Jun 2015 09:39:06 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Wed, Jun 17, 2015 at 09:28:02AM

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 09:39:06 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 09:28:02AM +0200, Igor Mammedov wrote: > > On Wed, 17 Jun 2015 08:34:26 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Wed, Jun 17, 2015 at 12:00:56AM

Re: [PATCH 0/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 08:31:23 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 12:19:15AM +0200, Igor Mammedov wrote: > > On Tue, 16 Jun 2015 23:16:07 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Tue, Jun 16, 2015 at 06:33:34PM

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-17 Thread Igor Mammedov
On Wed, 17 Jun 2015 08:34:26 +0200 "Michael S. Tsirkin" wrote: > On Wed, Jun 17, 2015 at 12:00:56AM +0200, Igor Mammedov wrote: > > On Tue, 16 Jun 2015 23:14:20 +0200 > > "Michael S. Tsirkin" wrote: > > > > > On Tue, Jun 16, 2015 at 06:33:37P

Re: [PATCH 0/5] vhost: support upto 509 memory regions

2015-06-16 Thread Igor Mammedov
On Tue, 16 Jun 2015 23:16:07 +0200 "Michael S. Tsirkin" wrote: > On Tue, Jun 16, 2015 at 06:33:34PM +0200, Igor Mammedov wrote: > > Series extends vhost to support upto 509 memory regions, > > and adds some vhost:translate_desc() performance improvements > > so it

Re: [PATCH 3/5] vhost: support upto 509 memory regions

2015-06-16 Thread Igor Mammedov
On Tue, 16 Jun 2015 23:14:20 +0200 "Michael S. Tsirkin" wrote: > On Tue, Jun 16, 2015 at 06:33:37PM +0200, Igor Mammedov wrote: > > since commit > > 1d4e7e3 kvm: x86: increase user memory slots to 509 > > > > it became possible to use a bigger amount
