Hi Vitaly,
On 4/29/20 7:36 PM, Vitaly Kuznetsov wrote:
Currently, the APF mechanism relies on the #PF abuse where the token is
passed through CR2. If we switch to using interrupts to deliver page-ready
notifications, we need a different way to pass the data. Extend the existing
'struct kvm_vcpu
Hi Vitaly,
On 4/29/20 7:36 PM, Vitaly Kuznetsov wrote:
If two page ready notifications happen back to back, the second one is not
delivered and the only mechanism we currently have is the
kvm_check_async_pf_completion() check in the vcpu_run() loop. The check will
only be performed with the next vmexit w
Hi Vitaly,
On 4/29/20 7:36 PM, Vitaly Kuznetsov wrote:
KVM now supports using an interrupt for type 2 APF event delivery (page ready
notifications). Switch KVM guests to using it when the feature is present.
Signed-off-by: Vitaly Kuznetsov
---
arch/x86/entry/entry_32.S | 5
arch
: Vitaly Kuznetsov
---
arch/x86/kvm/x86.c | 16 +---
1 file changed, 1 insertion(+), 15 deletions(-)
Reviewed-by: Gavin Shan
Hi Vitaly,
On 5/5/20 6:16 PM, Vitaly Kuznetsov wrote:
On 4/29/20 7:36 PM, Vitaly Kuznetsov wrote:
If two page ready notifications happen back to back, the second one is not
delivered and the only mechanism we currently have is the
kvm_check_async_pf_completion() check in the vcpu_run() loop. The check w
The function add_huge_page_size(), a wrapper of hugetlb_add_hstate(),
avoids registering duplicate huge page states for the same size. However,
the same logic is already included in hugetlb_add_hstate(), so it seems
unnecessary to keep add_huge_page_size() and this just removes it.
Signed-off-by: Gavin
Hi Marc, Paolo,
On 6/1/20 7:21 PM, Paolo Bonzini wrote:
On 31/05/20 14:44, Marc Zyngier wrote:
Is there an ARM-approved way to reuse the S2 fault syndromes to detect
async page faults?
It would mean being able to set an ESR_EL2 register value into ESR_EL1,
and there is nothing in the archite
The target CPU type is validated when the KVM module is initialized.
However, we always have a valid target CPU type since commit
("arm64/kvm: Add generic v8 KVM target").
So it's unnecessary to validate the target CPU type at that time
and this just drops it.
Signed-off-by: Gavin
Since commit ("arm64/kvm: Add generic v8 KVM target"),
kvm_target_cpu() no longer returns a negative number. So there's
no need to validate its return value in kvm_vcpu_preferred_target(),
and this just drops the unnecessary check.
Signed-off-by: Gavin Shan
---
arch/arm64/
The macro CONT_PTE_SHIFT actually depends on CONT_SHIFT, which has
been defined in page-def.h, based on CONFIG_ARM64_CONT_SHIFT. Let's
reflect the dependency.
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/pgtable-hwdef.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git
nused.
This removes the unused macro (CONT_RANGE_OFFSET).
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/pgtable-hwdef.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h
b/arch/arm64/include/asm/pgtable-hwdef.h
index d400a4d9aee2..8a399e66683
8
v1: https://lkml.org/lkml/2020/10/21/460
arch/arm64/kvm/mmu.c | 1 +
1 file changed, 1 insertion(+)
Reviewed-by: Gavin Shan
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 19aacc7..d4cd253 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -839,6 +839,7 @@ st
patch resolves the issue by falling the unsupported huge
page sizes back to a nearby one. Ideally, we would teach the stage-2 page table
to use contiguous mappings in this case, but the page-table walker doesn't
handle it well and needs some rework; I will do that in the future.
Gavin Shan (3):
52-bit physical addresses are disabled unless CONFIG_ARM64_PA_BITS_52
is chosen. This uses that option for the check, to avoid the unconditional
check on PAGE_SHIFT in the hot path and thus save some CPU cycles.
Signed-off-by: Gavin Shan
---
arch/arm64/kvm/hyp/pgtable.c | 10 ++
1 file
arm64: Try PMD block mappings if PUD mappings
are not supported").
Signed-off-by: Gavin Shan
---
arch/arm64/kvm/mmu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a816cb8e619b..0f51585adc04 100644
--- a/arch/arm64/
_SHIFT and CONT_PTE_SHIFT
fall back to PMD_SHIFT and PAGE_SHIFT respectively.
Signed-off-by: Gavin Shan
---
arch/arm64/kvm/mmu.c | 8
1 file changed, 8 insertions(+)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0f51585adc04..81cbdc368246 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch
Hi Marc,
On 10/25/20 8:52 PM, Marc Zyngier wrote:
On Sun, 25 Oct 2020 01:27:37 +0100,
Gavin Shan wrote:
52-bit physical addresses are disabled unless CONFIG_ARM64_PA_BITS_52
is chosen. This uses that option for the check, to avoid the unconditional
check on PAGE_SHIFT in the hot path and thus
Hi Marc,
On 10/25/20 9:05 PM, Marc Zyngier wrote:
On Sun, 25 Oct 2020 01:27:38 +0100,
Gavin Shan wrote:
PUD huge pages aren't available when CONFIG_ARM64_4K_PAGES is disabled.
In this case, we needn't try to map the memory through PUD huge pages
to save some CPU cycles in the hot p
Hi Marc,
On 10/25/20 9:48 PM, Marc Zyngier wrote:
On Sun, 25 Oct 2020 01:27:39 +0100,
Gavin Shan wrote:
The huge page could be mapped through multiple contiguous PMDs or PTEs.
The corresponding huge page sizes aren't supported by the page table
walker currently.
This fails the unsupp
_SHIFT and CONT_PTE_SHIFT
fall back to PMD_SHIFT and PAGE_SHIFT respectively.
Suggested-by: Marc Zyngier
Signed-off-by: Gavin Shan
---
v2: Reorganize the code as Marc suggested
---
arch/arm64/kvm/mmu.c | 26 +++---
1 file changed, 19 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/k
On 10/26/20 7:53 PM, Marc Zyngier wrote:
On 2020-10-25 22:23, Gavin Shan wrote:
Hi Marc,
On 10/25/20 8:52 PM, Marc Zyngier wrote:
On Sun, 25 Oct 2020 01:27:37 +0100,
Gavin Shan wrote:
52-bit physical addresses are disabled unless CONFIG_ARM64_PA_BITS_52
is chosen. This uses that option for
: 2f40c46021bbb ("KVM: arm64: Use fallback mapping sizes for contiguous
huge page sizes")
Reported-by: Eric Auger
Signed-off-by: Gavin Shan
---
arch/arm64/kvm/mmu.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 57972bdb213a..1a
Hi Anshuman,
On 9/10/20 4:17 PM, Anshuman Khandual wrote:
On 09/08/2020 12:49 PM, Gavin Shan wrote:
The macro CONT_PTE_SHIFT actually depends on CONT_SHIFT, which has
been defined in page-def.h, based on CONFIG_ARM64_CONT_SHIFT. Let's
reflect the dependency.
Signed-off-by: Gavin Shan
Hi Anshuman,
On 9/10/20 7:28 PM, Anshuman Khandual wrote:
On 09/10/2020 02:01 PM, Gavin Shan wrote:
On 9/10/20 4:17 PM, Anshuman Khandual wrote:
On 09/08/2020 12:49 PM, Gavin Shan wrote:
The macro CONT_PTE_SHIFT actually depends on CONT_SHIFT, which has
been defined in page-def.h, based on
def.h are removed as they
are not used by anyone.
* CONT_PTE_SHIFT is determined by CONFIG_ARM64_CONT_PTE_SHIFT.
Signed-off-by: Gavin Shan
---
arch/arm64/Kconfig | 2 +-
arch/arm64/include/asm/page-def.h | 5 -
arch/arm64/include/asm/pgtable-hwdef.h | 4 +-
nused.
This removes the unused macro (CONT_RANGE_OFFSET).
Signed-off-by: Gavin Shan
Reviewed-by: Anshuman Khandual
---
arch/arm64/include/asm/pgtable-hwdef.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h
b/arch/arm64/include/asm/pgtable-hwdef.h
in
Similar to how CONT_PTE_SHIFT is determined, this introduces a new
kernel option (CONFIG_CONT_PMD_SHIFT) to determine CONT_PMD_SHIFT.
Signed-off-by: Gavin Shan
---
arch/arm64/Kconfig | 6 ++
arch/arm64/include/asm/pgtable-hwdef.h | 10 ++
2 files changed, 8
Hi Robin,
On 9/17/20 8:22 PM, Robin Murphy wrote:
On 2020-09-17 04:35, Gavin Shan wrote:
On 9/16/20 6:28 PM, Will Deacon wrote:
On Wed, Sep 16, 2020 at 01:25:23PM +1000, Gavin Shan wrote:
This enables color zero pages by allocating contiguous page frames
for them. The number of pages for this
D are folded to PGD.
This removes __{pgd, pud, pmd, pte}_error() and calls pr_err() from
{pgd, pud, pmd, pte}_ERROR() directly, similar to what x86/powerpc
do. With this, the code looks a bit simpler as well.
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/pgtable.h
to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url:
https://github.com/0day-ci/linux/commits/Gavin-Shan/arm64-mm-Refactor-pgd-pud-pmd-pte-_ERROR/20200913-194714
bas
D are folded to PGD.
This removes __{pgd, pud, pmd, pte}_error() and calls pr_err() from
{pgd, pud, pmd, pte}_ERROR() directly, similar to what x86/powerpc
do. With this, the code looks a bit simpler as well.
Signed-off-by: Gavin Shan
---
v2: Fix build warning caused by wrong printk format
---
Hi Anshuman,
On 9/14/20 3:31 PM, Anshuman Khandual wrote:
On 09/14/2020 05:17 AM, Gavin Shan wrote:
The functions __{pgd, pud, pmd, pte}_error() were introduced so that
they can be called by {pgd, pud, pmd, pte}_ERROR(). However, some
of the functions could never be called when the corresponding
Hi Catalin,
On 9/23/20 6:43 PM, Catalin Marinas wrote:
On Wed, Sep 23, 2020 at 03:37:19PM +1000, Gavin Shan wrote:
The feature of color zero pages isn't enabled on arm64, meaning all
read-only (anonymous) VM areas are backed by the same zero page. This
puts pressure on the L1 (data) cache on re
Hi Will,
On 9/16/20 6:28 PM, Will Deacon wrote:
On Wed, Sep 16, 2020 at 01:25:23PM +1000, Gavin Shan wrote:
This enables color zero pages by allocating contiguous page frames
for them. The number of pages for this is determined by L1 dCache
(or iCache) size, which is probed from the hardware
Hi Robin,
On 9/16/20 8:46 PM, Robin Murphy wrote:
On 2020-09-16 09:28, Will Deacon wrote:
On Wed, Sep 16, 2020 at 01:25:23PM +1000, Gavin Shan wrote:
This enables color zero pages by allocating contiguous page frames
for them. The number of pages for this is determined by L1 dCache
(or iCache
Hi Anshuman,
On 9/21/20 10:40 PM, Anshuman Khandual wrote:
On 09/21/2020 08:26 AM, Gavin Shan wrote:
On 9/17/20 8:22 PM, Robin Murphy wrote:
On 2020-09-17 04:35, Gavin Shan wrote:
On 9/16/20 6:28 PM, Will Deacon wrote:
On Wed, Sep 16, 2020 at 01:25:23PM +1000, Gavin Shan wrote:
This
In sdei_event_create(), the event number is retrieved from the
variable @event_num for the shared event. The event number is also
stored in the event instance, so we can fetch it from the event
instance, similar to what we're doing for the private event.
Signed-off-by: Gavin Shan
Review
_t for @fn argument in sdei_do_cross_call()
as the function is called on target CPU(s).
* Remove unnecessary space before @event in sdei_do_cross_call()
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
Reviewed-by: James Morse
---
drivers/firmware/arm_sdei.c | 14 +---
in sdei_event_create() (James)
Fix broken case for device-tree in sdei_init() (James)
Gavin Shan (13):
firmware: arm_sdei: Remove sdei_is_err()
firmware: arm_sdei: Common block for failing path in
sdei_event_create()
firmware: arm_sdei: Retrieve event number from eve
invoke_sdei_fn() because it's always overridden afterwards.
This shouldn't cause functional changes.
Signed-off-by: Gavin Shan
Reviewed-by: James Morse
Reviewed-by: Jonathan Cameron
---
drivers/firmware/arm_sdei.c | 26 +++---
1 file changed, 3 insertions(+), 23
_ACPI is enabled.
* @acpi_disabled is defined in include/acpi.h when CONFIG_ACPI
is disabled.
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
Acked-by: James Morse
---
drivers/firmware/arm_sdei.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/fir
s moved to sdei_event_register() and sdei_reregister_shared().
This shouldn't cause any logical changes.
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
Reviewed-by: James Morse
---
drivers/firmware/arm_sdei.c | 77 ++---
1 file changed, 28 insertions(+), 49 del
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
Reviewed-by: James Morse
---
drivers/firmware/arm_sdei.c | 18 ++
1 file changed, 6 insertions(+), 12 deletions(-)
diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
index 361d142ad2a8..840754dcc6ca 10
This removes the redundant error message in sdei_probe() because
the case can be identified from the errno in the next error message.
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
Acked-by: James Morse
---
drivers/firmware/arm_sdei.c | 2 --
1 file changed, 2 deletions(-)
diff --git a
benefit is to make CROSSCALL_INIT and struct sdei_crosscall_args
only visible to sdei_do_{cross, local}_call().
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
Reviewed-by: James Morse
---
drivers/firmware/arm_sdei.c | 41 ++---
1 file changed, 25
In sdei_init(), the nested statements can be avoided by bailing
on error from platform_driver_register() or absent ACPI SDEI table.
With it, the code looks a bit more readable.
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
Reviewed-by: James Morse
---
drivers/firmware/arm_sdei.c
th in sdei_event_create()
to resolve the issue. This shouldn't cause functional changes.
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
Acked-by: James Morse
---
drivers/firmware/arm_sdei.c | 30 --
1 file changed, 16 insertions(+), 14 deletions(-)
di
needed if the device doesn't exist.
Besides, the errno (@ret) should be updated accordingly in this
case.
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
Reviewed-by: James Morse
---
drivers/firmware/arm_sdei.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-
nts can be
avoided to make the code a bit cleaner.
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
---
drivers/firmware/arm_sdei.c | 52 ++---
1 file changed, 25 insertions(+), 27 deletions(-)
diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/
nts can be
avoided to make the code a bit cleaner.
Signed-off-by: Gavin Shan
Reviewed-by: Jonathan Cameron
---
drivers/firmware/arm_sdei.c | 29 ++---
1 file changed, 14 insertions(+), 15 deletions(-)
diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sd
TCH[2/2] allocates the needed zero pages according to L1 cache size
Changelog
=
v2:
* Rebased to 5.9.rc6 (Gavin)
* Retrieve cache topology from ACPI/DT(Will/Robin)
Gavin Shan (2):
arm64/mm: Introduce zero PGD table
arm
, which is decoupled from the zero
page(s).
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/mmu_context.h | 6 +++---
arch/arm64/include/asm/pgtable.h | 2 ++
arch/arm64/kernel/setup.c| 2 +-
arch/arm64/kernel/vmlinux.lds.S | 4
arch/arm64/mm/proc.S | 2 +
: Gavin Shan
---
arch/arm64/include/asm/cache.h | 3 ++
arch/arm64/include/asm/pgtable.h | 9 -
arch/arm64/kernel/cacheinfo.c| 67
arch/arm64/mm/init.c | 37 ++
arch/arm64/mm/mmu.c | 7
drivers/base/cacheinfo.c
/mmu.c | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
With the following nit-picky comments resolved:
Reviewed-by: Gavin Shan
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 75df62fea1b6..df3b7415b128 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
Hi Anshuman,
On 9/21/20 10:05 PM, Anshuman Khandual wrote:
This enables MEM_OFFLINE memory event handling. It will help intercept any
possible error condition, such as if boot memory somehow still got offlined
even after an explicit notifier failure, potentially by a future change in
generic hot
, which is decoupled from the zero
page(s).
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/mmu_context.h | 6 +++---
arch/arm64/include/asm/pgtable.h | 2 ++
arch/arm64/kernel/setup.c| 2 +-
arch/arm64/kernel/vmlinux.lds.S | 4
arch/arm64/mm/proc.S | 2 +
called after the page
allocator begins to work, to allocate the contiguous pages
needed by the color zero page.
* Reworked ZERO_PAGE() and define __HAVE_COLOR_ZERO_PAGE.
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/cache.h | 22
arch/arm64/include/asm
TCH[2/2] allocates the needed zero pages according to L1 cache size
Gavin Shan (2):
arm64/mm: Introduce zero PGD table
arm64/mm: Enable color zero pages
arch/arm64/include/asm/cache.h | 22 +
arch/arm64/include/asm/mmu_context.h | 6 ++---
arch/arm64/include/asm/pgta
On 9/15/20 1:26 PM, Liu Shixin wrote:
Simplify the return expression.
Signed-off-by: Liu Shixin
---
Reviewed-by: Gavin Shan
drivers/firmware/arm_sdei.c | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
On 5/6/20 5:19 PM, Will Deacon wrote:
On Wed, May 06, 2020 at 12:36:43PM +0530, Anshuman Khandual wrote:
On 05/06/2020 12:16 PM, Gavin Shan wrote:
The function add_huge_page_size(), a wrapper of hugetlb_add_hstate(),
avoids registering duplicate huge page states for the same size. However,
the
From: Will Deacon
We can advertise ourselves to guests as KVM and provide a basic features
bitmap for discoverability of future hypervisor services.
Cc: Marc Zyngier
Signed-off-by: Will Deacon
Signed-off-by: Gavin Shan
---
virt/kvm/arm/hypercalls.c | 29 +++--
1 file
@esr.
* The parameters are reorder based on their importance.
This shouldn't cause any functional changes.
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/kvm_host.h | 4
virt/kvm/arm/mmu.c| 14 --
2 files changed, 12 insertions(+), 6 deletions(-)
this, the
caller has flexibility on where the ESR is read. It shouldn't cause
any functional changes.
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/kvm_emulate.h | 83 +++-
arch/arm64/kvm/handle_exit.c | 20 --
arch/arm64/kvm/hyp/switch.c
This replaces the variable names to make them self-explanatory. The
tracepoints aren't changed accordingly because they're part of the ABI:
* @hsr to @esr
* @hsr_ec to @ec
* Use kvm_vcpu_trap_get_class() helper if possible
Signed-off-by: Gavin Shan
---
arch/arm64/kvm/handle_e
Since kvm/arm32 was removed, this renames kvm_vcpu_get_hsr() to
kvm_vcpu_get_esr() to make it a bit more self-explanatory, because the
function returns ESR instead of HSR on aarch64. This shouldn't
cause any functional changes.
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/kvm_emulate.h
Rutland)
* Delayed wakeup mechanism in guest kernel (Gavin
Shan)
* Stability improvement in the guest kernel: delayed wakeup mechanism,
external abort disallowed region, lazily clear async page fault,
disabled interrupt on acquiring the head's loc
tible with KVM. Once this has been established,
additional services can be discovered via a feature bitmap.
Cc: Marc Zyngier
Signed-off-by: Will Deacon
Signed-off-by: Gavin Shan
---
arch/arm64/include/asm/hypervisor.h | 11 +
arch/arm64/kernel/setup.c
ble the feature.
Signed-off-by: Gavin Shan
---
arch/arm64/Kconfig | 11 +
arch/arm64/include/asm/exception.h | 3 +
arch/arm64/include/asm/kvm_para.h | 27 +-
arch/arm64/kernel/entry.S | 33 +++
arch/arm64/kernel/process.c| 4 +
arch/arm64/mm/fault.c
This adds an API, cpu_rq_is_locked(), to check whether the CPU's runqueue
is locked. It's used in the subsequent patch to determine whether the task
wakeup should be executed immediately or delayed.
Signed-off-by: Gavin Shan
---
include/linux/sched.h | 1 +
kernel/sched/core.c | 8 +
t.
* The signals are fired and consumed in sequential fashion. It means
no more signals will be fired if there is a pending one awaiting
consumption by the guest. This is because the injected data abort faults
have to be delivered in sequential fashion.
Signed-off-by: Gavin Shan
--
On Mon, Apr 17, 2017 at 01:36:19PM -0400, David Miller wrote:
>From: Cédric Le Goater
>Date: Fri, 14 Apr 2017 10:56:37 +0200
>
>> htonl was used instead of ntohl. Surely a typo.
>>
>> Signed-off-by: Cédric Le Goater
>
>I don't think so, "checksum" is of type "u32" thus is in host byte
>order. T
iov_numvfs to 0, choose whether to probe or not, and then resume
>> sriov_numvfs.
>>
>> Signed-off-by: Bodong Wang
>> Signed-off-by: Eli Cohen
>> Reviewed-by: Gavin Shan
>> Reviewed-by: Alex Williamson
>
>Whoa, I reviewed the last version, that's d
On 11/20/24 3:49 PM, zhangjiao2 wrote:
From: zhang jiao
There is no need to define a local variable 'page',
just use the outer variable 'page'.
Signed-off-by: zhang jiao
---
drivers/virtio/virtio_balloon.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Gavin Shan