On 11/9/24 00:36, Ankur Arora wrote:

Shrikanth Hegde <sshe...@linux.ibm.com> writes:

Define the preempt lazy bit for powerpc. Use bit 9, which is free and within the
16-bit range of NEED_RESCHED, so the compiler can issue a single andi.
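
For reference, the TIF definition is along these lines (a minimal sketch; the
exact placement among the existing powerpc TIF bits and any related mask
updates are assumptions, not quoted from the patch):

	#define TIF_NEED_RESCHED_LAZY	9	/* lazy rescheduling requested */

	#define _TIF_NEED_RESCHED_LAZY	(1 << TIF_NEED_RESCHED_LAZY)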

Since powerpc doesn't use the generic entry/exit code, add the lazy check at
exit to user. CONFIG_PREEMPTION is defined for lazy/full/rt, so use it for the
return-to-kernel path.
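
For illustration, the two checks would look roughly as below (a sketch only;
the exact functions in arch/powerpc/kernel/interrupt.c and their surrounding
context are assumed, not taken verbatim from the patch):

	/* exit to user: treat the lazy bit the same as NEED_RESCHED */
	ti_flags = read_thread_flags();
	if (ti_flags & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY))
		schedule();

	/*
	 * return to kernel: the kernel preemption point is guarded by
	 * CONFIG_PREEMPTION (set for lazy/full/rt) rather than CONFIG_PREEMPT
	 */
	#ifdef CONFIG_PREEMPTION
	if (preemptible())
		preempt_schedule_irq();
	#endif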

Ran a few benchmarks and a db workload on Power10. Performance is close to
preempt=none/voluntary. It is possible that some patterns would differ under
lazy [2]. More details on preempt lazy are in [1].

Since powerpc systems can have large core counts and large memory, preempt
lazy is going to be helpful in avoiding soft lockup issues.

[1]: https://lore.kernel.org/lkml/20241007074609.447006...@infradead.org/
[2]: https://lore.kernel.org/all/1a973dda-c79e-4d95-935b-e4b93eb07...@linux.ibm.com/

Signed-off-by: Shrikanth Hegde <sshe...@linux.ibm.com>

Looks good. Reviewed-by: Ankur Arora <ankur.a.ar...@oracle.com>

Thank you Ankur for taking a look and for the Reviewed-by tag.


However, I just checked and powerpc does not have
CONFIG_KVM_XFER_TO_GUEST_WORK. Do you need this additional patch
for handling the lazy bit at KVM guest entry?

Will take a look. Thanks for the pointers.


diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index f14329989e9a..7bdf7015bb65 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -84,7 +84,8 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
         hard_irq_disable();

         while (true) {
-               if (need_resched()) {
+               unsigned long tf = read_thread_flags();
+               if (tf & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)) {
                         local_irq_enable();
                         cond_resched();
                         hard_irq_disable();


Ankur

