88297
precopy ram: 2603645 kbytes
downtime ram: 9254 kbytes
Signed-off-by: Kautuk Consul
---
Documentation/virt/kvm/api.rst | 2 +-
arch/powerpc/include/uapi/asm/kvm.h | 2 ++
arch/powerpc/kvm/Kconfig | 2 ++
arch/powerpc/kvm/book3s.c | 46 ++
Hi Jordan,
On 2023-07-06 14:15:13, Jordan Niethe wrote:
>
>
> On 8/6/23 10:34 pm, Kautuk Consul wrote:
>
> Need at least a little context in the commit message itself:
>
> "Enable ring-based dirty memory tracking on ppc64:"
Sure, will take this i
Hi Everyone,
On 2023-06-08 08:34:48, Kautuk Consul wrote:
> - Enable CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL as ppc64 is weakly
> ordered.
> - Enable CONFIG_NEED_KVM_DIRTY_RING_WITH_BITMAP because the
> kvmppc_xive_native_set_attr is called in the context of an ioctl
> syscall
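For context, the ring-based dirty tracking this series enables is opted into
from userspace per VM. A minimal sketch, assuming a VM fd already obtained
from /dev/kvm and an example ring size (neither is taken from this series):

#include <err.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hedged sketch: enable the acquire/release dirty ring on an existing VM fd.
 * The ring size (64 KiB per vCPU) is an assumed example value. */
static void enable_dirty_ring(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_DIRTY_LOG_RING_ACQ_REL,
		.args = { 65536 },	/* bytes per vCPU ring, must be a power of two */
	};

	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
		err(1, "KVM_ENABLE_CAP(KVM_CAP_DIRTY_LOG_RING_ACQ_REL)");
}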
Hi Nick/Gavin/Everyone,
On 2023-06-08 08:34:48, Kautuk Consul wrote:
> - Enable CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL as ppc64 is weakly
> ordered.
> - Enable CONFIG_NEED_KVM_DIRTY_RING_WITH_BITMAP because the
> kvmppc_xive_native_set_attr is called in the context of an ioctl
>
t function to support
the CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT config option.
On testing with live migration, an improvement of around 150-180 ms
in overall migration time was observed with this patch.
Signed-off-by: Kautuk Consul
---
Documentation/virt/kvm/api.rst | 2 +-
arch/po
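CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT makes KVM write-protect guest pages
when the dirty log is fetched. As a hedged userspace-side illustration (the
slot id and bitmap handling are assumptions, not part of this series):

#include <err.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: fetch the dirty bitmap for memslot 0 of a VM; the caller owns
 * 'bitmap', sized to cover the slot (one bit per guest page). */
static void fetch_dirty_log(int vm_fd, void *bitmap)
{
	struct kvm_dirty_log log = {
		.slot = 0,		/* assumed example memslot */
		.dirty_bitmap = bitmap,
	};

	if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0)
		err(1, "KVM_GET_DIRTY_LOG");
	/* Pages whose bits are set in 'bitmap' were written since the last call. */
}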
On 2023-04-12 12:34:13, Kautuk Consul wrote:
> Hi,
>
> On 2023-04-11 16:35:10, Michael Ellerman wrote:
> > Kautuk Consul writes:
> > > On 2023-04-07 09:01:29, Sean Christopherson wrote:
> > >> On Fri, Apr 07, 2023, Bagas Sanjaya wrote:
> > >> >
Hi,
On 2023-04-11 16:35:10, Michael Ellerman wrote:
> Kautuk Consul writes:
> > On 2023-04-07 09:01:29, Sean Christopherson wrote:
> >> On Fri, Apr 07, 2023, Bagas Sanjaya wrote:
> >> > On Fri, Apr 07, 2023 at 05:31:47AM -0400, Kautuk Consul wrote:
> >>
On 2023-04-07 09:01:29, Sean Christopherson wrote:
> On Fri, Apr 07, 2023, Bagas Sanjaya wrote:
> > On Fri, Apr 07, 2023 at 05:31:47AM -0400, Kautuk Consul wrote:
> > > I used the unlikely() macro on the return values of the k.alloc
> > > calls and found that it change
I used the unlikely() macro on the return values of the k.alloc
calls and found that it changes the code generation a bit.
Optimize all return paths of k.alloc calls by improving
branch prediction on return value of k.alloc.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_nested.c
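As an illustration of the change being described (a sketch, not the actual
hunk from book3s_hv_nested.c), wrapping the allocation-failure test in
unlikely() tells the compiler the error path is cold and keeps it off the
straight-line path:

/* Hedged sketch; the variable and its type are placeholders. */
struct kvm_nested_guest *gp;

gp = kzalloc(sizeof(*gp), GFP_KERNEL);
if (unlikely(!gp))
	return NULL;	/* treated as the unlikely, cold branch */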
On 2023-03-30 10:59:19, Michael Ellerman wrote:
> Kautuk Consul writes:
> > On 2023-03-28 23:02:09, Michael Ellerman wrote:
> >> Kautuk Consul writes:
> >> > On 2023-03-28 15:44:02, Kautuk Consul wrote:
> >> >> On 2023-03-28 20:44:48, Michael El
On 2023-03-28 23:02:09, Michael Ellerman wrote:
> Kautuk Consul writes:
> > On 2023-03-28 15:44:02, Kautuk Consul wrote:
> >> On 2023-03-28 20:44:48, Michael Ellerman wrote:
> >> > Kautuk Consul writes:
> >> > > kvmppc_vcore_create() might not be abl
On 2023-03-28 15:44:02, Kautuk Consul wrote:
> On 2023-03-28 20:44:48, Michael Ellerman wrote:
> > Kautuk Consul writes:
> > > kvmppc_vcore_create() might not be able to allocate memory through
> > > kzalloc. In that case the kvm->arch.online_vcores shouldn'
On 2023-03-28 20:44:48, Michael Ellerman wrote:
> Kautuk Consul writes:
> > kvmppc_vcore_create() might not be able to allocate memory through
> > kzalloc. In that case the kvm->arch.online_vcores shouldn't be
> > incremented.
>
> I agree that looks wrong.
>
Hi,
On 2023-03-23 03:47:18, Kautuk Consul wrote:
> kvmppc_vcore_create() might not be able to allocate memory through
> kzalloc. In that case the kvm->arch.online_vcores shouldn't be
> incremented.
> Add a check for kzalloc failure and return with -ENOMEM from
> kvm
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function and annotate it with SYM_CODE_START_LOCAL
and SYM_CODE_END.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 4 ++--
1 file chang
On 2023-03-27 20:30:02, Nicholas Piggin wrote:
> On Mon Mar 27, 2023 at 8:04 PM AEST, Kautuk Consul wrote:
> > kvmppc_hv_entry isn't called from anywhere other than
> > book3s_hv_rmhandlers.S itself. Removing .global scope for
> > this function and annotating it with SY
On 2023-03-27 15:25:24, Kautuk Consul wrote:
> On 2023-03-27 19:51:34, Nicholas Piggin wrote:
> > On Mon Mar 27, 2023 at 7:34 PM AEST, Kautuk Consul wrote:
> > > On 2023-03-27 14:58:03, Kautuk Consul wrote:
> > > > On 2023-03-27 19:19:37, Nicholas Piggin wrote:
>
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function and annotate it with SYM_CODE_START_LOCAL
and SYM_CODE_END.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 4 ++--
1 file chang
On 2023-03-27 19:51:34, Nicholas Piggin wrote:
> On Mon Mar 27, 2023 at 7:34 PM AEST, Kautuk Consul wrote:
> > On 2023-03-27 14:58:03, Kautuk Consul wrote:
> > > On 2023-03-27 19:19:37, Nicholas Piggin wrote:
> > > > On Thu Mar 16, 2023 at 3:1
On 2023-03-27 15:04:38, Kautuk Consul wrote:
> On 2023-03-27 14:58:03, Kautuk Consul wrote:
> > On 2023-03-27 19:19:37, Nicholas Piggin wrote:
> > > On Thu Mar 16, 2023 at 3:10 PM AEST, Kautuk Consul wrote:
> > > > kvmppc_hv_entry isn
On 2023-03-27 14:58:03, Kautuk Consul wrote:
> On 2023-03-27 19:19:37, Nicholas Piggin wrote:
> > On Thu Mar 16, 2023 at 3:10 PM AEST, Kautuk Consul wrote:
> > > kvmppc_hv_entry isn't called from anywhere other than
> > > book3s_hv_rmhandlers.S itself. Remov
On 2023-03-27 19:19:37, Nicholas Piggin wrote:
> On Thu Mar 16, 2023 at 3:10 PM AEST, Kautuk Consul wrote:
> > kvmppc_hv_entry isn't called from anywhere other than
> > book3s_hv_rmhandlers.S itself. Removing .global scope for
> > this function and annotati
kvmppc_vcore_create() might not be able to allocate memory through
kzalloc. In that case the kvm->arch.online_vcores shouldn't be
incremented.
Add a check for kzalloc failure and return with -ENOMEM from
kvmppc_core_vcpu_create_hv().
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/boo
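A hedged sketch of the pattern being described (caller-side, with placeholder
names; not the applied patch):

vcore = kvmppc_vcore_create(kvm, id);
if (!vcore) {
	err = -ENOMEM;	/* the kzalloc inside kvmppc_vcore_create() failed */
	goto out;
}
/* kvm->arch.online_vcores is only incremented once the vcore exists. */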
Hi,
On 2023-03-22 23:17:35, Michael Ellerman wrote:
> Kautuk Consul writes:
> > kvmppc_hv_entry is called from only 2 locations within
> > book3s_hv_rmhandlers.S. Both of those locations set r4
> > as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
> > So, shift
On 2023-03-21 10:24:36, Kautuk Consul wrote:
> > Is r4 there only used for CONFIG_KVM_BOOK3S_HV_P8_TIMING? Could put it
> > under there. Although you then lose the barrier if it's disabled, that
> > is okay if you're sure that's the only memory operation being
Hi,
On 2023-03-21 14:15:14, Nicholas Piggin wrote:
> On Thu Mar 16, 2023 at 3:10 PM AEST, Kautuk Consul wrote:
> > kvmppc_hv_entry is called from only 2 locations within
> > book3s_hv_rmhandlers.S. Both of those locations set r4
> > as HSTATE_KVM_VCPU(r13) before calling kv
- remove .global scope of kvmppc_hv_entry
- remove r4 argument to kvmppc_hv_entry as it is not required
Changes since v2:
- Add the lwsync instruction before the load to r4 to order
load of vcore before load of vcpu
Kautuk Consul (2):
arch/powerpc/kvm: kvmppc_hv_entry: remove .global scope
kvmppc_hv_entry is called from only 2 locations within
book3s_hv_rmhandlers.S. Both of those locations set r4
as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
So, shift the r4 load instruction to kvmppc_hv_entry and
thus modify the calling convention of this function.
Signed-off-by: Kautuk
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function and annotate it with SYM_INNER_LABEL.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 3 +--
1 file changed, 1 insertion(+), 2 dele
Hi,
On 2023-03-16 14:39:08, Michael Ellerman wrote:
> Kautuk Consul writes:
> > On 2023-03-15 15:48:53, Michael Ellerman wrote:
> >> Kautuk Consul writes:
> >> > kvmppc_hv_entry is called from only 2 locations within
> >> > book3s_hv_rmhandlers.S
On 2023-03-15 10:48:01, Kautuk Consul wrote:
> On 2023-03-15 15:48:53, Michael Ellerman wrote:
> > Kautuk Consul writes:
> > > kvmppc_hv_entry is called from only 2 locations within
> > > book3s_hv_rmhandlers.S. Both of those locations set r4
> > > as
On 2023-03-15 15:48:53, Michael Ellerman wrote:
> Kautuk Consul writes:
> > kvmppc_hv_entry is called from only 2 locations within
> > book3s_hv_rmhandlers.S. Both of those locations set r4
> > as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
> > So, shift
Hi everyone,
Anyone interested in reviewing this small patch set?
I tested it on P8 and it works fine.
Thanks.
On 2023-03-06 07:37:38, Kautuk Consul wrote:
> - remove .global scope of kvmppc_hv_entry
> - remove r4 argument to kvmppc_hv_entry as it is not required
>
> Chan
kvmppc_hv_entry is called from only 2 locations within
book3s_hv_rmhandlers.S. Both of those locations set r4
as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
So, shift the r4 load instruction to kvmppc_hv_entry and
thus modify the calling convention of this function.
Signed-off-by: Kautuk
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function and annotate it with SYM_INNER_LABEL.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 3 +--
1 file changed, 1 insertion(+), 2 dele
- remove .global scope of kvmppc_hv_entry
- remove r4 argument to kvmppc_hv_entry as it is not required
Changes since v1:
- replaced .global by SYM_INNER_LABEL for kvmpcc_hv_entry
Kautuk Consul (2):
arch/powerpc/kvm: kvmppc_hv_entry: remove .global scope
arch/powerpc/kvm: kvmppc_hv_entry
Hi,
On 2023-02-20 10:53:55, Kautuk Consul wrote:
> kvmppc_hv_entry is called from only 2 locations within
> book3s_hv_rmhandlers.S. Both of those locations set r4
> as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
> So, shift the r4 load instruction to kvmppc_hv_entry and
>
On 2023-02-24 16:45:45, Sathvika Vasireddy wrote:
> On 23/02/23 10:39, Kautuk Consul wrote:
>
> > Hi Sathvika,
> > > Just one question though. I went through the code again and I think
> > > that this would not be the proper place to insert a SYM_FUNC_END
> >
Hi Sathvika,
>
> Just one question though. I went through the code again and I think
> that this would not be the proper place to insert a SYM_FUNC_END
> because we haven't entered the guest at this point and the name
> of the function is kvmppc_hv_entry which I think implies that
> this SYM_FUNC_END s
> You are correct, the patch is wrong because it fails to account for IO
> accesses.
Okay, I looked at the PowerPC ISA and found:
"The memory barrier provides an ordering function for the storage accesses
caused by Load, Store, and dcbz instructions that are executed by the processor
executing th
On 2023-02-22 20:16:10, Paul E. McKenney wrote:
> On Thu, Feb 23, 2023 at 09:31:48AM +0530, Kautuk Consul wrote:
> > On 2023-02-22 09:47:19, Paul E. McKenney wrote:
> > > On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote:
> > > > A link from ibm.com sta
On 2023-02-23 14:51:25, Michael Ellerman wrote:
> Hi Paul,
>
> "Paul E. McKenney" writes:
> > On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote:
> >> A link from ibm.com states:
> >> "Ensures that all instructions preceding the call
On 2023-02-22 09:47:19, Paul E. McKenney wrote:
> On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote:
> > A link from ibm.com states:
> > "Ensures that all instructions preceding the call to __lwsync
> > complete before any subsequent store instructions c
> >> I'd have preferred 'asm volatile' though.
> > Sorry about that! That wasn't the intent of this patch.
> > Another patch series should probably change this manner of #defining
> > assembly.
>
> Why add new lines the wrong way and then need another patch to make them
> right?
>
> When you
On Wed, Feb 22, 2023 at 09:44:54AM +, Christophe Leroy wrote:
>
>
> Le 22/02/2023 à 10:30, Kautuk Consul a écrit :
> > Again, could some IBM/non-IBM employees do basic sanity kernel load
> > testing on PPC64 UP and SMP systems for this patch?
> > would deeply appr
>
> Reviewed-by: Christophe Leroy
Thanks!
>
> > ---
> > arch/powerpc/include/asm/barrier.h | 7 +++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/arch/powerpc/include/asm/barrier.h
> > b/arch/powerpc/include/asm/barrier.h
> > index b95b666f0374..e088dacc0ee8 100644
> > --- a/
Again, could some IBM/non-IBM employees do basic sanity kernel load
testing on PPC64 UP and SMP systems for this patch?
I would deeply appreciate it! :-)
Thanks again!
Sorry, sent the wrong patch!
Please ignore this one.
Sending the v2 in another email.
On Wed, Feb 22, 2023 at 02:31:12PM +0530, Kautuk Consul wrote:
> A link from ibm.com states:
> "Ensures that all instructions preceding the call to __lwsync
> complete before any subsequent stor
re defined to lwsync.
But this same understanding applies to parallel pipeline
execution on each PowerPC processor.
So, use the lwsync instruction for rmb() and wmb() on the PPC
architectures that support it.
Signed-off-by: Kautuk Consul
---
arch/powerpc/include/asm/barrier.h | 7 +++
1 file
re defined to lwsync.
But this same understanding applies to parallel pipeline
execution on each PowerPC processor.
So, use the lwsync instruction for rmb() and wmb() on the PPC
architectures that support it.
Also removed some useless spaces.
Signed-off-by: Kautuk Consul
---
arch/powerpc/i
> No, I don't mean to use the existing #ifdef/elif/else.
>
> Define an #ifdef /#else dedicated to xmb macros.
>
> Something like that:
>
> @@ -35,9 +35,15 @@
>* However, on CPUs that don't support lwsync, lwsync actually maps to a
>* heavy-weight sync, so smp_wmb() can be a lighter-wei
On Wed, Feb 22, 2023 at 08:28:19AM +, Christophe Leroy wrote:
>
>
> Le 22/02/2023 à 09:21, Kautuk Consul a écrit :
> >> On Wed, Feb 22, 2023 at 07:02:34AM +, Christophe Leroy wrote:
> >>>> +/* Redefine rmb() to lwsync. */
> >>>
> >&
> On Wed, Feb 22, 2023 at 07:02:34AM +, Christophe Leroy wrote:
> > > +/* Redefine rmb() to lwsync. */
> >
> > WHat's the added value of this comment ? Isn't it obvious in the line
> > below that rmb() is being defined to lwsync ? Please avoid useless comments.
> Sure.
Sorry, forgot to add th
On Wed, Feb 22, 2023 at 07:02:34AM +, Christophe Leroy wrote:
>
>
> Le 22/02/2023 à 07:01, Kautuk Consul a écrit :
> > A link from ibm.com states:
> > "Ensures that all instructions preceding the call to __lwsync
> > complete before any subsequent st
Hi All,
On Wed, Feb 22, 2023 at 11:31:07AM +0530, Kautuk Consul wrote:
> /* The sub-arch has lwsync */
> #if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
> -#define SMPWMB LWSYNC
> +#undef rmb
> +#undef wmb
> +/* Redefine rmb() to lwsync. */
> +#define r
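The quoted hunk is truncated by the archive; a hedged reconstruction of the
idea (not the literal patch, and note that later in the thread the change is
judged wrong because it fails to account for I/O accesses) would be roughly:

/* Sketch only, using the kernel's usual "memory"-clobber form. */
#if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
#undef rmb
#undef wmb
#define rmb()	__asm__ __volatile__ ("lwsync" : : : "memory")
#define wmb()	__asm__ __volatile__ ("lwsync" : : : "memory")
#endif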
re defined to lwsync.
But this same understanding applies to parallel pipeline
execution on each PowerPC processor.
So, use the lwsync instruction for rmb() and wmb() on the PPC
architectures that support it.
Also removed some useless spaces.
Signed-off-by: Kautuk Consul
---
arch/powerpc/i
On Mon, Feb 20, 2023 at 01:41:38PM +0530, Kautuk Consul wrote:
> On Mon, Feb 20, 2023 at 01:31:40PM +0530, Sathvika Vasireddy wrote:
> > Placing SYM_FUNC_END(kvmppc_hv_entry) before kvmppc_got_guest() should do:
> >
> > @@ -502,12 +500,10 @@ END_FTR_SECTION_IF
On Mon, Feb 20, 2023 at 01:31:40PM +0530, Sathvika Vasireddy wrote:
> Placing SYM_FUNC_END(kvmppc_hv_entry) before kvmppc_got_guest() should do:
>
> @@ -502,12 +500,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
> * *
> */
Hi Sathvika,
(Sorry didn't include list in earlier email.)
On Mon, Feb 20, 2023 at 12:35:09PM +0530, Sathvika Vasireddy wrote:
> Hi Kautuk,
>
> On 20/02/23 10:53, Kautuk Consul wrote:
> > kvmppc_hv_entry isn't called from anywhere other than
> > book3s_hv_rmhand
kvmppc_hv_entry is called from only 2 locations within
book3s_hv_rmhandlers.S. Both of those locations set r4
as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
So, shift the r4 load instruction to kvmppc_hv_entry and
thus modify the calling convention of this function.
Signed-off-by: Kautuk
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers
- remove .global scope of kvmppc_hv_entry
- remove r4 argument to kvmppc_hv_entry as it is not required
Kautuk Consul (2):
arch/powerpc/kvm: kvmppc_hv_entry: remove .global scope
arch/powerpc/kvm: kvmppc_hv_entry: remove r4 argument
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 10 --
1
Add a check for p->state == TASK_RUNNING so that any wake-ups on
task_struct p in the interim lead to 0 being returned by get_wchan().
Signed-off-by: Kautuk Consul
---
arch/powerpc/kernel/process.c |3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/ker
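A hedged sketch of the added check (its exact placement inside get_wchan() is
an assumption, not taken from the diff):

/* If the task was woken up while its stack is being walked,
 * report "not waiting on anything" (0) instead of a stale address. */
if (p->state == TASK_RUNNING)
	return 0;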
Ugh, sorry I forgot to attach the stress_32k.c test-case to this email.
Please find it attached to this one.
#include <pthread.h>	/* header names were stripped by the archive; these five are assumed */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define ALLOC_BYTE (512 * 1024)
#define COUNT 50
void *alloc_function_one(void *ptr);
void *alloc_function_two(void *ptr);
void *alloc_fu
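The attached stress_32k.c is not reproduced by the archive; purely as an
assumed sketch of what a harness with the declarations above could look like
(this is not the author's actual test case):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define ALLOC_BYTE (512 * 1024)
#define COUNT 50

/* Assumed body: each thread repeatedly allocates, touches and frees an
 * ALLOC_BYTE-sized buffer to generate memory pressure. */
static void *alloc_loop(void *unused)
{
	(void)unused;
	for (int i = 0; i < COUNT; i++) {
		char *buf = malloc(ALLOC_BYTE);
		if (!buf)
			break;
		memset(buf, 0xab, ALLOC_BYTE);
		free(buf);
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, alloc_loop, NULL);
	pthread_create(&t2, NULL, alloc_loop, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}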
Please review these patches.
Signed-off-by: Mohd. Faris
Signed-off-by: Kautuk Consul
---
>>
>> I have done that port already :-) It's in powerpc-next which I will send
>> to Linus to pull today or tomorrow
Oops, sorry I didn't know that as I don't regularly follow the powerpc tree.
I just did this change for all architectures. :)
>
> BTW. Feel free to review what I've done, it's
retryable as well as killable.
These changes reduce the mmap_sem hold time, which is crucial
during OOM killer invocation.
Port these changes to powerpc.
Signed-off-by: Mohd. Faris
Signed-off-by: Kautuk Consul
---
arch/powerpc/mm/fault.c | 51 +-
1 files
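For reference, a hedged sketch of the retryable/killable fault pattern being
ported (variable names are assumptions; this is not the actual
arch/powerpc/mm/fault.c hunk):

	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
	int fault;

	if (is_write)
		flags |= FAULT_FLAG_WRITE;
retry:
	fault = handle_mm_fault(mm, vma, address, flags);
	if (fault & VM_FAULT_RETRY) {
		/* mmap_sem was dropped while blocking; give up immediately if a
		 * fatal signal (e.g. from the OOM killer) is pending. */
		if (fatal_signal_pending(current))
			return 0;
		/* Only one retry is allowed; a real implementation re-takes
		 * mmap_sem and re-finds the VMA before retrying. */
		flags &= ~FAULT_FLAG_ALLOW_RETRY;
		goto retry;
	}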
Hi,
Sorry, I found one more defect in this patch below.
I'm going to send a v3 for this patch.
On Tue, Mar 20, 2012 at 10:12 AM, Kautuk Consul wrote:
> Commit d065bd810b6deb67d4897a14bfe21f8eb526ba99
> (mm: retry page fault when blocking on disk transfer) and
&g
On Tue, Mar 20, 2012 at 10:06 AM, Peter Zijlstra wrote:
> On Tue, 2012-03-20 at 09:19 -0400, Kautuk Consul wrote:
>> +#ifdef CONFIG_PPC_SMLPAR
>> + if (firmware_has_feature(FW_FEATURE_CMO)) {
>> +
retryable as well as killable.
These changes reduce the mmap_sem hold time, which is crucial
during OOM killer invocation.
Port these changes to powerpc.
Signed-off-by: Mohd. Faris
Signed-off-by: Kautuk Consul
---
arch/powerpc/mm/fault.c | 51 +-
1 files
>
> What about arch/um/?
> Does UML not need this change?
Oh yes, I'm extremely sorry, I accidentally missed that one out.
Mind if I send it separately?
>
> --
> Thanks,
> //richard
retryable as well as killable.
These changes reduce the mmap_sem hold time, which is crucial
during OOM killer invocation.
Port these changes to powerpc.
Signed-off-by: Mohd. Faris
Signed-off-by: Kautuk Consul
---
arch/powerpc/mm/fault.c | 51 +-
1 files
porting changes were accepted, my co-worker and I decided to take the
initiative to port these changes to all other MMU-based architectures.
Please review and do write back if there is any way I need to
improve/rewrite any
of these patches.
Signed-off-by: Mohd. Faris
Signed-off-by: Kautuk Consul