On 2023-03-28 15:44:02, Kautuk Consul wrote:
> On 2023-03-28 20:44:48, Michael Ellerman wrote:
> > Kautuk Consul <[email protected]> writes:
> > > kvmppc_vcore_create() might not be able to allocate memory through
> > > kzalloc. In that case the kvm->arch.online_vcores shouldn't be
> > > incremented.
> > 
> > I agree that looks wrong.
> > 
> > Have you tried to test what goes wrong if it fails? It looks like it
> > will break the LPCR update, which likely will cause the guest to crash
> > horribly.
Also, are you referring to the code in kvmppc_update_lpcr()?
That code will not crash, as it checks vc for NULL before trying to
dereference it.
But the following 2 places that use arch.online_vcores will have
broken logic if the usermode test-case doesn't tear down the
kvm context after the -ENOMEM vcpu allocation failure:
book3s_hv.c:3030:       if (!kvm->arch.online_vcores) {
book3s_hv_rm_mmu.c:44:  if (kvm->arch.online_vcores == 1 && local_paca->kvm_hstate.kvm_vcpu)

> Not sure about LPCR update, but with and without the patch qemu exits
> and so the kvm context is pulled down fine.
> > 
> > You could use CONFIG_FAIL_SLAB and fail-nth etc. to fail just one
> > allocation for a guest. Or probably easier to just hack the code to fail
> > the 4th time it's called using a static counter.
> I am using live debug and I set the r3 return value to 0x0 after the
> call to kzalloc.
> > 
> > Doesn't really matter but could be interesting.
> With and without this patch qemu quits with:
> qemu-system-ppc64: kvm_init_vcpu: kvm_get_vcpu failed (0): Cannot allocate memory
> 
> That's because qemu shuts down when any vcpu fails to be
> allocated.
> > 
> > > Add a check for kzalloc failure and return with -ENOMEM from
> > > kvmppc_core_vcpu_create_hv().
> > >
> > > Signed-off-by: Kautuk Consul <[email protected]>
> > > ---
> > >  arch/powerpc/kvm/book3s_hv.c | 10 +++++++---
> > >  1 file changed, 7 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> > > index 6ba68dd6190b..e29ee755c920 100644
> > > --- a/arch/powerpc/kvm/book3s_hv.c
> > > +++ b/arch/powerpc/kvm/book3s_hv.c
> > > @@ -2968,13 +2968,17 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
> > >                   pr_devel("KVM: collision on id %u", id);
> > >                   vcore = NULL;
> > >           } else if (!vcore) {
> > > +                 vcore = kvmppc_vcore_create(kvm,
> > > +                                 id & ~(kvm->arch.smt_mode - 1));
> > 
> > That line doesn't need to be wrapped, we allow 90 columns.
> > 
> > > +                 if (unlikely(!vcore)) {
> > > +                         mutex_unlock(&kvm->lock);
> > > +                         return -ENOMEM;
> > > +                 }
> > 
> > Rather than introducing a new return point here, I think it would be
> > preferable to use the existing !vcore case below.
> > 
> > >                   /*
> > >                    * Take mmu_setup_lock for mutual exclusion
> > >                    * with kvmppc_update_lpcr().
> > >                    */
> > > -                 err = -ENOMEM;
> > > -                 vcore = kvmppc_vcore_create(kvm,
> > > -                                 id & ~(kvm->arch.smt_mode - 1));
> > 
> > So leave that as is (maybe move the comment down).
> > 
> > And wrap the below in:
> > 
> >  +                      if (vcore) {
> > 
> > >                   mutex_lock(&kvm->arch.mmu_setup_lock);
> > >                   kvm->arch.vcores[core] = vcore;
> > >                   kvm->arch.online_vcores++;
> >                     
> >                     mutex_unlock(&kvm->arch.mmu_setup_lock);
> >  +                      }
> >             }
> >     }
> > 
> > Meaning the vcore == NULL case will fall through to here and return via
> > this existing path:
> > 
> >     mutex_unlock(&kvm->lock);
> > 
> >     if (!vcore)
> >             return err;
> > 
> > 
> > cheers

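To make sure I've understood the suggested restructuring, here is a minimal
userspace model of the control flow you describe: keep `err = -ENOMEM` and the
kvmppc_vcore_create() call where they are, only take mmu_setup_lock and bump
online_vcores inside an `if (vcore)` block, and let the NULL case fall through
to the existing `if (!vcore) return err;` path. The types and helpers below
(`vcore_create()`, the `fail_alloc` knob, the simplified structs) are
stand-ins for illustration only, not the real kernel definitions:

```c
#include <pthread.h>
#include <stdlib.h>

#define ENOMEM 12

struct kvm_arch {
	int online_vcores;
	void *vcores[8];
	pthread_mutex_t mmu_setup_lock;
};

struct kvm {
	pthread_mutex_t lock;
	struct kvm_arch arch;
};

static int fail_alloc;	/* test knob: force the allocation to fail */

/* Stand-in for kvmppc_vcore_create(); returns NULL on allocation failure. */
static void *vcore_create(struct kvm *kvm)
{
	(void)kvm;
	return fail_alloc ? NULL : malloc(16);
}

/* Models the error path suggested for kvmppc_core_vcpu_create_hv(). */
static int vcpu_create(struct kvm *kvm, int core)
{
	void *vcore;
	int err = -ENOMEM;		/* left as is, per the review */

	pthread_mutex_lock(&kvm->lock);
	vcore = vcore_create(kvm);
	if (vcore) {
		/*
		 * Take mmu_setup_lock for mutual exclusion
		 * with kvmppc_update_lpcr().
		 */
		pthread_mutex_lock(&kvm->arch.mmu_setup_lock);
		kvm->arch.vcores[core] = vcore;
		kvm->arch.online_vcores++;	/* only bumped on success */
		pthread_mutex_unlock(&kvm->arch.mmu_setup_lock);
	}
	pthread_mutex_unlock(&kvm->lock);

	if (!vcore)
		return err;		/* existing single return point */
	return 0;
}
```

The point being that online_vcores can no longer be incremented past the
number of vcores actually created, without adding a second unlock/return site.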