David Hildenbrand writes:
> For now, distributions implement advanced udev rules to essentially
> - Don't online any hotplugged memory (s390x)
> - Online all memory to ZONE_NORMAL (e.g., most virt environments like
> hyperv)
> - Online all memory to ZONE_MOVABLE in case the zone imbalance is
> tamed, which seems to be good enough for now.
>
> Cc: "K. Y. Srinivasan"
> Cc: Haiyang Zhang
> Cc: Stephen Hemminger
> Cc: Wei Liu
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Cc: "Rafael J. Wysocki"
> Cc: Ba
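The "online all hotplugged memory" policies in the quoted list usually boil down to a one-line udev rule; a sketch of the common pattern (the exact match keys differ between distributions, this is not any distribution's verbatim rule):

```
# Online every memory block as it appears (ZONE_NORMAL); use
# ATTR{state}="online_movable" instead for the ZONE_MOVABLE variant.
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
```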
Baoquan He writes:
> On 03/17/20 at 11:49am, David Hildenbrand wrote:
>> Distributions nowadays use udev rules ([1] [2]) to specify if and
>> how to online hotplugged memory. The rules seem to get more complex with
>> many special cases. Due to the various special cases,
>> CONFIG_MEMORY_HOTPLUG_
Baoquan He writes:
> Is there a reason hyperV need boot with small memory, then enlarge it
> with huge memory? Since it's a real case in hyperV, I guess there must
> be reason, I am just curious.
>
It doesn't really *need* to, but this can be utilized in e.g. 'hot
standby' schemes, I believe. Also
le_mmio_return() is also able to extract 'struct kvm_run' from
'struct kvm_vcpu'. This likely deserves its own patch though.
> if (ret)
> return ret;
> }
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 74bdb7bf3295..e18faea89146 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3135,7 +3135,7 @@ static long kvm_vcpu_ioctl(struct file *filp,
> synchronize_rcu();
> put_pid(oldpid);
> }
> - r = kvm_arch_vcpu_ioctl_run(vcpu, vcpu->run);
> + r = kvm_arch_vcpu_ioctl_run(vcpu);
> trace_kvm_userspace_exit(vcpu->run->exit_reason, r);
> break;
> }
Looked at non-x86 arches just briefly but there seems to be no
controversy here, so
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
ut;
> }
>
> - sync_regs(vcpu, kvm_run);
> + sync_regs(vcpu);
> enable_cpu_timer_accounting(vcpu);
>
> might_fault();
> @@ -4393,7 +4400,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> }
>
> disable_cpu_timer_accounting(vcpu);
> - store_regs(vcpu, kvm_run);
> + store_regs(vcpu);
>
> kvm_sigset_deactivate(vcpu);
Haven't tried to compile this but the change itself looks obviously
correct, so
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
gt; --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1892,7 +1892,6 @@ static void handle_access_fault(struct kvm_vcpu *vcpu,
> phys_addr_t fault_ipa)
> /**
> * kvm_handle_guest_abort - handles all 2nd stage aborts
> * @vcpu: the VCPU pointer
> - * @run: the kvm_run structure
> *
> * Any abort that gets to the host is almost guaranteed to be caused by a
> * missing second stage translation table entry, which can mean that either
> the
> @@ -1901,7 +1900,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu,
> phys_addr_t fault_ipa)
> * space. The distinction is based on the IPA causing the fault and whether
> this
> * memory region has been registered as standard RAM by user space.
> */
> -int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> {
> unsigned long fault_status;
> phys_addr_t fault_ipa;
> @@ -1980,7 +1979,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu,
> struct kvm_run *run)
> * of the page size.
> */
> fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
> - ret = io_mem_abort(vcpu, run, fault_ipa);
> + ret = io_mem_abort(vcpu, fault_ipa);
> goto out_unlock;
> }
Haven't tried to compile this but the change itself looks obviously
correct, so
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
5a3987f3ebf3 100644
> --- a/arch/powerpc/kvm/book3s_hv_nested.c
> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
> @@ -290,8 +290,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
> r = RESUME_HOST;
> break;
> }
> - r = kvmhv_run_single_vcpu(vcpu->arch.kvm_run, vcpu, hdec_exp,
> - lpcr);
> + r = kvmhv_run_single_vcpu(vcpu->run, vcpu, hdec_exp, lpcr);
> } while (is_kvmppc_resume_guest(r));
>
> /* save L2 state for return */
FWIW,
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
Tianjia Zhang writes:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining red
David Hildenbrand writes:
> On 02/10/2018 15:47, Michal Hocko wrote:
...
>>
>> Why do you need a generic hotplug rule in the first place? Why don't you
>> simply provide different set of rules for different usecases? Let users
>> decide which usecase they prefer rather than try to be clever whic
Michal Hocko writes:
> On Wed 03-10-18 15:38:04, Vitaly Kuznetsov wrote:
>> David Hildenbrand writes:
>>
>> > On 02/10/2018 15:47, Michal Hocko wrote:
>> ...
>> >>
>> >> Why do you need a generic hotplug rule in the first place? Why don
Dave Hansen writes:
> On 10/03/2018 06:52 AM, Vitaly Kuznetsov wrote:
>> It is more than just memmaps (e.g. forking udev process doing memory
>> onlining also needs memory) but yes, the main idea is to make the
>> onlining synchronous with hotplug.
>
> That's a g
Sean Christopherson writes:
> To make it obvious that KVM doesn't have a lurking bug, cleanup eVMCS
> enabling if kvm_init() fails even though the enabling doesn't strictly
> need to be unwound. eVMCS enabling only toggles values that are fully
> contained in the VMX module, i.e. it's technicall
Sean Christopherson writes:
> On Thu, Nov 03, 2022, Vitaly Kuznetsov wrote:
>> Sean Christopherson writes:
>> > + /*
>> > + * Reset everything to support using non-enlightened VMCS access later
>> > + * (e.g. when we reloa
-
> - if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
> - vmx_x86_ops.enable_l2_tlb_flush
> - = hv_enable_l2_tlb_flush;
> -
> - } else {
> - enlightened_vmcs = false;
> - }
> -#endif
> + hv_init_evmcs();
>
> r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
> __alignof__(struct vcpu_vmx), THIS_MODULE);
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
e VP assist page is unmapped
> during CPU hot unplug, and so KVM's clearing of the eVMCS controls needs
> to occur with CPU hot(un)plug disabled, otherwise KVM could attempt to
> write to a CPU's VP assist page after it's unmapped.
>
> Reported-by: Vitaly Kuznetsov
&g
Hi,
s,memhp_auto_offline,memhp_auto_online, in the subject please :-)
Nathan Fontenot writes:
> Commit 31bc3858e "add automatic onlining policy for the newly added memory"
> provides the capability to have added memory automatically onlined
> during add, but this appears to be slightly broken.
Michal Hocko writes:
> On Wed 22-02-17 10:32:34, Vitaly Kuznetsov wrote:
> [...]
>> > There is a workaround in that a user could online the memory or have
>> > a udev rule to online the memory by using the sysfs interface. The
>> > sysfs interface to online
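For reference, the sysfs knobs in question look like this; a sketch of manual use (the memory block number is hypothetical, and the auto-onlining knob depends on the kernel configuration):

```shell
# Online one hotplugged memory block by hand (block number is an example)
echo online > /sys/devices/system/memory/memory32/state

# Or set the global policy so newly added blocks are onlined automatically
echo online > /sys/devices/system/memory/auto_online_blocks
```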
Michal Hocko writes:
> On Thu 23-02-17 14:31:24, Vitaly Kuznetsov wrote:
>> Michal Hocko writes:
>>
>> > On Wed 22-02-17 10:32:34, Vitaly Kuznetsov wrote:
>> > [...]
>> >> > There is a workaround in that a user could online the memory or have
&
Michal Hocko writes:
> On Thu 23-02-17 16:49:06, Vitaly Kuznetsov wrote:
>> Michal Hocko writes:
>>
>> > On Thu 23-02-17 14:31:24, Vitaly Kuznetsov wrote:
>> >> Michal Hocko writes:
>> >>
>> >> > On Wed 22-02-17 10:32:34, Vitaly
Michal Hocko writes:
> On Thu 23-02-17 17:36:38, Vitaly Kuznetsov wrote:
>> Michal Hocko writes:
> [...]
>> > Is a grow from 256M -> 128GB really something that happens in real life?
>> > Don't get me wrong but to me this sounds quite exaggerated. Hotmem ad
Michal Hocko writes:
> On Thu 23-02-17 19:14:27, Vitaly Kuznetsov wrote:
>> Michal Hocko writes:
>>
>> > On Thu 23-02-17 17:36:38, Vitaly Kuznetsov wrote:
>> >> Michal Hocko writes:
>> > [...]
>> >> > Is a grow from 256M -> 128G
Michal Hocko writes:
> On Fri 24-02-17 15:10:29, Vitaly Kuznetsov wrote:
>> Michal Hocko writes:
>>
>> > On Thu 23-02-17 19:14:27, Vitaly Kuznetsov wrote:
> [...]
>> >> Virtual guests under stress were getting into OOM easily and the OOM
>> >>
Michal Hocko writes:
> On Fri 24-02-17 16:05:18, Vitaly Kuznetsov wrote:
>> Michal Hocko writes:
>>
>> > On Fri 24-02-17 15:10:29, Vitaly Kuznetsov wrote:
> [...]
>> >> Just did a quick (and probably dirty) test, increasing guest memory from
>> &g
Michal Hocko writes:
> On Fri 24-02-17 17:09:13, Vitaly Kuznetsov wrote:
>> I have a small guest and I want to add more memory to it and the
>> result is ... OOM. Not something I expected.
>
> Which is not all that unexpected if you use a technology which has to
> alloca
Michal Hocko writes:
> On Fri 24-02-17 17:40:25, Vitaly Kuznetsov wrote:
>> Michal Hocko writes:
>>
>> > On Fri 24-02-17 17:09:13, Vitaly Kuznetsov wrote:
> [...]
>> >> While this will most probably work for me I still disagree with the
>> >>