On 26/07/18 15:50, Roger Pau Monné wrote:
> On Thu, Jul 26, 2018 at 01:22:33PM +0200, Juergen Gross wrote:
>> On 26/07/18 13:11, Roger Pau Monné wrote:
>>> On Thu, Jul 26, 2018 at 10:45:08AM +0100, George Dunlap wrote:
>>>> On Thu, Jul 26, 2018 at 12:07 AM, Boris Ostrovsky
>>>> <boris.ostrov...@oracle.com> wrote:
>>>>> On 07/25/2018 02:56 PM, Andrew Cooper wrote:
>>>>>> On 25/07/18 17:29, Juergen Gross wrote:
>>>>>>> On 25/07/18 18:12, Roger Pau Monné wrote:
>>>>>>>> On Wed, Jul 25, 2018 at 05:05:35PM +0300, berca...@amazon.com wrote:
>>>>>>>>> On 07/25/2018 05:02 PM, Wei Liu wrote:
>>>>>>>>>> On Wed, Jul 25, 2018 at 03:41:11PM +0200, Juergen Gross wrote:
>>>>>>>>>>> On 25/07/18 15:35, Roger Pau Monné wrote:
>>>>>>>>>>>>> What could be causing the available memory loss problem?
>>>>>>>>>>>> That seems to be Linux aggressively ballooning out memory, you go from
>>>>>>>>>>>> 7129M total memory to 246M. Are you creating a lot of domains?
>>>>>>>>>>> This might be related to the tools thinking dom0 is a PV domain.
>>>>>>>>>> Good point.
>>>>>>>>>>
>>>>>>>>>> In that case, xenstore-ls -fp would also be useful. The output should
>>>>>>>>>> show the balloon target for Dom0.
>>>>>>>>>>
>>>>>>>>>> You can also try to set the autoballoon to off in /etc/xen/xl.cfg
>>>>>>>>>> to see if it makes any difference.
>>>>>>>>>>
>>>>>>>>>> Wei.
>>>>>>>>> Also tried setting autoballooning off, but it had no effect.
>>>>>>>> This is a Linux/libxl issue, and I'm not sure what the best way to
>>>>>>>> solve it is. Linux has the following 'workaround' in the balloon driver:
>>>>>>>>
>>>>>>>> err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
>>>>>>>>                &static_max);
>>>>>>>> if (err != 1)
>>>>>>>>     static_max = new_target;
>>>>>>>> else
>>>>>>>>     static_max >>= PAGE_SHIFT - 10;
>>>>>>>> target_diff = xen_pv_domain() ? 0
>>>>>>>>             : static_max - balloon_stats.target_pages;
>>>>>>> Hmm, shouldn't PVH behave the same way as PV here? I don't think
>>>>>>> there is memory missing for PVH, as opposed to HVM's firmware memory.
>>>>>>>
>>>>>>> Adding Boris for a second opinion.
>>>>>
>>>>> (Notwithstanding Andrew's rant below ;-))
>>>>>
>>>>> I am trying to remember --- what memory were we trying not to online for
>>>>> HVM here?
>>>>
>>>> My general memory of the situation is this:
>>>>
>>>> * Balloon drivers are told to reach a "target" value for max_pages.
>>>> * max_pages includes all memory assigned to the guest, including video
>>>> ram, "special" pages, ipxe ROMs, bios ROMs from passed-through
>>>> devices, and so on.
>>>> * Unfortunately, the balloon driver doesn't know what its max_pages
>>>> value is and can't read it.
>>>> * So what the balloon drivers do at the moment (as I understand it) is
>>>> look at the memory *reported as RAM*, and do a calculation:
>>>>   visible_ram - target_max_pages = pages_in_balloon
>>>>
>>>> You can probably see why this won't work -- the result is that the
>>>> guest balloons down to (target_max_pages + non_ram_pages).  This is
>>>> kind of messy for normal guests, but when you have a
>>>> populate-on-demand guest, that leaves non_ram_pages worth of PoD RAM
>>>> in the guest.  The hypervisor then spends a huge amount of work
>>>> swapping the PoD pages around under the guest's feet, until it can't
>>>> find any more zeroed guest pages to use, and it crashes the guest.
>>>>
>>>> The kludge we have right now is to make up a number for HVM guests
>>>> which is slightly larger than non_ram_pages, and tell the guest to aim
>>>> for *that* instead.
>>>>
>>>> I think what we need is for the *toolstack* to calculate the size of
>>>> the balloon rather than the guest, and tell the balloon driver how big
>>>> to make its balloon, rather than the balloon driver trying to figure
>>>> that out on its own.
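
Just to make George's arithmetic concrete (made-up numbers, and obviously
not real driver code, just the calculation as a standalone program):

#include <stdio.h>

int main(void)
{
    /* Made-up example values, all in pages. */
    unsigned long max_pages     = 1048576;  /* total assignment (4G) */
    unsigned long non_ram_pages = 8192;     /* vram, special pages, ROMs */
    unsigned long visible_ram   = max_pages - non_ram_pages; /* e820 RAM */
    unsigned long target        = 524288;   /* toolstack asks for 2G */

    /* What the balloon driver computes today: */
    unsigned long balloon = visible_ram - target;

    printf("balloon size: %lu pages\n", balloon);
    printf("pages the guest keeps: %lu (= target + non_ram_pages)\n",
           max_pages - balloon);

    /* In a PoD guest those surplus non_ram_pages stay as PoD entries. */
    return 0;
}
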
>>>
>>> Maybe the best option would be for the toolstack to fetch the e820
>>> memory map and set the target based on the size of the RAM regions in
>>> there for PVH Dom0? That would certainly match the expectations of the
>>> guest.
>>>
>>> Note that for DomUs, if hvmloader (or any other component) inside the
>>> guest changes the memory map, it would also have to adjust the value
>>> in the xenstore 'target' node.
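
The toolstack side of that would basically be summing the E820_RAM regions
of the map it builds for the domain and writing the result as the target.
A rough standalone sketch (the struct just mirrors the public e820 layout,
it is not the actual libxl code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define E820_RAM 1

/* Stand-in for Xen's public struct e820entry. */
struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/* Sum of all RAM regions in a memory map, in bytes. */
static uint64_t e820_ram_bytes(const struct e820entry *map, unsigned int nr)
{
    uint64_t total = 0;
    unsigned int i;

    for ( i = 0; i < nr; i++ )
        if ( map[i].type == E820_RAM )
            total += map[i].size;

    return total;
}

int main(void)
{
    /* Example map: low RAM, a reserved hole, RAM above 4G. */
    struct e820entry map[] = {
        { 0x0000000000000000ULL, 0x00000000a0000000ULL, E820_RAM },
        { 0x00000000a0000000ULL, 0x0000000060000000ULL, 2 /* reserved */ },
        { 0x0000000100000000ULL, 0x0000000100000000ULL, E820_RAM },
    };

    printf("target = %" PRIu64 " kB\n", e820_ram_bytes(map, 3) >> 10);
    return 0;
}
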
>>
>> How would it do that later when the guest is already running?
> 
> hvmloader should modify the 'target' xenstore node if it changes the
> memory map.
> 
> So the value provided by the toolstack would match the amount of RAM
> in the memory map up to the point where the guest is started; from
> there on, anything inside the guest that changes the memory map should
> also update the xenstore value.
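
So conceptually whoever takes RAM out of the map would have to shrink the
target by the same amount, something like the sketch below (purely
illustrative; xenstore_write_target() is a made-up placeholder, not an
existing function):

#include <stdint.h>
#include <stdio.h>

/* Placeholder for actually writing the domain's memory/target node. */
static void xenstore_write_target(uint64_t target_kb)
{
    printf("memory/target <- %llu\n", (unsigned long long)target_kb);
}

/*
 * If a component turns stolen_bytes of RAM into a hole (e.g. to place a
 * ROM), shrink the target by the same amount so map and target stay in
 * sync.
 */
static uint64_t adjust_target(uint64_t cur_target_kb, uint64_t stolen_bytes)
{
    uint64_t new_target = cur_target_kb - (stolen_bytes >> 10);

    xenstore_write_target(new_target);
    return new_target;
}

int main(void)
{
    /* Example: 4G target, 2M of RAM converted into a ROM hole. */
    adjust_target(4ULL << 20, 2ULL << 20);
    return 0;
}
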

So what should libxl write into target when the user specifies a new
value via "xl mem-set" then? It doesn't know whether the guest is still
trying to reach the old target, so it can't assume that the currently
allocated memory and the target value in Xenstore match.
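
The ambiguity in a trivial example (made-up numbers):

#include <stdio.h>

int main(void)
{
    /* Made-up numbers, in kB. */
    unsigned long old_target_kb = 2097152;  /* what libxl last asked for */
    unsigned long current_kb    = 2361344;  /* what Xen reports as allocated */

    /*
     * Is the difference non-RAM overhead that a new target would have to
     * keep accounting for, or just memory the balloon driver hasn't
     * released yet?  From outside the guest both cases look the same.
     */
    printf("unaccounted: %lu kB\n", current_kb - old_target_kb);
    return 0;
}
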


Juergen
