On 24.04.2018 16:44, Igor Mammedov wrote:
> On Tue, 24 Apr 2018 15:41:23 +0200
> David Hildenbrand <da...@redhat.com> wrote:
> 
>> On 24.04.2018 15:31, Igor Mammedov wrote:
>>> On Mon, 23 Apr 2018 14:52:37 +0200
>>> David Hildenbrand <da...@redhat.com> wrote:
>>>   
>>>>>     
>>>>>> +    /* we will need a new memory slot for kvm and vhost */
>>>>>> +    if (kvm_enabled() && !kvm_has_free_slot(machine)) {
>>>>>> +        error_setg(errp, "hypervisor has no free memory slots left");
>>>>>> +        return;
>>>>>> +    }
>>>>>> +    if (!vhost_has_free_slot()) {
>>>>>> +        error_setg(errp, "a used vhost backend has no free memory slots left");
>>>>>> +        return;
>>>>>> +    }    
>>>>> move these checks to pre_plug time
>>>>>     
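A rough sketch of what such a pre_plug check could look like (the hook
name and signature are illustrative, not taken from the patch):

    /* Illustrative sketch: fail early, before the device is realized,
     * if either KVM or a used vhost backend is out of memory slots. */
    static void memory_device_pre_plug(MachineState *machine, Error **errp)
    {
        if (kvm_enabled() && !kvm_has_free_slot(machine)) {
            error_setg(errp, "hypervisor has no free memory slots left");
            return;
        }
        if (!vhost_has_free_slot()) {
            error_setg(errp, "a used vhost backend has no free memory slots left");
            return;
        }
    }
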
>>>>>> +
>>>>>> +    memory_region_add_subregion(&hpms->mr, addr - hpms->base, mr);    
>>>>> missing vmstate registration?    
>>>>
>>>> Missed this one: vmstate registration is to be done by the caller.
>>>> Important because e.g. for virtio-pmem we don't want this (I assume :)).
>>> if pmem isn't on shared storage, then we'd probably want to migrate
>>> it as well, otherwise the target would experience data loss.
>>> Anyways, I'd just treat it as normal RAM in the migration case
>>
>> Yes, if we realize that all MemoryDevices need this call, we can move it
>> to that place, too.
>>
>> Wonder if we might want to make this configurable for virtio-pmem later
>> on (via a flag or something like that).
> I don't see any reason why we wouldn't want it to be migrated;
> it's the same as nvdimm, but with a different qemu:guest ABI
> and an async flush instead of the sync one we have with nvdimm.
> 

Didn't you just mention "shared storage"? :)

Anyhow, I'll leave such stuff to Pankaj to figure out. I remember him
working on some page cache details. Once that's clarified, this can easily
be refactored later on.
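
For reference, if we do end up treating it as normal RAM for migration,
the caller-side registration could be as simple as the following sketch
(assuming vmstate_register_ram() is the right helper here; "md" is an
illustrative handle for the memory device):

    /* Illustrative sketch: map the region into the hotplug container
     * and register its RAM block for migration, treating it like
     * normal RAM as discussed above. */
    memory_region_add_subregion(&hpms->mr, addr - hpms->base, mr);
    vmstate_register_ram(mr, DEVICE(md));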

-- 

Thanks,

David / dhildenb
