On 07.12.2020 16:27, Oleksandr wrote:
> On 07.12.20 13:13, Jan Beulich wrote:
>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
>>> @@ -601,7 +610,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
>>>       return rc;
>>>   }
>>>   
>>> -static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>>> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>>>   {
>>>       hvm_unmap_ioreq_gfn(s, true);
>>>       hvm_unmap_ioreq_gfn(s, false);
>> How is this now different from ...
>>
>>> @@ -674,6 +683,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>>>       return rc;
>>>   }
>>>   
>>> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
>>> +{
>>> +    hvm_remove_ioreq_gfn(s, false);
>>> +    hvm_remove_ioreq_gfn(s, true);
>>> +}
>> ... this? Imo if at all possible there should be no such duplication
>> (i.e. at least have this one simply call the earlier one).
> 
> I'm afraid I don't see any duplication between the mentioned functions. 
> Would you mind explaining?

Ouch - somehow my eyes considered "unmap" == "remove". I'm sorry
for the noise.

>>> @@ -1080,6 +1105,24 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>>>       return rc;
>>>   }
>>>   
>>> +/* Called with ioreq_server lock held */
>>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>>> +                                   struct hvm_ioreq_server *s,
>>> +                                   uint32_t flags)
>>> +{
>>> +    int rc = p2m_set_ioreq_server(d, flags, s);
>>> +
>>> +    if ( rc == 0 && flags == 0 )
>>> +    {
>>> +        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +
>>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>> +    }
>>> +
>>> +    return rc;
>>> +}
>>> +
>>>   /*
>>>    * Map or unmap an ioreq server to specific memory type. For now, only
>>>    * HVMMEM_ioreq_server is supported, and in the future new types can be
>>> @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>>       if ( s->emulator != current->domain )
>>>           goto out;
>>>   
>>> -    rc = p2m_set_ioreq_server(d, flags, s);
>>> +    rc = arch_ioreq_server_map_mem_type(d, s, flags);
>>>   
>>>    out:
>>>       spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>>   
>>> -    if ( rc == 0 && flags == 0 )
>>> -    {
>>> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> -
>>> -        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>> -    }
>>> -
>>>       return rc;
>>>   }
>> While you mention this change in the description, I'm still
>> missing justification as to why the change is safe to make. I
>> continue to think p2m_change_entry_type_global() would better
>> not be called inside the locked region, if at all possible.
> Well, I'm afraid I don't have a 100% justification that the change is 
> safe to make, but neither do I see an obvious reason why it is unsafe 
> (at least I didn't find a possible deadlock scenario while 
> investigating the code).
> I raised a question earlier whether I could fold this check in, which 
> implied calling p2m_change_entry_type_global() with the ioreq_server 
> lock held.

I'm aware of the earlier discussion. But "didn't find" isn't good
enough in a case like this, and since it's likely hard to actually
prove that no deadlock is possible, I think it's best to avoid having
to provide such a proof by avoiding the nesting.

> If there is a concern with calling this inside the locked region 
> (unfortunately still unclear to me at the moment), I will try to find 
> another way to split hvm_map_mem_type_to_ioreq_server() without the
> potentially unsafe change here *and* without exposing 
> p2m_change_entry_type_global() to the common code. Right now, the only 
> way I see to split it is to introduce one more hook here to deal with 
> p2m_change_entry_type_global() (probably 
> arch_ioreq_server_map_mem_type_complete?), but I don't expect it to be 
> accepted.
> I would appreciate any ideas on that.

Is there a reason why the simplest solution (two independent
arch_*() calls) won't do? If so, what are the constraints?
Can the first one, e.g., somehow indicate what needs to happen
after the lock is dropped? But the two calls look independent
right now, so I don't see any complicating factors.
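
Just to sketch what I mean (the second hook and its name,
arch_ioreq_server_map_mem_type_completed(), are purely illustrative,
not a request for this exact shape), the common function could end
with:

    /* First hook, invoked with the ioreq_server lock still held. */
    rc = arch_ioreq_server_map_mem_type(d, s, flags);

 out:
    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);

    /* Second hook, invoked only after the lock has been dropped. */
    if ( rc == 0 )
        arch_ioreq_server_map_mem_type_completed(d, flags);

    return rc;
}

and the x86 side could then be:

void arch_ioreq_server_map_mem_type_completed(struct domain *d,
                                              uint32_t flags)
{
    if ( flags == 0 )
    {
        const struct p2m_domain *p2m = p2m_get_hostp2m(d);

        if ( read_atomic(&p2m->ioreq.entry_count) )
            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
    }
}

which would leave Arm free to supply an empty stub, while keeping
p2m_change_entry_type_global() outside the locked region.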

>>> --- a/xen/include/asm-x86/hvm/ioreq.h
>>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>>> @@ -19,6 +19,25 @@
>>>   #ifndef __ASM_X86_HVM_IOREQ_H__
>>>   #define __ASM_X86_HVM_IOREQ_H__
>>>   
>>> +#define HANDLE_BUFIOREQ(s) \
>>> +    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
>>> +
>>> +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
>>> +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
>>> +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
>>> +int arch_ioreq_server_map_mem_type(struct domain *d,
>>> +                                   struct hvm_ioreq_server *s,
>>> +                                   uint32_t flags);
>>> +bool arch_ioreq_server_destroy_all(struct domain *d);
>>> +int arch_ioreq_server_get_type_addr(const struct domain *d,
>>> +                                    const ioreq_t *p,
>>> +                                    uint8_t *type,
>>> +                                    uint64_t *addr);
>>> +void arch_ioreq_domain_init(struct domain *d);
>>> +
>>>   bool hvm_io_pending(struct vcpu *v);
>>>   bool handle_hvm_io_completion(struct vcpu *v);
>>>   bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
>> What's the plan here? Introduce them into the x86 header just
>> to later move the entire block into the common one? Wouldn't
>> it make sense to introduce the common header here right away?
>> Or do you expect to convert some of the simpler ones to inline
>> functions later on?
> The former. The subsequent patch moves the entire block(s) from 
> here and from x86/hvm/ioreq.c to the common code in one go.

I think I saw it move the _other_ pieces there, while leaving this
block here. (FAOD my comment is about the arch_*() declarations
you add, not the patch context in view.)

> I thought it was a little bit odd to expose a header before exposing an 
> implementation to the common code. Another reason is to minimize the 
> number of places the current patch needs to touch.

By exposing arch_*() declarations you don't give the impression
of exposing any "implementation". These are helpers the
implementation is to invoke; I'm fine with you moving the
declarations of the functions actually constituting this
component's external interface only once you also move the
implementation to common code.
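
FWIW, introducing the common header right away could be as small as
the sketch below (path, guard, and the exact includes / forward
declarations are placeholders; only the arch_*() declarations from
the hunk above matter):

/* e.g. xen/include/xen/ioreq.h */
#ifndef __XEN_IOREQ_H__
#define __XEN_IOREQ_H__

struct domain;
struct hvm_ioreq_server;
/* plus whatever enum hvm_io_completion and ioreq_t need pulling in */

bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
/* ... remaining arch_*() declarations exactly as in the hunk above ... */
void arch_ioreq_domain_init(struct domain *d);

#endif /* __XEN_IOREQ_H__ */

with asm-x86/hvm/ioreq.h then simply including it for the time being.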

Jan
