>>> On 09.06.16 at 13:54, <andrew.coop...@citrix.com> wrote:
> On 08/06/16 14:09, Jan Beulich wrote:
>> ... to avoid re-doing the same translation later again (in a retry, for
>> example). This doesn't help very often according to my testing, but
>> it's pretty cheap to have, and will be of further use subsequently.
>>
>> Signed-off-by: Jan Beulich <jbeul...@suse.com>
>>
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_fi
>>      return cache;
>>  }
>>  
>> +static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
>> +                                 unsigned long gpa, bool_t write)
>> +{
>> +    if ( vio->mmio_access.gla_valid )
>> +        return;
>> +
>> +    vio->mmio_gva = gla & PAGE_MASK;
> 
> This suggests that mmio_gva is mis-named.
> 
> Looking at the uses, handle_mmio_with_translation() is used
> inconsistently, with virtual addresses from the shadow code, but linear
> addresses from nested hap code.
> 
> Clearly one of these two users is buggy for guests running in a non
> flat way, and it looks to be the shadow side which is buggy.

Right - this field is certainly meant to be a linear address (as all
segment information is gone by the time we get here). But I can't
seem to see an issue with the shadow instance: The "virtual"
address here is the CR2 value of a #PF, which clearly is a linear
address.
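(For context, and not part of the patch: the virtual-vs-linear distinction being discussed only becomes visible with non-flat segments, where the linear address is the effective, segment-relative address plus the segment base. A minimal illustrative sketch, with names invented here rather than taken from Xen:)

```c
#include <stdint.h>

/* Illustrative only: a linear address is the effective
 * (segment-relative) address plus the segment base.  With flat
 * segments the base is 0, so "virtual" and "linear" coincide,
 * which is how the two get conflated without anyone noticing. */
struct seg_desc_example {
    uint64_t base;   /* segment base from the descriptor */
    /* limit/attribute checks omitted for brevity */
};

static uint64_t to_linear(const struct seg_desc_example *seg,
                          uint64_t effective)
{
    return seg->base + effective;
}
```

A CR2 value delivered with a #PF is already past this addition, which is why it is a linear address regardless of segmentation.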

And anyway, all of that is orthogonal to the change here.

> Following my recent fun with invlpg handling, I am much more conscious
> about the distinction between virtual and linear addresses.  I wonder if
> we want to go so far as to have a TYPE_SAFE() for it, to try and avoid
> further misuse?

Not sure; if you're up to it, I wouldn't mind, but maybe an
alternative would be to have a (segment,offset) container
instead for virtual addresses, such that only linear addresses
can get passed as plain numbers?
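(A rough sketch of what such a container might look like, with hypothetical names that are not actual Xen code: virtual addresses carry their segment explicitly, so only an explicit conversion yields a bare number usable as a linear address, and the compiler rejects accidental mixing.)

```c
#include <stdint.h>

/* Hypothetical sketch: a virtual address carries its segment, so it
 * cannot be passed where a plain (linear) number is expected. */
typedef struct {
    unsigned int seg;      /* segment register index (invented here) */
    uint64_t     offset;   /* segment-relative offset */
} virt_addr_example_t;

/* The only way to obtain a bare number; in real code the segment base
 * would be looked up from the descriptor rather than passed in. */
static uint64_t virt_to_linear_example(virt_addr_example_t va,
                                       uint64_t seg_base)
{
    return seg_base + va.offset;
}
```

This is the same idea as the TYPE_SAFE() wrappers used for mfn_t/gfn_t, except that the struct also records which segment the offset is relative to.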

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel