>>> On 03.07.17 at 19:24, wrote:
> On 03/07/17 17:06, Jan Beulich wrote:
> On 03.07.17 at 17:07, wrote:
>>> On 22/06/17 10:06, Jan Beulich wrote:
> +        *mfn++ = _mfn(page_to_mfn(page));
> +        frame++;
> +
> +        if ( p2m_is_discard_write(p2mt) )
> +        {
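(For readers following along: the loop above records each successfully translated frame into the emulation context's mfn[] slot array. Below is a minimal standalone C model of that bookkeeping, with hypothetical names -- `struct emul_ctxt`, `translate()`, `map_frames()` are illustrative stand-ins, not Xen code. It omits the page references and the p2m_is_discard_write special case that the real code handles.)

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Toy stand-in for Xen's mfn_t: just a machine frame number. */
typedef uint64_t mfn_t;

/* Hypothetical context: an emulated access spans at most two frames here. */
struct emul_ctxt {
    mfn_t mfn[2];          /* used slots hold a page reference in the real code */
    unsigned int nr_mfn;   /* number of slots filled */
};

/* Toy identity-ish translation: guest frame N maps to machine frame N + 100. */
static mfn_t translate(uint64_t gfn)
{
    return gfn + 100;
}

/* Record the frames covering [addr, addr + bytes) into ctxt->mfn[]. */
static int map_frames(struct emul_ctxt *ctxt, uint64_t addr, unsigned int bytes)
{
    uint64_t first = addr >> PAGE_SHIFT;
    uint64_t last = (addr + bytes - 1) >> PAGE_SHIFT;
    mfn_t *mfn = &ctxt->mfn[0];   /* mfn points to the next free slot */

    if ( last - first + 1 > 2 )
        return -1;                /* mirrors the ASSERT_UNREACHABLE() path */

    for ( uint64_t gfn = first; gfn <= last; ++gfn )
        *mfn++ = translate(gfn);  /* cf. "*mfn++ = _mfn(page_to_mfn(page));" */

    ctxt->nr_mfn = (unsigned int)(last - first + 1);
    return 0;
}
```

A write of 8 bytes ending 4 bytes past a page boundary fills both slots; an access contained in one page fills only the first.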
>>> On 03.07.17 at 17:07, wrote:
> On 22/06/17 10:06, Jan Beulich wrote:
>>> +    {
>>> +        ASSERT_UNREACHABLE();
>>> +        goto unhandleable;
>>> +    }
>>> +
>>> +    do {
>>> +        enum hvm_translation_result res;
>>> +        struct page_info *page;
>>> +        pagefault_info_t pfinfo;
On 22/06/17 10:06, Jan Beulich wrote:
>
>> +    /*
>> +     * mfn points to the next free slot.  All used slots have a page
>> +     * reference held on them.
>> +     */
>> +    mfn_t *mfn = &hvmemul_ctxt->mfn[0];
>> +
>> +    /*
>> +     * The caller has no legitimate reason for trying a zero
>>> On 21.06.17 at 17:12, wrote:
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -498,6 +498,159 @@ static int hvmemul_do_mmio_addr(paddr_t mmio_gpa,
> }
>
> /*
> + * Map the frame(s) covering an individual linear access, for writeable
> + * access. May return NULL
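(The comment above describes the mapping helper's contract: return a writeable pointer covering the access, or NULL when it cannot be mapped. A minimal standalone sketch of the caller pattern this enables -- map once, write through the pointer, unmap, fall back on NULL -- follows. All names here, `map_linear()`, `unmap_linear()`, `emul_write()`, and the flat `guest_mem` buffer, are illustrative assumptions, not the Xen API.)

```c
#include <stdint.h>
#include <string.h>

/* Toy "guest memory": a flat buffer standing in for mapped frames. */
static uint8_t guest_mem[4096];

/*
 * Hypothetical analogue of the mapping helper: return a writeable pointer
 * covering [addr, addr + bytes), or NULL when the range cannot be mapped
 * (the real code returns NULL e.g. when the access hits MMIO).
 */
static void *map_linear(uint64_t addr, unsigned int bytes)
{
    if ( addr + bytes > sizeof(guest_mem) )
        return NULL;
    return &guest_mem[addr];
}

static void unmap_linear(void *mapping)
{
    /* The real code would drop the mappings and page references here. */
    (void)mapping;
}

/* Caller pattern: map once, memcpy the data, unmap; fall back on NULL. */
static int emul_write(uint64_t addr, const void *data, unsigned int bytes)
{
    void *mapping = map_linear(addr, bytes);

    if ( !mapping )
        return -1;      /* caller falls back to another path, e.g. MMIO */

    memcpy(mapping, data, bytes);
    unmap_linear(mapping);
    return 0;
}
```

The point of the design is that translation happens entirely before any byte is written, so a failure leaves guest memory untouched.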
> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.coop...@citrix.com]
> Sent: 21 June 2017 16:13
> To: Xen-devel
> Cc: Andrew Cooper; Jan Beulich; Paul Durrant; Razvan Cojocaru; Mihai Donțu
> Subject: [PATCH 6/6] x86/hvm: Implement hvmemul_write() using real
> mappings
>
An access which crosses a page boundary is performed atomically by x86
hardware, albeit with a severe performance penalty. An important corner case
is when a straddled access hits two pages which differ in whether a
translation exists, or in net access rights.
The use of hvm_copy*() in hvmemul_write() is problematic, because it performs
a translation then completes the partial write, before moving onto the next
translation.
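(To make the straddle case concrete, here is a minimal standalone sketch, not Xen code, of counting the page frames a linear access covers; `frames_covering()` is a hypothetical helper name. For the access widths the emulator deals with, the answer is 1 for a contained access and 2 for one that straddles a boundary, which is why both translations must be checked up front before any byte is written.)

```c
#include <stdint.h>

#define PAGE_SHIFT 12

/* Number of page frames covering the linear range [addr, addr + bytes). */
static unsigned int frames_covering(uint64_t addr, unsigned int bytes)
{
    /* Frame of the last byte minus frame of the first byte, plus one. */
    return (unsigned int)
        (((addr + bytes - 1) >> PAGE_SHIFT) - (addr >> PAGE_SHIFT)) + 1;
}
```

An 8-byte write at 0x1ffc covers frames 1 and 2; the same write at 0x1000 covers only frame 1.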