On 08/16/2014 10:21 PM, Paolo Bonzini wrote:
>>> Would it work to just call tb_invalidate_phys_page_range before the
>>> helper_ret_stb loop?  I doubt it.
>> Maybe.  I think there's another issue, which is that QEMU's ending up
>> in the I/O read/write code instead of the normal memory RW.  This could
>> be QEMU messing up, it could be PatchGuard doing something weird, or it
>> could be me misunderstanding what's going on.  I never really figured
>> out how the control flow works here.
>
> That's okay.  Everything that's in the slow path goes down
> io_mem_read/write (in this case TLB_NOTDIRTY is set for dirty-page
> tracking and causes QEMU to choose that path).
>
> Try making a self-contained test case using the kvm-unit-tests harness
> (git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git).

I believe that the proper solution is to force *both* page table entries
into the TLB before performing any actual memory operations.

We'll have done the TLB lookup for the first byte's page at the top of
helper_{le,be}_{ld,st}_name.  When we discover it's an unaligned access,
we should load and check the pte for the second page as well.  We might
have to shuffle those two tests around, since in theory the second page
could be I/O mapped and we'd want to pass off the whole access to
io_mem_*.

Since two adjacent pages can't conflict in our direct-mapped TLB, we can
then safely pass off the work to secondary helpers without fear that the
first TLB entry will be flushed.

r~
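
For readers unfamiliar with the path Paolo describes: the toy program
below models (it is not QEMU's actual softmmu code) how a dirty-tracking
flag such as TLB_NOTDIRTY, kept in the low bits of a TLB entry's
addr_write field, makes the fast-path page compare fail so the store is
routed to the notdirty/IO handler instead of a plain RAM write.  The
struct layout, the flag's bit position, and store_needs_slow_path are
simplified stand-ins, not QEMU's real definitions.

/* Toy model of why a tracked page takes the slow (io_mem_write) path.
 * TARGET_PAGE_* and TLB_NOTDIRTY mirror QEMU's names; the values and
 * the check itself are simplified for illustration. */
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS  12
#define TARGET_PAGE_SIZE  (1u << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK  (~(uint32_t)(TARGET_PAGE_SIZE - 1))
#define TLB_NOTDIRTY      (1u << 4)   /* stand-in: a flag below the page bits */

typedef struct {
    uint32_t addr_write;   /* page vaddr | flag bits */
} ToyTLBEntry;

/* Returns 1 if the store must leave the fast path. */
static int store_needs_slow_path(const ToyTLBEntry *te, uint32_t vaddr)
{
    /* The fast path requires a matching page AND no flag bits set.  With
     * TLB_NOTDIRTY present, the second test fails and the access is sent
     * to the notdirty/IO handler, which is what Paolo describes. */
    return (vaddr & TARGET_PAGE_MASK) != (te->addr_write & TARGET_PAGE_MASK)
           || (te->addr_write & ~TARGET_PAGE_MASK) != 0;
}

int main(void)
{
    ToyTLBEntry clean   = { .addr_write = 0x4000 };
    ToyTLBEntry tracked = { .addr_write = 0x4000 | TLB_NOTDIRTY };

    printf("clean page   -> slow path? %d\n", store_needs_slow_path(&clean, 0x4321));
    printf("tracked page -> slow path? %d\n", store_needs_slow_path(&tracked, 0x4321));
    return 0;
}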
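
And a minimal, self-contained sketch of the fix proposed above, again a
toy model rather than QEMU's real helpers: for a store that straddles a
page boundary, probe the TLB entries for both pages before writing a
single byte.  The direct-mapped index, (addr >> TARGET_PAGE_BITS) &
(CPU_TLB_SIZE - 1), is also why two adjacent pages can never evict each
other: their page numbers differ by one, so they land in different
slots.  tlb_fill_stub, tlb_probe, and store_unaligned are hypothetical
names used only for illustration.

/* Toy model (not QEMU code): validate the TLB entries for BOTH pages of
 * a straddling store before any byte is written.  Constant names mirror
 * QEMU's; everything else is a stand-in. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define CPU_TLB_BITS     8
#define CPU_TLB_SIZE     (1 << CPU_TLB_BITS)

typedef struct {
    uint64_t page;    /* virtual page number this entry maps */
    bool     valid;
} ToyTLBEntry;

static ToyTLBEntry tlb[CPU_TLB_SIZE];

/* Direct-mapped index: adjacent pages differ by 1 in (addr >> PAGE_BITS),
 * so they always occupy different slots and filling the second entry
 * cannot evict the first. */
static unsigned tlb_index(uint64_t addr)
{
    return (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
}

/* Stand-in for tlb_fill(): pretend the guest page-table walk succeeded. */
static void tlb_fill_stub(uint64_t addr)
{
    ToyTLBEntry *e = &tlb[tlb_index(addr)];
    e->page  = addr >> TARGET_PAGE_BITS;
    e->valid = true;
}

/* Make sure the page containing addr is present in the TLB. */
static void tlb_probe(uint64_t addr)
{
    ToyTLBEntry *e = &tlb[tlb_index(addr)];
    if (!e->valid || e->page != (addr >> TARGET_PAGE_BITS)) {
        tlb_fill_stub(addr);
    }
}

/* Store `size` bytes of `val` at `addr`, possibly spanning two pages. */
static void store_unaligned(uint64_t addr, uint64_t val, unsigned size)
{
    /* Probe the first page (the real helper has already done this at its
     * top) and, if the access spills into the next page, probe that one
     * too -- before any byte is stored, so the fill for page two cannot
     * disturb the state we rely on for page one. */
    tlb_probe(addr);
    if (((addr + size - 1) >> TARGET_PAGE_BITS) != (addr >> TARGET_PAGE_BITS)) {
        tlb_probe(addr + size - 1);
    }

    /* Byte-by-byte store, as in the helper_ret_stb loop. */
    for (unsigned i = 0; i < size; i++) {
        printf("store byte 0x%02x at 0x%llx (tlb slot %u)\n",
               (unsigned)((val >> (i * 8)) & 0xff),
               (unsigned long long)(addr + i), tlb_index(addr + i));
    }
}

int main(void)
{
    /* A 4-byte store starting at the last byte of a page spans two pages. */
    store_unaligned(0x1fff, 0xdeadbeef, 4);
    return 0;
}

As noted above, the real helper would also have to check whether the
second page is I/O mapped and hand the whole access to io_mem_* in that
case; the toy omits that branch.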