On 5 December 2011 13:40, Avi Kivity <a...@redhat.com> wrote:
> On 12/05/2011 01:01 PM, Peter Maydell wrote:
>> @@ -2677,7 +2674,11 @@ void
>> cpu_register_physical_memory_log(target_phys_addr_t start_addr,
>>      if (phys_offset == IO_MEM_UNASSIGNED) {
>>          region_offset = start_addr;
>>      }
>> -    region_offset &= TARGET_PAGE_MASK;
>> +    /* Adjust the region offset to account for the start_addr possibly
>> +     * not being page aligned, so we end up passing the IO functions
>> +     * the true offset from the start of the region.
>> +     */
>> +    region_offset -= (start_addr & ~TARGET_PAGE_MASK);
>>      size = (size + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK;
>>      end_addr = start_addr + (target_phys_addr_t)size;
>>
>
> region_offset is added to iotlb in tlb_set_page(), smashing the low bits
> with your change. It's safe in subpage, since that doesn't happen there.
OK, but we only need to avoid trashing the bottom 5 bits, right?
So we could do

    region_offset -= (start_addr & ~TARGET_PAGE_MASK);
    if (size >= TARGET_PAGE_SIZE) {
        region_offset &= ~0x1F; /* can make this a #define IO_MEM_MASK */
    }

which would allow regions to start on 0x20 granularity, or byte
granularity if they're less than a page in size (and so guaranteed
to be subpages only).

-- 
PMM