On Mon, Jan 30, 2023 at 06:45:20AM +0100, Juan Quintela wrote:
> Peter Xu <pet...@redhat.com> wrote:
> > When a ramblock is backed by hugetlbfs and the user specified the
> > double-map feature, we trap the faults on these regions in minor mode.
> > Teach QEMU about that.
> >
> > Add some sanity check on the fault flags when receiving a uffd message.
> > For minor fault trapped ranges, we should always see the MINOR flag set,
> > while when using generic missing faults we should never see it.
> >
> > Signed-off-by: Peter Xu <pet...@redhat.com>
> 
> Reviewed-by: Juan Quintela <quint...@redhat.com>
> 
> 
> 
> > -    if (!(reg_struct.ioctls & ((__u64)1 << _UFFDIO_COPY))) {
> 
> Does qemu have a macro to do this bitmap handling?

Not yet, none that's suitable.  It's open-coded like this in many places in
postcopy.  The closest is bitmap_test_and_clear(), but that's too heavy.
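
For the record, the open-coded pattern is just a single-bit test on the
ioctls mask.  A hypothetical helper could look like this (a sketch only,
not an existing QEMU macro; test_bit() from qemu/bitops.h is close, but it
works on unsigned long arrays, so it's not a drop-in replacement here):

    static inline bool uffd_ioctl_supported(uint64_t ioctls, unsigned int nr)
    {
        return !!(ioctls & ((uint64_t)1 << nr));
    }

    /* The check above would then read: */
    if (!uffd_ioctl_supported(reg_struct.ioctls, _UFFDIO_COPY)) {
        ...
    }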

> 
> >  {
> >      MigrationIncomingState *mis = opaque;
> >      struct uffd_msg msg;
> > +    uint64_t address;
> >      int ret;
> >      size_t index;
> >      RAMBlock *rb = NULL;
> > @@ -945,6 +980,7 @@ static void *postcopy_ram_fault_thread(void *opaque)
> >      }
> >  
> >      while (true) {
> > +        bool use_minor_fault, minor_flag;
> 
> I think that something along the lines of:
>            bool src_minor_fault, dst_minor_fault;
> 
> would make things simpler.  While reviewing, I have to go back to the
> definition to know which is which.

These two values represent "what we expect" and "what we got from the
message", so I'm just not sure src/dst is the best match here.

How about "expect_minor_fault" and "has_minor_fault" instead?

> 
> >          ram_addr_t rb_offset;
> >          int poll_result;
> >  
> > @@ -1022,22 +1058,37 @@ static void *postcopy_ram_fault_thread(void *opaque)
> >                  break;
> >              }
> >  
> > -            rb_offset = ROUND_DOWN(rb_offset, migration_ram_pagesize(rb));
> > -            trace_postcopy_ram_fault_thread_request(msg.arg.pagefault.address,
> > -                                                qemu_ram_get_idstr(rb),
> > -                                                rb_offset,
> > -                                                msg.arg.pagefault.feat.ptid);
> > -            mark_postcopy_blocktime_begin(
> > -                    (uintptr_t)(msg.arg.pagefault.address),
> > -                                msg.arg.pagefault.feat.ptid, rb);
> > +            address = ROUND_DOWN(msg.arg.pagefault.address,
> > +                                 migration_ram_pagesize(rb));
> > +            use_minor_fault = postcopy_use_minor_fault(rb);
> > +            minor_flag = !!(msg.arg.pagefault.flags &
> > +                            UFFD_PAGEFAULT_FLAG_MINOR);
> >  
> > +            /*
> > +             * Do sanity check on the message flags to make sure this is
> > +             * the one we expect to receive.  When using minor fault on
> > +             * this ramblock, it should _always_ be set; when not using
> > +             * minor fault, it should _never_ be set.
> > +             */
> > +            if (use_minor_fault ^ minor_flag) {
> > +                error_report("%s: Unexpected page fault flags (0x%"PRIx64") "
> > +                             "for address 0x%"PRIx64" (mode=%s)", __func__,
> > +                             (uint64_t)msg.arg.pagefault.flags,
> > +                             (uint64_t)msg.arg.pagefault.address,
> > +                             use_minor_fault ? "MINOR" : "MISSING");
> > +            }
> > +
> > +            trace_postcopy_ram_fault_thread_request(
> > +                address, qemu_ram_get_idstr(rb), rb_offset,
> > +                msg.arg.pagefault.feat.ptid);
> > +            mark_postcopy_blocktime_begin(
> > +                    (uintptr_t)(address), msg.arg.pagefault.feat.ptid, rb);
> >  retry:
> >              /*
> >               * Send the request to the source - we want to request one
> >               * of our host page sizes (which is >= TPS)
> >               */
> > -            ret = postcopy_request_page(mis, rb, rb_offset,
> > -                                        msg.arg.pagefault.address);
> > +            ret = postcopy_request_page(mis, rb, rb_offset, address);
> 
> This is the only change that I find 'problematic'.
> In the old code, rb_offset was ROUND_DOWN'ed; in the new code it is not.
> In the old code we passed msg.arg.pagefault.address; now we use
> ROUND_DOWN(msg.arg.pagefault.address, migration_ram_pagesize(rb)).

Thanks for spotting such a detail even in an RFC series. :)

It's actually rounded down to the target psize.  Here, since we're in
postcopy, we should require the target psize to equal the host psize (or I
bet it won't really work at all).  So the relevant round-down was actually
done here:

            rb = qemu_ram_block_from_host(
                     (void *)(uintptr_t)msg.arg.pagefault.address,
                     true, &rb_offset);

In which there's:

    *offset = (host - block->host);
    if (round_offset) {
        *offset &= TARGET_PAGE_MASK;
    }

So when I reworked that chunk of code I directly dropped the ROUND_DOWN(),
because I found it duplicated.
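
To show why they're equivalent, here is a tiny standalone check (made-up
values, and assuming the page size is a power of two, which it always is):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t psize = 2 * 1024 * 1024;    /* e.g. one 2M hugepage */
        uint64_t addr  = 0x7f0000201234ULL;  /* arbitrary faulting address */

        /*
         * For a power-of-two psize, ROUND_DOWN(addr, psize) is just a mask,
         * which is exactly what the TARGET_PAGE_MASK line above does.
         */
        assert((addr & ~(psize - 1)) == addr - (addr % psize));
        assert((addr & ~(psize - 1)) == 0x7f0000200000ULL);
        return 0;
    }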

> 
> >              if (ret) {
> >                  /* May be network failure, try to wait for recovery */
> >                  postcopy_pause_fault_thread(mis);
> > @@ -1694,3 +1745,13 @@ void *postcopy_preempt_thread(void *opaque)
> >  
> >      return NULL;
> >  }
> > +
> > +/*
> > + * Whether we should use MINOR faults to trap page faults.  This will be
> > + * the case when doublemap is enabled on hugetlbfs.  The default value
> > + * is false, which means we'll keep using the legacy MISSING faults.
> > + */
> > +bool postcopy_use_minor_fault(RAMBlock *rb)
> > +{
> > +    return migrate_hugetlb_doublemap() && qemu_ram_is_hugetlb(rb);
> > +}
> 
> Are you planning to use this function outside postcopy-ram.c?  Otherwise,
> if you move its definition up you can make it static and drop the header
> change.

Yes, it'll be used further in ram.c, later in the patch "migration: Rework
ram discard logic for hugetlb double-map" right below this one.
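
(For illustration only, a hypothetical shape of such a call site; this is
not the actual code from that patch:)

    if (postcopy_use_minor_fault(rb)) {
        /* doublemap: this block's faults are trapped in MINOR mode */
        ...
    } else {
        /* legacy MISSING-fault handling */
        ...
    }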

Thanks,

-- 
Peter Xu

