On Wed, May 13, 2026, Wu Fei wrote:
> On 5/13/26 08:03, Sean Christopherson wrote:
> > On Mon, May 11, 2026, [email protected] wrote:
> > > Currently dirty_log_test hardcodes usleep 1ms in each interval, which
> > > could be too short for guest to write and fault in enough pages, then
> > > there is less chance to test the write protection mechanism, especially
> > > in the case of (log_mode != LOG_MODE_DIRTY_RING).
> > 
> > But when log_mode != LOG_MODE_DIRTY_RING, the individual sleep time is
> > largely meaningless, because the test won't reap the bitmaps for
> > iterations > 0.
> > 
> >                     if (i && host_log_mode != LOG_MODE_DIRTY_RING)
> >                             continue;
> > 
> The first usleep matters in the case of KVM_DIRTY_LOG_INITIALLY_SET. The
> dirty bitmap is not precise in the first get_dirty_log: all pages are marked
> as dirty, but most of them are not populated in the page table. This creates
> the situation I mentioned in the cover letter.

I suspect something is messed up in your workflow, because the actual patches
aren't properly threaded with respect to the cover letter.  E.g. patch 1 has

  In-Reply-To: <[email protected]>

but the cover letter has:

  Message-Id: <[email protected]>

Copy+pasting the entirety of the cover letter for reference:

 : The current gstage range walker unconditionally advances by 'page_size'
 : when a leaf PTE is not found, e.g. when the range to wp is
 : [0xfffff01fc000, 0xfffff023c000), if found_leaf of 0xfffff01fc000
 : returns false and page_size is 2MB, it skips the whole range, but it's
 : possible to have valid entries in [0xfffff0200000, 0xfffff023c000), so
 : only [0xfffff01fc000, 0xfffff0200000) can be skipped safely. Both
 : wp/unmap have the same pattern.
 : 
 : dirty_log_test intentionally sets up an unaligned guest physical
 : address. After riscv KVM enables KVM_DIRTY_LOG_INITIALLY_SET, it's easy
 : to trigger this bug if there is a larger window for the guest to write
 : more pages before the first collect_dirty_pages.

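To make the safe skip distance concrete, here is a small illustrative helper
(the name is made up; this is not the actual riscv walker code) that computes
how far the walk can advance when no leaf PTE is found at an unaligned
address:

  /*
   * Illustrative only: 'addr' has no leaf PTE at the current level and
   * 'page_size' is the size that level would map (a power of two).
   * Return the next page_size boundary, i.e. the first address the walker
   * may safely resume from.
   */
  static unsigned long next_safe_addr(unsigned long addr, unsigned long page_size)
  {
          return (addr | (page_size - 1)) + 1;
  }

For the cover letter's example, next_safe_addr(0xfffff01fc000, 0x200000)
returns 0xfffff0200000, so only [0xfffff01fc000, 0xfffff0200000) is skipped
and any valid entries in [0xfffff0200000, 0xfffff023c000) are still visited.
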
> "when the range to wp is
> [0xfffff01fc000, 0xfffff023c000), if found_leaf of 0xfffff01fc000
> returns false and page_size is 2MB, it skips the whole range, but it's
> possible to have valid entries in [0xfffff0200000, 0xfffff023c000), so
> only [0xfffff01fc000, 0xfffff0200000) can be skipped safely."
> 
> > > 
> > > Unit is introduced to replace the default 1ms if specified on the command
> > > line. The following test can't trigger failure on my riscv vm:
> > 
> > Failure of what?  And does the failure really not reproduce with a higher 
> > interval?
> 
> On riscv, it fails to write-protect some pages with a valid page table entry
> and then loses track of dirty pages. A higher interval doesn't help because
> only the first usleep matters: after the first collect_dirty_pages, all
> dirty pages are tracked precisely and there is no such problem.

Ah, gotcha.  Rather than let (and force) the user to provide a larger sleep
time, what if we instead randomize the delay before the initial reaping of the
dirty bitmap/ring?  That should provide a good balance between coverage,
complexity, and user-friendliness.

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 12446a4b6e8d..74ca096bf976 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -694,7 +694,17 @@ static void run_test(enum vm_guest_mode mode, void *arg)
        pthread_create(&vcpu_thread, NULL, vcpu_worker, vcpu);
 
        for (iteration = 1; iteration <= p->iterations; iteration++) {
-               unsigned long i;
+               unsigned long i, reap_i;
+
+               /*
+                * Select a random point in the time interval to reap the dirty
+                * bitmap/ring while the guest is running, i.e. randomize how
+                * long the guest gets to initially run and thus how many pages
+                * it can dirty, before collecting the dirty bitmap/ring.  See
+                * the loop below for details.
+                */
+               reap_i = random() % p->interval;
+               printf("Reaping after a %lu ms delay\n", reap_i);
 
                sync_global_to_guest(vm, iteration);
 
@@ -729,13 +739,17 @@ static void run_test(enum vm_guest_mode mode, void *arg)
                         * that's effectively blocked.  Collecting while the
                         * guest is running also verifies KVM doesn't lose any
                         * state.
-                        *
+                        */
+                       if (i < reap_i)
+                               continue;
+
+                       /*
                         * For bitmap modes, KVM overwrites the entire bitmap,
                         * i.e. collecting the bitmaps is destructive.  Collect
-                        * the bitmap only on the first pass, otherwise this
-                        * test would lose track of dirty pages.
+                        * the bitmap while the guest is running only once,
+                        * otherwise this test would lose track of dirty pages.
                         */
-                       if (i && host_log_mode != LOG_MODE_DIRTY_RING)
+                       if (i > reap_i && host_log_mode != LOG_MODE_DIRTY_RING)
                                continue;
 
                        /*
@@ -745,7 +759,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
                         * the ring on every pass would make it unlikely the
                         * vCPU would ever fill the ring).
                         */
-                       if (i && !READ_ONCE(dirty_ring_vcpu_ring_full))
+                       if (i > reap_i && !READ_ONCE(dirty_ring_vcpu_ring_full))
                                continue;
 
                        log_mode_collect_dirty_pages(vcpu, TEST_MEM_SLOT_INDEX,
