David S. Ahern wrote:
Oh!  Only 45K pages were direct, so the other 45K were shared, with
perhaps many ptes.  We should count ptes, not pages.

Can you modify page_referenced() to count the number of ptes mapped (1
for direct pages, nr_chains for indirect pages) and print the total
deltas in active_anon_scan?


Here you go. I've shortened the line lengths to get them to squeeze into
80 columns:

anon_scan, all HighMem zone, 187,910 active pages at loop start:
  count[12] 21462 -> 230,   direct 20469, chains 3479,   dj 58
  count[11] 1338  -> 1162,  direct 227,   chains 26144,  dj 59
  count[8] 29397  -> 5410,  direct 26115, chains 27617,  dj 117
  count[4] 35804  -> 25556, direct 31508, chains 82929,  dj 256
  count[3] 2738   -> 2207,  direct 2680,  chains 58,     dj 7
  count[0] 92580  -> 89509, direct 75024, chains 262834, dj 726
(the age is the index in count[])


Where do all those ptes come from? That's 180K pages (most of highmem),
but with 550K ptes.

The memuser workload doesn't use fork(), so there shouldn't be any indirect ptes.

We might try to unshadow the fixmap page; that means we don't have to
do 4 fixmap pte accesses per pte scanned.

The kernel uses two methods for clearing the accessed bit:

For direct pages:

               if (pte_young(*pte) && ptep_test_and_clear_young(pte))
                       referenced++;

(two accesses)

For indirect pages:

                               if (ptep_test_and_clear_young(pte))
                                       referenced++;

(one access)

These accesses have to be emulated if we don't shadow the fixmap. With
the data above, that comes to ~700K emulations instead of ~2200K, a 3X
improvement. I'm not sure it will be sufficient, given that we're only
reducing a 10-second kscand scan to a 3-second scan.

If you sum the direct pages and the chains count for each row, convert
dj into dt (divided by HZ = 100) you get:

( 20469 + 3479 )   / 0.58 = 41289
( 227 + 26144 )    / 0.59 = 44696
( 26115 + 27617 )  / 1.17 = 45924
( 31508 + 82929 )  / 2.56 = 44701
( 2680 + 58 )      / 0.07 = 39114
( 75024 + 262834 ) / 7.26 = 46536
( 499 + 20022 )    / 0.44 = 46638
( 7189 + 9854 )    / 0.37 = 46062
( 5071 + 9388 )    / 0.31 = 46641

At 4 pte writes per direct page or chain entry, that comes to
~187,000/sec, which is close to the total collected by kvm_stat (data
width shrunk to fit in e-mail; hope this is still readable):


|----------         mmu_          ----------|-----  pf_  -----|
 cache  flood  pde_z    pte_u    pte_w  shado    fixed    guest
   267    271     95    21455    21842    285    22840      165
    66     88      0    12102    12224     88    12458        0
  2042   2133      0   178146   180515   2133   188089      387
  1053   1212      0   187067   188485   1212   193011        8
  4771   4811     88   185129   190998   4825   207490      448
   910    824      7   183066   184050    824   195836       12
   707    785      0   176381   177300    785   180350        6
  1167   1144      0   189618   191014   1144   195902       10
  4238   4193     87   188381   193590   4206   207030      465
  1448   1400      7   187786   189509   1400   198688       21
   982    971      0   187880   189076    971   198405        2
  1165   1208      0   190007   191503   1208   195746       13
  1106   1146      0   189144   190550   1146   195143        0
  4767   4788     96   185802   191704   4802   206362      477
  1388   1431      0   187387   188991   1431   195115        3
   584    551      0    77176    77802    551    84829       10
    12      7      0     3601     3609      7    13497        4
   243    153     91    31085    31333    167    35059      879
    21     18      6     3130     3155     18     3827        2
    21      4      1     4665     4670      4     6825        9

The kvm_stat data for this time period is attached due to line lengths.


Also, I forgot to mention this before, but there is a bug in the
kscand code in the RHEL3U8 kernel. When it scans the cache list it
uses the count from the anonymous list:

            if (need_active_cache_scan(zone)) {
                for (age = MAX_AGE-1; age >= 0; age--)  {
                    scan_active_list(zone, age,
                        &zone->active_cache_list[age],
                        zone->active_anon_count[age]);
                              ^^^^^^^^^^^^^^^^^
                    if (current->need_resched)
                        schedule();
                }
            }

When the anonymous count is higher, the cache list is scanned
repeatedly. An example of that was captured here:

active_cache_scan: HighMem, age 7, count[age] 222 -> 179, count anon
111967, direct 626, dj 3

count anon is active_anon_count[age], which at this moment was 111,967.
There were only 222 entries in the cache list, but the count value
passed to scan_active_list was 111,967. When the cache list has a lot
of direct pages, that causes a larger hit on kvm than needed. That
said, I have to live with the bug in the guest.
For debugging, can you fix it?  It certainly has a large impact.

Yes, I have run a few tests with it fixed to get a ballpark on the
impact. The fix is included in the numbers above.

Perhaps it is fixed in an update kernel.  There's a 2.4.21-50.EL in the
CentOS 3.8 update repos.


It seems to have been fixed there.

--
error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
