I've been instrumenting the guest kernel as well. It's the scanning of
the active lists that triggers a lot of calls to paging64_prefetch_page(),
and, as you guys know, the call count correlates with the number of
direct pages in the list. Earlier in this thread I traced the kvm cycles
to paging64_prefetch_page(). See

http://www.mail-archive.com/[EMAIL PROTECTED]/msg16332.html

In the guest I started capturing scans (kscand() loop) that took longer
than a jiffie. Here's an example for 1 trip through the active lists,
both anonymous and cache:

active_anon_scan:  HighMem, age 4,  count[age] 41863 -> 30194,   direct 36234, dj 225
active_anon_scan:  HighMem, age 3,  count[age] 1772 -> 1450,     direct 1249,  dj 3
active_anon_scan:  HighMem, age 0,  count[age] 104078 -> 101685, direct 84829, dj 848
active_cache_scan: HighMem, age 12, count[age] 3397 -> 2640,     direct 889,   dj 19
active_cache_scan: HighMem, age 8,  count[age] 6105 -> 5884,     direct 988,   dj 24
active_cache_scan: HighMem, age 4,  count[age] 18923 -> 18400,   direct 11141, dj 37
active_cache_scan: HighMem, age 0,  count[age] 14283 -> 14283,   direct 69,    dj 1


An explanation of the line (using the first one): it's a scan of the
anonymous list, age bucket 4. Before the scan loop the bucket had
41863 pages; after the loop it had 30194. Of the pages in the bucket,
36234 were direct pages (i.e., PageDirect(page) was non-zero), and
225 jiffies passed while running scan_active_list() on this bucket.

On the host side, the total times (sum of the dj's divided by 100) in the
output above line up directly with the kvm_stat output: spikes in pte
writes/updates.

Tracing the RHEL3 code, I believe linux-2.4.21-rmap.patch is the patch
that brought in the code that runs during the active list scans for
direct pages. In and of itself, each trip through the while loop in
scan_active_list() does not take much time, but when run, say, 84,829
times (see age 0 above) the cumulative time is high: 8.48 seconds in
the example above.

I'll pull down the git branch and give it a spin.

david


Avi Kivity wrote:
> Andrea Arcangeli wrote:
>> On Wed, May 28, 2008 at 08:13:44AM -0600, David S. Ahern wrote:
>>  
>>> Weird. Could it be something about the hosts?
>>>     
>>
>> Note that the VM itself will never make use of kmap. The VM is "data"
>> agnostic. The VM never has any idea of the data contained in the
>> pages. kmap/kmap_atomic/kunmap_atomic are only needed to access _data_.
>>
>>   
> 
> What about CONFIG_HIGHPTE?
> 
> 
> 
