2011/4/5 Artur Grabowski <[email protected]>:
> - Use km_alloc for all backend allocations in pools.
> - Use km_alloc for the emergency kentry allocations in uvm_mapent_alloc
> - Garbage collect uvm_km_getpage, uvm_km_getpage_pla and uvm_km_putpage
I have an idea for the allocator in the (!__HAVE_PMAP_DIRECT) case:
how to get rid of the "kmthread".
It originally concerned uvm_km_getpage; now I will use uvm_km_alloc to
explain it. I think the problem is that there is a loop involving
"kentries":
uvm_km_alloc
  |
  +- uvm_km_kmemalloc
       |
       +- uvm_map
            |
            +- uvm_mapent_alloc
                 |
                 +- uvm_km_alloc
uvm_mapent_alloc() may use the last entry on "kentry_free". The next
time around, uvm_mapent_alloc() sees that "kentry_free" is NULL and
tries to uvm_km_alloc() a fresh page. uvm_km_alloc() may call
uvm_km_kmemalloc() directly... but when uvm_km_kmemalloc() tries to
uvm_map() the physical page, uvm_map() sees that it has no free entry,
calls uvm_mapent_alloc(), and here we are again.
Is that why the kmthread exists?
My idea is to break that loop another way: keep track of the number of
free kentries, and when uvm_mapent_alloc() sees that only one kentry
remains, allocate more entries immediately. That last kentry may be
consumed by uvm_map() while mapping the new page. Once the allocation
is done we have a page; we divide it into fresh entries, and now we
are safe and can proceed.
So here is a quick-and-dirty diff that shows the concept. Please don't
kick me too hard; I wanted to prepare it quickly, because I see a lot
of UVM-related activity and if I am too thorough it will be too
late... (That is also why the km_free() case is not implemented.)
Index: uvm.h
===================================================================
RCS file: /cvs/src/sys/uvm/uvm.h,v
retrieving revision 1.41
diff -u -r1.41 uvm.h
--- uvm.h 29 Jun 2010 20:39:27 -0000 1.41
+++ uvm.h 5 Apr 2011 05:13:11 -0000
@@ -128,6 +128,7 @@
/* static kernel map entry pool */
vm_map_entry_t kentry_free; /* free page pool */
+ int numof_free_kentries;
simple_lock_data_t kentry_lock;
/* aio_done is locked by uvm.aiodoned_lock. */
Index: uvm_km.c
===================================================================
RCS file: /cvs/src/sys/uvm/uvm_km.c,v
retrieving revision 1.92
diff -u -r1.92 uvm_km.c
--- uvm_km.c 5 Apr 2011 01:28:05 -0000 1.92
+++ uvm_km.c 5 Apr 2011 05:13:11 -0000
@@ -929,6 +929,7 @@
#ifdef __HAVE_PMAP_DIRECT
panic("km_alloc: DIRECT single page");
#else
+/*
mtx_enter(&uvm_km_pages.mtx);
while (uvm_km_pages.free == 0) {
if (kd->kd_waitok == 0) {
@@ -947,6 +948,9 @@
wakeup(&uvm_km_pages.km_proc);
}
mtx_leave(&uvm_km_pages.mtx);
+*/
+ va = (vaddr_t)uvm_km_kmemalloc(kernel_map,
+ NULL, PAGE_SIZE, UVM_KMF_VALLOC);
#endif
} else {
struct uvm_object *uobj = NULL;
@@ -1000,6 +1004,7 @@
pg = pmap_unmap_direct(va);
uvm_pagefree(pg);
#else
+ /* uvm_km_doputpage's job must be implemented here */
struct uvm_km_free_page *fp = v;
mtx_enter(&uvm_km_pages.mtx);
fp->next = uvm_km_pages.freelist;
Index: uvm_map.c
===================================================================
RCS file: /cvs/src/sys/uvm/uvm_map.c,v
retrieving revision 1.132
diff -u -r1.132 uvm_map.c
--- uvm_map.c 5 Apr 2011 01:28:05 -0000 1.132
+++ uvm_map.c 5 Apr 2011 05:13:11 -0000
@@ -394,7 +394,7 @@
struct vm_map_entry *
uvm_mapent_alloc(struct vm_map *map, int flags)
{
- struct vm_map_entry *me, *ne;
+ struct vm_map_entry *me;
int s, i;
int pool_flags;
UVMHIST_FUNC("uvm_mapent_alloc"); UVMHIST_CALLED(maphist);
@@ -406,25 +406,39 @@
if (map->flags & VM_MAP_INTRSAFE || cold) {
s = splvm();
simple_lock(&uvm.kentry_lock);
- me = uvm.kentry_free;
- if (me == NULL) {
- ne = km_alloc(PAGE_SIZE, &kv_page, &kp_dirty,
+ /*
+ * If only one kentry remains we MUST allocate
+ * more (a page of) entries right away, because
+ * uvm_km_kmemalloc (called by uvm_km_alloc) may
+ * use that last kentry when it uvm_map()s the
+ * new physical page.
+ * The virtual address of the mapped page is then
+ * returned to us (by uvm_km_alloc) so we can
+ * carve fresh kentries from it and proceed.
+ */
+ if (uvm.numof_free_kentries == 1 || cold) {
+ me = km_alloc(PAGE_SIZE, &kv_page, &kp_dirty,
&kd_nowait);
- if (ne == NULL)
+ if (me == NULL)
panic("uvm_mapent_alloc: cannot allocate map "
"entry");
for (i = 0;
i < PAGE_SIZE / sizeof(struct vm_map_entry) - 1;
- i++)
- ne[i].next = &ne[i + 1];
- ne[i].next = NULL;
- me = ne;
+ i++) {
+ me[i].next = &me[i + 1];
+ uvm.numof_free_kentries++;
+ }
+ me[i].next = NULL;
+ uvm.kentry_free = me;
+ /* now useless? */
if (ratecheck(&uvm_kmapent_last_warn_time,
&uvm_kmapent_warn_rate))
printf("uvm_mapent_alloc: out of static "
"map entries\n");
}
+ me = uvm.kentry_free;
uvm.kentry_free = me->next;
+ uvm.numof_free_kentries--;
uvmexp.kmapent++;
simple_unlock(&uvm.kentry_lock);
splx(s);
@@ -468,6 +482,7 @@
simple_lock(&uvm.kentry_lock);
me->next = uvm.kentry_free;
uvm.kentry_free = me;
+ uvm.numof_free_kentries++;
uvmexp.kmapent--;
simple_unlock(&uvm.kentry_lock);
splx(s);
@@ -566,6 +581,7 @@
simple_lock_init(&uvm.kentry_lock);
uvm.kentry_free = NULL;
+ uvm.numof_free_kentries = 0;
for (lcv = 0 ; lcv < MAX_KMAPENT ; lcv++) {
kernel_map_entry[lcv].next = uvm.kentry_free;
uvm.kentry_free = &kernel_map_entry[lcv];
--
antonvm