On Fri, 2001/10/12 at 10:07:10 -0700, Matt Dillon wrote:
> 
> :Mark,
> :
> :> I also placed some checks on vm_map_delete
> :
> :I did that also, and as far as I understand everything works fine.
> :The only thing I found was the fact that when contigmalloc() grabs the
> :contig pages it sets the value of pga[i] (for i in allocated pages)
> :note that: vm_page_t pga = vm_page_array;
> :
> :Then contigfree() does a pretty good job, but does not reset the values
> :of pga[i] to pqtype == PQ_FREE (pqtype = pga[i].queue - pga[i].pc)
> :
> :So the next contigmalloc() requiring the same number of pages fails on
> :the previously released pages because they are not PQ_FREE
> :
> :The other thing that puzzled me is that vm_map_delete(), when
> :called by contigfree(), has a variable
> :...

I also looked into this a while ago, but got stuck at some point. I
have just looked at it again, and I think I have found a solution.

>     I think what is going on is that contigmalloc() is wiring the pages
>     but placing them in a pageable container (entry->wired_count == 0),
>     so when contigfree() kmem_free()'s the block the system does not know
>     that it must unwire the pages.  This leaves the pages wired and prevents
>     them from being freed.
> 
>     I haven't found a quick and easy solution to the problem yet.  kmem_alloc()
>     doesn't do what we want either.  I tried calling vm_map_pageable() in
>     contigmalloc1() but it crashed the machine, so there might be something
>     else going on as well.

This is probably because the map entries have a NULL object
pointer. vm_map_pageable() calls vm_fault_wire(), which will then fail.

I have attached a patch which works for me. It duplicates most of the
logic of kmem_alloc() in that it calls vm_map_findspace() first and
then vm_map_insert() (which is basically what kmem_alloc_pageable()
does too, except that here kernel_object is passed instead of a NULL
pointer, so the map entry gets a valid object pointer). The pages are
then inserted into the object as before, and finally the map entries
are marked as wired via vm_map_pageable(). Since this also calls
vm_fault_wire(), which among other things does a vm_page_wire(),
contigmalloc() does not need to wire the pages itself.

The pmap_kenter() calls can also be removed, since the pages will be
mapped in any case by vm_fault().

        - thomas
--- vm_contig.c.orig    Fri Oct 12 20:05:09 2001
+++ vm_contig.c Fri Oct 12 20:44:03 2001
@@ -76,6 +76,8 @@
 #include <vm/vm.h>
 #include <vm/vm_param.h>
 #include <vm/vm_kern.h>
+#include <vm/pmap.h>
+#include <vm/vm_map.h>
 #include <vm/vm_object.h>
 #include <vm/vm_page.h>
 #include <vm/vm_pageout.h>
@@ -232,7 +234,6 @@
                        m->busy = 0;
                        m->queue = PQ_NONE;
                        m->object = NULL;
-                       vm_page_wire(m);
                }
 
                /*
@@ -240,24 +241,31 @@
                 * Allocate kernel VM, unfree and assign the physical pages to it and
                 * return kernel VM pointer.
                 */
-               tmp_addr = addr = kmem_alloc_pageable(map, size);
-               if (addr == 0) {
+               vm_map_lock(map);
+               if (vm_map_findspace(map, vm_map_min(map), size, &addr) !=
+                   KERN_SUCCESS) {
                        /*
                         * XXX We almost never run out of kernel virtual
                         * space, so we don't make the allocated memory
                         * above available.
                         */
+                       vm_map_unlock(map);
                        splx(s);
                        return (NULL);
                }
+               vm_object_reference(kernel_object);
+               vm_map_insert(map, kernel_object, addr - VM_MIN_KERNEL_ADDRESS,
+                   addr, addr + size, VM_PROT_ALL, VM_PROT_ALL, 0);
+               vm_map_unlock(map);
 
+               tmp_addr = addr;
                for (i = start; i < (start + size / PAGE_SIZE); i++) {
                        vm_page_t m = &pga[i];
                        vm_page_insert(m, kernel_object,
                                OFF_TO_IDX(tmp_addr - VM_MIN_KERNEL_ADDRESS));
-                       pmap_kenter(tmp_addr, VM_PAGE_TO_PHYS(m));
                        tmp_addr += PAGE_SIZE;
                }
+               vm_map_pageable(map, addr, addr + size, FALSE);
 
                splx(s);
                return ((void *)addr);
