On 9/29/2010 3:41 PM, Andriy Gapon wrote:
on 27/09/2010 20:54 Andriy Gapon said the following:
It seems that minidump on amd64 always dumps at least about 1GB of data
regardless of the actual memory size and usage, and thus can even be larger than
a regular dump.

Specifically, I suspect the following code:
for (va = VM_MIN_KERNEL_ADDRESS; va < MAX(KERNBASE + NKPT * NBPDR,
    kernel_vm_end); va += NBPDR) {
        i = (va >> PDPSHIFT) & ((1ul << NPDPEPGSHIFT) - 1);
        /*
         * We always write a page, even if it is zero. Each
         * page written corresponds to 2MB of space
         */
        ptesize += PAGE_SIZE;

It seems that the difference between KERNBASE and VM_MIN_KERNEL_ADDRESS is already
~500GB.  That means 500GB divided by 2MB is about 250K iterations/pages, and with
PAGE_SIZE (4KB) written per iteration that comes to 1GB of data.
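
Just to double-check that arithmetic, a tiny userland program (the ~500GB span is
the figure above; 2MB and 4KB are the usual amd64 NBPDR and PAGE_SIZE values):

#include <stdio.h>

int
main(void)
{
        /* ~500GB of KVA walked in 2MB (NBPDR) steps, one 4KB (PAGE_SIZE)
         * page written per step. */
        unsigned long span = 500UL << 30;
        unsigned long nbpdr = 2UL << 20;
        unsigned long pgsize = 4096UL;
        unsigned long pages = span / nbpdr;

        printf("iterations/pages: %lu\n", pages);                /* ~256000 */
        printf("pte map size: %lu MB\n", pages * pgsize >> 20);  /* ~1000 MB */
        return (0);
}

which prints roughly 256000 pages and ~1000 MB, i.e. the 1GB above.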

Looks like this came from amd64 KVA expansion.
And it seems a little bit wasteful?
So perhaps we need to add another level of indirection?
I.e. first dump a contiguous array of "pseudo-pde" entries that point to
chunks of "pseudo-pte" entries, so that the "pseudo-pte" entries can be sparse.
This is instead of dumping 1GB of contiguous "pseudo-ptes" as we do now.
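
Roughly, the on-disk layout could look something like this (just a sketch of the
idea; the names, the per-slot granularity and the directory size here are made up,
not taken from the minidump code):

#include <stdint.h>

/*
 * Hypothetical two-level layout: a small "pseudo-pde" directory is dumped
 * first.  Each entry is either 0 (that slice of KVA has no mappings and
 * nothing follows for it) or the file offset of the chunk of "pseudo-pte"
 * entries covering that slice.  Unused slices then cost 8 bytes in the
 * directory instead of a full run of pte pages.
 */
#define PSEUDO_PDE_SLOTS        512     /* e.g. one slot per 1GB slice of the KVA window */

struct pseudo_pde_dir {
        uint64_t slot_off[PSEUDO_PDE_SLOTS];    /* 0 = sparse hole, else offset of pte chunk */
};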


That would be the best approach. That said, for the foreseeable future, the kernel page table on amd64 will have two valid ranges, no more, no less. So, if it's much easier to modify minidump to deal with a page table that is assumed to have two contiguous parts, just do it. That assumption should remain valid for a few years.
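
For what it's worth, the "two contiguous parts" variant could be sized with
something as simple as the following (a sketch only, assuming the amd64 kernel
environment for NBPDR, PAGE_SIZE and howmany(); the helper and the range list are
hypothetical, not existing minidump code):

#include <sys/param.h>          /* howmany(), PAGE_SIZE, NBPDR */
#include <vm/vm.h>              /* vm_offset_t */

struct kva_range {
        vm_offset_t start;
        vm_offset_t end;
};

/*
 * Hypothetical helper: size the pte map from the KVA ranges actually in
 * use (e.g. the KERNBASE region and the kernel map region) instead of the
 * whole VM_MIN_KERNEL_ADDRESS .. kernel_vm_end span.
 */
static vm_offset_t
ptesize_for_ranges(const struct kva_range *r, int n)
{
        vm_offset_t size;
        int i;

        size = 0;
        for (i = 0; i < n; i++)
                size += howmany(r[i].end - r[i].start, NBPDR) * PAGE_SIZE;
        return (size);
}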

Alan
