Kernel crashes after sleep: how to debug?

2013-07-15 Thread Yuri


After a sleep/wakeup cycle, my 9.1-STABLE r253105 amd64 system sometimes 
crashes a while later. It doesn't happen every time.
See the kgdb log below. I am not sure there is enough information in it to 
point to the cause of the issue.


It looks like it crashes at this line:
#7  0x8091a181 in _mtx_trylock (m=0x1, opts=0, 
file=<optimized out>, line=0) at /usr/src/sys/kern/kern_mutex.c:295

295 if (SCHEDULER_STOPPED())
Current language:  auto; currently c
(kgdb) l
290 uint64_t waittime = 0;
291 int contested = 0;
292 #endif
293 int rval;
294
295 if (SCHEDULER_STOPPED())
296 return (1);
297
298 KASSERT(m->mtx_lock != MTX_DESTROYED,
299 ("mtx_trylock() of destroyed mutex @ %s:%d", file, 
line));


Current thread was:
* 67 Thread 100064 (PID=5: pagedaemon)  doadump (textdump=<optimized out>) at pcpu.h:234


How to find the cause of the crash?

Yuri


--- kgdb log ---
# kgdb /boot/kernel/kernel vmcore.0
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.

Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:


Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x10018
fault code  = supervisor read data, page not present
instruction pointer = 0x20:0x8091a181
stack pointer   = 0x28:0xff80d51c6ab0
frame pointer   = 0x28:0xff80d51c6ad0
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process = 5 (pagedaemon)
trap number = 12
panic: page fault
cpuid = 0
KDB: stack backtrace:
#0 0x80968416 at kdb_backtrace+0x66
#1 0x8092e43e at panic+0x1ce
#2 0x80d12940 at trap_fatal+0x290
#3 0x80d12ca1 at trap_pfault+0x211
#4 0x80d13254 at trap+0x344
#5 0x80cfc583 at calltrap+0x8
#6 0x80baea78 at vm_pageout+0x998
#7 0x808fc10f at fork_exit+0x11f
#8 0x80cfcaae at fork_trampoline+0xe
Uptime: 2h21m27s
Dumping 407 out of 2919 MB:..4%..12%..24%..32%..44%..52%..63%..71%..83%..91%

Reading symbols from /boot/modules/cuse4bsd.ko...done.
Loaded symbols for /boot/modules/cuse4bsd.ko
Reading symbols from /boot/kernel/linux.ko...Reading symbols from 
/boot/kernel/linux.ko.symbols...done.

done.
Loaded symbols for /boot/kernel/linux.ko
Reading symbols from /usr/local/libexec/linux_adobe/linux_adobe.ko...done.
Loaded symbols for /usr/local/libexec/linux_adobe/linux_adobe.ko
Reading symbols from /boot/kernel/radeon.ko...Reading symbols from 
/boot/kernel/radeon.ko.symbols...done.

done.
Loaded symbols for /boot/kernel/radeon.ko
Reading symbols from /boot/kernel/drm.ko...Reading symbols from 
/boot/kernel/drm.ko.symbols...done.

done.
Loaded symbols for /boot/kernel/drm.ko
#0  doadump (textdump=<optimized out>) at pcpu.h:234
234 pcpu.h: No such file or directory.
in pcpu.h
(kgdb) bt
#0  doadump (textdump=<optimized out>) at pcpu.h:234
#1  0x8092df16 in kern_reboot (howto=260) at 
/usr/src/sys/kern/kern_shutdown.c:449
#2  0x8092e417 in panic (fmt=0x1 <Address 0x1 out of bounds>) at 
/usr/src/sys/kern/kern_shutdown.c:637
#3  0x80d12940 in trap_fatal (frame=0xc, eva=<optimized out>) at /usr/src/sys/amd64/amd64/trap.c:879
#4  0x80d12ca1 in trap_pfault (frame=0xff80d51c6a00, 
usermode=0) at /usr/src/sys/amd64/amd64/trap.c:795
#5  0x80d13254 in trap (frame=0xff80d51c6a00) at 
/usr/src/sys/amd64/amd64/trap.c:463
#6  0x80cfc583 in calltrap () at 
/usr/src/sys/amd64/amd64/exception.S:232
#7  0x8091a181 in _mtx_trylock (m=0x1, opts=0, 
file=<optimized out>, line=0) at /usr/src/sys/kern/kern_mutex.c:295

#8  0x80baea78 in vm_pageout () at /usr/src/sys/vm/vm_pageout.c:829
#9  0x808fc10f in fork_exit (callout=0x80bae0e0 
<vm_pageout>, arg=0x0, frame=0xff80d51c6c40)
at /usr/src/sys/kern/kern_fork.c:988
#10 0x80cfcaae in fork_trampoline () at 
/usr/src/sys/amd64/amd64/exception.S:606

#11 0x0000000000000000 in ?? ()

___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "freebsd-hackers-unsubscr...@freebsd.org"


Re: expanding amd64 past the 1TB limit

2013-07-15 Thread Chris Torek
(Durn mailing list software, eating attachments... there are just
the two so I will just send them one at a time here.  I took the
individual people off the to/cc since presumably you all got the
attachments already.)

Date: Sun, 14 Jul 2013 19:39:51 -0600
Subject: [PATCH 1/2] create_pagetables: cosmetics

Using local variables with the appropriate types,
eliminate a bunch of casts and shorten the code a bit.
---
 amd64/amd64/pmap.c | 62 +++---
 1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/amd64/amd64/pmap.c b/amd64/amd64/pmap.c
index 8dcf232..46f6940 100644
--- a/amd64/amd64/pmap.c
+++ b/amd64/amd64/pmap.c
@@ -531,6 +531,10 @@ static void
 create_pagetables(vm_paddr_t *firstaddr)
 {
int i, j, ndm1g, nkpdpe;
+   pt_entry_t *pt_p;
+   pd_entry_t *pd_p;
+   pdp_entry_t *pdp_p;
+   pml4_entry_t *p4_p;
 
/* Allocate page table pages for the direct map */
ndmpdp = (ptoa(Maxmem) + NBPDP - 1) >> PDPSHIFT;
@@ -561,32 +565,26 @@ create_pagetables(vm_paddr_t *firstaddr)
KPDphys = allocpages(firstaddr, nkpdpe);
 
/* Fill in the underlying page table pages */
-   /* Read-only from zero to physfree */
+   /* Nominally read-only (but really R/W) from zero to physfree */
/* XXX not fully used, underneath 2M pages */
-   for (i = 0; (i << PAGE_SHIFT) < *firstaddr; i++) {
-   ((pt_entry_t *)KPTphys)[i] = i << PAGE_SHIFT;
-   ((pt_entry_t *)KPTphys)[i] |= PG_RW | PG_V | PG_G;
-   }
+   pt_p = (pt_entry_t *)KPTphys;
+   for (i = 0; ptoa(i) < *firstaddr; i++)
+   pt_p[i] = ptoa(i) | PG_RW | PG_V | PG_G;
 
/* Now map the page tables at their location within PTmap */
-   for (i = 0; i < nkpt; i++) {
-   ((pd_entry_t *)KPDphys)[i] = KPTphys + (i << PAGE_SHIFT);
-   ((pd_entry_t *)KPDphys)[i] |= PG_RW | PG_V;
-   }
+   pd_p = (pd_entry_t *)KPDphys;
+   for (i = 0; i < nkpt; i++)
+   pd_p[i] = (KPTphys + ptoa(i)) | PG_RW | PG_V;
 
/* Map from zero to end of allocations under 2M pages */
/* This replaces some of the KPTphys entries above */
-   for (i = 0; (i << PDRSHIFT) < *firstaddr; i++) {
-   ((pd_entry_t *)KPDphys)[i] = i << PDRSHIFT;
-   ((pd_entry_t *)KPDphys)[i] |= PG_RW | PG_V | PG_PS | PG_G;
-   }
+   for (i = 0; (i << PDRSHIFT) < *firstaddr; i++)
+   pd_p[i] = (i << PDRSHIFT) | PG_RW | PG_V | PG_PS | PG_G;
 
/* And connect up the PD to the PDP */
-   for (i = 0; i < nkpdpe; i++) {
-   ((pdp_entry_t *)KPDPphys)[i + KPDPI] = KPDphys +
-   (i << PAGE_SHIFT);
-   ((pdp_entry_t *)KPDPphys)[i + KPDPI] |= PG_RW | PG_V | PG_U;
-   }
+   pdp_p = (pdp_entry_t *)KPDPphys;
+   for (i = 0; i < nkpdpe; i++)
+   pdp_p[i + KPDPI] = (KPDphys + ptoa(i)) | PG_RW | PG_V | PG_U;
 
/*
 * Now, set up the direct map region using 2MB and/or 1GB pages.  If
@@ -596,37 +594,39 @@ create_pagetables(vm_paddr_t *firstaddr)
 * memory, pmap_change_attr() will demote any 2MB or 1GB page mappings
 * that are partially used. 
 */
+   pd_p = (pd_entry_t *)DMPDphys;
for (i = NPDEPG * ndm1g, j = 0; i < NPDEPG * ndmpdp; i++, j++) {
-   ((pd_entry_t *)DMPDphys)[j] = (vm_paddr_t)i << PDRSHIFT;
+   pd_p[j] = (vm_paddr_t)i << PDRSHIFT;
/* Preset PG_M and PG_A because demotion expects it. */
-   ((pd_entry_t *)DMPDphys)[j] |= PG_RW | PG_V | PG_PS | PG_G |
+   pd_p[j] |= PG_RW | PG_V | PG_PS | PG_G |
PG_M | PG_A;
}
+   pdp_p = (pdp_entry_t *)DMPDPphys;
for (i = 0; i < ndm1g; i++) {
-   ((pdp_entry_t *)DMPDPphys)[i] = (vm_paddr_t)i << PDPSHIFT;
+   pdp_p[i] = (vm_paddr_t)i << PDPSHIFT;
/* Preset PG_M and PG_A because demotion expects it. */
-   ((pdp_entry_t *)DMPDPphys)[i] |= PG_RW | PG_V | PG_PS | PG_G |
+   pdp_p[i] |= PG_RW | PG_V | PG_PS | PG_G |
PG_M | PG_A;
}
for (j = 0; i < ndmpdp; i++, j++) {
-   ((pdp_entry_t *)DMPDPphys)[i] = DMPDphys + (j << PAGE_SHIFT);
-   ((pdp_entry_t *)DMPDPphys)[i] |= PG_RW | PG_V | PG_U;
+   pdp_p[i] = DMPDphys + ptoa(j);
+   pdp_p[i] |= PG_RW | PG_V | PG_U;
}
 
/* And recursively map PML4 to itself in order to get PTmap */
-   ((pdp_entry_t *)KPML4phys)[PML4PML4I] = KPML4phys;
-   ((pdp_entry_t *)KPML4phys)[PML4PML4I] |= PG_RW | PG_V | PG_U;
+   p4_p = (pml4_entry_t *)KPML4phys;
+   p4_p[PML4PML4I] = KPML4phys;
+   p4_p[PML4PML4I] |= PG_RW | PG_V | PG_U;
 
/* Connect the Direct Map slot(s) up to the PML4. */
for (i = 0; i < NDMPML4E; i++) {
-   ((pdp_entry_t *)KPML4phys)[DMPML4I + i] = 

Re: expanding amd64 past the 1TB limit

2013-07-15 Thread Chris Torek
(Durn mailing list software, eating attachments... there are just
the two so I will just send them one at a time here.  I took the
individual people off the to/cc since presumably you all got the 
attachments already.)

Date: Thu, 27 Jun 2013 18:49:29 -0600
Subject: [PATCH 2/2] increase physical and virtual memory limits

Increase kernel VM space: go from .5 TB of KVA and 1 TB of direct
map, to 8 TB of KVA and 16 TB of direct map.  However, we allocate
less direct map space for small physical-memory systems.  Also, if
Maxmem is so large that there is not enough direct map space,
reduce Maxmem to fit, so that the system can boot unassisted.
---
 amd64/amd64/pmap.c  | 44 +---
 amd64/include/pmap.h| 36 +---
 amd64/include/vmparam.h | 13 +++--
 3 files changed, 69 insertions(+), 24 deletions(-)

diff --git a/amd64/amd64/pmap.c b/amd64/amd64/pmap.c
index 46f6940..5e43c93 100644
--- a/amd64/amd64/pmap.c
+++ b/amd64/amd64/pmap.c
@@ -232,6 +232,7 @@ u_int64_t   KPML4phys;  /* phys addr of kernel 
level 4 */
 
 static u_int64_t   DMPDphys;   /* phys addr of direct mapped level 2 */
 static u_int64_t   DMPDPphys;  /* phys addr of direct mapped level 3 */
+static int ndmpdpphys; /* number of DMPDPphys pages */
 
 static struct rwlock_padalign pvh_global_lock;
 
@@ -540,7 +541,18 @@ create_pagetables(vm_paddr_t *firstaddr)
ndmpdp = (ptoa(Maxmem) + NBPDP - 1) >> PDPSHIFT;
if (ndmpdp < 4) /* Minimum 4GB of dirmap */
ndmpdp = 4;
-   DMPDPphys = allocpages(firstaddr, NDMPML4E);
+   ndmpdpphys = howmany(ndmpdp, NPDPEPG);
+   if (ndmpdpphys > NDMPML4E) {
+   /*
+* Each NDMPML4E allows 512 GB, so limit to that,
+* and then readjust ndmpdp and ndmpdpphys.
+*/
+   printf("NDMPML4E limits system to %d GB\n", NDMPML4E * 512);
+   Maxmem = atop(NDMPML4E * NBPML4);
+   ndmpdpphys = NDMPML4E;
+   ndmpdp = NDMPML4E * NPDEPG;
+   }
+   DMPDPphys = allocpages(firstaddr, ndmpdpphys);
ndm1g = 0;
if ((amd_feature & AMDID_PAGE1GB) != 0)
ndm1g = ptoa(Maxmem) >> PDPSHIFT;
@@ -557,6 +569,10 @@ create_pagetables(vm_paddr_t *firstaddr)
 * bootstrap.  We defer this until after all memory-size dependent
 * allocations are done (e.g. direct map), so that we don't have to
 * build in too much slop in our estimate.
+*
+* Note that when NKPML4E > 1, we have an empty page underneath
+* all but the KPML4I'th one, so we need NKPML4E-1 extra (zeroed)
+* pages.  (pmap_enter requires a PD page to exist for each KPML4E.)
 */
nkpt_init(*firstaddr);
nkpdpe = NKPDPE(nkpt);
@@ -581,8 +597,8 @@ create_pagetables(vm_paddr_t *firstaddr)
for (i = 0; (i << PDRSHIFT) < *firstaddr; i++)
pd_p[i] = (i << PDRSHIFT) | PG_RW | PG_V | PG_PS | PG_G;
 
-   /* And connect up the PD to the PDP */
-   pdp_p = (pdp_entry_t *)KPDPphys;
+   /* And connect up the PD to the PDP (leaving room for L4 pages) */
+   pdp_p = (pdp_entry_t *)(KPDPphys + ptoa(KPML4I - KPML4BASE));
for (i = 0; i < nkpdpe; i++)
pdp_p[i + KPDPI] = (KPDphys + ptoa(i)) | PG_RW | PG_V | PG_U;
 
@@ -619,14 +635,16 @@ create_pagetables(vm_paddr_t *firstaddr)
p4_p[PML4PML4I] |= PG_RW | PG_V | PG_U;
 
/* Connect the Direct Map slot(s) up to the PML4. */
-   for (i = 0; i < NDMPML4E; i++) {
+   for (i = 0; i < ndmpdpphys; i++) {
p4_p[DMPML4I + i] = DMPDPphys + ptoa(i);
p4_p[DMPML4I + i] |= PG_RW | PG_V | PG_U;
}
 
-   /* Connect the KVA slot up to the PML4 */
-   p4_p[KPML4I] = KPDPphys;
-   p4_p[KPML4I] |= PG_RW | PG_V | PG_U;
+   /* Connect the KVA slots up to the PML4 */
+   for (i = 0; i < NKPML4E; i++) {
+   p4_p[KPML4BASE + i] = KPDPphys + ptoa(i);
+   p4_p[KPML4BASE + i] |= PG_RW | PG_V | PG_U;
+   }
 }
 
 /*
@@ -1685,8 +1703,11 @@ pmap_pinit(pmap_t pmap)
pagezero(pmap->pm_pml4);
 
/* Wire in kernel global address entries. */
-   pmap->pm_pml4[KPML4I] = KPDPphys | PG_RW | PG_V | PG_U;
-   for (i = 0; i < NDMPML4E; i++) {
+   for (i = 0; i < NKPML4E; i++) {
+   pmap->pm_pml4[KPML4BASE + i] = (KPDPphys + (i << PAGE_SHIFT)) |
+   PG_RW | PG_V | PG_U;
+   }
+   for (i = 0; i < ndmpdpphys; i++) {
pmap->pm_pml4[DMPML4I + i] = (DMPDPphys + (i << PAGE_SHIFT)) |
PG_RW | PG_V | PG_U;
}
@@ -1941,8 +1962,9 @@ pmap_release(pmap_t pmap)
 
m = PHYS_TO_VM_PAGE(pmap->pm_pml4[PML4PML4I] & PG_FRAME);
 
-   pmap->pm_pml4[KPML4I] = 0;  /* KVA */
-   for (i = 0; i < NDMPML4E; i++)  /* Direct Map */
+   for (i = 0; i < 

Re: Kernel crashes after sleep: how to debug?

2013-07-15 Thread Yuri

On 07/15/2013 00:22, Yuri wrote:

> How to find the cause of the crash?


I added WITNESS and related options, and the next crash produced these messages:
Jul 15 03:25:53 satellite kernel: panic: Bad link elm 0xfe00b780d000 
next->prev != elm

Jul 15 03:25:53 satellite kernel: cpuid = 1
Jul 15 03:25:53 satellite kernel: KDB: stack backtrace:
Jul 15 03:25:53 satellite kernel: #0 0x8094b846 at 
kdb_backtrace+0x66

Jul 15 03:25:53 satellite kernel: #1 0x809129c8 at panic+0x1d8
Jul 15 03:25:53 satellite kernel: #2 0x80b83994 at 
vm_page_requeue+0xe4

Jul 15 03:25:53 satellite kernel: #3 0x80b896c4 at vm_pageout+0xb04
Jul 15 03:25:53 satellite kernel: #4 0x808e3f65 at fork_exit+0x135
Jul 15 03:25:53 satellite kernel: #5 0x80cd72de at 
fork_trampoline+0xe



Process was pagedaemon, like before.

Yuri


Re: Intel D2500CC serial ports

2013-07-15 Thread Lev Serebryakov
Hello, John.
You wrote on 11 July 2013 at 18:14:42:

JB> Maybe try this:
JB> --- //depot/user/jhb/acpipci/dev/acpica/acpi_resource.c 2011-07-22 
17:59:31.0 
JB> +++ /home/jhb/work/p4/acpipci/dev/acpica/acpi_resource.c2011-07-22 
17:59:31.0 
JB> @@ -141,6 +141,10 @@
JB>  default:
JB> panic("%s: bad resource type %u", __func__, res->Type);
JB>  }
JB> +#if defined(__amd64__) || defined(__i386__)
JB> +if (irq < 16 && trig == ACPI_EDGE_SENSITIVE && pol == ACPI_ACTIVE_LOW)
JB> +   pol = ACPI_ACTIVE_HIGH;
JB> +#endif
JB>  BUS_CONFIG_INTR(dev, irq, (trig == ACPI_EDGE_SENSITIVE) ?
JB> INTR_TRIGGER_EDGE : INTR_TRIGGER_LEVEL, (pol == ACPI_ACTIVE_HIGH) ?
JB> INTR_POLARITY_HIGH : INTR_POLARITY_LOW);
 This patch helps me too! Could it be integrated?


-- 
// Black Lion AKA Lev Serebryakov 


Re: GPT issues with device path lengths involving make_dev_physpath_alias

2013-07-15 Thread Alan Somers
It's a compatibility problem.  If you change that constant, then any
binaries built with the old value will break if they rely on it having
a fixed value in a system or library call.  For example, the
MFIIO_QUERY_DISK ioctl in the mfi(4) driver passes a structure with an
array of size SPECNAMELEN + 1.  If you change SPECNAMELEN, then you'll
have to add a compatibility mechanism for this ioctl.  I'm sure there
are other places that would have the same problem.

Happy Hacking.

On Sun, Jul 14, 2013 at 11:50 PM, Selphie Keller wrote:
> hello hackers,
>
> I recently ran into an issue with a storage server that has some of the
> drives using GPT vs MBR, and tracked it down to a 64-character limit in
> make_dev_physpath_alias() that prevents the aliases from being created. I was
> curious whether there is any reason this couldn't be bumped from 64 to 128,
> which would make room for the GPT device paths, which run roughly 94 to 96
> characters long.
>
> - #define SPECNAMELEN 63  /* max length of devicename */
> + #define SPECNAMELEN 127 /* max length of devicename */
>
>
> http://fxr.watson.org/fxr/source/sys/param.h#L106
>
> Jul 14 22:10:17 fbsd9 kernel: make_dev_physpath_alias: WARNING - Unable to
> alias gptid/4d177c56-ce17-26e3-843e-9c8a9faf1e0f to enc@n5003048000ba7d7d
> /type@0/slot@b/elmdesc@Slot_11/gptid/4d177c56-ce17-26e3-843e-9c8a9faf1e0f -
> path too long
> Jul 14 22:10:17 fbsd9 kernel: make_dev_physpath_alias: WARNING - Unable to
> alias gptid/4b1caf38-d967-24ee-c3a0-badff404e7ed to enc@n5003048000ba7d7d
> /type@0/slot@5/elmdesc@Slot_05/gptid/4b1caf38-d967-24ee-c3a0-badff404e7ed -
> path too long
>
> -Selphie (Estella Mystagic)


am I abusing the UMA allocator?

2013-07-15 Thread Chris Torek
I have been experimenting with using the UMA (slab) allocator for
special-purpose physical address ranges.  (The underlying issue is
that we need zone-like and/or mbuf-like data structures to talk to
hardware that has "special needs" in terms of which physical pages
it can in turn use.  Each device has a limited memory window it
can access.)

For my purposes it's nice that the allocation function receives a
"zone" argument, even though the comment in the call says "zone is
passed for legacy reasons".  However, the free function does not
get the zone argument, or anything other than a single flag bit -- up
to 4 bits if you cheat harder.  This is ... less convenient (although
in my case I can use the VA being freed, instead).

What I'm wondering is what this single bit is really for; whether
the allocation and free might be made more flexible for special-
purpose back-end allocators; and whether this is really using
things as intended.

Details:

In the allocator, there's a per-"keg" uk_allocf and uk_freef
("alloc"ation and "free" "f"unction) pointer, and you can set your
own allocation and free functions for any zone with:

void uma_zone_set_allocf(uma_zone_t zone, uma_alloc allocf);
void uma_zone_set_freef(uma_zone_t zone, uma_free freef);

(Aside: it seems a bit weird that you set these per *zone*
but they're stored in the *kegs*, specifically the special
"first keg", but never mind... :-) )

Each allocf is called as:

/* arguments: uma_zone_t zone, int size, uint8_t *pflag, int wait */
mem = allocf(zone, nbytes, &flags, wait);

where "wait" is made up of malloc flags (M_WAITOK, M_NOWAIT,
M_ZERO, M_USE_RESERVE).  The "flags" argument is not initialized
at this point, so the allocation function must fill it in.  The
filled-in value is stored in the per-slab us_flags and eventually
passed back to each freef function:

/* arguments: void *mem, int size, uint8_t flag */
freef(mem, nbytes, pflag); /* where pflag = us->us_flags */

The flags are defined in sys/vm/uma.h and are the UMA_SLAB_* flags
(BOOT, KMEM, KERNEL, "PRIV", OFFP, MALLOC).  UMA_SLAB_PRIV is
described as "private".  The bit is never tested though, so it
seems that a "private" allocator can set UMA_SLAB_PRIV, or not set
it, freely.  It appears to be the only UMA_SLAB_* bit that has no
other defined meaning in uma_core.c or elsewhere.  (Not entirely
true, there's also UMA_SLAB_OFFP which is never tested or set, and
bits 0x40 and 0x80 are unused.  There's also an unused us_pad
right after that.  It looks like OFFP is a leftover, with "on" vs
"off" page slab management controlled through UMA_ZONE_HASH and
also the PG_SLAB bit in the underlying "struct vm_page".)

There's also a per-keg flag spelled UMA_ZFLAG_PRIVALLOC, along
with UMA_ZONE_NOFREE.  But UMA_ZFLAG_PRIVALLOC is never tested;
and UMA_ZONE_NOFREE is really per-keg, and you can't set it from
outside the UMA code.

When the system gets low on memory, it calls uma_reclaim(), which
does (simplified):

zone_foreach(zone_drain)
| zone_drain(zone)
  | zone_drain_wait(zone)
| bucket_cache_drain()
| zone_foreach_keg()
  | keg_drain()
| test: (UMA_ZONE_NOFREE || keg->uk_freef==NULL)
| if either is the case, return now, can't free

The issue here is that draining these special purpose, special-
physical-page-backed zones is not actually going to help the
system any (though freeing internal bucket data structures
could help slightly).  Of course I can have uk_freef == NULL,
but it is nice to keep some statistics, and maybe be able to
trade pages between various special-purpose physical spaces
(by doing my own zone_drain()s on them -- the one in uma_reclaim()
is not going to help the OS much as the physical pages cannot
be handed out to processes, and they "run out" against themselves,
not the VM system).

All in all, I'm now thinking that I'm abusing the slab allocator
too much here.  But I wonder if perhaps some minor changes to
uma_core might make this more useable, or if this is really within
the intent of the UMA code at all.

Chris


***HELP***

2013-07-15 Thread West Side Family
I Need help from all of you guys for this site www.zoo.g   (  Admin.zoo.gr ) to 
broke up the password!!! THANKS 

Re: ***HELP***

2013-07-15 Thread Jason Hellenthal
Grandma doesn't have strong enough glasses to see the keyboard sorry.

-- 
 Jason Hellenthal
 Inbox: jhellent...@dataix.net
 Voice: +1 (616) 953-0176
 JJH48-ARIN


On Jul 15, 2013, at 19:30, West Side Family  
wrote:

> I Need help from all of you guys for this site www.zoo.g   (  Admin.zoo.gr ) 
> to broke up the password!!! THANKS 
> 

