On Thu, 20 Jun 2013, Jeff Roberson wrote:
On Wed, 19 Jun 2013, Zbyszek Bodek wrote:
Hello,
I've been trying to compile the kernel on my ARMv7 platform using the
sources from the current FreeBSD HEAD.
make buildkernel <.....> -j5
Roughly 1 in 2 builds fails in the way described below:
--------------------------------------------------------------------------
ing-include-dirs -fdiagnostics-show-option -nostdinc -I.
-I/root/src/freebsd-arm-superpages/sys
-I/root/src/freebsd-arm-superpages/sys/contrib/altq
-I/root/src/freebsd-arm-superpages/sys/contrib/libfdt -D_KERNEL
-DHAVE_KERNEL_OPTION_HEADERS -include opt_global.h -fno-common
-finline-limit=8000 --param inline-unit-growth=100 --param
large-function-growth=1000 -mno-thumb-interwork -ffreestanding -Werror
/root/src/freebsd-arm-superpages/sys/ufs/ffs/ffs_snapshot.c
Cannot fork: Cannot allocate memory
*** [ffs_snapshot.o] Error code 2
1 error
*** [buildkernel] Error code 2
1 error
*** [buildkernel] Error code 2
1 error
5487.888u 481.569s 7:35.65 1310.0% 1443+167k 1741+5388io 221pf+0w
--------------------------------------------------------------------------
The warning printed to stderr is:
--------------------------------------------------------------------------
vm_thread_new: kstack allocation failed
vm_thread_new: kstack allocation failed
--------------------------------------------------------------------------
I tried to find out which commit was causing this (because I had
previously been working on an older revision), and using bisect I got to:
--------------------------------------------------------------------------
Author: jeff <j...@freebsd.org>
Date: Tue Jun 18 04:50:20 2013 +0000
Refine UMA bucket allocation to reduce space consumption and improve
performance.
 - Always free to the alloc bucket if there is space. This gives LIFO
   allocation order to improve hot-cache performance. This also allows
   for zones with a single bucket per-cpu rather than a pair if the
   entire working set fits in one bucket.
 - Enable per-cpu caches of buckets. To prevent recursive bucket
   allocation one bucket zone still has per-cpu caches disabled.
 - Pick the initial bucket size based on a table driven maximum size
   per-bucket rather than the number of items per-page. This gives
   more sane initial sizes.
 - Only grow the bucket size when we face contention on the zone lock,
   this causes bucket sizes to grow more slowly.
 - Adjust the number of items per-bucket to account for the header
   space. This packs the buckets more efficiently per-page while
   making them not quite powers of two.
 - Eliminate the per-zone free bucket list. Always return buckets back
   to the bucket zone. This ensures that as zones grow into larger
   bucket sizes they eventually discard the smaller sizes. It persists
   fewer buckets in the system. The locking is slightly trickier.
 - Only switch buckets in zalloc, not zfree, this eliminates
   pathological cases where we ping-pong between two buckets.
 - Ensure that the thread that fills a new bucket gets to allocate from
   it to give a better upper bound on allocation time.
Sponsored by: EMC / Isilon Storage Division
--------------------------------------------------------------------------
I checked this several times and this commit seems to be the cause.
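For anyone following along, here is a minimal userland sketch of the
"always free to the alloc bucket" idea described in that commit message.
It is an illustration only, not the actual uma_core.c code; the structure
and function names here are invented for the example:

/*
 * Sketch of a per-CPU cache that keeps an "alloc" bucket and a spill-over
 * "free" bucket of cached items.  Freeing into the alloc bucket first means
 * the most recently freed item is the next one handed out (LIFO), so a small
 * working set can live in a single bucket per CPU.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define CACHE_BUCKET_MAX        128     /* arbitrary capacity for the sketch */

struct cache_bucket {
        void    *cb_items[CACHE_BUCKET_MAX];
        int      cb_count;              /* items currently cached */
        int      cb_size;               /* usable capacity */
};

struct cpu_cache {
        struct cache_bucket     *cc_allocbucket;        /* allocations come from here */
        struct cache_bucket     *cc_freebucket;         /* spill-over for frees */
};

/* Allocate from the alloc bucket; NULL means fall back to the zone layer. */
static void *
cache_alloc_item(struct cpu_cache *cc)
{
        struct cache_bucket *b = cc->cc_allocbucket;

        if (b != NULL && b->cb_count > 0)
                return (b->cb_items[--b->cb_count]);    /* newest item first */
        return (NULL);
}

/* Free to the alloc bucket if there is space, otherwise to the free bucket. */
static int
cache_free_item(struct cpu_cache *cc, void *item)
{
        struct cache_bucket *b = cc->cc_allocbucket;

        if (b != NULL && b->cb_count < b->cb_size) {
                b->cb_items[b->cb_count++] = item;
                return (1);
        }
        b = cc->cc_freebucket;
        if (b != NULL && b->cb_count < b->cb_size) {
                b->cb_items[b->cb_count++] = item;
                return (1);
        }
        return (0);                     /* caller falls back to the zone */
}

int
main(void)
{
        struct cache_bucket bucket = { .cb_size = CACHE_BUCKET_MAX };
        struct cpu_cache cc = { .cc_allocbucket = &bucket };
        int a, b;

        cache_free_item(&cc, &a);
        cache_free_item(&cc, &b);
        /* LIFO: the last item freed (&b) is the first one allocated. */
        assert(cache_alloc_item(&cc) == (void *)&b);
        assert(cache_alloc_item(&cc) == (void *)&a);
        printf("LIFO reuse works as described\n");
        return (0);
}

The point of the LIFO order is that the most recently freed item, whose
cache lines are still hot, is the first one reused on the next allocation.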
Can you tell me how many cores and how much memory you have? And paste the
output of vmstat -z when you see this error.
You can try changing bucket_select() at line 339 in uma_core.c to read:
static int
bucket_select(int size)
{

        return (MAX(PAGE_SIZE / size, 1));
}
This will approximate the old bucket sizing behavior.
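If it helps to see what that fallback computes, here is a quick userland
sketch; PAGE_SIZE is hard-coded to 4096 purely as an assumption for the
example, whereas in the kernel it comes from the platform headers:

#include <stdio.h>

#define PAGE_SIZE       4096            /* assumption for the example */
#define MAX(a, b)       ((a) > (b) ? (a) : (b))

static int
bucket_select(int size)
{

        return (MAX(PAGE_SIZE / size, 1));
}

int
main(void)
{
        int sizes[] = { 16, 64, 256, 1024, 4096, 8192 };
        unsigned i;

        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("item size %5d -> %4d items per bucket\n",
                    sizes[i], bucket_select(sizes[i]));
        return (0);
}

With 4 KB pages this gives 256 items per bucket for 16-byte items,
shrinking down to a single item per bucket for page-sized and larger
allocations, i.e. roughly the old items-per-page sizing.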
Just to add some more information: on my machine with 16GB of RAM, the
handful of recent UMA commits saves about 20MB of kmem at boot. There are
30% fewer buckets allocated, and all of the malloc zones have similar
amounts of cached space. In fact, the page-size malloc bucket is taking
up much less space.
I don't know if the problem is unique to ARM, but I have tested x86 limited
to 512MB of RAM without trouble. I will need the stats I mentioned above
to understand what has happened.
Jeff
Thanks,
Jeff
Does anyone observe similar behavior or have a solution?
Best regards
Zbyszek Bodek