Re: mmap/munmap with zero length
so mmap differs from the POSIX recommendation, right? the malloc.conf option seems more like a workaround/hack. imo it's confusing to have mmap and munmap deal differently with len=0. being able to successfully allocate memory which cannot be removed doesn't seem logical to me.

alex

Nate Eldredge schrieb am 2009-07-05:
> On Sun, 5 Jul 2009, Alexander Best wrote:
> > i'm wondering why mmap and munmap behave differently when it comes
> > to a length argument of zero. allocating memory with mmap for a
> > zero length file returns a valid pointer to the mapped region.
> > munmap however isn't able to remove a mapping with no length.
> > wouldn't it be better to either forbid this in mmap or to allow it
> > in munmap?
> POSIX has an opinion:
> http://www.opengroup.org/onlinepubs/9699919799/functions/mmap.html
> "If len is zero, mmap() shall fail and no mapping shall be
> established."
> http://www.opengroup.org/onlinepubs/9699919799/functions/munmap.html
> "The munmap() function shall fail if:
> ...
> [EINVAL] The len argument is 0."
> --
> Nate Eldredge
> neldre...@math.ucsd.edu

___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "freebsd-hackers-unsubscr...@freebsd.org"
'No buffer space available' messages from ral0 device
After I had 'wget' running for a while, the ral device became unresponsive and kept printing these messages:

Jul 5 00:30:45 eagle dhcpcd[22608]: ral0: timed out
Jul 5 00:30:45 eagle dhcpcd[22608]: ral0: lease expired 48949 seconds ago
Jul 5 00:30:45 eagle dhcpcd[22608]: ral0: writev: No buffer space available
Jul 5 00:31:03 eagle last message repeated 7 times
Jul 5 00:31:05 eagle dhcpcd[22608]: ral0: timed out
Jul 5 00:31:05 eagle dhcpcd[22608]: ral0: lease expired 48969 seconds ago
Jul 5 00:31:05 eagle dhcpcd[22608]: ral0: writev: No buffer space available
Jul 5 00:31:23 eagle last message repeated 7 times

After I put the interface down/up and set it up again, it worked fine. Is this a known problem? Any workaround?

Yuri
carriage return with stdout and stderr
i'm running something similar to this pseudo-code in an app of mine:

for (i = 0; ; i++)
    fprintf(stdout, "TEXT %d\r", i);

what's really strange is that if i print to stdout the output isn't very clean. the cursor jumps randomly within the output (being 1 line). if i print to stderr however the output looks really nice. the cursor stays right at the front of the output all the time, just like in burncd e.g. what's causing this? because i'd rather print to stdout.

alex
Re: carriage return with stdout and stderr
On Sun, Jul 05, 2009 at 01:42:01PM +0200, Alexander Best wrote:
> i'm running something similar to this pseudo-code in an app of mine:
> for (i=0 )
>     fprintf(stdout,"TEXT %d\r", int);
> what's really strange is that if i print to stdout the output isn't
> very clean. the cursor jumps randomly within the output (being 1
> line). if i print to stderr however the output looks really nice.
> the cursor stays right at the front of the output all the time.
> just like in burncd e.g.
> what's causing this? because i'd rather print to stdout.

If you are writing to a terminal, stdout is line-buffered. This means
that output is flushed when the buffer is full or a '\n' is written.
A '\r' is not good enough. You can force a write using fflush(stdout).

stderr is always unbuffered, so everything is written immediately.

--
Jilles Tjoelker
Re: carriage return with stdout and stderr
Alexander Best schrieb:
> i'm running something similar to this pseudo-code in an app of mine:
> for (i=0 )
>     fprintf(stdout,"TEXT %d\r", int);
> what's really strange is that if i print to stdout the output isn't
> very clean. the cursor jumps randomly within the output (being 1
> line). if i print to stderr however the output looks really nice.
> the cursor stays right at the front of the output all the time.
> just like in burncd e.g.
> what's causing this? because i'd rather print to stdout.

stdout is buffered, stderr is not. Try fflush().

Christoph
Re: carriage return with stdout and stderr
thanks. i remembered fprintf being buffered, but i always thought \r would also empty the buffer. now that explains everything. ;-)

alex

Jilles Tjoelker schrieb am 2009-07-05:
> On Sun, Jul 05, 2009 at 01:42:01PM +0200, Alexander Best wrote:
> > i'm running something similar to this pseudo-code in an app of mine:
> > for (i=0 )
> >     fprintf(stdout,"TEXT %d\r", int);
> > what's really strange is that if i print to stdout the output isn't
> > very clean. the cursor jumps randomly within the output (being 1
> > line). if i print to stderr however the output looks really nice.
> > the cursor stays right at the front of the output all the time.
> > just like in burncd e.g.
> > what's causing this? because i'd rather print to stdout.
> If you are writing to a terminal, stdout is line-buffered. This means
> that output is flushed when the buffer is full or a '\n' is written.
> A '\r' is not good enough. You can force a write using
> fflush(stdout).
> stderr is always unbuffered, so everything is written immediately.
Zero-length allocation with posix_memalign()
I recently submitted a patch to the vlc developers that prevents
a crash on FreeBSD 8.0 by not calling posix_memalign() with a
size argument of zero.

A simplified test case would be:

#include <stdlib.h>

int main(int argc, char **argv) {
    void *ptr;
    posix_memalign(&ptr, 16, 0);
    return (0);
}

which triggers:

Assertion failed: (size != 0), function arena_malloc, file
/usr/src/lib/libc/stdlib/malloc.c, line 3349.

Rémi Denis-Courmont, one of the vlc developers, pointed out that
passing a zero size to posix_memalign() should actually work, though:

| In principle, while useless, there is no reason why allocating an empty
| picture should not be possible. posix_memalign() does support zero-length
| allocation anyway:
| http://www.opengroup.org/onlinepubs/9699919799/functions/posix_memalign.html
| | If the size of the space requested is 0, the behavior is
| | implementation-defined; the value returned in memptr shall be either a
| | null pointer or a unique pointer.

http://mailman.videolan.org/pipermail/vlc-devel/2009-July/062299.html

I get the impression that this deviation from the standard could be
easily fixed with something similar to the following, which is mostly
copy and pasted from malloc():

index 5404798..a078d07 100644
--- a/malloc.c
+++ b/malloc.c
@@ -5303,6 +5303,15 @@ posix_memalign(void **memptr, size_t alignment, size_t size)
 	int ret;
 	void *result;
 
+	if (size == 0) {
+		if (opt_sysv == false)
+			size = 1;
+		else {
+			ret = 0;
+			*memptr = result = NULL;
+			goto RETURN;
+		}
+	}
 	if (malloc_init())
 		result = NULL;
 	else {

I assume the "goto RETURN" isn't entirely compliant either as it skips
the alignment check, but so does the malloc_init() failure branch.

Fabian
Re: Zero-length allocation with posix_memalign()
Fabian Keil wrote:
> Rémi Denis-Courmont, one of the vlc developers, pointed out that
> passing a zero size to posix_memalign() should actually work, though:
>
> | In principle, while useless, there is no reason why allocating an empty
> | picture should not be possible. posix_memalign() does support zero-length
> | allocation anyway:
> | http://www.opengroup.org/onlinepubs/9699919799/functions/posix_memalign.html
> | | If the size of the space requested is 0, the behavior is
> | | implementation-defined; the value returned in memptr shall be either a
> | | null pointer or a unique pointer.

Standards: So many to choose from. This behavior for posix_memalign was
only defined as of the 2008 standard (see the Issue 7 notes for
posix_memalign):

https://www.opengroup.org/austin/interps/uploads/40/14543/AI-152.txt

Such requirements are unfortunate, because they induce a performance
penalty for every call, just so that programs can avoid proper handling
of edge cases in the rare situations for which such edge cases are a
real possibility.

I will add the pessimization to posix_memalign once the 8.0 freeze is
over. It will be quite some time before this behavior becomes
ubiquitous, so in the meanwhile it's probably a good idea to modify vlc
to avoid such allocation requests.

Thanks,
Jason
Re: Zero-length allocation with posix_memalign()
On 7/5/09, Fabian Keil wrote:
> I recently submitted a patch to the vlc developers that prevents
> a crash on FreeBSD 8.0 by not calling posix_memalign() with a
> size argument of zero.
>
> A simplified test case would be:
>
> #include <stdlib.h>
> int main(int argc, char **argv) {
>     void *ptr;
>     posix_memalign(&ptr, 16, 0);
>     return (0);
> }
>
> which triggers:
> Assertion failed: (size != 0), function arena_malloc, file
> /usr/src/lib/libc/stdlib/malloc.c, line 3349.

Actually that assertion is triggered only if MALLOC_PRODUCTION is
undefined. (When it is undefined, it considerably slows things down.)
The 'a' flag for malloc.conf looks broken to me.

> Rémi Denis-Courmont, one of the vlc developers, pointed out
> that passing a zero size to posix_memalign() should actually
> work, though:
> [...]
>
> Fabian

--
Paul
Re: Problem with vm.pmap.shpgperproc and vm.pmap.pv_entry_max
On Fri, Jul 3, 2009 at 8:18 AM, c0re dumped wrote:
> So, I never had a problem with this server, but recently it started
> to give me the following messages *every* minute:
>
> Jul 3 10:04:00 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul 3 10:05:00 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul 3 10:06:00 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul 3 10:07:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul 3 10:08:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul 3 10:09:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul 3 10:10:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul 3 10:11:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
>
> This server is running Squid + dansguardian. The users are
> complaining about slow navigation and they are driving me crazy!
>
> Has anyone faced this problem before?
> Some infos:
>
> # uname -a
> FreeBSD squid 7.2-RELEASE FreeBSD 7.2-RELEASE #0: Fri May 1 08:49:13
> UTC 2009 r...@walker.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC i386
>
> # sysctl vm
> vm.vmtotal:
> System wide totals computed every five seconds: (values in kilobytes)
> ===
> Processes: (RUNQ: 1 Disk Wait: 1 Page Wait: 0 Sleep: 230)
> Virtual Memory: (Total: 19174412K, Active 9902152K)
> Real Memory: (Total: 1908080K Active 1715908K)
> Shared Virtual Memory: (Total: 647372K Active: 10724K)
> Shared Real Memory: (Total: 68092K Active: 4436K)
> Free Memory Pages: 88372K
>
> vm.loadavg: { 0.96 0.96 1.13 }
> vm.v_free_min: 4896
> vm.v_free_target: 20635
> vm.v_free_reserved: 1051
> vm.v_inactive_target: 30952
> vm.v_cache_min: 20635
> vm.v_cache_max: 41270
> vm.v_pageout_free_min: 34
> vm.pageout_algorithm: 0
> vm.swap_enabled: 1
> vm.kmem_size_scale: 3
> vm.kmem_size_max: 335544320
> vm.kmem_size_min: 0
> vm.kmem_size: 335544320
> vm.nswapdev: 1
> vm.dmmax: 32
> vm.swap_async_max: 4
> vm.zone_count: 84
> vm.swap_idle_threshold2: 10
> vm.swap_idle_threshold1: 2
> vm.exec_map_entries: 16
> vm.stats.misc.zero_page_count: 0
> vm.stats.misc.cnt_prezero: 0
> vm.stats.vm.v_kthreadpages: 0
> vm.stats.vm.v_rforkpages: 0
> vm.stats.vm.v_vforkpages: 340091
> vm.stats.vm.v_forkpages: 3604123
> vm.stats.vm.v_kthreads: 53
> vm.stats.vm.v_rforks: 0
> vm.stats.vm.v_vforks: 2251
> vm.stats.vm.v_forks: 19295
> vm.stats.vm.v_interrupt_free_min: 2
> vm.stats.vm.v_pageout_free_min: 34
> vm.stats.vm.v_cache_max: 41270
> vm.stats.vm.v_cache_min: 20635
> vm.stats.vm.v_cache_count: 5734
> vm.stats.vm.v_inactive_count: 242259
> vm.stats.vm.v_inactive_target: 30952
> vm.stats.vm.v_active_count: 445958
> vm.stats.vm.v_wire_count: 58879
> vm.stats.vm.v_free_count: 16335
> vm.stats.vm.v_free_min: 4896
> vm.stats.vm.v_free_target: 20635
> vm.stats.vm.v_free_reserved: 1051
> vm.stats.vm.v_page_count: 769244
> vm.stats.vm.v_page_size: 4096
> vm.stats.vm.v_tfree: 12442098
> vm.stats.vm.v_pfree: 1657776
> vm.stats.vm.v_dfree: 0
> vm.stats.vm.v_tcached: 253415
> vm.stats.vm.v_pdpages: 254373
> vm.stats.vm.v_pdwakeups: 14
> vm.stats.vm.v_reactivated: 414
> vm.stats.vm.v_intrans: 1912
> vm.stats.vm.v_vnodepgsout: 0
> vm.stats.vm.v_vnodepgsin: 6593
> vm.stats.vm.v_vnodeout: 0
> vm.stats.vm.v_vnodein: 891
> vm.stats.vm.v_swappgsout: 0
> vm.stats.vm.v_swappgsin: 0
> vm.stats.vm.v_swapout: 0
> vm.stats.vm.v_swapin: 0
> vm.stats.vm.v_ozfod: 56314
> vm.stats.vm.v_zfod: 2016628
> vm.stats.vm.v_cow_optim: 1959
> vm.stats.vm.v_cow_faults: 584331
> vm.stats.vm.v_vm_faults: 3661086
> vm.stats.sys.v_soft: 23280645
> vm.stats.sys.v_intr: 18528397
> vm.stats.sys.v_syscall: 1990471112
> vm.stats.sys.v_trap: 8079878
> vm.stats.sys.v_swtch: 105613021
> vm.stats.object.bypasses: 14893
> vm.stats.object.collapses: 55259
> vm.v_free_severe: 2973
> vm.max_proc_mmap: 49344
> vm.old_msync: 0
> vm.msync_flush_flags: 3
> vm.boot_pages: 48
> vm.max_wired: 255475
> vm.pageout_lock_miss: 0
> vm.disable_swapspace_pageouts: 0
> vm.defer_swapspace_pageouts: 0
> vm.swap_idle_enabled: 0
> vm.pageout_stats_interval: 5
> vm.pageout_full_stats_interval: 20
> vm.pageout_stats_max: 20635
> vm.max_launder: 32
> vm.phys_segs:
> SEGMENT 0:
>
> start: 0x1000
> end: 0x9a000
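For reference (not from the thread): the two tunables named in the kernel warning are boot-time loader tunables, so they go in /boot/loader.conf and take effect after a reboot. A sketch, with purely illustrative values rather than recommendations for this workload:

```
# /boot/loader.conf -- illustrative values only
vm.pmap.shpgperproc="400"        # shared pages per process; larger than the default
vm.pmap.pv_entry_max="2000000"   # explicit PV-entry cap; overrides the value
                                 # otherwise derived from shpgperproc
```

The current values can be inspected with `sysctl vm.pmap.shpgperproc vm.pmap.pv_entry_max` before deciding how far to raise them.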