Hi again!

2008/12/24 pluknet <pluk...@gmail.com>:
> 2008/12/24 pluknet <pluk...@gmail.com>:
>> 2008/12/24 pluknet <pluk...@gmail.com>:
>>> 2008/12/24 pluknet <pluk...@gmail.com>:
>>>> Server version: Apache/2.2.11 (Unix) built from sources.
>>>>
>>>> After issuing kill -9, the process gets stuck in the vmopar state forever.
>>>> aaa301      2313  0.0  0.0     0     8  ??  DE    3:10PM   0:00.01 /home/aaa301/myb vmopar
>>>>
>>>> System: FreeBSD 6.2 i386.
>>>>
>>>
>>> One important note.
>>> The kernel is built with options QUOTA, and this problem is triggered
>>> only when the user is over quota (usage exceeds both the quota and the limit).
>>
>> A bit later, various processes begin to get stuck in the "ufs" state.
>
> Backtrace of the process stuck in vmopar:
>
> db> bt 1385
> Tracing pid 1385 tid 100181 td 0xc6c19960
> sched_switch(c6c19960,0,1) at sched_switch+0x15b
> mi_switch(1,0) at mi_switch+0x270
> sleepq_switch(c2954ec8,c0a3d0a0,0,c094c4eb,211,...) at sleepq_switch+0xc1
> sleepq_wait(c2954ec8,0,c096897c,709,c096897c,...) at sleepq_wait+0x46
> msleep(c2954ec8,c0ab8e80,244,c0968ca9,0,c9030444,0,c0968eb0,200,c2954ec8,82) at msleep+0x279
> vm_page_sleep_if_busy(c2954ec8,1,c0968ca9) at vm_page_sleep_if_busy+0x7c
> vm_object_page_remove(c9030444,4,0,8000,0,0) at vm_object_page_remove+0xf9
> vnode_pager_setsize(c903c000,4000,0) at vnode_pager_setsize+0xbd
> ffs_write(f734a78c) at ffs_write+0x264
> VOP_WRITE_APV(c0a09b00,f734a78c) at VOP_WRITE_APV+0x112
> vnode_pager_generic_putpages(c903c000,f734a8d0,9000,5,f734a860,...) at vnode_pager_generic_putpages+0x1ef
> vop_stdputpages(f734a814) at vop_stdputpages+0x1a
> VOP_PUTPAGES_APV(c0a09b00,f734a814) at VOP_PUTPAGES_APV+0x8c
> vnode_pager_putpages(c9030444,f734a8d0,9,5,f734a860) at vnode_pager_putpages+0x7e
> vm_pageout_flush(f734a8d0,9,5,0,0,...) at vm_pageout_flush+0x112
> vm_object_page_collect_flush(c9030444,c29505a8,251,5,4a,...) at vm_object_page_collect_flush+0x2a0
> vm_object_page_clean(c9030444,0,0,0,0,...) at vm_object_page_clean+0x184
> vm_object_terminate(c9030444) at vm_object_terminate+0x60
> vnode_destroy_vobject(c903c000,c6973500,f734aab8,c6c19960,0,...) at vnode_destroy_vobject+0x39
> ufs_reclaim(f734aab8) at ufs_reclaim+0x46
> VOP_RECLAIM_APV(c0a09b00,f734aab8) at VOP_RECLAIM_APV+0x7e
> vgonel(c903c000) at vgonel+0x12d
> vrecycle(c903c000,c6c19960) at vrecycle+0x38
> ufs_inactive(f734ab40) at ufs_inactive+0x2af
> VOP_INACTIVE_APV(c0a09b00,f734ab40) at VOP_INACTIVE_APV+0x7e
> vinactive(c903c000,c6c19960) at vinactive+0x72
> vrele(c903c000,c9030444,0,c096897c,1a2,...) at vrele+0x14a
> vm_object_vndeallocate(c9030444) at vm_object_vndeallocate+0xc0
> vm_object_deallocate(c9030444,c9030444,0,c0968016,8e7) at vm_object_deallocate+0xb3
> vm_map_entry_delete(c722a000,c7194bf4,f734ac20,c081ca37,c722a000,...) at vm_map_entry_delete+0x130
> vm_map_delete(c722a000,0,bfc00000) at vm_map_delete+0x18f
> vmspace_exit(c6c19960,c0a4bde0,0,c09463ea,125,...) at vmspace_exit+0xd5
> exit1(c6c19960,9,2831a4b4,c6c19960,c7234000,...) at exit1+0x496
> sigexit(c6c19960,9,c7234aa8,0,c09499bc,...) at sigexit+0xdf
> postsig(9) at postsig+0x160
> ast(f734ad38) at ast+0x35e
> doreti_ast() at doreti_ast+0x17
>
> db> show alllock
> Process 1385 (httpd) thread 0xc6c19960 (100181)
> exclusive sx user map r = 0 (0xc722a044) locked @
> /usr/src/sys_uvmem_uip.6.2_RELEASE/vm/vm_map.c:307
>

Today I found some interesting details on how to reproduce my problem.

Apache 2.x getting stuck in "vmopar", followed by various other processes
getting stuck in "ufs", is only triggered with these options enabled
in php.ini:

extension="xcache.so"

xcache.size=64M
xcache.count=8
xcache.slot=64K
xcache.var_size=64M
xcache.var_count=8
xcache.var_slots=64K
xcache.mmap_path=/tmp/xcache

Perhaps the problem is related to the interaction between mmap and threads.
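
In case it helps narrow this down, here is a minimal test case sketch I am
thinking of trying, based on my guess that the trigger is dirty MAP_SHARED
pages on the quota-enabled filesystem being flushed during process exit while
the user is over quota. The file path and the 64M mapping size below are just
placeholders (roughly matching xcache.size), not something I have verified:

    #include <sys/mman.h>

    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define MAPSIZE (64UL * 1024 * 1024)    /* roughly xcache.size */

    int
    main(void)
    {
            /* Placeholder path on the quota-enabled filesystem. */
            const char *path = "/home/aaa301/mapfile";
            char *p;
            int fd;

            fd = open(path, O_RDWR | O_CREAT, 0600);
            if (fd == -1)
                    err(1, "open");
            if (ftruncate(fd, MAPSIZE) == -1)
                    err(1, "ftruncate");
            p = mmap(NULL, MAPSIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    err(1, "mmap");
            memset(p, 0xa5, MAPSIZE);       /* dirty every page */
            printf("pid %d: pages dirtied, now kill -9 this process\n",
                (int)getpid());
            pause();                        /* wait here for the SIGKILL */
            return (0);
    }

If my guess is right, running this as the over-quota user and then issuing
kill -9 should leave it in "vmopar" the same way the httpd children do.
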
Any thoughts?

> --
> wbr,
> pluknet
>