On Thu, Apr 08, 2010 at 12:58:41AM +0300, Kostik Belousov wrote:
> On Wed, Apr 07, 2010 at 11:52:56PM +0200, Aurelien Jarno wrote:
> > On Wed, Apr 07, 2010 at 11:05:28PM +0200, Petr Salinger wrote:
> > >>> What has to be logged?
> > >> Please look at ddb command "show files", implemented in
> > >> kern/kern_descrip.c, lines 3284-3305 on HEAD. [...]
Below is a leaking recipe tested under GNU/kFreeBSD.
I would expect it leaks vnodes under plain FreeBSD as well.
I confirm it is reproducible on plain FreeBSD. It looks like a security
issue, as a normal user can create a local DoS in a few dozen
seconds.
I already posted the following patch in p[...]
What has to be logged?
Please look at ddb command "show files", implemented in kern/kern_descrip.c,
lines 3284-3305 on HEAD. Instead of doing a full dump, you can manually
inspect the output. Or, you can write some code that would search for the
suspicious vnodes among the vnodes referenced from the [...]
On Wed, Apr 07, 2010 at 04:25:52PM +0200, Petr Salinger wrote:
> I used the attached diff, with hackish snooping
> on allocated/freed memory for vnodes. When the vp pointer has been
> logged as active1/active2, it is (much) later shown with
> dead_vnodeops in DUMP_VP().
Are there a lot of such /dev/ttyp* vnodes? This indeed might be
suspicious. See below for descri[...]
On Wed, Apr 07, 2010 at 09:00:44AM +0200, Petr Salinger wrote:
> On Wed, 7 Apr 2010, Kostik Belousov wrote:
> > Can you try to get a backtrace at the points you have shown me?
> All are similar to this, with ptyp5/ptyp6/ptyp7 name changes.
> a vnode 0xff0058978000: tag devfs, type VCHR
>     usec[...]
Quoting Kostik Belousov (from Tue, 6 Apr 2010 12:24:29 +0300):
> Can you try to narrow down the sequence of operations that is needed
> to reproduce the leak ?
As already told privately to kib@ and for the benefit of others
reading here: I can reproduce a similar behavior on a recent 9-current [...]
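Narrowing it down amounts to sampling the counter around each candidate step. A minimal sketch, assuming FreeBSD's sysctl(8); SAMPLE_CMD is a hypothetical override added here so the plumbing can be tried on other systems:

```shell
# Report how vfs.numvnodes moves across a single command.
# SAMPLE_CMD defaults to the FreeBSD sysctl; override it to exercise
# the function elsewhere.
vnode_delta() {
    sample=${SAMPLE_CMD:-"sysctl -n vfs.numvnodes"}
    before=$($sample)
    "$@"
    after=$($sample)
    echo "vnodes: $before -> $after (delta $((after - before)))"
}

# Example (on FreeBSD):  vnode_delta make check
```

Running each suspected step of the gcc-4.3 testsuite under vnode_delta should point at the one that never gives the vnodes back.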
> I would expect that the sum of mnt_nvnodelistsize should be vfs.numvnodes.
> The sum is at about 3400, but vfs.numvnodes is at about 38000.
> Is my expectation correct ?
Not quite, a reclaimed vnode is removed from the mp list.
> Are they in some other list ?
Can you check
that vmstat -z | grep VNODE out[...]
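The check being asked for can be scripted. A sketch that pulls the USED column for the VNODE zone out of vmstat -z output; the sample text below is illustrative (not captured from this machine), assuming the colon-and-comma column layout of the 8.x vmstat:

```shell
# On FreeBSD, pipe the live command instead of the sample:
#   vmstat -z | awk -F'[:,]' '$1 == "VNODE" { gsub(/ /, "", $4); print $4 }'
vmstat_sample='VNODE:                    472,        0,    39907,       93,    40200,   0
VNODEPOLL:                112,        0,        0,        0,        0,   0'

used=$(printf '%s\n' "$vmstat_sample" |
    awk -F'[:,]' '$1 == "VNODE" { gsub(/ /, "", $4); print $4 }')
echo "VNODE zone USED: $used"

# Compare against:  sysctl -n vfs.numvnodes
```

If the zone's USED count tracks vfs.numvnodes while the per-mount list sums stay low, the missing vnodes are allocated but off every mount list, i.e. reclaimed but never freed.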
On Mon, Apr 05, 2010 at 10:36:19PM +0200, Petr Salinger wrote:
BTW, 7.3 seems to be unaffected by this.
Confirmed, the whole build of gcc-4.3 ends with
kern.maxvnodes: 10
kern.minvnodes: 25000
vfs.freevnodes: 22070
vfs.wantfreevnodes: 25000
vfs.numvnodes: 39907
debug.vnlru_nowhere: 0
while for the 8.0 kernel even 12 in vfs.numvnodes does not suffice.
On Sat, 3 Apr 2010 19:52:38 +0300 Kostik Belousov wrote:
> Then, after you determined the problematic mp, reboot the machine,
> redo the procedure causing leak. From ddb prompt, you can do "show mount",
> find the mp, then do "show mount <addr>". The latter command shall
> produce really large output, li[...]
On Sat, Apr 03, 2010 at 09:16:54AM +0200, Petr Salinger wrote:
> >> Another possible workaround, if you do not need path resolutions in /proc
> >> or lsof(1), is to set sysctl vfs.vlru_allow_cache_src=1.
> > I will test this.
> Does not help.
>
> kern.maxvnodes: 10
> kern.minvnodes: 25000
> vfs.vlru_allow_cache_src: 1
> vfs.freevnodes: 199
> vfs.wantfreevnodes: 25000
> vfs.numvno[...]
You can either increase kern.maxvnodes; the default value is very
conservative on amd64, where a lot of KVA is available. On the other
hand, increasing the value on i386 could easily cause KVA exhaustion.
The increase helps, the system becomes responsive. In fact I previously
suspected schedule[...]
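For reference, raising the limit is a one-liner; a sketch, where 200000 is only an illustrative value (size it to the machine's KVA, per the i386 caveat above):

```shell
# Raise the vnode limit at runtime on FreeBSD:
sysctl kern.maxvnodes=200000

# Persist it across reboots by adding to /etc/sysctl.conf:
#   kern.maxvnodes=200000
```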
On Fri, Apr 02, 2010 at 07:45:03PM +0200, Petr Salinger wrote:
> Hi,
>
> I have the same problem as in
> http://lists.freebsd.org/pipermail/freebsd-hackers/2009-August/029227.html
>
> During "make check" of gcc-4.3 the vfs.numvnodes goes up;
> after reaching the default limit 10 the machine is stuck.
>
> kern.maxvnodes: 10
> kern.sigqueue.alloc_fail: 0
> kern.sigqueue.overf[...]