> It wasn't a problem in -current due to a different way that things
> were done there.
What things, exactly? I haven't noticed any differences in
vfs_cache.c or vfs_subr.c that should affect the caching behavior, so
it must just be that the system survives a large amount of wired down
memory b
In message <[EMAIL PROTECTED]> Ville-Pertti Keinonen writes:
: It isn't in -current, does this mean that it wasn't considered an
: acceptable long-term solution?
It wasn't a problem in -current due to a different way that things
were done there.
: Really large numbers of hardlinks are probably r
Warner Losh <[EMAIL PROTECTED]> writes:
> In message <[EMAIL PROTECTED]> Kenneth Culver writes:
> : Check this out, if anyone is interested.
> :
> : I found this on packetstorm.securify.com tonight. Any ideas??
> Mycroft sent this out after we had fixed this before the 3.3R
> release. At lea
In message <[EMAIL PROTECTED]> Kenneth
Culver writes:
: Check this out, if anyone is interested.
:
: I found this on packetstorm.securify.com tonight. Any ideas??
Mycroft sent this out after we had fixed this before the 3.3R
release. At least it appeared in bugtraq after it had been fixed in
Fr
In message <[EMAIL PROTECTED]>, Ville-Pertti Keinonen writes:
>Actual use obviously shouldn't include cached data. Can you say off
>the top of your head whether v_holdcnt applies to anything other than
>v_cache_src and non-VM buffer-cache (struct buf) stuff?
Sorry, no, can't answer without lo
> >If you want to include the other attack I mentioned (I just tried it,
> >got up to > 16 vnodes), then you have to exclude vnodes that are
> >only live because of v_cache_src entries from the count.
>
> It should probably only count vnodes in "actual" use.
There's no counter for that curr
In message <[EMAIL PROTECTED]>, Ville-Pertti Keinonen writes:
>> The easiest way to detect this DOS is probably to keep track of the
>> "namecache entries / live vnodes" ratio, and enforce an upper limit
>> on it.
>
>That seems like a reasonable approac
> I have been mulling over this issue for some time. My current thinking
> is that pending some more well thought out mechanism, the right thing
> to do here is to detect the DOS and react to that, not to handicap
> the caching in general.
>
> The easiest way to detect this DOS is probably to k
In message <[EMAIL PROTECTED]>, Ville-Pertti Keinonen writes:
>
>Replying to myself...
>
>> Looking at the code, it would seem that the number of directories is
>> what's causing problems (one is created for each link). The directory
>> vnodes can't be thrown out because a name cache entry exists
Replying to myself...
> Looking at the code, it would seem that the number of directories is
> what's causing problems (one is created for each link). The directory
> vnodes can't be thrown out because a name cache entry exists in each
> one. All of the name cache entries point to the same vno
Kenneth Culver <[EMAIL PROTECTED]> writes:
> I ran this on a machine running FreeBSD 3.2-RELEASE with 256MB of RAM,
> and it chugged along to about `02/03000' (meaning it created 3 files
> and about 63000 or so links), consuming a whopping 34MB of wired
> kernel memory (according to `top'), befo
this was fixed in the final hours before 3.3-release.
http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/kern/vfs_cache.c
1.38.2.3 Thu Sep 16 2:02:15 1999 UTC by alfred
CVS Tags: RELENG_3_3_0_RELEASE; Branch: RELENG_3
Diffs to 1.38.2.2
Limit aliases to a vnode in the namecache to a sysctl
Check this out, if anyone is interested.
I found this on packetstorm.securify.com tonight. Any ideas??
[Resending once, since it's been 10.5 days...]
Here's an interesting denial-of-service attack against FreeBSD >=3.0
systems. It abuses a flaw in the `new' FreeBSD vfs_cache.c; it has no
way t