From: Andi Kleen <[EMAIL PROTECTED]>
Date: Wed, 9 Aug 2006 18:32:26 +0200
> One issue I forgot earlier and Kirill pointed out is that the
> reallocation would require vmalloc because memory will be too fragmented
> to get a big piece of physical memory. So it would add TLB pressure.
>
> Can't th
> > There will be some hiccup, but as long as the downtime
> > is limited it shouldn't be too bad.
> >
>
> Benchmarks are in order
One issue I forgot earlier and Kirill pointed out is that the
reallocation would require vmalloc because memory will be too fragmented
to get a big piece of physical memory. So it would add TLB pressure.
On Wed, 9 Aug 2006, Andi Kleen wrote:
But there's a lot of state to update (more or less
atomically, too)
Why does it need to be atomic? It might be enough
to just check a flag and poll for it in the readers and then redo the
lookup.
(I qualified "atomic" with "more or less" :-)
Sure,
From: Eric Dumazet <[EMAIL PROTECTED]>
Date: Wed, 9 Aug 2006 10:53:18 +0200
> If MAX_ORDER = 11, we have a max hash table of 8 MB : 2097152 slots
> But even 2097152 dst need 139810 pages (560 MB of low mem), so 16 times
> needs... too much ram.
>
> Probably a test like this is necessary:
>
> i
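Eric's numbers above, spelled out (assuming 4-byte bucket pointers on i686,
dst entries of roughly 270 bytes as implied by his page count, and the factor
of 16 being the usual ip_rt_max_size = 16 * buckets default):

  2^21 buckets * 4 bytes          =    8 MB of hash table (the MAX_ORDER = 11 ceiling)
  2^21 dst entries, ~15 per page  = ~139810 pages = ~560 MB of lowmem
  16 * 2^21 cached routes         =  ~8.7 GB - hopeless with ~900 MB of i686 lowmem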
On Wednesday 09 August 2006 10:09, Kirill Korotaev wrote:
> >>2) for cases where we haven't implemented dynamic
> >> table growth, specifying a proper limit argument
> >> to the hash table allocation is a sufficient
> >> solution for the time being
> >
> > Agreed, just we don't know what the proper limits are.
2) for cases where we haven't implemented dynamic
table growth, specifying a proper limit argument
to the hash table allocation is a sufficient
solution for the time being
Agreed, just we don't know what the proper limits are.
I guess it would need someone running quite a lot of benchmarks.
1) dynamic table growth is the only reasonable way to
handle this and not waste memory in all cases
Definitely that's the ideal way to go.
But there's a lot of state to update (more or less
atomically, too) in the TCP hashes. Looks tricky to
do that without hurting performance, especially
> But there's a lot of state to update (more or less
> atomically, too)
Why does it need to be atomic? It might be enough
to just check a flag and poll for it in the readers and then redo the
lookup.
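Roughly like this, perhaps (untested sketch; rt_hash_tbl, rt_hash_resizing and
the helper functions are made-up names, not existing kernel symbols, while
struct rtable / struct rt_hash_bucket are the ones in net/ipv4/route.c):

#include <linux/types.h>
#include <linux/rcupdate.h>

struct rt_hash_table {
	unsigned int		 mask;
	struct rt_hash_bucket	*buckets;
};

static struct rt_hash_table *rt_hash_tbl;	/* readers reach it via RCU */
static int rt_hash_resizing;			/* set only by the resizer  */

/* helpers left out of the sketch */
static struct rtable *__rt_chain_lookup(struct rt_hash_bucket *head, u32 hash);
static void rt_move_entries(struct rt_hash_table *old, struct rt_hash_table *new);
static void rt_hash_free(struct rt_hash_table *t);

static struct rtable *rt_lookup(u32 hash)
{
	struct rt_hash_table *t;
	struct rtable *rth;

retry:
	rcu_read_lock();
	t = rcu_dereference(rt_hash_tbl);
	rth = __rt_chain_lookup(&t->buckets[hash & t->mask], hash);
	rcu_read_unlock();

	if (!rth && rt_hash_resizing) {
		/* The entry may just have been moved into the new table.
		 * Poll until the resize is over, then redo the lookup
		 * against whatever table is published by then.          */
		while (rt_hash_resizing)
			cpu_relax();
		goto retry;
	}
	return rth;
}

static void rt_hash_resize(struct rt_hash_table *new)
{
	struct rt_hash_table *old = rt_hash_tbl;

	rt_hash_resizing = 1;
	smp_wmb();			/* flag visible before entries move    */

	rt_move_entries(old, new);	/* rehash all chains into "new"        */
	rcu_assign_pointer(rt_hash_tbl, new);
	synchronize_rcu();		/* no reader still sees the old table  */

	rt_hash_resizing = 0;
	rt_hash_free(old);
}

Writers vs. the resizer and the read-side barrier pairing are glossed over
here; the point is only that readers never block, they just retry a missed
lookup while the flag is set.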
> in the TCP hashes. Looks tricky to
> do that without hurting performance, especially since
>
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Wed, 9 Aug 2006 02:24:04 +0200
> Yes, but even with dynamic growth you still need some upper boundary
> (otherwise a DOS could eat all your memory). And it would need
> to be figured out what it is.
Absolutely. Otherwise the GC'ing of the routing cache
On Wednesday 09 August 2006 02:11, David Miller wrote:
> From: Andi Kleen <[EMAIL PROTECTED]>
> Date: Wed, 9 Aug 2006 01:23:01 +0200
>
> > The problem is to find out what a good boundary is.
>
> The more I think about this the more I lean towards
> two conclusions:
>
> 1) dynamic table growth is the only reasonable way to
> handle this and not waste memory in all cases
From: [EMAIL PROTECTED]
Date: Tue, 8 Aug 2006 17:11:29 -0700 (PDT)
> But there's a lot of state to update (more or less
> atomically, too) in the TCP hashes. Looks tricky to
> do that without hurting performance, especially since
> you'll probably want to resize the tables when you've
> discovered
On Tue, 8 Aug 2006, David Miller wrote:
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Wed, 9 Aug 2006 01:23:01 +0200
The problem is to find out what a good boundary is.
The more I think about this the more I lean towards
two conclusions:
1) dynamic table growth is the only reasonable way to
handle this and not waste memory in all cases
On Wed, 9 Aug 2006, Andi Kleen wrote:
I don't think it makes any sense to continue scaling at all after
some point - you won't get shorter hash chains anymore and the
large hash tables actually cause problems: e.g. there are situations
where we walk the complete tables and that takes a long time.
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Wed, 9 Aug 2006 01:23:01 +0200
> The problem is to find out what a good boundary is.
The more I think about this the more I lean towards
two conclusions:
1) dynamic table growth is the only reasonable way to
handle this and not waste memory in all cases
> >
> > IMHO there needs to be a maximum size (maybe related to the sum of
> > caches of all CPUs in the system?)
> >
> > Best would be to fix this for all large system hashes together.
>
> How about using an algorithm like this: up to a certain "size"
> (memory size, cache size, ...), scale the hash table
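For what it's worth, one shape such a rule could take (rough sketch only; the
512 MB knee and the 1/64 ratio are invented numbers, just to show the idea of
growing with lowmem up to a knee and then simply stopping):

#include <linux/kernel.h>	/* min()       */
#include <asm/page.h>		/* PAGE_SHIFT  */

static unsigned long __init rt_hash_buckets(unsigned long lowmem_pages,
					    unsigned long bucketsize)
{
	unsigned long knee_pages = 512UL << (20 - PAGE_SHIFT);	/* 512 MB */
	unsigned long pages = min(lowmem_pages, knee_pages);

	/* spend at most 1/64 of that memory on the hash table itself */
	return (pages << PAGE_SHIFT) / (64 * bucketsize);
}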
On Tue, 8 Aug 2006, Andi Kleen wrote:
The hash sizing code needs far more tweaks. iirc it can still allocate
several GB hash tables on large memory systems (i've seen that once in
the boot log of a 2TB system). Even on smaller systems it is usually
too much.
Yes. Linear growth with memory
> 3) should we limit TCP ehash and bhash size the same way?
Yes - or better yet, all hashes handled by alloc_large_system_hash()
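(For reference, the allocator already takes a per-caller cap - roughly this
signature in mm/page_alloc.c, where a non-zero "limit" bounds the number of
entries:)

void *__init alloc_large_system_hash(const char *tablename,
				     unsigned long bucketsize,
				     unsigned long numentries,
				     int scale,
				     int flags,
				     unsigned int *_hash_shift,
				     unsigned int *_hash_mask,
				     unsigned long limit);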
-Andi
David Miller wrote:
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Tue, 8 Aug 2006 08:53:03 +0200
That's still too big. Consider a 2TB machine, with all memory in LOWMEM.
Andi, I agree with you: route.c should pass in a suitable limit.
I'm just suggesting a fix for a separate problem.
So summa
From: Eric Dumazet <[EMAIL PROTECTED]>
Date: Tue, 8 Aug 2006 10:57:31 +0200
> I think we had a discussion about being able to dynamically resize
> the route hash table (or tcp hash table), using RCU. Did someone work
> on this? For most current machines (ram size >= 1GB), the default
> hash table size
On Tuesday 08 August 2006 05:42, David Miller wrote:
> From: Alexey Kuznetsov <[EMAIL PROTECTED]>
> Date: Mon, 7 Aug 2006 20:48:42 +0400
>
> > The patch looks OK. But I am not sure either.
> >
> > To be honest, I do not understand the sense of the HASH_HIGHMEM flag.
> > At first sight, the hash table eats low memory, and objects hashed in
> > this table also eat low memory.
From: Kirill Korotaev <[EMAIL PROTECTED]>
Date: Tue, 08 Aug 2006 12:17:57 +0400
> at least for i686 num_physpages includes highmem, so IMHO this bug
> was there for years:
Correct, I misread the x86 code.
David Miller wrote:
we quickly discover this GIT commit:
424c4b70cc4ff3930ee36a2ef7b204e4d704fd26
[IPV4]: Use the fancy alloc_large_system_hash() function for route hash table
- rt hash table allocated using alloc_large_system_hash() function.
Signed-off-by: Eric Dumazet <[EMAIL PROTECTED]>
S
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Tue, 8 Aug 2006 08:53:03 +0200
> That's still too big. Consider a 2TB machine, with all memory in LOWMEM.
Andi, I agree with you: route.c should pass in a suitable limit.
I'm just suggesting a fix for a separate problem.
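Concretely that would mean ip_rt_init() filling in the last (limit) argument
of alloc_large_system_hash() instead of passing 0 - something like the below,
where the 512*1024 cap is only a placeholder, since the whole problem is that
nobody knows the right number yet:

	rt_hash_table = (struct rt_hash_bucket *)
		alloc_large_system_hash("IP route cache",
					sizeof(struct rt_hash_bucket),
					rhash_entries,
					(num_physpages >= 128 * 1024) ?
						15 : 17,
					HASH_HIGHMEM,
					&rt_hash_log,
					&rt_hash_mask,
					512 * 1024);	/* was 0: cap at 512K buckets */

ip_rt_max_size is derived from rt_hash_mask, so it would shrink along with
the table.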
>
> Whereas it should probably go:
>
> 	if (max == 0) {
> 		max = (flags & HASH_HIGHMEM) ? nr_all_pages : nr_kernel_pages;
> 		max = (max << PAGE_SHIFT) >> 4;
> 		do_div(max, bucketsize);
> 	}
>
> or something like that.
That's still too big. Consider a 2TB machine, with all memory in LOWMEM.
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Tue, 8 Aug 2006 07:11:06 +0200
> The hash sizing code needs far more tweaks. iirc it can still
> allocate several GB hash tables on large memory systems (i've seen
> that once in the boot log of a 2TB system). Even on smaller systems
> it is usually too much.
> So for now it is probably sufficient to just get rid of the
> HASH_HIGHMEM flag here. Later we can try changing this multiplier
> of "16" to something like "8" or even "4".
The hash sizing code needs far more tweaks. iirc it can still allocate several
GB hash tables on large memory systems (
From: Alexey Kuznetsov <[EMAIL PROTECTED]>
Date: Mon, 7 Aug 2006 20:48:42 +0400
> The patch looks OK. But I am not sure either.
>
> To be honest, I do not understand the sense of the HASH_HIGHMEM flag.
> At first sight, the hash table eats low memory, and objects hashed in this
> table also eat low memory.
Hello!
> During OpenVZ stress testing we found that UDP traffic with
> random src can generate excessive rt hash growth, finally leading
> to OOM and kernel panics.
>
> It was found that for a 4GB i686 system (having 1048576 total pages and
> 225280 normal zone pages) the kernel allocates the