On Wed, Nov 07, 2007 at 02:41:14AM -0800, David Miller wrote:
> From: Eric Dumazet <[EMAIL PROTECTED]>
> Date: Sun, 04 Nov 2007 12:31:28 +0100
>
> > [PATCH] INET : removes per bucket rwlock in tcp/dccp ehash table
> >
> > As done two years ago on IP route cache table (commit
> > 22c047ccbc68fa8f3fa57f0e8f906479a062c426), we can avoid using one lock per
> > hash bucket for the huge TCP/DCCP hash tables.
From: Eric Dumazet <[EMAIL PROTECTED]>
Date: Sun, 04 Nov 2007 12:31:28 +0100
> [PATCH] INET : removes per bucket rwlock in tcp/dccp ehash table
>
> As done two years ago on IP route cache table (commit
> 22c047ccbc68fa8f3fa57f0e8f906479a062c426), we can avoid using one lock per
> hash bucket for the huge TCP/DCCP hash tables.
From: David Miller <[EMAIL PROTECTED]>
Date: Sun, 04 Nov 2007 20:24:54 -0800 (PST)
> From: Andi Kleen <[EMAIL PROTECTED]>
> Date: Mon, 5 Nov 2007 00:01:03 +0100
>
> > If it's for sparse cpu ids -- x86 handles those with an
> > translation array.
>
> I would rather not do this, so much assembler
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Mon, 5 Nov 2007 00:01:03 +0100
> On Sunday 04 November 2007 22:56:21 David Miller wrote:
> > From: Andi Kleen <[EMAIL PROTECTED]>
>
> > This makes a huge difference as we have to set NR_CPUS to 4096
> > in order to handle the cpu numbering of some UltraSPARC-IV
> > machines.
On Sunday 04 November 2007 22:56:21 David Miller wrote:
> From: Andi Kleen <[EMAIL PROTECTED]>
> This makes a huge difference as we have to set NR_CPUS to 4096
> in order to handle the cpu numbering of some UltraSPARC-IV
> machines.
Really? Hopefully you have a large enough stack then. There are
Eric Dumazet wrote, On 11/04/2007 10:23 PM:
> Jarek Poplawski a écrit :
>> Jarek Poplawski wrote, On 11/04/2007 06:58 PM:
>>
>>> Eric Dumazet wrote, On 11/04/2007 12:31 PM:
>> ...
>>
+static inline int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo)
+{
>> ...
>>
+ if (sizeof(rwlock_t) != 0) {
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Sun, 4 Nov 2007 13:26:38 +0100
>
> > But I suspect distribution kernels enable CONFIG_HOTPLUG_CPU so
> > num_possible_cpus() will be NR_CPUS.
>
> Nope, on x86 num_possible_cpus() is derived from BIOS tables these days.
And similarly on SPARC64 it will
Jarek Poplawski a écrit :
Jarek Poplawski wrote, On 11/04/2007 06:58 PM:
Eric Dumazet wrote, On 11/04/2007 12:31 PM:
...
+static inline int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo)
+{
...
+ if (sizeof(rwlock_t) != 0) {
...
+ for (i = 0; i < size; i++)
Jarek Poplawski wrote, On 11/04/2007 06:58 PM:
> Eric Dumazet wrote, On 11/04/2007 12:31 PM:
...
>> +static inline int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo)
>> +{
...
>> + if (sizeof(rwlock_t) != 0) {
...
>> + for (i = 0; i < size; i++)
as well.)
You know, I can even imagine, there are people, who have K&R around their
beds, instead of some other book, so they could be serious about such
things. (But, don't worry, it's not me - happily I'm not serious!)
This patch looks OK now, but a bit of grumbling shouldn't hurt.
Andi Kleen a écrit :
But I suspect distributions kernels enable CONFIG_HOTPLUG_CPU so
num_possible_cpus() will be NR_CPUS.
Nope, on x86 num_possible_cpus() is derived from BIOS tables these days.
Good to know, thank you Andi for this clarification.
> But I suspect distributions kernels enable CONFIG_HOTPLUG_CPU so
> num_possible_cpus() will be NR_CPUS.
Nope, on x86 num_possible_cpus() is derived from BIOS tables these days.
-Andi
[PATCH] INET : removes per bucket rwlock in tcp/dccp ehash table
As done two years ago on IP route cache table (commit
22c047ccbc68fa8f3fa57f0e8f906479a062c426), we can avoid using one lock per
hash bucket for the huge TCP/DCCP hash tables.
On a typical x86_64 platform, this saves about 2MB or 4MB of ram, for little
performance differences.
> > Also the EHASH_LOCK_SZ == 0 special case is a little strange. Why did
> > you add that?
>
> He explained this in another reply, because ifdefs are ugly.
I meant why having it at all?
> Any use that makes
> "sense" is a case where the code should be rewritten to decrease the
> lock hold time.
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Sun, 4 Nov 2007 00:18:14 +0100
> On Thursday 01 November 2007 11:16:20 Eric Dumazet wrote:
>
> Some quick comments:
>
> > +#if defined(CONFIG_SMP) || defined(CONFIG_PROVE_LOCKING)
> > +/*
> > + * Instead of using one rwlock for each inet_ehash_bucket, we use a table of locks
On Thursday 01 November 2007 11:16:20 Eric Dumazet wrote:
Looks good from a quick look. Thanks for doing that work.
Some quick comments:
> +#if defined(CONFIG_SMP) || defined(CONFIG_PROVE_LOCKING)
> +/*
> + * Instead of using one rwlock for each inet_ehash_bucket, we use a table of locks
From: Eric Dumazet <[EMAIL PROTECTED]>
Date: Thu, 01 Nov 2007 20:17:35 +0100
> But if it was an issue, the spinlock array used in IP route cache
> would have the same problem and *someone* should already have
> complained...
I'd say that having a few MB less of rwlock cache lines to touch offsets
From: Eric Dumazet <[EMAIL PROTECTED]>
Date: Thu, 01 Nov 2007 18:54:24 +0100
> Stephen Hemminger a écrit :
> > Longterm is there any chance of using rcu for this? Seems like
> > it could be a big win.
>
> This was discussed in the past, and I even believe some patch was proposed,
> but some guys
Rick Jones a écrit :
Something is telling me finding a 64 core system with a suitable
workload to try this could be a good thing. Wish I had one at my
disposal.
Maybe on big NUMA machines, we might prefer to spread the rwlock array on
multiple nodes (i.e. using vmalloc() instead of kmalloc()).
Rick Jones a écrit :
Eric Dumazet wrote:
Stephen Hemminger a écrit :
On Thu, 01 Nov 2007 11:16:20 +0100
Eric Dumazet <[EMAIL PROTECTED]> wrote:
As done two years ago on IP route cache table (commit
22c047ccbc68fa8f3fa57f0e8f906479a062c426) , we can avoid using one
lock per hash bucket for the huge TCP/DCCP hash tables.
Eric Dumazet wrote:
Stephen Hemminger a écrit :
On Thu, 01 Nov 2007 11:16:20 +0100
Eric Dumazet <[EMAIL PROTECTED]> wrote:
As done two years ago on IP route cache table (commit
22c047ccbc68fa8f3fa57f0e8f906479a062c426) , we can avoid using one
lock per hash bucket for the huge TCP/DCCP hash tables.
Jarek Poplawski a écrit :
Hi,
A few doubts below:
+#if defined(CONFIG_SMP) || defined(CONFIG_PROVE_LOCKING)
Probably "|| defined(CONFIG_DEBUG_SPINLOCK)" is needed here.
Not sure, because DEBUG_SPINLOCK only applies to spinlocks.
Here we deal with rwlocks.
+/*
+ * Instead of using one rwlock for each inet_ehash_bucket, we use a table of locks
Stephen Hemminger a écrit :
On Thu, 01 Nov 2007 11:16:20 +0100
Eric Dumazet <[EMAIL PROTECTED]> wrote:
As done two years ago on IP route cache table (commit
22c047ccbc68fa8f3fa57f0e8f906479a062c426) , we can avoid using one lock per
hash bucket for the huge TCP/DCCP hash tables.
On a typical x86_64 platform, this saves about 2MB or 4MB of ram, for little
performance differences.
On Thu, 01 Nov 2007 11:16:20 +0100
Eric Dumazet <[EMAIL PROTECTED]> wrote:
> As done two years ago on IP route cache table (commit
> 22c047ccbc68fa8f3fa57f0e8f906479a062c426) , we can avoid using one lock per
> hash bucket for the huge TCP/DCCP hash tables.
>
> On a typical x86_64 platform, this saves about 2MB or 4MB of ram, for little
> performance differences.
Hi,
A few doubts below:
Eric Dumazet wrote:
> As done two years ago on IP route cache table (commit
> 22c047ccbc68fa8f3fa57f0e8f906479a062c426) , we can avoid using one lock per
> hash bucket for the huge TCP/DCCP hash tables.
...
> diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h
On Thu, Nov 01, 2007 at 04:03:40AM -0700, David Miller wrote:
> From: Eric Dumazet <[EMAIL PROTECTED]>
> Date: Thu, 01 Nov 2007 11:16:20 +0100
>
> > As done two years ago on IP route cache table (commit
> > 22c047ccbc68fa8f3fa57f0e8f906479a062c426) , we can avoid using one lock per
> > hash bucket for the huge TCP/DCCP hash tables.
On Thu, 1 Nov 2007, Eric Dumazet wrote:
> @@ -134,6 +156,13 @@ static inline struct inet_ehash_bucket
> *inet_ehash_bucket(
> return &hashinfo->ehash[hash & (hashinfo->ehash_size - 1)];
> }
>
> +static inline rwlock_t *inet_ehash_lockp(struct inet_hashinfo *hashinfo,
> + unsigned int hash)
...These t
From: Eric Dumazet <[EMAIL PROTECTED]>
Date: Thu, 01 Nov 2007 11:16:20 +0100
> As done two years ago on IP route cache table (commit
> 22c047ccbc68fa8f3fa57f0e8f906479a062c426) , we can avoid using one lock per
> hash bucket for the huge TCP/DCCP hash tables.
>
> On a typical x86_64 platform, this saves about 2MB or 4MB of ram, for little
> performance differences.
As done two years ago on IP route cache table (commit
22c047ccbc68fa8f3fa57f0e8f906479a062c426), we can avoid using one lock per
hash bucket for the huge TCP/DCCP hash tables.
On a typical x86_64 platform, this saves about 2MB or 4MB of ram, for little
performance differences. (We hit a different cache line for the rwlock, but the
hash bucket cache lines are better shared among cpus.)