"David S. Miller" wrote:
> 
> The solution you have chosen is not acceptable, the ordering is
> critical and is why all of this stuff is done this way in the first
> place.  The members must be in _that_ order and the alignment must
> occur at that precise spot.

Yes, I know.  But that patch did introduce a small difference from a
cache-line point of view: the first four entries of __tcp_listening_hash
were no longer in the same cache line as __tcp_ehash_size and friends.
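
To make the cache-line point concrete, here is a throwaway user-space
demo (invented struct and member names, pretend 32-byte line; not kernel
code).  The aligned attribute opens a fresh line at ehash, and the
leading slots of the array land on that same read-mostly line:

#include <stdio.h>
#include <stddef.h>

#define FAKE_CACHE_BYTES 32		/* pretend cache line size */

struct demo {
	int lock;			/* often-dirty members...          */
	int users;
	void *ehash			/* ...read-mostly from here on     */
		__attribute__((__aligned__(FAKE_CACHE_BYTES)));
	int ehash_size;
	void *listening_hash[8];	/* first entries share the line
					 * holding ehash and ehash_size    */
};

int main(void)
{
	printf("lock           %2zu\n", offsetof(struct demo, lock));
	printf("ehash          %2zu\n", offsetof(struct demo, ehash));
	printf("ehash_size     %2zu\n", offsetof(struct demo, ehash_size));
	printf("listening_hash %2zu\n", offsetof(struct demo, listening_hash));
	return 0;
}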

> So please propose another patch, perhaps option #1 (the easiest) which
> preserves the existing structure layout.

This reorganisation should preserve the layout as you originally intended.
I tend to think that reorganising the structure is less fragile than fully
initialising the struct: the full initialiser only stays safe if everyone
who later adds a member remembers to list it there as well.
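
In miniature, and with no claim about exactly what gcc-2.7.2.3 gets
wrong, the shapes involved look like this (invented names, user-space C,
C99 ".field =" spelling where the kernel uses the older "field:" form,
and 32 standing in for SMP_CACHE_BYTES):

/* The current shape, the one at issue: the explicitly initialised,
 * often-dirty members sit at the aligned tail of the struct and the
 * initialiser names only them.					*/
struct hashinfo_before {
	void *ehash;					/* read-mostly	*/
	int   ehash_size;
	int   lock __attribute__((__aligned__(32)));	/* often dirty	*/
};
struct hashinfo_before current = { .lock = 1 };

/* Option 1: keep that layout, but name every member so nothing in
 * the initialiser is left implicit.				*/
struct hashinfo_before opt1 = { .ehash = 0, .ehash_size = 0, .lock = 1 };

/* Option 2: move the dirty members to the front and hang the
 * alignment off the first read-mostly member instead; the partial
 * initialiser then covers a plain leading run of the struct.	*/
struct hashinfo_after {
	int   lock;					/* often dirty	*/
	void *ehash __attribute__((__aligned__(32)));	/* read-mostly	*/
	int   ehash_size;
};
struct hashinfo_after opt2 = { .lock = 1 };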

Still, if you prefer option 1, I have included that as well.  Take your
pick; both patches are tested and I have eyeballed the asm.

Question is: how many more of these are lurking...


Option 2
--------

--- linux-2.4.0-test10-pre3/include/net/tcp.h   Sat Oct 14 17:02:04 2000
+++ linux-akpm/include/net/tcp.h        Sun Oct 15 23:37:23 2000
@@ -90,6 +90,18 @@
 };
 
 extern struct tcp_hashinfo {
+       rwlock_t __tcp_lhash_lock;
+       atomic_t __tcp_lhash_users;
+       wait_queue_head_t __tcp_lhash_wait;
+       spinlock_t __tcp_portalloc_lock;
+
+       /*
+        * All the below members are written once at bootup and are
+        * never written again _or_ are predominantly read-access.
+        * Hence we align to a new cache line as all the preceding members
+        * are often dirty.
+        */
+
        /* This is for sockets with full identity only.  Sockets here will
         * always be without wildcards and will have the following invariant:
         *
@@ -97,8 +109,10 @@
         *
         * First half of the table is for sockets not in TIME_WAIT, second half
         * is for TIME_WAIT sockets only.
+        *
         */
-       struct tcp_ehash_bucket *__tcp_ehash;
+       struct tcp_ehash_bucket *__tcp_ehash
+               __attribute__((__aligned__(SMP_CACHE_BYTES)));
 
        /* Ok, let's try this, I give up, we do need a local binding
         * TCP hash as well as the others for fast bind/connect.
@@ -114,17 +128,6 @@
         */
        struct sock *__tcp_listening_hash[TCP_LHTABLE_SIZE];
 
-       /* All the above members are written once at bootup and
-        * never written again _or_ are predominantly read-access.
-        *
-        * Now align to a new cache line as all the following members
-        * are often dirty.
-        */
-       rwlock_t __tcp_lhash_lock
-               __attribute__((__aligned__(SMP_CACHE_BYTES)));
-       atomic_t __tcp_lhash_users;
-       wait_queue_head_t __tcp_lhash_wait;
-       spinlock_t __tcp_portalloc_lock;
 } tcp_hashinfo;
 
 #define tcp_ehash      (tcp_hashinfo.__tcp_ehash)
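
For reference, the layout option 2 leaves behind, sketched with stub
types so it stands alone (the bind-hash members elided by the diff
context are unchanged and only summarised in a comment; the constants
are made up):

typedef int rwlock_t;			/* stand-ins for kernel types */
typedef int atomic_t;
typedef int wait_queue_head_t;
typedef int spinlock_t;
struct tcp_ehash_bucket;
struct sock;
#define TCP_LHTABLE_SIZE 32		/* made-up value */
#define SMP_CACHE_BYTES  32		/* made-up value */

struct tcp_hashinfo {
	/* often-dirty members first */
	rwlock_t		__tcp_lhash_lock;
	atomic_t		__tcp_lhash_users;
	wait_queue_head_t	__tcp_lhash_wait;
	spinlock_t		__tcp_portalloc_lock;

	/* read-mostly members, opening a fresh cache line */
	struct tcp_ehash_bucket	*__tcp_ehash
		__attribute__((__aligned__(SMP_CACHE_BYTES)));

	/* ... __tcp_bhash, __tcp_bhash_size, __tcp_ehash_size,
	 * unchanged, elided by the diff context above ... */

	struct sock		*__tcp_listening_hash[TCP_LHTABLE_SIZE];
} tcp_hashinfo;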


Option 1
--------

--- linux-2.4.0-test10-pre3/net/ipv4/tcp_ipv4.c Sat Oct 14 17:02:04 2000
+++ linux-akpm/net/ipv4/tcp_ipv4.c      Sun Oct 15 23:47:57 2000
@@ -75,7 +75,15 @@
 void tcp_v4_send_check(struct sock *sk, struct tcphdr *th, int len, 
                       struct sk_buff *skb);
 
+/*
+ * ALL members must be initialised to prevent gcc-2.7.2.3 miscompilation
+ */
 struct tcp_hashinfo __cacheline_aligned tcp_hashinfo = {
+       __tcp_ehash:          NULL,
+       __tcp_bhash:          NULL,
+       __tcp_bhash_size:     0,
+       __tcp_ehash_size:     0,
+       __tcp_listening_hash: { NULL, },
        __tcp_lhash_lock:     RW_LOCK_UNLOCKED,
        __tcp_lhash_users:    ATOMIC_INIT(0),
        __tcp_lhash_wait: