David Miller wrote:
From: Hideo AOKI <[EMAIL PROTECTED]>
Date: Sun, 30 Dec 2007 04:01:46 -0500

diff -pruN net-2.6.25-t12t19m-p4/net/ipv4/proc.c net-2.6.25-t12t19m-p5/net/ipv4/proc.c
--- net-2.6.25-t12t19m-p4/net/ipv4/proc.c       2007-12-27 10:19:02.000000000 -0500
+++ net-2.6.25-t12t19m-p5/net/ipv4/proc.c       2007-12-29 21:09:21.000000000 -0500
@@ -56,7 +56,8 @@ static int sockstat_seq_show(struct seq_
                   sock_prot_inuse(&tcp_prot), atomic_read(&tcp_orphan_count),
                   tcp_death_row.tw_count, atomic_read(&tcp_sockets_allocated),
                   atomic_read(&tcp_memory_allocated));
-       seq_printf(seq, "UDP: inuse %d\n", sock_prot_inuse(&udp_prot));
+       seq_printf(seq, "UDP: inuse %d mem %d\n", sock_prot_inuse(&udp_prot),
+                  atomic_read(&udp_memory_allocated));
        seq_printf(seq, "UDPLITE: inuse %d\n", sock_prot_inuse(&udplite_prot));
        seq_printf(seq, "RAW: inuse %d\n", sock_prot_inuse(&raw_prot));
        seq_printf(seq,  "FRAG: inuse %d memory %d\n",

More careless patch creation.  :-/

This breaks the build because udp_memory_allocated is not added until
patch 2.

Once again I'll combine all three patches into one but I am extremely
angry about how careless and broken these two patch submissions were.

I am a little bit concerned about the performance of IVR servers
using the SIP protocol.

On those servers, each active channel typically emits/receives 50 UDP/RTP
frames per second. With the G.729 codec, each packet contains 10 bytes of
payload and about 40 bytes of IP/UDP/RTP encapsulation, so these messages
are very small.

As I am currently enjoying holidays at home, I am not able to test the performance impact of this new UDP receive accounting on my server farm.

If I understand the patch correctly, each time a packet is received (on a socket
with no previous message available in its receive queue), we are going to
atomic_inc(&some_global_var). Then the user thread that transfers this
message to userland will atomic_dec(&some_global_var). (Assuming the server
is in a normal condition, i.e. each UDP socket holds at most one message in
its receive or transmit queue.)

I have some machines with 400 active SIP channels, so this new hot cache line
will probably slow down our SMP servers because of cache-line ping-pong.

I will probably setup a test next week and let you know the results.

Maybe I read the patch incorrectly. Alternatively, we could add a new sysctl so
that we do not try to uncharge memory unless a socket's forward_alloc exceeds
a given limit (say 2 pages), so that the number of atomic_inc/dec operations
on udp_memory_allocated (or tcp_memory_allocated) is reduced.

Thank you
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
