Hi,
on a T2 (8 cores / 64 strands) we see unresponsiveness as soon as we have a high
number of network connections, even on interfaces that do not carry much payload.
Logins can take ages.
intrstat shows that almost all interrupts land on a single strand (cpu56), which
handles the payload interface (e1000g1):
      device |      cpu56 %tim
-------------+-----------------
    e1000g#0 |          0  0.0
    e1000g#1 |       5452  0.7
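A short intrstat run is enough to reproduce that picture; the interval and the -c
restriction below are only examples (see intrstat(1M) on your release for the exact
options):

    # intrstat 10 3         # three 10-second samples across all CPUs
    # intrstat -c 56 10     # report only the busy strand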
I've checked with netstat whether any incoming connections are being dropped off
the listen queues:
tcpListenDrop = 0 tcpListenDropQ0 = 0
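Those are the TCP MIB counters; something like the following pulls just the
listen-queue drop statistics out of the full protocol dump:

    # netstat -s -P tcp | egrep 'tcpListenDrop'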
The system currently has many established connections, and we would like to push
that number even higher, since the box is otherwise idle:
netstat -n |grep ESTABLISHED |wc -l
5937
Output from mpstat:
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 7 204 1 6 0 0 0 0 4 0 0 0 100
1 1 0 18 19 4 17 0 0 0 0 14 0 0 0 100
2 0 0 2 9 1 6 0 0 0 0 2 0 0 0 100
3 0 0 2 6 0 3 0 0 0 0 0 0 0 0 100
4 0 0 2 5 0 3 0 0 0 0 1 0 0 0 100
5 0 0 2 4 0 3 0 0 0 0 1 0 0 0 100
6 0 0 1 4 0 2 0 0 0 0 1 0 0 0 100
7 0 0 2 4 0 2 0 0 0 0 1 0 0 0 100
8 0 0 1 4 0 2 0 0 0 0 0 0 0 0 100
9 0 0 4 7 1 5 0 0 0 0 2 0 0 0 100
10 1 0 17 19 4 17 0 0 0 0 14 0 0 0 100
11 0 0 3 9 1 6 0 0 0 0 2 0 0 0 100
12 0 0 2 6 0 4 0 0 0 0 1 0 0 0 100
13 0 0 2 6 0 3 0 0 0 0 1 0 0 0 100
14 0 0 2 5 0 3 0 0 0 0 0 0 0 0 100
15 0 0 1 4 0 2 0 0 0 0 1 0 0 0 100
16 0 0 2 4 0 2 0 0 0 0 0 0 0 0 100
17 0 0 1 4 0 2 0 0 0 0 0 0 0 0 100
18 0 0 4 7 1 5 0 0 0 0 2 0 0 0 100
19 1 0 21 20 4 18 0 0 0 0 13 0 0 0 100
20 0 0 3 10 1 7 0 0 0 0 3 0 0 0 100
21 0 0 2 7 0 4 0 0 0 0 1 0 0 0 100
22 0 0 2 5 0 3 0 0 0 0 0 0 0 0 100
23 0 0 2 5 0 3 0 0 0 0 0 0 0 0 100
24 0 0 2 4 0 2 0 0 0 0 0 0 0 0 100
25 0 0 1 4 0 2 0 0 0 0 0 0 0 0 100
26 0 0 1 4 0 2 0 0 0 0 0 0 0 0 100
27 0 0 4 6 0 4 0 0 0 0 2 0 0 0 100
28 1 0 22 41 2 64 0 0 38 0 8 0 0 0 100
29 0 0 4 36 0 60 0 0 37 0 1 0 0 0 100
30 0 0 2 6 0 3 0 0 0 0 0 0 0 0 100
31 0 0 39 52 47 3 0 0 1 0 0 0 0 0 100
32 0 0 3 7 2 3 0 0 0 0 1 0 0 0 100
33 0 0 5 9 4 4 0 0 0 0 1 0 0 0 100
34 0 0 2 5 0 4 0 0 0 0 1 0 0 0 100
35 0 0 1 4 0 2 0 0 0 0 1 0 0 0 100
36 0 0 4 7 1 5 0 0 0 0 2 0 0 0 100
37 1 0 18 20 4 18 0 0 0 0 13 0 0 0 100
38 0 0 3 10 2 6 0 0 0 0 2 0 0 0 100
39 0 0 2 6 0 4 0 0 0 0 0 0 0 0 100
40 0 0 2 5 0 3 0 0 0 0 1 0 0 0 100
41 0 0 2 5 0 3 0 0 0 0 1 0 0 0 100
42 0 0 2 4 0 3 0 0 0 0 1 0 0 0 100
43 0 0 2 4 0 2 0 0 0 0 1 0 0 0 100
44 0 0 1 4 0 2 0 0 0 0 0 0 0 0 100
45 0 0 4 7 1 5 0 0 0 0 2 0 0 0 100
46 1 0 20 26 9 18 0 0 0 0 13 0 0 0 100
47 0 0 3 9 1 6 0 0 0 0 2 0 0 0 100
48 0 0 3 7 0 4 0 0 0 0 1 0 0 0 100
49 0 0 2 5 0 3 0 0 0 0 0 0 0 0 100
50 0 0 1 5 0 2 0 0 0 0 0 0 0 0 100
51 0 0 1 4 0 2 0 0 0 0 0 0 0 0 100
52 0 0 1 4 0 2 0 0 0 0 0 0 0 0 100
53 0 0 1 4 0 2 0 0 0 0 0 0 0 0 100
54 0 0 4 7 1 5 0 0 0 0 3 0 0 0 100
55 1 0 20 22 5 20 0 0 0 0 15 0 0 0 100
56 2 0 44 46 27 22 0 0 1 0 20 0 0 0 100
57 0 0 3 11 1 8 0 0 0 0 3 0 0 0 100
58 0 0 2 7 0 4 0 0 0 0 1 0 0 0 100
59 0 0 2 5 0 3 0 0 0 0 0 0 0 0 100
60 0 0 2 5 0 3 0 0 0 0 1 0 0 0 100
61 0 0 2 5 0 3 0 0 0 0 1 0 0 0 100
62 0 0 1 4 0 2 0 0 0 0 1 0 0 0 100
63 0 0 3 6 0 4 0 0 0 0 1 0 0 0 100
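To get a feel for how much receive work that single strand is really doing for
e1000g1, the driver kstats can be watched next to mpstat. The statistic names vary
between e1000g releases, so this is only a sketch:

    # kstat -m e1000g -i 1 10 3                               # all kstats for instance 1, three 10s samples
    # kstat -m e1000g -i 1 | egrep -i 'intr|ipackets|rbytes'  # interrupt/receive counters, if present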
The login proxy (pid 13652) is currently running 178 threads:

13652 root 133M 129M sleep 59 0 0:49:01 0.1% loginproxy/178

Per-LWP microstate accounting (prstat -mL) for the same process:
13652 root 0.1 0.5 0.0 0.0 0.0 0.0 99 0.0 283 1 937 0 loginproxy/1140
13652 root 0.1 0.2 0.0 0.0 0.0 0.0 100 0.0 124 0 420 0 loginproxy/625
13652 root 0.1 0.1 0.0 0.0 0.0 66 33 0.0 31 0 1K 0 loginproxy/1036
13652 root 0.1 0.1 0.0 0.0 0.0 0.0 100 0.0 102 0 317 0 loginproxy/1138
13652 root 0.1 0.0 0.0 0.0 0.0 100 0.0 0.0 59 0 59 0 loginproxy/3
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 55 0 64 0 loginproxy/282
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 17 0 104 0 loginproxy/4
13652 root 0.0 0.0 0.0 0.0 0.0 97 2.6 0.0 12 0 314 0 loginproxy/548
13652 root 0.0 0.0 0.0 0.0 0.0 87 12 0.0 14 0 317 0 loginproxy/1032
13652 root 0.0 0.0 0.0 0.0 0.0 91 8.5 0.0 11 0 304 0 loginproxy/135
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 52 0 57 0 loginproxy/1132
13652 root 0.0 0.0 0.0 0.0 0.0 82 18 0.0 10 0 292 0 loginproxy/23
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 51 0 51 0 loginproxy/769
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 1 50 0 loginproxy/758
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1072
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1119
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/633
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1099
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/189
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1120
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 51 0 51 0 loginproxy/70
13652 root 0.0 0.0 0.0 0.0 0.0 91 8.5 0.0 10 0 303 0 loginproxy/59
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/785
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/796
13652 root 0.0 0.0 0.0 0.0 0.0 68 32 0.0 10 0 304 0 loginproxy/582
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/145
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/143
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 51 0 51 0 loginproxy/122
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1123
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/641
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 51 0 51 0 loginproxy/159
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/154
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/144
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/142
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/140
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/288
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/163
13652 root 0.0 0.0 0.0 0.0 0.0 99 1.3 0.0 11 1 299 0 loginproxy/14
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1126
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1055
13652 root 0.0 0.0 0.0 0.0 0.0 98 1.5 0.0 11 0 299 0 loginproxy/977
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 51 0 51 0 loginproxy/772
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1088
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/172
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/723
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 51 0 51 0 loginproxy/1097
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/213
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1054
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1114
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 51 0 51 0 loginproxy/283
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 51 0 51 0 loginproxy/263
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 1 50 0 loginproxy/1059
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1129
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/195
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1115
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/164
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1136
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/1105
13652 root 0.0 0.0 0.0 0.0 0.0 99 1.3 0.0 11 0 299 0 loginproxy/1044
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 51 0 51 0 loginproxy/250
13652 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 50 0 50 0 loginproxy/653
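Several of those threads spend most of their time in LCK, so the next step on my
list is to see where they block; roughly along these lines (pid and LWP ids taken
from the output above):

    # prstat -mL -p 13652 5      # keep watching the per-LWP microstates
    # pstack 13652/1036          # stack of one of the lock-heavy LWPs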
I haven't found a clue yet as to why the system behaves like this. Does the high
number of interrupts on just one strand slow the whole system down? Is there a
way to spread them across the other cores/strands?
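For what it's worth, the only knobs I have found so far in the Solaris 10 tuning
documentation that sound related are the IP squeue fanout / soft-ring settings in
/etc/system, e.g.:

    * fan new TCP connections out across the CPUs instead of
    * keeping each one on the CPU that took the interrupt
    set ip:ip_squeue_fanout=1
    * use more worker threads to fan out inbound packets
    set ip:ip_soft_rings_cnt=8

These are untested here, need a reboot, and I don't know whether they actually
apply to e1000g on this release. Would that be the right direction, or can the
e1000g interrupt itself be rebound or spread?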
Thanks
Mika