Hi,

I have FreeBSD 4.4, and I wrote simple TCP and UDP programs: basically the
client sends some bytes and the server echoes the messages back. I am using
Ethernet, whose MTU is around 1400+ (I forget exactly), and the NIC is a
"3Com Fast EtherLink XC PCI".
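For reference, the TCP client is essentially the following (a minimal
sketch, not my exact code -- the real program loops over message sizes,
times the round trips, and also has a UDP variant; the port and server
address here are placeholders):

    /* Minimal TCP echo client sketch; the server just accept()s,
     * read()s, and write()s the same bytes back. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <err.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
            char buf[4096];         /* payload content does not matter */
            int len = 1450;         /* one of the sizes in the table below */
            struct sockaddr_in sin;
            int s;

            if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0)
                    err(1, "socket");

            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_port = htons(7777);                   /* arbitrary */
            sin.sin_addr.s_addr = inet_addr("10.0.0.2");  /* server */

            if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
                    err(1, "connect");

            if (write(s, buf, len) != len)  /* send the message ... */
                    err(1, "write");
            if (read(s, buf, sizeof(buf)) < 0)  /* ... read the echo back
                                                 * (a loop is needed for
                                                 * large messages; omitted
                                                 * for brevity) */
                    err(1, "read");

            close(s);
            return (0);
    }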

Using a sample-based profiling system, I obtained the timing distribution
for the "xl_intr" function (in src/sys/pci/if_xl.c):

Bytes   TCP (%)   UDP (%)
 200     0.01      0.01
1000     0.02      0.01
1200     0.01      0.01
1400     0.03      0.01
1450    18.12      0.01  <----
1500    21.78      0.01
1600    19.07      0.01
3000    16.06      0.01

Reading the table (take the first row as an example): when I run the
program over TCP, sending and receiving 200-byte messages, the xl_intr
routine spends only 0.01% of the overhead processing time (transmission
time not included).
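For context, here is my back-of-the-envelope count of how many TCP
segments each message size should turn into. The MSS figures are my
assumption, not measured: 1460 is the standard 1500-byte Ethernet MTU
minus the 20-byte IP and 20-byte TCP headers, and with RFC 1323
timestamps on (the net.inet.tcp.rfc1323 sysctl, which I believe defaults
to 1), 12 more bytes go to options, leaving 1448:

    /* Rough segment count per message -- assumptions as above. */
    #include <stdio.h>

    #define MSS 1448  /* assumed: 1500 - 20 (IP) - 20 (TCP) - 12 (tstamps) */

    int
    main(void)
    {
            int sizes[] = { 200, 1000, 1200, 1400, 1450, 1500, 1600, 3000 };
            int i;

            for (i = 0; i < 8; i++)
                    printf("%4d bytes -> %d segment(s)\n",
                        sizes[i], (sizes[i] + MSS - 1) / MSS);
            return (0);
    }

If those assumptions hold, 1450 is exactly where each message starts
needing two segments, which is also where the jump in the table begins.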

--> Now, the weird result (for TCP) is that after the message size hits
the MTU (1400+), xl_intr suddenly jumps to spending MORE THAN 15% of the
overhead processing time. I don't know whether this result is correct or
not, but it looks like a bug to me; I don't think a device driver should
spend that much time.
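(To pin down the exact boundary, the interface MTU can be read directly;
a minimal sketch, assuming the interface is xl0:)

    /* Query the MTU of xl0 via SIOCGIFMTU; the name "xl0" is my guess. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <err.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
            struct ifreq ifr;
            int s;

            if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
                    err(1, "socket");
            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, "xl0", sizeof(ifr.ifr_name));
            if (ioctl(s, SIOCGIFMTU, &ifr) < 0)
                    err(1, "SIOCGIFMTU");
            printf("xl0 MTU = %d\n", ifr.ifr_mtu);
            return (0);
    }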

The long time is spent not in other functions called from xl_intr, but in
the body of xl_intr itself.

Can anyone help me find out what's going on here?

Or, if this is too detailed, can someone please tell me where I can find
documentation for the 'xl' functions (a very detailed one: the structure,
the implementation, etc.)?


Thank you very much for your time, I appreciate it.
Haryadi



