Hi Jamal,

> Would be nice to run three sets - but i think even one would be
> sufficiently revealing.

I will try multiple runs over the weekend. During the week, the systems
are used for other purposes too.

> I expect UDP to overwhelm the receiver. So the receiver needs a lot more
> tuning (like increased rcv socket buffer sizes to keep up, IMO).

I will try that. Also, on the receiver I am using unmodified 2.6.21 bits.
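
Something along these lines on the receiver is what I will try (just a
sketch on my side, not part of the patch; the 4MB value is arbitrary, and
net.core.rmem_max may also need raising for the full value to take effect):

/* Bump the UDP receive buffer with SO_RCVBUF on the receiving socket. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int rcvbuf = 4 * 1024 * 1024;   /* ask for 4MB */
        socklen_t len = sizeof(rcvbuf);

        if (fd < 0) {
                perror("socket");
                return 1;
        }
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
                perror("setsockopt(SO_RCVBUF)");

        /* The kernel doubles the requested value to cover bookkeeping
         * overhead; read back what was actually granted. */
        if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
                printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);

        close(fd);
        return 0;
}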

> It seems to me any runs with buffer less than 512B are unable to fill
> the pipe - so will not really benefit (will probably do with nagling).
> However, the < 512 B should show equivalent results before and after the
> changes.

My earlier experiments showed that even small buffers were filling the E1000
TX descriptor slots and triggering a stop-queue condition very often. In any
case, I will also add one or two larger packet sizes (1K and 16K, in addition
to the 4K already there).
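
For illustration, the pattern I am referring to looks roughly like this
(all names are made up, this is not the actual e1000 code): every skb
takes at least one TX descriptor regardless of its size, so at a high
enough rate even small buffers exhaust the ring:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical private state for the sketch. */
struct demo_priv {
        unsigned int tx_ring_size;      /* e.g. 256 descriptors */
        unsigned int tx_in_use;         /* descriptors currently posted */
};

static int demo_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct demo_priv *priv = netdev_priv(dev);

        /* ... map the skb and post it to the hardware ring ... */
        priv->tx_in_use++;

        /* Not enough room left for a worst-case skb: stop the queue until
         * TX completion frees descriptors and wakes it again. */
        if (priv->tx_ring_size - priv->tx_in_use < MAX_SKB_FRAGS + 1)
                netif_stop_queue(dev);

        return NETDEV_TX_OK;
}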

> You can try to turn off _BTX feature in the driver and see if they are
> the same. If they are not, then the suspect change will be easy to find.

I was planning to submit my changes on top of this patch, and since it
includes a per-device configuration option, it will be easy to test with and
without this API. When I ran with this config option set to 0, the results
were almost identical to the original code. I will try to post that today for
your review/comments.
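
Conceptually, what makes the with/without comparison easy is something
like the following (only my sketch of the idea, with made-up names, not
the patch itself): the per-device option selects which transmit path is
taken, so both runs come from the same kernel image.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct demo_priv {
        int use_batched_tx;     /* the per-device config option, 0 or 1 */
};

/* Hypothetical transmit paths; only the dispatch matters here. */
static int demo_xmit_single(struct sk_buff *skb, struct net_device *dev);
static int demo_xmit_batched(struct sk_buff *skb, struct net_device *dev);

static int demo_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct demo_priv *priv = netdev_priv(dev);

        if (priv->use_batched_tx)
                return demo_xmit_batched(skb, dev);     /* new API */

        return demo_xmit_single(skb, dev);              /* original path */
}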

> Sorry, been many moons since i last played with netperf; what does
> "service demand" mean?

It gives an indication of how much CPU time is consumed to send a given
amount of data; netperf reports it as us/KB. I don't know the internals of
netperf well enough to say exactly how it is calculated.
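
For what it's worth, my rough understanding of how such a number could be
derived (this is an assumption on my part, not netperf's actual code) is
CPU time consumed over the run divided by the amount of data moved:

#include <stdio.h>

int main(void)
{
        double cpu_util = 35.0;            /* %, as netperf reports it    */
        double num_cpus = 2.0;             /* CPUs covered by that figure */
        double elapsed_sec = 60.0;         /* test duration               */
        double kbytes_moved = 7000000.0;   /* KB transferred in that time */

        /* CPU-seconds actually burned during the run. */
        double cpu_sec = (cpu_util / 100.0) * num_cpus * elapsed_sec;

        /* Service demand: microseconds of CPU per KB of data. */
        printf("service demand: %.3f us/KB\n", cpu_sec * 1e6 / kbytes_moved);

        return 0;
}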

thanks,

- KK
