Dear colleagues,
I'm writing my master's thesis in a project that uses the raw API of
lwip-2.0.3. Although my implementation works, I want to understand a
certain interaction between the Nagle algorithm and the way I call (or
don't call) tcp_output, but I am not quite sure what is happening.
In the case of a T
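For reference, the send path I have in mind looks roughly like this (a
minimal sketch, assuming the pcb is already connected; error handling is
reduced to the essentials):

#include "lwip/tcp.h"

/* Queue a chunk on the raw API and explicitly ask the stack to send it.
 * Without the tcp_output() call the data may sit in the send queue until
 * Nagle, an incoming ACK or one of the TCP timers pushes it out. */
static err_t send_chunk(struct tcp_pcb *pcb, const void *data, u16_t len)
{
  err_t err = tcp_write(pcb, data, len, TCP_WRITE_FLAG_COPY);
  if (err != ERR_OK) {
    return err; /* e.g. ERR_MEM: retry later from the sent/poll callback */
  }
  /* tcp_nagle_disable(pcb) would stop Nagle from holding small segments
   * back; here Nagle is left enabled on purpose. */
  return tcp_output(pcb);
}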
vr roriz wrote:
>[..]
>Then, I added the send_now control option, letting tcp_output (with
>send_now = 0) to be called by lwip itself.
Ok, so the application *never* calls tcp_output() but you leave this completely
to the stack? That might work somehow, but will lead to totally unpredictable
performance, as you have measured.
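Just to make sure we mean the same thing: I read the quoted "send_now"
option (which is your own flag, not something from lwIP) as roughly the
following sketch:

#include "lwip/tcp.h"

/* Sketch of the described "send_now" option: data is always queued with
 * tcp_write(), but tcp_output() is only called when send_now != 0.
 * With send_now == 0 the application never flushes; when the data goes
 * on the wire then depends entirely on the stack's own tcp_output()
 * calls (incoming ACKs, the TCP timers, a full send queue). */
static err_t queue_data(struct tcp_pcb *pcb, const void *data, u16_t len,
                        int send_now)
{
  err_t err = tcp_write(pcb, data, len, TCP_WRITE_FLAG_COPY);
  if (err == ERR_OK && send_now) {
    err = tcp_output(pcb);
  }
  return err;
}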
>Ok, so the application *never* calls tcp_output() but you leave this
>completely to the stack? That might work somehow, but will lead to totally
>unpredictable performance, as you have measured.
That's my point, I thought it would be totally unpredictable. But after
a certain amount of data is periodically queued, the RTT starts to go
down again and the throughput is achieved. That is what I would like to
understand.
vr roriz wrote:
>That's my point, I thought it would be totally unpredictable. But after
>a certain amount of data is periodically queued, the RTT starts to
>go down again and the throughput is achieved. That is what I would
>like to understand.
I think tcp_output() is called every time an rx segment (e.g. an ACK) is
processed, so once ACKs for the queued data start coming in, the stack
flushes its send queue by itself and the throughput recovers.
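If you want the timing to be deterministic rather than depending on those
implicit flushes, the usual raw-API pattern is to drive further sends from
the sent callback; a sketch (the tx_state structure and names are just
illustration):

#include "lwip/tcp.h"

/* Hypothetical application state describing the data still to be sent. */
struct tx_state {
  const u8_t *buf;
  u16_t remaining;
};

/* Called by lwIP when previously sent data has been ACKed, i.e. right
 * after the tcp_input() processing mentioned above. The freed send
 * buffer space is used to queue the next chunk and flush it explicitly. */
static err_t on_sent(void *arg, struct tcp_pcb *pcb, u16_t len)
{
  struct tx_state *st = (struct tx_state *)arg;
  u16_t space = tcp_sndbuf(pcb);
  u16_t chunk = (st->remaining < space) ? st->remaining : space;
  LWIP_UNUSED_ARG(len);

  if (chunk > 0 &&
      tcp_write(pcb, st->buf, chunk, TCP_WRITE_FLAG_COPY) == ERR_OK) {
    st->buf += chunk;
    st->remaining -= chunk;
    tcp_output(pcb); /* flush now instead of waiting for the stack */
  }
  return ERR_OK;
}

/* Registered once after connecting:
 *   tcp_arg(pcb, &state);
 *   tcp_sent(pcb, on_sent); */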
>No. You'll just get threading problems when starting something like that...
In my architecture, when I need an interrupt process (like for
handling Rx), the interrupt process just triggers a handler
process. The handler process and every other process of the driver
have the same priority and therefore cannot preempt each other, so
calls into lwIP are never made from two contexts at the same time.
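To make that concrete, the pattern is roughly this (all names are
placeholders from my driver, nothing lwIP-specific):

#include <stdbool.h>

/* Flag set from the interrupt context and consumed by the handler process. */
static volatile bool rx_pending = false;

/* Interrupt process: does no lwIP work at all, it only signals the
 * handler process and acknowledges the hardware. */
void eth_rx_isr(void)
{
  rx_pending = true;
  /* ack/clear the RX interrupt in the MAC here */
}

/* Handler process: runs at the same priority as every other process that
 * touches lwIP, so the raw-API calls are never entered concurrently. */
void eth_rx_handler(void)
{
  if (rx_pending) {
    rx_pending = false;
    /* hand the received frames to lwIP here (the port's input function),
     * which in turn runs tcp_input()/tcp_output() */
  }
}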