I have performed the testing. I sent segments of 32 bytes in a loop.
The NuttX source code is based on commit SHA-1
1e2560267898f413c93c5fb616ddc5b1d4d07184.
As the TCP host I used Linux kernel 4.19.
When forcing the TCP_QUICKACK option to 1 on the Linux side, I added the
following line after each recv():
re
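The usual way to force quick ACKs on Linux is to re-arm TCP_QUICKACK after every recv(), since the kernel clears the option again on its own; a sketch of that pattern (the helper name is mine, and this may differ from the exact line used in the test above):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Linux clears TCP_QUICKACK again after sending some ACKs, so it has
 * to be re-enabled after every recv() to keep delayed ACKs disabled
 * for the whole test.  Returns 0 on success, -1 on error.
 */
static int force_quickack(int sockfd)
{
  int one = 1;

  return setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK,
                    &one, sizeof(one));
}
```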
> More discussion here:
> https://groups.google.com/g/nuttx/c/bh01LHix7nM/m/bL8242BQCwAJ
> Johnny is a pretty knowledgeable guy and references a couple of other
> RFCs there.
Thank you for the link to the discussion. There is useful additional
info:
RFC 2581 (4.2) says:
> Therefore, while a spec[...]
I think we should be testing with smaller, more typical user buffer
sizes to verify the performance when the split is disabled or removed.
> What is a full size packet?
RFC 1122 (4.2.3.4) says:
> the TCP can send a full-sized segment (Eff.snd.MSS
> bytes; see Section 4.2.2.6).
Section 4.2.2.6 contains an algorithm for how Eff.snd.MSS is calculated.
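For reference, that algorithm boils down to one formula: Eff.snd.MSS = min(SendMSS + 20, MMS_S) - TCPhdrsize - IPoptionsize. A direct transcription (the example values in the note below are mine, not from the thread):

```c
/* RFC 1122, 4.2.2.6:
 *   Eff.snd.MSS = min(SendMSS + 20, MMS_S) - TCPhdrsize - IPoptionsize
 * SendMSS      - MSS value received from the peer (536 if none)
 * MMS_S        - maximum transport message size the IP layer can send
 * TCPhdrsize   - TCP header size, including any TCP options
 * IPoptionsize - size of any IP options in use
 */
static int eff_snd_mss(int send_mss, int mms_s,
                       int tcp_hdr_size, int ip_option_size)
{
  int limit = (send_mss + 20 < mms_s) ? send_mss + 20 : mms_s;

  return limit - tcp_hdr_size - ip_option_size;
}
```

For a typical Ethernet peer advertising MSS 1460 with no TCP or IP options, eff_snd_mss(1460, 1480, 20, 0) works out to 1460, so a "full-sized segment" in the RFC quotes above would carry 1460 bytes of payload.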
> That is another good reason to remove the unbuffered send.
So far I still think we [...]
> The algorithm came from uIP 1.0 from around 2001. Might be interesting
> to see what uIP in Contiki did with that in later years.
>
Here is some modified uIP source
https://github.com/adamdunkels/uip/blob/master/uip/uip-split.c
It says it is the original 1.0 source, but it is not. The original [...]
The language in the RFC is not clear. What is a full-size packet? Is it
the MSS, which could potentially vary from segment to segment depending on
the sizes of the headers? Is it the receive window size, which could also
vary? Not clear.
I don't think there is any general way to always send [...]
hi,
> I am not 100% sure of this, but I still think that it would be better to
> remove the unbuffered TCP send logic rather than remove the packet split
> logic.
personally i have no objection against the removal of unbuffered send.
but i can understand if someone objects to it.
for some situation [...]
Now I have read RFC 1122 and understood that the existing
CONFIG_NET_TCP_SPLIT algorithm cannot help.
RFC 1122 (4.2.3.2) says:
> ...
> A TCP SHOULD implement a delayed ACK, but an ACK should not
> be excessively delayed; in particular, the delay MUST be
> less than 0.5 seconds, and in a stream of full-sized segments
> there SHOULD be an ACK for at least every second segment.
> Concerning buffered send mode, yes; however, I asked about unbuffered
> send mode with a large user buffer.
>
>
> Sorry, I misread that.
Concerning buffered send mode, yes; however, I asked about unbuffered
send mode with a large user buffer.
On Thu, 2021-10-14 at 15:13 -0600, Gregory Nutt wrote:
> Do I understand correctly, that if I use unbuffered mode with a large
> user buffer (say 64 KB), then RFC 1122 still may pause NuttX TCP stack
> if an odd number of TCP segments are constructed based on the 64 KB
> buffer? Thus 0.5 second delay may occur at the end of 64 KB buffer
> during the l[...]
When I tested buffered send mode, as I remember, I tried to increase the
number of IOBs. It did not affect the performance. I also observed some
strange spurious changes of the receive window size that the NuttX TCP
side advertises. As I had better results with unbuffered mode, I started
to use it rather than the buffered one.
Yes, from a user’s point of view the unbuffered send operation is
blocking. I just meant that unbuffered send (the kernel side) does not
wait for each TCP packet to be acknowledged.
E.g. apps/netutils/iperf uses a user buffer with a size of 16384 bytes.
While these 16384 bytes are being sent, the per[...]
> Currently I'm using "unbuffered" send mode as in my case it
> surprisingly provides twice as high throughput as "buffered" one.
> Though, I initially expected that "buffered" send mode should have
> better performance compared to "unbuffered" one
It should not be faster. I suspect that is some [...]
> Why does the send operation block?
It has to, at least for TCP. The data resides in a user-provided buffer
... that is why it is unbuffered. In TCP, it may need to retransmit if the
data is not ACKed. Hence the user buffer must stay intact until the ACK is
received.
The fully buffered logic [...]
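The retransmission constraint can be shown with a toy model (purely illustrative, not NuttX source; the names are mine): the stack keeps only a pointer into the caller's buffer, so a retransmission re-reads whatever that memory holds at retransmit time.

```c
#include <string.h>

/* Toy model of an unbuffered send: no copy of the payload is made,
 * only a pointer into the caller's buffer is kept in case a
 * retransmission is needed.
 */
struct unacked_segment
{
  const char *data;   /* points INTO the user buffer */
  size_t      len;
};

/* (Re)transmit a segment: the payload is read from the user buffer
 * at transmission time.  If the caller modified the buffer before
 * the ACK arrived, the retransmitted bytes are corrupted.
 */
static void xmit(const struct unacked_segment *seg, char *wire)
{
  memcpy(wire, seg->data, seg->len);
}
```

If the application overwrites the buffer after the first transmission but before the ACK, a retransmission sends the overwritten bytes, which is exactly why the blocking behavior is unavoidable here.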
Hi Gregory,
> In the unbuffered send case, the send operation blocks until the
> packet is sent and ACKed. Per RFC 1122, the peer may delay sending the
> ACK for up to 500 msec. So the performance of the unbuffered send is
> abysmal when sending to an RFC 1122 client.
Why does the send operation block?
I think that we should not provide broken features unless they are
experimental.
-----Original Message-----
From: Xiang Xiao
Sent: den 14 oktober 2021 10:49
To: dev@nuttx.apache.org
Subject: Re: NET_TCP_SPLIT removal
I agreed with Greg that it's bad to give the user a broken feature,
especially for a complex feature like networking.
On Wed, Oct 13, 2021 at 10:51 AM Gregory Nutt wrote:
Similarly, I have also advocated the option to disable READ AHEAD
buffering. A stack that cannot buffer a packet is not reliable: If the
read operation is not in place and waiting for an incoming packet, then
the packet will be dropped in that case. Pretty hard to design a
reliable stack.
For people that need some background, this Wikipedia article may be
helpful: https://en.wikipedia.org/wiki/TCP_delayed_acknowledgment. The
SPLIT packet change is intended to work around issues when the unbuffered
send is used with a peer that supports RFC 1122.
In the unbuffered send case, the send operation blocks until the packet
is sent and ACKed. Per RFC 1122, the peer may delay sending the ACK for
up to 500 msec. So the performance of the unbuffered send is abysmal when
sending to an RFC 1122 client.
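The worst case is easy to put a number on: if each segment must be ACKed before the next one goes out, and the peer uses the full 500 ms allowed by RFC 1122, throughput is bounded by one segment per delay interval. A sketch of the arithmetic (the calculation is mine, using the 32-byte segments from the test reported earlier in the thread):

```c
/* Upper bound on unbuffered-send throughput (bits per second) when
 * the ACK of every segment is delayed by ack_delay_ms milliseconds.
 */
static unsigned long worst_case_bps(unsigned long seg_bytes,
                                    unsigned long ack_delay_ms)
{
  return seg_bytes * 8UL * 1000UL / ack_delay_ms;
}
```

With 32-byte segments and 500 ms delayed ACKs this gives 512 bits/s, and even full 1460-byte Ethernet segments are capped at about 23 kbit/s, which matches the "abysmal" characterization above.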