[email protected] wrote:
Lee wrote:
On 3/2/19, Dirk Munk <[email protected]> wrote:
Lee wrote:
On 3/1/19, Dirk Munk <[email protected]> wrote:
The next set of parameters I've adjusted are network buffers.

1. network.buffer.cache.size = 262144 (256 kB)
The default setting is 32 kB, which corresponds to the buffer size
of very old TCP/IP stacks.

2. network.buffer.cache.count = 128
The number of buffers is increased from 24 to 128.

The total network buffer space has increased from 24 x 32 kB = 768 kB,
to 128 x 256 kB = 32,768 kB. The result is that the CPU activity for
Seamonkey has dropped dramatically, by about half.

You might be right.

I tried setting network.buffer.cache.size to 262144, exit (I have
SM set to clear everything at exit), start SM again, start logging
(about:networking / logging), goto

https://upload.wikimedia.org/wikipedia/commons/3/3f/Fronalpstock_big.jpg

grep -i "Http2Stream::WriteSegments" log.txt-main.5488
     and get lots of
2019-03-02 00:11:26.676000 UTC - [Socket Thread]: I/nsHttp
Http2Stream::WriteSegments 29b8117cd10 count=262144 state=4
   ... snip ...
exit SM, edit prefs.js to remove the network.buffer.cache.size line,
start SM, start logging, goto the same url

grep -i "Http2Stream::WriteSegments" log.txt-main.5064

    and many lines of
2019-03-02 00:24:25.317000 UTC - [Socket Thread]: I/nsHttp
Http2Stream::WriteSegments 1db0255ca30 count=32768 state=4
    ... snip ...

Can somebody that understands the code verify that SeaMonkey uses the
network.buffer.cache.size setting for how much to read/write at a
time?

Thanks,
Lee

It's very simple. Seamonkey tells TCP the buffer size it has reserved
for receiving data, and TCP scales the maximum window size accordingly.
When the connection with the other site is set up, the window size is
negotiated. If both sides can handle this window size, data will be sent
in 256 kB packets.
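For what it's worth, the buffer-size/window relationship can be observed from any application through the standard socket API. A minimal Python sketch (the 262144 value mirrors the preference discussed above; the kernel may cap or round what it actually grants, and on Linux the reported value is typically doubled for bookkeeping):

```python
import socket

# Create a TCP socket and request a 256 kB receive buffer. The kernel
# uses this buffer to derive the receive window it advertises to the
# peer; it does not change the size of packets on the wire.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)

# Read back what the kernel actually granted.
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
s.close()
```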

That is clearly not how it works.

Indeed. The maximum size of packets is a property of the networks between the two ends of the connection. The MTU (maximum transmission unit) is typically no more than 1500 bytes.
I know, I wasn't referring to Ethernet TCP packets.

That's basically the maximum payload of a network-layer frame, i.e. the IP packet including its headers. The overall Path MTU is the smallest MTU of all networks between the two ends, and there's probably nothing you can do to increase that if sending packets across the Internet - data cannot be sent in 256 kB packets, regardless of what the endpoints might negotiate.

True, the WAN connections in the Internet have been set up in such a way that they can accommodate an MTU of 1500 bytes without the need for packet fragmentation.


The window size is the mechanism by which the receiver notifies the sender of how much space it has left in its receive buffer, so that the sender can back-off or stop sending if necessary. I don't think there is any negotiation involved, and it doesn't affect the size of packets - just potentially the rate at which they're sent.

There is a window size negotiation. Let's say that a window size of 256 kB has been negotiated; then the sender can send 256 kB of data without having to wait for an acknowledgement from the receiver. The receiver can send an acknowledgement for more than one packet: it can acknowledge, say, the first 64 kB, and then the sender knows it can send 64 kB more.

A big window size is important for fast connections with a relatively high latency.
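The point about fast, high-latency links can be made concrete with the bandwidth-delay product: the window must cover bandwidth times round-trip time to keep the pipe full. A quick sketch with illustrative numbers (the 100 Mbit/s and 50 ms figures are assumptions, not taken from this thread):

```python
# Bandwidth-delay product: how much unacknowledged data must be in
# flight to keep a link busy. The figures below are illustrative.
bandwidth_bps = 100_000_000      # 100 Mbit/s link
rtt = 0.050                      # 50 ms round-trip time

bdp_bytes = bandwidth_bps / 8 * rtt
print(bdp_bytes)                 # 625000.0 bytes, i.e. about 610 kB

# An unscaled 64 kB window would cap throughput far below the link rate:
capped_bps = 65535 / rtt * 8
print(capped_bps)                # about 10.5 Mbit/s
```

So without a window well beyond 64 kB (hence window scaling), that hypothetical link could never be filled, no matter how fast the endpoints are.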


Start wireshark, start seamonkey & download a file.  Stop the capture
& find the initial tcp syn to the download site.  What I get is
   Window size value: 64240
   Options: (12 bytes), Maximum segment size, No-Operation (NOP),
Window scale, No-Operation (NOP), No-Operation (NOP), SACK permitted
         TCP Option - Maximum segment size: 1460 bytes
         TCP Option - No-Operation (NOP)
         TCP Option - Window scale: 8 (multiply by 256)
         TCP Option - No-Operation (NOP)
         TCP Option - No-Operation (NOP)
         TCP Option - SACK permitted

with network.buffer.cache.size set to 32768 or 262144
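Decoding that handshake: a window scale of 8 means later window fields are multiplied by 2^8 = 256 (the scale does not apply to the SYN itself, per the window scaling option's rules). Checking the numbers from the capture above:

```python
# Values taken from the SYN shown in the capture.
raw_window = 64240        # unscaled window field
window_scale = 8          # "Window scale: 8 (multiply by 256)"

multiplier = 2 ** window_scale
print(multiplier)                 # 256

# Once scaling is in effect, a window field of 64240 would mean:
print(raw_window * multiplier)    # 16445440 bytes, roughly 15.7 MB
```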

The advantage of using big packets is that they incur far less overhead.

Except you're not using big packets.  Refer back to the packet
capture; the packet size is negotiated during the initial handshake
with the
   TCP Option - Maximum segment size: 1460 bytes

Yep. The individual packets will be no bigger. The TCP maximum segment size is the maximum size of the TCP payload in a single segment. It will be less than the MTU, since it is set to a value which attempts to keep the overall packet within the path MTU (so that one TCP segment fits entirely in one IP packet without fragmentation), and the IP and TCP headers take up some of the bytes allowed by the MTU.

When sending larger payloads via TCP, the payload is split into multiple segments, each of which can fit in a single packet. These segments are then reassembled at the other end.
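The arithmetic behind those two paragraphs, as a sketch (the 1500-byte MTU and header sizes assume plain IPv4/TCP with no options on the data segments):

```python
import math

# Why the MSS in the capture is 1460: the 1500-byte Ethernet MTU
# minus 20 bytes of IPv4 header and 20 bytes of TCP header.
mtu, ip_header, tcp_header = 1500, 20, 20
mss = mtu - ip_header - tcp_header
print(mss)                          # 1460

# A single 256 kB application write then becomes this many segments:
payload = 262144
print(math.ceil(payload / mss))     # 180 segments
```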

It's not the data in the packet that is the problem; handling the packet
itself is the problem. That's why the processor has far less to do when
you increase network.buffer.cache.size.

Right, the processor has less to do when you handle data in larger
chunks - which is why I was asking for someone who understands the
code to look & see if SeaMonkey uses the network.buffer.cache.size
setting for how much to read/write at a time.

I don't know about SeaMonkey's internals, but I'd interpret this as referring to the buffer for transferring data between the application (SeaMonkey) and the network driver. This is separate from the packet size or TCP maximum segment size, and typically much larger. There are overheads in transferring each block of data between application and driver. Using a larger buffer requires fewer transfers for a given payload (assuming the payload is larger than the buffer), so less overhead. This data still needs to be segmented/reassembled for sending over the network, but that can be done by the network driver. Many network interfaces can perform segmentation in the interface, in which case the driver doesn't need to use the main CPU to do that, further reducing CPU usage.
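The fewer-transfers effect is easy to quantify. For a hypothetical 10 MB download (an assumed figure, not from the thread), the number of application/driver transfers under the two buffer sizes discussed is just the ratio:

```python
import math

payload = 10 * 1024 * 1024       # a hypothetical 10 MB download
for buf in (32768, 262144):
    transfers = math.ceil(payload / buf)
    print(buf, transfers)
# 32768  -> 320 transfers
# 262144 -> 40 transfers
```

Whatever the fixed per-transfer cost is, the larger buffer pays it an eighth as often.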

The whole TCP handling is done by the NIC these days. My laptop has a Realtek NIC, which is a very standard cheap NIC. It has 128 receive buffers, but with Receive Side Scaling that can be changed to 128 buffers per CPU. Since I have a quad core processor, this results in 512 buffers in total. As far as I'm aware, these are 2 kB buffers, so that means 1 MB in total. I assume that the IP stack is responsible for transferring the data from the NIC receive buffers to the Seamonkey network cache buffers.
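Spelling out the arithmetic in that description (the per-buffer size is the 2 kB the author believes applies; treat it as an assumption):

```python
# NIC receive-buffer arithmetic from the Realtek/RSS description above.
buffers_per_queue = 128
cpu_cores = 4                      # Receive Side Scaling: one queue per core
buffer_size = 2 * 1024             # assumed 2 kB per buffer

total_buffers = buffers_per_queue * cpu_cores
print(total_buffers)                            # 512 buffers
print(total_buffers * buffer_size // 1024)      # 1024 kB, i.e. 1 MB
```

(Note the text says "512 buffers" for 128 per CPU across four cores, which matches 128 x 4.)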


In other words, is setting network.buffer.cache.size the best that can
be done or are there more memory-usage/speed trade-offs that can be
made?

Thanks,
Lee


_______________________________________________
support-seamonkey mailing list
[email protected]
https://lists.mozilla.org/listinfo/support-seamonkey
