On 3/2/19, Dirk Munk <[email protected]> wrote:
> Lee wrote:
>> On 3/1/19, Dirk Munk <[email protected]> wrote:
>>> The next set of parameters I've adjusted are network buffers.
>>>
>>> 1. network.buffer.cache.size = 262144 (256 kB)
>>> The default setting is 32 kB, which corresponds to the buffer size
>>> of very old TCP/IP stacks.
>>>
>>> 2. network.buffer.cache.count = 128
>>> The number of buffers is increased from 24 to 128.
>>>
>>> The total network buffer space has increased from 24 x 32 kB = 768 kB,
>>> to 128 x 256 kB = 32,768 kB. The result is that the CPU activity for
>>> Seamonkey has dropped dramatically, by about half.
>>
>> You might be right.
>>
>> I tried setting network.buffer.cache.size set to 262144, exit (i have
>> SM set to clear everything at exit), start SM again, start logging
>> (about:networking / logging), goto
>>
>> https://upload.wikimedia.org/wikipedia/commons/3/3f/Fronalpstock_big.jpg
>>
>> grep -i "Http2Stream::WriteSegments" log.txt-main.5488
>> and get lots of
>> 2019-03-02 00:11:26.676000 UTC - [Socket Thread]: I/nsHttp
>> Http2Stream::WriteSegments 29b8117cd10 count=262144 state=4
... snip ...
>> exit SM, edit prefs.js to remove the network.buffer.cache.size line,
>> start SM, start logging, goto the same url
>>
>> grep -i "Http2Stream::WriteSegments" log.txt-main.5064
>>
>> and many lines of
>> 2019-03-02 00:24:25.317000 UTC - [Socket Thread]: I/nsHttp
>> Http2Stream::WriteSegments 1db0255ca30 count=32768 state=4
... snip ...
>>
>> Can somebody that understands the code verify that SeaMonkey uses the
>> network.buffer.cache.size setting for how much to read/write at a
>> time?
>>
>> Thanks,
>> Lee
>
> It's very simple. Seamonkey tells TCP the buffer size it has reserved
> for receiving data, and TCP scales the maximum window size accordingly.
> When the connection with the other site is set up, the window size is
> negotiated. If both sides can handle this window size, data will be sent
> in 256 kB packets.
That is clearly not how it works.
Start Wireshark, start SeaMonkey & download a file. Stop the capture
& find the initial TCP SYN to the download site. What I get is
Window size value: 64240
Options: (12 bytes), Maximum segment size, No-Operation (NOP),
Window scale, No-Operation (NOP), No-Operation (NOP), SACK permitted
TCP Option - Maximum segment size: 1460 bytes
TCP Option - No-Operation (NOP)
TCP Option - Window scale: 8 (multiply by 256)
TCP Option - No-Operation (NOP)
TCP Option - No-Operation (NOP)
TCP Option - SACK permitted
with network.buffer.cache.size set to 32768 or 262144
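The window numbers in the capture above can be checked arithmetically. A
minimal sketch, using the 64240 window field and scale factor 8 from the SYN
shown above (note that per RFC 7323 the window field in the SYN itself is
never scaled; the scale factor only applies to segments after the handshake):

```python
# Effective TCP receive window after the handshake, per RFC 7323.
# Values taken from the capture above; the window field in the SYN
# itself is never scaled -- the scale factor applies only afterwards.
window_field = 64240      # "Window size value" from the SYN
window_scale = 8          # "Window scale: 8 (multiply by 256)"

effective_window = window_field * (1 << window_scale)
print(effective_window)   # 16445440 bytes -- far larger than 256 kB
```

So the receive window TCP can advertise here is already around 16 MB either
way, which is consistent with the capture looking the same whether
network.buffer.cache.size is 32768 or 262144.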
> The advantage of using big packets is that it takes far less overhead.
Except you're not using big packets. Refer back to the packet
capture; the packet size is negotiated during the initial handshake
with the
TCP Option - Maximum segment size: 1460 bytes
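To put numbers on that: with a 1460-byte MSS, even a single 256 kB
application buffer is split into many wire packets. A rough sketch
(ignoring options that can shrink the usable segment size):

```python
import math

mss = 1460                # negotiated maximum segment size, in bytes
buffer_size = 262144      # network.buffer.cache.size = 256 kB

segments = math.ceil(buffer_size / mss)
print(segments)           # 180 wire segments per buffer-sized read
```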
> It's not the data in the packet that's the problem; handling the packet
> itself is
> increase network.buffer.cache.size .
Right, the processor has less to do when you handle data in larger
chunks - which is why I was asking for someone who understands the
code to look & see if SeaMonkey uses the network.buffer.cache.size
setting for how much to read/write at a time.
In other words, is setting network.buffer.cache.size the best that can
be done or are there more memory-usage/speed trade-offs that can be
made?
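Whatever SeaMonkey does internally, the general trade-off being asked about
can be illustrated in the abstract: reading a fixed amount of data in larger
chunks means fewer read calls, and so fewer per-call overheads. A
hypothetical sketch, not SeaMonkey code:

```python
# Hypothetical illustration of the chunk-size trade-off; this is not
# SeaMonkey code, just a count of how many reads a transfer would take.
transfer = 32 * 1024 * 1024            # a 32 MB download

for chunk in (32 * 1024, 256 * 1024):  # old vs. new buffer.cache.size
    reads = -(-transfer // chunk)      # ceiling division
    print(f"{chunk // 1024:>4} kB chunks -> {reads} read calls")
```

With 32 kB chunks that transfer takes 1024 reads; with 256 kB chunks it
takes 128, an 8x reduction in per-call overhead, which would be consistent
with the CPU drop Dirk reported.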
Thanks,
Lee
_______________________________________________
support-seamonkey mailing list
[email protected]
https://lists.mozilla.org/listinfo/support-seamonkey