On Tue, 2015-08-11 at 11:03 -0400, Jason Baron wrote:

> 
> Yes, so the test case I'm using to test against is somewhat contrived,
> in that I simply allocate around 40,000 idle sockets to create
> 'permanent' memory pressure in the background. Then I have just one
> flow that sets SO_SNDBUF, which results in the poll()/write() loop.
> 
> That said, we initially encountered this issue with 10,000+ flows:
> whenever the system got into memory pressure, we would see all the
> CPUs spin at 100%.
> 
> So the test case I wrote was just a simplified reproducer. I am going
> to try to test against the more realistic workload where this issue
> was initially observed.
> 
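For reference, a minimal userspace sketch of that kind of reproducer
could look like the following (the sink address, port and counts are
assumptions for illustration, not Jason's actual test program):

/* build background tcp memory pressure with ~40,000 idle connections,
 * then drive one extra flow with a small SO_SNDBUF through a
 * poll()/write() loop.  Needs RLIMIT_NOFILE and the local port range
 * raised, and a sink server listening on the address below.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define IDLE_SOCKS	40000
#define SNDBUF		4096

static int tcp_connect(const struct sockaddr_in *addr)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0 || connect(fd, (const struct sockaddr *)addr,
			      sizeof(*addr)) < 0) {
		perror("connect");
		exit(1);
	}
	return fd;
}

int main(void)
{
	struct sockaddr_in addr = { .sin_family = AF_INET,
				    .sin_port = htons(5000) };
	struct pollfd pfd;
	char buf[SNDBUF];
	int sndbuf = SNDBUF;
	int fd, i;

	inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

	/* the idle sockets: connected once, then never touched again */
	for (i = 0; i < IDLE_SOCKS; i++)
		tcp_connect(&addr);

	/* the one active flow, with a small forced send buffer */
	fd = tcp_connect(&addr);
	setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

	memset(buf, 'x', sizeof(buf));
	pfd.fd = fd;
	pfd.events = POLLOUT;

	for (;;) {
		/* under tcp memory pressure this can spin: poll()
		 * keeps reporting POLLOUT while write() makes no
		 * progress */
		if (poll(&pfd, 1, -1) > 0)
			write(fd, buf, sizeof(buf));
	}
}

The point is only that once the idle sockets push global tcp memory
past the pressure thresholds, the single small-SO_SNDBUF flow can end
up spinning in that loop.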

Note that I am still trying to understand why we need to grow the socket
structure for something that is inherently a problem of sharing memory
among an unknown (potentially large) number of sockets.

I suggested using a flag (one bit).

If set, we should fall back to tcp_wmem[0] (each socket would keep 4096
bytes, so we avoid starvation).
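
A rough sketch of that idea (the field and helper names here are
hypothetical, this is not an actual kernel patch):

#include <stdbool.h>

/* one bit per socket: set while we are under global tcp memory
 * pressure, cleared when the pressure goes away (names hypothetical) */
struct sock_sketch {
	unsigned int	sk_sndbuf;		/* user/auto-tuned limit */
	unsigned int	sk_under_pressure : 1;	/* the suggested one bit */
};

#define TCP_WMEM_MIN	4096			/* tcp_wmem[0] default */

static inline unsigned int effective_sndbuf(const struct sock_sketch *sk)
{
	/* while the bit is set, fall back to tcp_wmem[0] so every
	 * socket still gets a small, guaranteed budget */
	if (sk->sk_under_pressure)
		return TCP_WMEM_MIN;
	return sk->sk_sndbuf;
}

static inline bool stream_memory_free(const struct sock_sketch *sk,
				      unsigned int wmem_queued)
{
	/* writers are throttled once the (possibly reduced) budget
	 * is consumed, instead of spinning in poll()/write() */
	return wmem_queued < effective_sndbuf(sk);
}

sk_sndbuf itself is left untouched, so the socket returns to its normal
budget as soon as the pressure bit is cleared.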


