On 24 August 2010 21:00, Andre Oppermann wrote:
>
> Try "netstat -n -p tcp -x" to see whether one socket is holding on to
> too much data.
ok.
> Testing with a different network card would help to narrow down the
> area to look for the bug as well.
I don't have this option, unfortunately. [...]
On Tue, Aug 24, 2010 at 08:37:52PM +0800, Adrian Chadd wrote:
> On 23 August 2010 18:18, Andre Oppermann wrote:
> > It seems the 4k clusters do not get freed back to the pool after they've
> > been sent by the NIC and dropped from the socket buffer after the ACK has
> > arrived. The leak must occur in one of these two places. [...]
On 24.08.2010 14:37, Adrian Chadd wrote:
On 23 August 2010 18:18, Andre Oppermann wrote:
It seems the 4k clusters do not get freed back to the pool after they've
been sent by the NIC and dropped from the socket buffer after the ACK has
arrived. The leak must occur in one of these two places.
On 23 August 2010 18:18, Andre Oppermann wrote:
> It seems the 4k clusters do not get freed back to the pool after they've
> been sent by the NIC and dropped from the socket buffer after the ACK has
> arrived. The leak must occur in one of these two places. The socket
> buffer is unlikely as it [...]
On 23.08.2010 21:16, Pyun YongHyeon wrote:
On Mon, Aug 23, 2010 at 09:04:02PM +0200, Andre Oppermann wrote:
On 23.08.2010 19:52, Pyun YongHyeon wrote:
On Mon, Aug 23, 2010 at 12:18:01PM +0200, Andre Oppermann wrote:
The function that is called on a socket write is sosend_generic() which
makes [...]
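For context, here is a rough, userland-only sketch of the size-class choice I believe the socket write path makes when it copies user data into mbufs. The cutoffs and constants below are hand-copied from memory of m_getm2() on 8.x and should be treated as assumptions rather than the actual kernel code: anything larger than a standard 2k cluster gets carved into 4k page-size jumbo clusters, which is why a proxy doing large socket writes lives almost entirely in that zone.

/*
 * Sketch only: mimics (from memory, possibly off in detail) the size-class
 * selection the kernel write path uses when turning a large socket write
 * into an mbuf chain.  Constants are hand-copied assumptions, not <sys/mbuf.h>.
 */
#include <stdio.h>

#define MCLBYTES        2048    /* standard mbuf cluster */
#define MJUMPAGESIZE    4096    /* page-size "jumbo" cluster (the 4k zone) */
#define MINCLSIZE        169    /* approx. smallest payload worth a cluster */

int
main(void)
{
        int len = 64 * 1024;    /* e.g. one 64k write from the proxy */
        int pages = 0, clus = 0, plain = 0;

        while (len > 0) {
                if (len > MCLBYTES) {           /* would be m_getjcl(..., MJUMPAGESIZE) */
                        pages++;
                        len -= MJUMPAGESIZE;
                } else if (len >= MINCLSIZE) {  /* would be m_getcl() */
                        clus++;
                        len -= MCLBYTES;
                } else {                        /* would be m_get()/m_gethdr() */
                        plain++;
                        len = 0;
                }
        }
        printf("64k write -> %d page clusters, %d 2k clusters, %d plain mbufs\n",
            pages, clus, plain);
        return (0);
}

Run as-is, this predicts a single 64k write turning into sixteen 4k page clusters, i.e. exactly the zone that is filling up here.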
On Mon, Aug 23, 2010 at 09:04:02PM +0200, Andre Oppermann wrote:
> On 23.08.2010 19:52, Pyun YongHyeon wrote:
> >On Mon, Aug 23, 2010 at 12:18:01PM +0200, Andre Oppermann wrote:
> >>On 23.08.2010 11:26, Adrian Chadd wrote:
> >>>On 23 August 2010 06:27, Pyun YongHyeon wrote:
> >>>
> I recall t[...]
On 23.08.2010 19:52, Pyun YongHyeon wrote:
On Mon, Aug 23, 2010 at 12:18:01PM +0200, Andre Oppermann wrote:
On 23.08.2010 11:26, Adrian Chadd wrote:
On 23 August 2010 06:27, Pyun YongHyeon wrote:
I recall there was a SIOCSIFCAP ioctl handling bug in bce(4) on 8.0, so
it might also disable IFCAP_TSO4/IFCAP_TXCSUM/IFCAP_RXCSUM [...]
On Mon, Aug 23, 2010 at 12:18:01PM +0200, Andre Oppermann wrote:
> On 23.08.2010 11:26, Adrian Chadd wrote:
> >On 23 August 2010 06:27, Pyun YongHyeon wrote:
> >
> >>I recall there was a SIOCSIFCAP ioctl handling bug in bce(4) on 8.0, so
> >>it might also disable IFCAP_TSO4/IFCAP_TXCSUM/IFCAP_RXCSUM [...]
On 23.08.2010 11:26, Adrian Chadd wrote:
On 23 August 2010 06:27, Pyun YongHyeon wrote:
I recall there was a SIOCSIFCAP ioctl handling bug in bce(4) on 8.0, so
it might also disable IFCAP_TSO4/IFCAP_TXCSUM/IFCAP_RXCSUM when you
disabled RX checksum offloading. But I can't explain how checksum
offloading [...]
On 23 August 2010 06:27, Pyun YongHyeon wrote:
> I recall there was a SIOCSIFCAP ioctl handling bug in bce(4) on 8.0, so
> it might also disable IFCAP_TSO4/IFCAP_TXCSUM/IFCAP_RXCSUM when you
> disabled RX checksum offloading. But I can't explain how checksum
> offloading could be related to the growth [...]
On Sun, Aug 22, 2010 at 05:40:30PM +0800, Adrian Chadd wrote:
> I disabled tso, tx chksum and rx chksum. This fixed the 4k jumbo
> allocation growth.
>
I recall there was a SIOCSIFCAP ioctl handling bug in bce(4) on 8.0, so
it might also disable IFCAP_TSO4/IFCAP_TXCSUM/IFCAP_RXCSUM when you
disabled RX checksum offloading. [...]
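To check whether the driver really loses the other capability bits when only rxcsum is toggled, something like the following userland sketch can dump the capability mask before and after clearing IFCAP_RXCSUM via SIOCGIFCAP/SIOCSIFCAP. It is FreeBSD-specific, needs root to set, actually changes the interface, and is written from memory of how ifconfig does it, so treat it as a sketch rather than a reference.

/*
 * Sketch: clear only IFCAP_RXCSUM on an interface and show whether other
 * capability bits (txcsum, tso4) survive.  If they disappear too, that
 * would match the bce(4) SIOCSIFCAP handling bug described above.
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <net/if.h>

#include <err.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        struct ifreq ifr;
        int s;

        if (argc != 2)
                errx(1, "usage: %s ifname", argv[0]);

        s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0)
                err(1, "socket");

        memset(&ifr, 0, sizeof(ifr));
        strlcpy(ifr.ifr_name, argv[1], sizeof(ifr.ifr_name));

        if (ioctl(s, SIOCGIFCAP, &ifr) < 0)
                err(1, "SIOCGIFCAP");
        printf("enabled caps: 0x%x (supported: 0x%x)\n",
            ifr.ifr_curcap, ifr.ifr_reqcap);

        /* Request the currently enabled caps minus RXCSUM, nothing else. */
        ifr.ifr_reqcap = ifr.ifr_curcap & ~IFCAP_RXCSUM;
        if (ioctl(s, SIOCSIFCAP, &ifr) < 0)
                err(1, "SIOCSIFCAP");

        if (ioctl(s, SIOCGIFCAP, &ifr) < 0)
                err(1, "SIOCGIFCAP");
        printf("enabled caps after clearing RXCSUM: 0x%x\n", ifr.ifr_curcap);

        close(s);
        return (0);
}

If the second read shows IFCAP_TXCSUM/IFCAP_TSO4 gone even though only IFCAP_RXCSUM was removed from the request mask, the driver is mishandling the capability update.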
I disabled tso, tx chksum and rx chksum. This fixed the 4k jumbo
allocation growth.
Turning on tso on a live proxy didn't affect jumbo allocations.
Turning on txcsum caused jumbo allocations to begin growing again.
Disabling txcsum again caused jumbo allocations to stop increasing,
but it doesn't [...]
Hi,
I've got a Squid/Lusca server on 8.0-RELEASE-p3 which is exhibiting
some very strange behaviour.
After a few minutes of uptime, the 4k mbuf cluster zone fills up and
Squid/Lusca spends almost all of its time sleeping in "keglimit".
I've bumped kern.ipc.nmbclusters to 262144 and kern.ipc.jumbop[...]
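For reference, the limits involved can also be read programmatically rather than with sysctl(8). A minimal FreeBSD sketch follows; the kern.ipc.nmbjumbop name is my assumption for the knob that caps the 4k page-cluster zone, so check `sysctl kern.ipc | grep jumbo` on the running system before relying on it.

/*
 * Sketch: print the mbuf/cluster limits mentioned above via sysctlbyname().
 * "kern.ipc.nmbjumbop" is assumed (not verified on 8.0) to be the limit
 * for the 4k page-size jumbo zone.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdio.h>

static int
read_int(const char *name)
{
        int val;
        size_t len = sizeof(val);

        if (sysctlbyname(name, &val, &len, NULL, 0) != 0)
                err(1, "sysctlbyname(%s)", name);
        return (val);
}

int
main(void)
{
        printf("kern.ipc.nmbclusters = %d\n", read_int("kern.ipc.nmbclusters"));
        printf("kern.ipc.nmbjumbop   = %d\n", read_int("kern.ipc.nmbjumbop"));
        return (0);
}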