On 2/06/2016 6:13 p.m., Dan Charlesworth wrote:
> No worries, thanks for following up on it!
>
> That’s very interesting about the concurrent requests, because the “normal”
> report does around 80% more requests per day than the “leaky” one: a few
> hundred thousand vs. a couple of million.
>
> Does this CLOSE_WAIT sockets issue have a bug being tracked o
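For reference, a quick way to see whether CLOSE_WAIT sockets really are piling up against the squid process is something along these lines (a rough sketch; the exact ss options vary with the iproute2 version installed):

    # count TCP sockets currently stuck in CLOSE_WAIT (skip the header line)
    ss -tn state close-wait | tail -n +2 | wc -l

    # the same, attributed to the owning process (-p usually needs root)
    ss -tnp state close-wait | grep -c squid

Watching that count over a day next to the memory figure should show whether the two grow together.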
On 24/05/2016 5:44 p.m., Dan Charlesworth wrote:
> Gentle bump 😁
>
>
Hi Dan,
sorry, RL has been getting in the way these weeks.
Two things stand out for me.
It's a bit odd that external ACL entries should be so high. But your
"normal" report has more allocated than the "leaky" report. So that's
just a sig
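For what it's worth, pulling just the external ACL pool lines out of the two reports makes that comparison easier; assuming each report was saved to a file (normal.mem and leaky.mem are placeholder names here):

    # case-insensitive match avoids depending on the exact pool name
    grep -i external normal.mem
    grep -i external leaky.mem

    # or straight from a running box
    squidclient mgr:mem | grep -i -e '^Pool' -e external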
Gentle bump 😁
[Excerpt from the attached mgr:mem pool report: column groups Pool, Obj Size (bytes), Chunks, Allocated, In Use, Idle, Allocations Saved and Rate, showing the mem_node row; the figures are garbled in this copy.]
I’ve now got mgr:mem output from a leaky box for comparison, but I’m having a
hard time spotting where the problem might be.
Would anyone more experienced mind taking a look at these and seeing if
anything jumps out as a source of the high memory usage?
- The leaky example has 8GB of server memory an
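One rough way to line the two reports up, again assuming they were saved as normal.mem and leaky.mem, is a plain diff or a numeric sort on one of the KB columns; the field number below is a guess and may need adjusting for the exact report layout:

    # unified diff of the two pool tables
    diff -u normal.mem leaky.mem | less

    # largest pools in the leaky report by (roughly) the Allocated KB column
    sort -k10 -n -r leaky.mem | head -n 20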
On 11/05/2016 4:37 p.m., Dan Charlesworth wrote:
> Thanks Amos -
>
> Not sure how self-explanatory the output is, though.
>
> I’ve attached the output from a site with a 12GB server where top was showing
> 2.9GB allocated to squid (this is normal, i.e. “the control”). But the mem
> output shows the allocated total as ~1GB, apparently?
>
> Maybe things will bec
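To put the two figures side by side it helps to capture the process RSS and the pooled totals at the same moment; a minimal sketch (the tail length is arbitrary, and squidclient may need -h/-p options for your setup):

    # resident and virtual size of the running squid processes, per the kernel
    ps -o pid,rss,vsz,comm -C squid

    # pooled-memory totals as squid itself accounts for them
    squidclient mgr:mem | tail -n 20

mgr:mem only covers squid's memory pools, so some gap between it and the RSS that top reports is expected; the interesting signal is a gap that keeps growing without bound.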
On 10/05/2016 2:35 p.m., Dan Charlesworth wrote:
> A small percentage of deployments of our squid-based product are using oodles
> of memory; there doesn’t seem to be a limit to it.
>
> I’m wondering what the best way might be to analyse what squid is reserving
> it all for in the latest 3.5 release?
>
> The output of squidclient mgr:cache_mem is completely
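The cache manager reports that usually matter for this are mgr:mem (per-pool accounting) and mgr:info (overall totals), so a reasonable starting point, captured while the process is large, might be something like this (a sketch; add squidclient's -h/-p options and the manager password if your setup requires them):

    # overall memory figures as squid reports them
    squidclient mgr:info | grep -i memory

    # full per-pool breakdown, saved for later comparison
    squidclient mgr:mem > /tmp/squid-mem.$(date +%F).txt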