More testing. This time with 4.0.21. Disabled all caching, only
enabled ssl bumping. Same config as last time. Still leaking memory.
I took two snapshots of info & mem usage and honestly I don't see a
smoking gun pointing to why my squid processes were getting as large
as 1.4GB.
I've attached ...
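(To spell out what "disabled all caching" means here: it boils down to roughly the following in squid.conf. This is just a minimal sketch; the real config may spell things slightly differently.)

# no in-memory object cache
cache_mem 0 MB
# never cache any response
cache deny all
# no cache_dir lines at all, so no disk stores are started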
So, more testing. I still haven't found the red-flag line in the info & mem
reports, but additional testing shows that the memory leak has something to do
with SSL bumping: once I turn that off, the memory leaks stop.
These were the SSL-related config options:
http_port 10.0.0.1:3128 ssl-bump gener...
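A fuller picture of what that ssl-bump setup typically looks like in squid.conf (a generic sketch; the CA path, helper cache sizes, and peek/bump rules below are placeholders, not the exact values from my config):

# listening port that bumps TLS; dynamically generated certs are
# signed by the CA given in cert= (placeholder path)
http_port 10.0.0.1:3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl/bump_ca.pem

# helper that mints the per-host certificates
# (called ssl_crtd in squid 3.5, security_file_certgen in squid 4)
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
sslcrtd_children 8

# peek at the TLS client hello, then bump everything
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all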
On 10/02/2017 09:37 PM, Aaron Turner wrote:
> So it's leaking memory and not tracking it?
That combination (or, to be more precise, its implication) is possible
but relatively unlikely in your specific case -- when GBs are leaked,
there is usually something tracked related to those GBs. Please no
So it's leaking memory and not tracking it? Clearly 'top' is showing that it
is using a lot of memory and growing over time. I'm happy to do more tests,
etc., but right now I can't go into production with this memory leak. Should
I try Squid 4?
--
Aaron Turner
https://synfin.net/ Twitter: @synfinatic
On 03/10/17 04:39, Aaron Turner wrote:
> Anyone see anything useful?
The numbers in those reports all seem reasonable to me. Nothing is
showing up with GB of RAM used.
Amos
One more update before I restart squid:
  PID USER   PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 3188 squid  20   0 4821844 4.337g 1.008g R  53.0 30.4 349:31.24 squid
 3187 squid  20   0 3539696 3.153g 1.008g R  31.9 22.1 259:15.31 squid
 3190 squid  20   0 3198228 2.8 ...
So this is smelling like a mem leak to me. First, after running for a few hours:
  PID USER   PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 3188 squid  20   0 3586264 3.175g 1.007g R  63.4 22.2 162:59.38 squid
 3187 squid  20   0 2941332 2.585g 1.005g S  45.5 18.1 129:36.40 ...
OK, I'll work on that. One other thing is that if I let it run long enough,
squid will crash with errors like the following:
FATAL: Received Bus Error...dying.
2017/09/28 23:28:09 kid4| Closing HTTP port 10.93.3.4:3128
2017/09/28 23:28:09 kid4| Closing HTTP port 127.0.0.1:3128
2017/09/28 23:28:0...
OK, so I did some research, and what I'm finding is this:
If I set sslflags=NO_DEFAULT_CA for http_port and disable both the mem and
disk caches, then memory is very stable. It goes up for a little bit and then
pretty much stabilizes (it actually goes up and down a little, but doesn't
seem to be growing o...
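Concretely, the stable configuration is along these lines (a sketch only; the listening address is taken from earlier in the thread and the CA path is a placeholder):

# NO_DEFAULT_CA: don't load OpenSSL's default CA list for this port
http_port 10.0.0.1:3128 ssl-bump sslflags=NO_DEFAULT_CA cert=/etc/squid/ssl/bump_ca.pem

# both caches off
cache_mem 0 MB
cache deny all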
On 09/25/2017 05:23 PM, Aaron Turner wrote:
> So I'm testing squid 3.5.26 on an m3.xlarge w/ 14GB of RAM. Squid is
> the only "real" service running (sshd and the like). I'm running 4
> workers, and 2 rock cache. The workers seem to be growing unbounded
> and given ~30min or so will cause the ke
So I'm testing squid 3.5.26 on an m3.xlarge w/ 14GB of RAM. Squid is the only
"real" service running (sshd and the like). I'm running 4 workers and 2 rock
caches. The workers seem to be growing unbounded, and after ~30min or so the
kernel will start killing off processes until memory is ...
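For reference, that worker/rock layout corresponds to roughly the following in squid.conf (a sketch; the cache paths and sizes are placeholders, not the real values):

# four SMP worker processes
workers 4

# two rock disk caches; each one gets its own disker process
cache_dir rock /var/cache/squid/rock1 8000
cache_dir rock /var/cache/squid/rock2 8000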