If you trust the software which creates these requests, you can bypass the
proxy for the IP addresses of this system.
If you do not trust this software, it is better to keep passing it through the
proxy.
This URL should have a Host header, and if it does not, then it's probably
something that should be blocked…
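A minimal sketch of that check in squid.conf, assuming the standard req_header
ACL type and a hypothetical ACL name:

    # "has_host" is a hypothetical name; matches any non-empty Host value
    acl has_host req_header Host .
    # refuse requests that carry no Host header at all
    http_access deny !has_host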
Hey,
How about using a local BIND/Unbound DNS server that has a forwarding zone
defined only for the local domains?
For me it's a bit hard to understand the root cause of the issue, but this is
the best solution I can think of.
If you need some help with BIND/Unbound DNS configuration…
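For example, a minimal Unbound sketch (the zone name and server addresses are
placeholders, not from the thread) that forwards only the local domain to the
internal DNS server and everything else upstream:

    # unbound.conf sketch -- hypothetical zone and addresses
    server:
        interface: 127.0.0.1

    forward-zone:
        name: "corp.example"        # internal-only zone (placeholder)
        forward-addr: 10.0.0.53     # internal DNS server (placeholder)

    forward-zone:
        name: "."                   # everything else goes upstream
        forward-addr: 8.8.8.8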
Yeah, sounds like I need to prove that ssl-bump is not eating memory
before I start worrying about caching. Then slowly add features
until I find the smoking gun and focus on that.
I'm curious, does anyone have a suggestion of what modern high-traffic-volume
squid deployments look like? Seems like…
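One way to watch where the memory goes while features are added back is
Squid's cache manager; a sketch, assuming squidclient is installed and the
proxy listens on the default port:

    # poll overall and per-pool memory use via the cache manager
    squidclient -h 127.0.0.1 -p 3128 mgr:info | grep -i memory
    squidclient -h 127.0.0.1 -p 3128 mgr:mem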
Hey Aaron,
Consider the comments from Amos and Alex first before moving forward.
And again, we need to clear up the current doubts, for both you and us.
We don't know if the issue is related to rock cache_dir or to squid-cache in
general.
Currently, for SMP-aware caches, the best disk cache is rock.
On 09/25/2017 08:19 PM, Amos Jeffries wrote:
> On 26/09/17 08:56, Aaron Turner wrote:
>> I can disable rock cache, but I need some disk cache- is there a
>> better option?
> Possibly a smaller rock cache, and a UFS/AUFS/diskd cache - rock can
> share disk with another cache, it's just the UFS/* cache…
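A sketch of that split (paths and sizes are made up, not from the thread);
note that UFS/AUFS dirs are not SMP-shared, so with multiple workers they are
usually wrapped in a ${process_number} conditional:

    # squid.conf sketch -- hypothetical paths and sizes
    cache_dir rock /cache/rock 8192 max-size=32768
    if ${process_number} = 1
    cache_dir aufs /cache/aufs 65536 16 256 min-size=32769
    endif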
On 09/25/2017 05:23 PM, Aaron Turner wrote:
> So I'm testing squid 3.5.26 on an m3.xlarge w/ 14GB of RAM. Squid is
> the only "real" service running (sshd and the like). I'm running 4
> workers, and 2 rock caches. The workers seem to be growing unbounded
> and given ~30min or so will cause the kernel to start killing off
> processes…
On 26/09/17 08:56, Aaron Turner wrote:
So is v4 stable? I was under the impression it was beta? That said, if v4
has better memory tuning options then I'm all ears.
Yes it is beta. Some bugs still to work out in the ssl-bump code, but
that is all.
Overall the v4 ssl-bump code is far better behaved…
So I'm testing squid 3.5.26 on an m3.xlarge w/ 14GB of RAM. Squid is
the only "real" service running (sshd and the like). I'm running 4
workers, and 2 rock caches. The workers seem to be growing unbounded
and given ~30min or so will cause the kernel to start killing off
processes until memory is…
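For SMP setups, note that each kid keeps its own cache_mem unless the memory
cache is shared, so the budget multiplies with the worker count; an
illustrative sketch (the numbers are not from the thread):

    # squid.conf sketch -- illustrative numbers for a 14GB box
    workers 4
    cache_mem 1024 MB            # per memory cache; per worker if not shared
    memory_cache_shared on       # share one memory cache across the kids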
On 09/25/2017 12:42 PM, ppmart...@unah.edu.cu wrote:
> The designed regex:
> /((?=.*\biphone\b)|(?=.*\bprod\b)).*\.facebook\.com(\:|\d|)/
AFAICT, for the basic purpose of matching strings, the above mind-boggling
regular expression can be simplified to:
/\b(iphone|prod)\b.*\.facebook\.com/
Please…
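If the goal is a Squid ACL (as in the original question), a sketch using a
hypothetical ACL name and the simplified pattern above:

    # squid.conf sketch -- hypothetical ACL name
    acl fb_alt url_regex -i \b(iphone|prod)\b.*\.facebook\.com
    http_access deny fb_alt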
So is v4 stable? I was under the impression it was beta? That said, if v4
has better memory tuning options then I'm all ears. Right now I'm
fighting OOM errors (and the kernel OOM reaper) under sustained load.
I've come to realize 6GB is way, way too much for my 14GB RAM systems,
but finding even 1GB…
Hey Aaron,
Just to clear up the doubts: what happens when you use Squid without a
rock cache_dir? Does the problem appear again?
Also, there is a possibility of a bug related to the ssl-bump termination
code in 3.5.x.
Testing 4.0.21 would be the best way to understand whether the issue is…
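A sketch of that test configuration (sizes are placeholders): comment out the
rock dirs, rely on the memory cache alone, and see whether the workers still
grow:

    # squid.conf sketch -- disable disk caching for the test
    # cache_dir rock /cache/rock 16384
    cache_mem 2048 MB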
I was asked to block Facebook access from 8:00am to 3:00pm for almost all users,
but they are using **alternative Facebook URLs** to access the social network
anyway. This is consuming a lot of our low bandwidth and we can't even work. I
decided to design a **regular expression (regex)** to parse the…
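A time-window block of this shape is the usual starting point in squid.conf
(the ACL names and the extra CDN domain are assumptions, not from the thread):

    # squid.conf sketch -- hypothetical ACL names
    acl facebook dstdomain .facebook.com .fbcdn.net
    acl workhours time MTWHF 08:00-15:00
    http_access deny facebook workhours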
I have had this since forever… 3.5.27 with one cache_peer and 4 rock stores:
2017/09/21 11:19:45 kid1| Bug: Missing MemObject::storeId value
2017/09/21 11:19:45 kid1| mem_hdr: 0x1902d240 nodes.start() 0x552baa0
2017/09/21 11:19:45 kid1| mem_hdr: 0x1902d240 nodes.finish() 0x552baa0
2017/09/21 11:19:45 …
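To gather more detail for a report on messages like these, the store-related
debug sections can be raised; a sketch (the section numbers are a guess at the
relevant areas, not confirmed in the thread):

    # squid.conf sketch -- verbose logging for the storage manager
    debug_options ALL,1 20,3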
On 09/22/2017 09:27 AM, Eric Lackey wrote:
> This is all working well except for the fact that we don’t have a
> good way to determine what is being blocked.
All transactions, including blocked ones, must be logged to access.log.
Squid had several bugs in this area. All known bugs (within this…
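In practice, denied transactions show up in access.log with a TCP_DENIED
status, so a quick check is simply (the path assumes the default log
location):

    grep TCP_DENIED /var/log/squid/access.log

An ACL-filtered access_log line can also split such entries into a file of
their own.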