Hello,
I have this error on squid 3.5.5:
(squid-1): tcp logger buffer overflowed; then the process exits with status 1
and (squid-1) restarts.
A few minutes later (squid-1) crashes again.
What can I do to solve this problem?
Thanks,
Paul
On 3/09/2015 1:44 p.m., PSA wrote:
> Hi Amos, thanks for the prompt reply.
>
> So I could follow that example, but use this ACL instead:
>
> acl aclname req_header header-name [-i] any\.regex\.here
> # regex match against any of the known request headers. May be
> # thought o
Hi Amos, thanks for the prompt reply.
So I could follow that example, but use this ACL instead:
acl aclname req_header header-name [-i] any\.regex\.here
# regex match against any of the known request headers. May be
# thought of as a superset of "browser", "referer" and "mime
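As a hedged sketch of that ACL type in use (the header name and value here are invented for illustration, not taken from the thread):

```
# Match requests carrying a hypothetical X-Backend-Hint header whose
# value contains "legacy" (case-insensitive).
acl legacy_hint req_header X-Backend-Hint -i legacy
http_access allow legacy_hint
```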
On 3/09/2015 11:45 a.m., Jason Enzer wrote:
> is this possible?
>
> i have src acl working fine. i can control the outgoing address/port
> and incoming address with no issues.
>
> when i introduce ncsa auth it breaks everything.
>
Order is important. Read the http_access rules carefully top-to-bottom:
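A minimal sketch of an ordering that usually works for this mix (the ACL names and subnet are assumptions, not the poster's real config): put the fast IP-based allow first so those clients are never challenged, then the auth rule, then a final deny.

```
acl lan src 192.168.0.0/16          # assumed local subnet
acl ncsa_users proxy_auth REQUIRED  # NCSA basic auth helper configured elsewhere

http_access allow lan               # matched clients skip authentication
http_access allow ncsa_users        # everyone else must authenticate
http_access deny all                # explicit final deny
```

http_access is evaluated top-to-bottom and stops at the first matching rule, so swapping the two allow lines changes which clients get an auth challenge.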
On 3/09/2015 11:53 a.m., Sima Yi wrote:
> We run several web servers behind a squid reverse proxy. Requests are
> directed to a different web server depending on the domain name.
> A new requirement has come up to temporarily redirect traffic with a
> specific http header to a specific web server.
On 3/09/2015 7:48 a.m., jake driscoll wrote:
> Thanks a lot for the reply Amos.
> I tried the following:
>
> acl station-ip src 192.168.1.0/24
> acl station-domain dstdomain /usr/local/squid/station-domain.acl
> http_access allow station-ip station-domain
> http_access deny kiosk-ip
>
> This ord
On 3/09/2015 7:47 a.m., Oliver Webb wrote:
> Currently the rewriter is only being sent ":443" and at no
> point gets sent the URL starting https.
> Any ideas why this might be happening?
The "bump" part is not happening. You will have to look into why not.
Though be aware that ssl-bump is an MIT
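For reference, a minimal ssl_bump setup for Squid 3.5 might look like the sketch below; the certificate paths are placeholders and this is not the poster's actual config. Until a bump actually happens, helpers only see the CONNECT-style host:port rather than a full https:// URL, which matches the symptom described.

```
# Explicit proxy port with SSL bumping enabled (cert paths are placeholders)
http_port 3128 ssl-bump \
    cert=/etc/squid/bump.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

acl step1 at_step SslBump1
ssl_bump peek step1   # read the TLS ClientHello / SNI first
ssl_bump bump all     # then actively decrypt
```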
We run several web servers behind a squid reverse proxy. Requests are
directed to a different web server depending on the domain name.
A new requirement has come up to temporarily redirect traffic with a
specific http header to a specific web server.
Is squid capable of doing this? Could I have
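This is the kind of thing a req_header ACL plus cache_peer_access can express. A hedged sketch, in which the header name, addresses, and peer names are all invented:

```
# Hypothetical header-based routing between two origin servers.
acl canary req_header X-Canary -i ^yes$

cache_peer 10.0.0.20 parent 80 0 no-query originserver name=canary_web
cache_peer 10.0.0.10 parent 80 0 no-query originserver name=main_web

# Requests with the header go to canary_web; all others to main_web.
cache_peer_access canary_web allow canary
cache_peer_access canary_web deny all
cache_peer_access main_web deny canary
cache_peer_access main_web allow all
```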
is this possible?
i have src acl working fine. i can control the outgoing address/port
and incoming address with no issues.
when i introduce ncsa auth it breaks everything.
acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users
acl src3171 src 23.240
acl port3171 myportname 3171
tcp_o
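The truncated line is presumably a tcp_outgoing_address rule. One caveat worth illustrating, offered as an assumption about what went wrong: tcp_outgoing_address is a "fast" check, while proxy_auth is a "slow" lookup, so the outgoing address may be chosen before authentication completes. Relying only on fast ACLs there avoids that interaction (the address below is a documentation-range placeholder):

```
# Choose the outgoing address by listening port, not by login:
acl port3171 myportname 3171
tcp_outgoing_address 203.0.113.71 port3171
```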
Thanks a lot for the reply Amos.
I tried the following:
acl station-ip src 192.168.1.0/24
acl station-domain dstdomain /usr/local/squid/station-domain.acl
http_access allow station-ip station-domain
http_access deny kiosk-ip
This order of rules only denies everything instead of allowing at least
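For comparison, one hedged way such a rule set is usually ordered, remembering that each request walks http_access top-to-bottom and stops at the first match, and that an explicit final rule makes the default behaviour predictable:

```
acl station-ip src 192.168.1.0/24
acl station-domain dstdomain "/usr/local/squid/station-domain.acl"

http_access allow station-ip station-domain  # stations: listed domains only
http_access deny kiosk-ip                    # kiosks: blocked
http_access deny all                         # everything else: blocked
```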
This is only an example. Obviously we need to investigate every case
separately and write or correct rules as needed.
It is a big mistake to assume there is some magic set of rules that is
suitable for all occasions and allows you to achieve a high hit
On 3/09/2015 5:22 a.m., Juan Porter wrote:
>
> Hello there! :)
>
> Can you tell me what it means? The following line in my cache.log file:
>
> NF getsockopt(SO_ORIGINAL_DST) failed on local=192.168.1.1:3128
> remote=192.168.1.120 FD 518 flags=33: (2) No such file or directory
>
> When this kin
On 3/09/2015 3:04 a.m., Yuri Voinov wrote:
>
> Here is another case with the same image:
>
> http://i.imgur.com/qM52aPQ.png
>
> The same, right?
>
> So, I am supposed to leave thousands of copies of the same image, even
> within a single user session, just because someone is once again afraid
> to
On 3/09/2015 2:58 a.m., Yuri Voinov wrote:
>
> Here is an example.
>
> Look at these three screenshots.
>
> First. Two images requested by one client at the same time.
>
> http://i.imgur.com/JbMhTQ4.png
>
> This is the same image:
> http://i.imgur.com/4khcCOT.png
> http://i.imgur.com/Ya58kfG.pn
Hello there! :)
Can you tell me what it means? The following line in my cache.log file:
NF getsockopt(SO_ORIGINAL_DST) failed on local=192.168.1.1:3128
remote=192.168.1.120 FD 518 flags=33: (2) No such file or directory
When these lines appear in my log, the CPU also goes to 100% with
On 3/09/2015 12:23 a.m., Yuri Voinov wrote:
>
> Look at this:
>
> http://i.imgur.com/gbkU20r.png
>
> Pay attention to the reply times. With a hit ratio no higher than 30%,
> unacceptable delays will also occur for clients.
>
> So, I see no reason to have a cache with a low hit ratio in any case. IMHO
> n
I turned
*forwarded_for on*
and deleted
*visible_hostname*
That did the job.
Thank you for your advice.
Maybe this helps someone someday when facing the same error.
Here is another case with the same image:
http://i.imgur.com/qM52aPQ.png
The same, right?
So, I am supposed to leave thousands of copies of the same image, even
within a single user session, just because someone is once again afraid
to cache? And I
Here is an example.
Look at these three screenshots.
First. Two images requested by one client at the same time.
http://i.imgur.com/JbMhTQ4.png
This is the same image:
http://i.imgur.com/4khcCOT.png
http://i.imgur.com/Ya58kfG.png
Agree?
And -
Look at this:
http://i.imgur.com/gbkU20r.png
Pay attention to the reply times. With a hit ratio no higher than 30%,
unacceptable delays will also occur for clients.
So, I see no reason to have a cache with a low hit ratio in any case. IMHO
we need to tune ca
A 30% hit ratio is too low to justify a caching proxy in the
infrastructure. There is simply no reason to cache anything with a low
hit rate; it's enough to buy more external throughput. Agree?
Yes, I use the 3.4.x version with custom settings. It seems safe enough for
On 02/09/2015 13:00, Yuri Voinov wrote:
I'm getting a very high hit ratio in my cache, and I do not intend to
lower it myself. It's enough that, on the opposite side, thousands of
webmasters counteract the caching of their content on their own grounds,
beginning with YouTube.
Well, most sane s
I'm getting a very high hit ratio in my cache, and I do not intend to
lower it myself. It's enough that, on the opposite side, thousands of
webmasters counteract the caching of their content on their own grounds,
beginning with YouTube.
02.09.15 1
Not to use the ignore-must-revalidate refresh_pattern option for content.
So far, my approach has not caused a single problem with customers. And,
in my opinion, you're too cautious, afraid of caching more aggressively.
If I complain about problems with the site -
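For context, the directive being argued over looks like this in squid.conf; the pattern and timings below are an invented illustration of the aggressive style under discussion, not a recommendation:

```
# ignore-must-revalidate overrides the origin's
# Cache-Control: must-revalidate, so stale objects may be served.
refresh_pattern -i \.(gif|png|jpe?g)$ 1440 80% 43200 ignore-must-revalidate
```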
On 02/09/2015 12:46, Yuri Voinov wrote:
all, but I assume that you do not want innocent victims, like the few
gifs that actually have a different image depending on the parameter.
Maybe, maybe not. Most often I deal with unscrupulous webmasters who
deliberately do the same unfriendly content ca
On 02.09.15 4:57, Marcus Kool wrote:
>
>
> On 09/01/2015 03:57 PM, Yuri Voinov wrote:
>>
> This is a bad idea - caching the same gifs with unique parameters. They
stay unchanged for one HTTP session at best. Your cache will
overload with this s
# ###
# Negotiate
# ###
# http://wiki.squid-cache.org/Features/Authentication
# http://wiki.squid-cache.org/Features/NegotiateAuthentication
auth_param negotiate program /usr/bin/ntlm_auth
--helper-protocol=gss-spnego --configfile /etc/samba/smb.conf-squid
auth_param negotiate children 10
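To actually enforce the Negotiate helper configured above, an http_access rule set is still needed; a minimal hedged sketch (ACL name is an assumption):

```
# Require successful Negotiate (Kerberos/NTLM) authentication.
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
http_access deny all
```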