Subject: [squid-users] Squid 4 Migration - balance_on_multiple_ip
Hi squid users!
Hope you are all well. I'm attempting to migrate from Squid 3.5 to 4, and in my
conf file I used to have balance_on_multiple_ip toggled on so as to reduce the
chance of brownouts on endpoints. I noticed this is not available in Squid 4.
Is it on by default? Or is there some alternative?
On 1/6/21 3:24 PM, Conner Bean wrote:
> Hope you are all well. I'm attempting to migrate from Squid 3.5 to 4,
> and in my conf file I used to have balance_on_multiple_ip toggled on
> as to reduce chance of brownouts on endpoints.
FYI: Enabling balance_on_multiple_ip does nothing in Squid v3.5.
Thanks Amos
You mean using "login=PASS" in the peer settings, and on proxy parents B and
C using the "basic_fake_auth" helper to "simulate" the requested auth?
On 17/11/2020 at 11:43, Amos Jeffries wrote:
On 17/11/20 9:27 pm, David Touzeau wrote:
Hi,
We have a first Squid using Kerberos + Active Directory authentication.
This first squid is used to limit access using ACLs and Active Directory
groups.
This first squid uses parents as peers in order to access the internet in
this way:
| > SQUID B --
On Tuesday, June 30, 2020, 1:41:57 PM GMT+2, Eliezer Croitoru
wrote:
> ^(w[0-9]+|[a-z]+\.)?web\.whatsapp\.com$
Yes, it does. I should have seen that... Thanks for your help!
Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
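The ACL regex discussed above can be sanity-checked outside Squid. A quick sketch with grep -E (the hostnames below are test inputs chosen for illustration; note Squid regex ACLs are case-sensitive unless given -i):

```shell
# check which hostnames the whatsapp ACL regex accepts
re='^(w[0-9]+|[a-z]+\.)?web\.whatsapp\.com$'
for h in web.whatsapp.com api.web.whatsapp.com whatsapp.com; do
  if printf '%s\n' "$h" | grep -Eq "$re"; then
    echo "$h: match"
  else
    echo "$h: no match"
  fi
done
```

The bare domain and any single lowercase label prefix match; other strings fall through to "no match".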
On Tuesday, June 30, 2020, 8:50:09 AM GMT+2, Eliezer Croitoru
wrote:
>
> I can try to re-produce this setup locally to make sure that it works as
> described in the docs.
Thanks!
> So a couple of details:
> * PC Windows(What OS?) client with firefox
Windows 10, Windows 7
Firefox ESR 68.5.0
Subject: Re: [squid-users] Squid 4 and on_unsupported_protocol
squid.conf (cleaned of personal details) to make sure where and how these
rules should be applied?
Thanks,
Eliezer
On Monday, June 29, 2020, 6:41:41 PM GMT+2, Eliezer Croitoru
wrote:
>
>
> I believe what you are looking for is at:
> https://wiki.squid-cache.org/ConfigExamples/Chat/Whatsapp
Thanks, but the article doesn't work for me.
I still see Firefox complaining (console) about not being able to connect
Hi,
I'd like to allow whatsapp web through a transparent tproxy sslbump Squid setup.
The target site is not loading:
wss://web.whatsapp.com/ws
I get TCP_MISS/400 305 GET https://web.whatsapp.com/ws in the Squid cache log.
I'm not sure I know how to use the on_unsupported_protocol directive.
I have
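For reference, the directive asked about above can tell Squid to blindly tunnel traffic it cannot parse as HTTP (such as the WebSocket upgrade WhatsApp Web uses). A minimal sketch; applying it to `all` is an illustrative choice, not the thread's confirmed fix:

```
# sketch: when intercepted traffic turns out not to be recognizable HTTP,
# tunnel the raw bytes instead of returning an error
on_unsupported_protocol tunnel all
```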
On 5/15/20 3:28 AM, David Touzeau wrote:
> acl TestFinger server_cert_fingerprint
> 77:F6:8D:C1:0A:DF:94:8B:43:1F:8E:0E:91:5E:0C:32:42:8B:99:C9
> ssl_bump peek ssl_step2
> ssl_bump splice ssl_step3 TestFinger
> ssl_bump stare ssl_step2 all
> ssl_bump bump all
> But no luck, the website is still decrypted
On 15/05/20 7:28 pm, David Touzeau wrote:
>
> Thanks alex, made this one on squid 4.10
>
>
> acl TestFinger server_cert_fingerprint
> 77:F6:8D:C1:0A:DF:94:8B:43:1F:8E:0E:91:5E:0C:32:42:8B:99:C9
Is that a SHA1 fingerprint or a newer algorithm?
AFAIK only SHA1 is supported by Squid currently.
Thanks alex, made this one on squid 4.10
acl TestFinger server_cert_fingerprint
77:F6:8D:C1:0A:DF:94:8B:43:1F:8E:0E:91:5E:0C:32:42:8B:99:C9
acl ssl_step1 at_step SslBump1
acl ssl_step2 at_step SslBump2
acl ssl_step3 at_step SslBump3
ssl_bump peek ssl_step2
ssl_bump splice ssl_step3 TestFinger
Hi, i'm trying to play with acl "server_cert_fingerprint" for splicing
websites.
First, get the fingerprint:
openssl s_client -host www.clubic.com -port 443 2> /dev/null | openssl
x509 -fingerprint -noout
# Build the acl
acl TestFinger server_cert_fingerprint
77:F6:8D:C1:0A:DF:94:8B:43
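Since Amos notes elsewhere in this thread that Squid only supports SHA-1 fingerprints, it can help to pin the digest explicitly rather than rely on the openssl default. A sketch against a throwaway self-signed certificate (the /tmp paths and CN are arbitrary examples, not from the thread):

```shell
# generate a throwaway self-signed certificate, then print its SHA-1
# fingerprint explicitly (-sha1), the form expected by Squid's
# server_cert_fingerprint ACL
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=fingerprint-test" \
  -keyout /tmp/fp-key.pem -out /tmp/fp-cert.pem -days 1 2>/dev/null
openssl x509 -in /tmp/fp-cert.pem -fingerprint -sha1 -noout
```

The colon-separated value printed after "SHA1 Fingerprint=" is what goes into the acl line.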
Date: 23. 7. 2019 18:24:13
Subject: Re: [squid-users] squid 4 fails to authenticate using NTLM
I found one more thing in the cache.log:
Got user=[user1] domain=[DOM1] workstation=[machine1] len1=24 len2=334
Login for user [DOM1]\[user1]@[machine1] failed due to [Reading winbind... (0x0)]
It should fail if not allowed to read from winbind, I suppose.
Thanks.
Zbynek
-- Original e-mail --
From: Amos Jeffries
To: squid-users@lists.squid-cache.org
Date: 23. 7. 2019 11:03:37
Subject: Re: [squid-users] squid 4 fails to authenticate using NTLM
auth scheme.
--
## /var/lib/samba:
drwxr-x--- 2 root winbindd_priv 4096 Jul 23 15:30 winbindd_privileged
Zbynek
On 23/07/19 7:53 am, zby wrote:
> My problem: my browser keeps on prompting for authentication.
> Facts:
>
> Debian 10 x86_64
> squid-4.6 + samba-4.9
> joined AD using "net ads join -U ...". OK.
> wbinfo -t : OK
> wbinfo -P or -p : OK
> wbinfo -i userXYZ : returns data (OK)
> wbinfo -g (well, fails to "deliver", too many users?)
> smb
On 5/14/19 8:07 AM, Alex Rousskov wrote:
> On 5/14/19 12:25 AM, johnr wrote:
>> how do the pconn_lifetime and client_idle_pconn_timeout interact?
> There should be virtually no interaction: The former limit is checked
> just when a connection becomes idle and Squid decides whether to pool
> the co
On 5/14/19 12:25 AM, johnr wrote:
> If in the context of this directive became idle means "done processing the
> previous request" then how is the pconn_lifetime directive different than
> the client_idle_pconn_timeout and server_idle_pconn_timeout (other than
> affecting both at the same time)?
Alex - thank you for the reply.
If, in the context of this directive, "became idle" means "done processing the
previous request", then how is the pconn_lifetime directive different from
the client_idle_pconn_timeout and server_idle_pconn_timeout (other than
affecting both at the same time)? If my quest
On 5/10/19 12:18 PM, johnr wrote:
> The configuration directive pconn_lifetime
> (http://www.squid-cache.org/Doc/config/pconn_lifetime/), seems to give the
> squid admin control over whether squid closes idle connections or moves them
> into the 'idle connection pool'...
Correct. I would say "now
Hi,
The configuration directive pconn_lifetime
(http://www.squid-cache.org/Doc/config/pconn_lifetime/), seems to give the
squid admin control over whether squid closes idle connections or moves them
into the 'idle connection pool'... I am curious if in squid3, the connection
was automatically term
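Per Alex's distinction above, pconn_lifetime caps a persistent connection's total age while the idle timeouts only close connections that sit unused. A squid.conf sketch (the values are illustrative, not recommendations):

```
# cap every persistent connection's total lifetime, checked when the
# connection becomes idle and Squid decides whether to pool it
pconn_lifetime 1 hour
# separately, close pooled connections that stay idle too long
client_idle_pconn_timeout 2 minutes
server_idle_pconn_timeout 1 minute
```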
Ok,
here we are: https://bugs.squid-cache.org/show_bug.cgi?id=4938
On 10/04/19 9:14 pm, Davide Belloni wrote:
> Hi,
> in the request scope of the cache log I cannot find anything that
> suggests a configuration issue
>
It may be related to the session cache being set to '0'. The log shows
the server context being created, then destroyed just before the code
produce
Hi,
in the request scope of the cache log I cannot find anything that suggests a
configuration issue.
Maybe it is time to open a bug?
On Tue, 9 Apr 2019 at 09:58, Davide Belloni
wrote:
> Hi,
> here the cache log in debug mode with the single request (previous email
> was blocked because the atachm
Hi,
here is the cache log in debug mode with the single request (the previous
email was blocked because the attachment was too big)
Thanks
On Mon, 8 Apr 2019 at 08:26, Davide Belloni
wrote:
> Hi,
> here the cache log in debug mode with the single request
>
> Thanks
>
> On Fri, 5 Apr 2019 at 09:06, Amos
On 5/04/19 7:54 pm, Davide Belloni wrote:
> Hi,
> the setup is exactly what you suggested but still the ERROR shows up.
> Here the startup sequence about context creation:
>
Okay that looked reasonable.
>
> If you want I can attach all the cache log with startup and one request
> with error
>
Hi,
the setup is exactly what you suggested but still the ERROR shows up.
Here the startup sequence about context creation:
2019/04/05 06:29:48.050| Initializing https:// proxy context
2019/04/05 06:29:48.050| 24,8| SBuf.cc(38) SBuf: SBuf950 created from id
SBuf110
2019/04/05 06:29:48.050| 24,8| T
On 5/04/19 12:37 am, Davide Belloni wrote:
> Hi,
> this is the certificate that I'm using at the moment:
>
AFAICS the pieces Squid-4 needs for your config and checks for are all
there.
Are the pieces correctly ordered in the .pem file? key first, then CA cert.
>
> On Thu, 4 Apr 2019 at 12:57,
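Amos's ordering question above (key first, then the CA cert in the .pem) can be checked mechanically. A sketch that builds a combined PEM in that order and lists the block headers; the throwaway paths and CN are examples, not from the thread:

```shell
# build a combined PEM the way the thread describes (private key first,
# then the certificate), then list the PEM block headers in file order
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=squid-test" \
  -keyout /tmp/squid-key.pem -out /tmp/squid-cert.pem -days 1 2>/dev/null
cat /tmp/squid-key.pem /tmp/squid-cert.pem > /tmp/squid-combined.pem
grep -- '-----BEGIN' /tmp/squid-combined.pem
```

The PRIVATE KEY header should appear before the CERTIFICATE header; if an existing file lists them the other way round, the pieces are mis-ordered.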
Hi,
this is the certificate that I'm using at the moment:
Certificate:
> Data:
> Version: 3 (0x2)
> Serial Number:
> a3:49:9a:ee:ac:75:66:da
> Signature Algorithm: sha256WithRSAEncryption
> Issuer: CN = nobody
> Validity
> Not Before:
Hi, thanks very much for all the advice!
To generate the certificate I followed the squid wiki, which (if I remember
correctly) doesn't modify the openssl conf to create it.
Do you have a link to a good howto about that?
Thanks
On Thu 4 Apr 2019, 12:35 Amos Jeffries wrote
On 4/04/19 10:11 pm, Davide Belloni wrote:
> Hi,
> I've a problem in Ubuntu 18.04.2 with Squid 4.6 compiled with OpenSSL
> 1.1 about ssl_bump. The same configuration works in Squid 3.5 and
> OpenSSL 1.0
>
> Here the relevant conf :
>
> ...
> http_port 3128 ssl-bump options=ALL:NO_SSLv3 co
Hi,
I've a problem in Ubuntu 18.04.2 with Squid 4.6 compiled with OpenSSL 1.1
about ssl_bump. The same configuration works in Squid 3.5 and OpenSSL 1.0
Here the relevant conf :
...
http_port 3128 ssl-bump options=ALL:NO_SSLv3 connection-auth=off
generate-host-certificates=off cert=/etc/squid/squi
Many thanks for the explanation.
There is a misconfiguration in the config file:
"cache deny all"
It's a shame...
-----Original Message-----
From: squid-users On Behalf Of Alex Rousskov
Sent: Saturday, 23 February 2019 23:16
To: squid-users@lists.squid-cache.org
Subject: Re:
Subject: Re: [squid-users] Squid 4.x: cache_peer PROXY_PROTOCOL support with
squid parents
Currently we are working on Kerberos with Active Directory, with HAProxy
sending requests to squid using the PROXY protocol.
Everything works great, but we want to replace the ha-proxy
On 2/23/19 10:17 AM, Amos Jeffries wrote:
> On 24/02/19 5:33 am, David Touzeau wrote:
>> http.cc(982) haveParsedReplyHeaders: decided: do not cache but share
>> because the entry has been released; HTTP status 200
>> What “but share because the entry has been released” event means ?
> 'do not cac
On 24/02/19 5:33 am, David Touzeau wrote:
> Hi
>
> I’m trying to store in cache an Internet file
>
>
> Run the squid in debug mode says:
>
> http.cc(982) haveParsedReplyHeaders: decided: do not cache but share
> because the entry has been released; HTTP status 200
>
> What “but share because t
On 24/02/19 5:30 am, David Touzeau wrote:
>
> Currently we are working on Kerberos with Active Directory, with HAProxy
> sending requests to squid using the PROXY protocol.
> Everything works great but we want to replace the ha-proxy with a squid.
> In fact, we want the squid client to send the
Hi
I'm trying to store an Internet file in the cache.
Running squid in debug mode says:
http.cc(982) haveParsedReplyHeaders: decided: do not cache but share because
the entry has been released; HTTP status 200
What does the "but share because the entry has been released" event mean?
centralize ACLs on the parent proxy according to
the user's login name.
Do you have any suggestions?
Best regards
-----Original Message-----
From: squid-users On Behalf Of Amos Jeffries
Sent: Saturday, 23 February 2019 04:07
To: squid-users@lists.squid-cache.org
Subject: Re: [s
On 23/02/19 2:45 am, David Touzeau wrote:
> Hi,
>
>
>
> We would like to use this infrastructure:
>
>
>
> Squid-cache client authentication 1
>
>
> | > Squid Parent with ACLs per user/LDAP groups
Hi,
We would like to use this infrastructure:
Squid-cache client authentication 1
| > Squid Parent with ACLs per user/LDAP groups/Web filtering --->
INTERNET
Squid-cache client authentication 2
Currently this kind of infrastructure cannot be done because t
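The workaround floated later in this thread (login=PASS on the peer line, with a fake basic-auth helper on the parents) might be sketched as follows. The host names, port, and helper path are assumptions for illustration, not a confirmed recipe:

```
# on the authenticating child squid: forward the user's credentials upstream
cache_peer parent-b.example.net parent 3128 0 no-query login=PASS
never_direct allow all

# on parents B/C: accept whatever credentials arrive, without verifying them,
# so the user name becomes usable in per-user ACLs
# (helper path varies by distro)
auth_param basic program /usr/lib/squid/basic_fake_auth
auth_param basic children 5
acl authed proxy_auth REQUIRED
http_access allow authed
```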
On Sat, Apr 28, 2018 at 4:20 PM, Amos Jeffries wrote:
>
> That seems to be a bug. I think I can already see what is causing it, if
> you open a bug report I'll attach a test patch there for you.
>
Thanks! That fixed it.
On 29/04/18 09:57, Rick Ellis wrote:
> Before trying squid 4 this worked as intended:
>
> acl PORT80 myport 80
FYI: That ACL type is deprecated because it is so unreliable. Use
myportname (Squid listening host/IP:port or name= parameter) or
localport (the src-port from TCP) , depending on which i
Before trying squid 4 this worked as intended:
acl PORT80 myport 80
acl MYSITE dstdomain www.domain.com
http_access deny PORT80 MYSITE
deny_info 301:https://www.domain.com%R MYSITE
For 4.0.24 the %R is always blank, so all redirects go to the root of the
website. Is there something else I should
On 22/03/18 11:34, Dan Charlesworth wrote:
> Hello all,
>
> I'm wondering if anyone can point to a Squid 4 RPM package for CentOS /
> RHEL 6.
>
IIRC it is not a simple proposition. Squid-4 requires minimum compiler
versions that are not available in those ancient OS. It is often a
simpler propos
Hello all,
I'm wondering if anyone can point to a Squid 4 RPM package for CentOS /
RHEL 6.
I've had a search around, but it seems people are only packaging it for EL7.
I did try compiling an EL6 RPM myself, based on an EL7 source RPM, but I'm
not adept in this area and couldn't get past certain
On 25/02/18 06:26, Chase Wright wrote:
> It's been nearly 2 years since there was a blog post about Squid 4.x and
> I've noticed that the daily auto-generated Squid 3.5 stable branch
> release last updated on "08 Dec 2017"
>
> I've also noticed that the last two CVEs were only fixed in the 4.x
> b
It's been nearly 2 years since there was a blog post about Squid 4.x and
I've noticed that the daily auto-generated Squid 3.5 stable branch release
last updated on "08 Dec 2017"
I've also noticed that the last two CVEs were only fixed in the 4.x branch
(2018)
Is the squid team planning to move 4.
On 29/01/18 22:48, Alex Crow wrote:
>
> Thanks very much Alex. I thought it might be something like that. I'm
> guessing it's most likely #3 or #4 as the site works direct from the
> browser.
>
That does not preclude #1 or #2 from being possibilities.
It is very common to have a server with out
On 01/26/2018 02:30 AM, Alex Crow wrote:
> I've just set up a new SSL interception proxy using peek/splice/bump
> using squid 4.0.22 and I'm getting SSL errors on some site indicating
> missing intermediate certs as described here:
>
> https://blog.diladele.com/2015/04/21/fixing-x509_v_err_unable
Hi List,
I've just set up a new SSL interception proxy using peek/splice/bump
using squid 4.0.22 and I'm getting SSL errors on some site indicating
missing intermediate certs as described here:
https://blog.diladele.com/2015/04/21/fixing-x509_v_err_unable_to_get_issuer_cert_locally-on-ssl-bum
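One classic remedy for the missing-intermediate errors described above is to hand Squid the intermediates directly. A sketch; the bundle path is an assumption:

```
# a PEM bundle of intermediate CA certificates that misconfigured origin
# servers fail to send in their TLS handshake
sslproxy_foreign_intermediate_certs /etc/squid/foreign_intermediate_certs.pem
```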
25.01.2017 5:25, Alex Rousskov wrote:
> On 01/24/2017 02:11 PM, Yuri Voinov wrote:
>> 25.01.2017 2:50, Alex Rousskov wrote:
>>> A short-term hack: I have seen folks successfully solving somewhat
>>> similar problems using a localport ACL with an "impossible" value of
>>> zero. Please try this hac
On 01/24/2017 02:11 PM, Yuri Voinov wrote:
> 25.01.2017 2:50, Alex Rousskov wrote:
>> A short-term hack: I have seen folks successfully solving somewhat
>> similar problems using a localport ACL with an "impossible" value of
>> zero. Please try this hack and update this thread if it works for you:
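Alex's short-term hack could be sketched as below. The ACL name is made up here, and the rule would need to sit before whichever http_access line currently denies the fetches:

```
# certificate fetches initiated by Squid itself have no client TCP
# connection, so an "impossible" localport value matches only them
acl squidInternal localport 0
http_access allow squidInternal
```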
25.01.2017 2:50, Alex Rousskov wrote:
> On 01/24/2017 12:20 PM, Yuri Voinov wrote:
>> 25.01.2017 1:10, Alex Rousskov wrote:
>>> On 01/24/2017 11:33 AM, Yuri Voinov wrote:
http_access deny to_localhost
>>> Does not match. The destination is not localhost.
>> Yes, destination is squid itself.
On 01/24/2017 12:20 PM, Yuri Voinov wrote:
> 25.01.2017 1:10, Alex Rousskov wrote:
>> On 01/24/2017 11:33 AM, Yuri Voinov wrote:
>>> http_access deny to_localhost
>> Does not match. The destination is not localhost.
> Yes, destination is squid itself. From squid to squid.
No, not "to squid": Th
On my setup it is easy to reproduce.
It is enough to execute with wget:
wget -S https://yandex.com/company/
access.log immediately shows
0 - TCP_DENIED/403 3574 GET http://repository.certum.pl/ca.cer -
HIER_NONE/- text/html;charset=utf-8
before the request to the Yandex destination.
However it execut
Under detailed ACL debug got this transaction:
2017/01/25 01:36:35.772 kid1| 28,3| DomainData.cc(110) match:
aclMatchDomainList: checking 'repository.certum.pl'
2017/01/25 01:36:35.772 kid1| 28,3| DomainData.cc(115) match:
aclMatchDomainList: 'repository.certum.pl' NOT found
2017/01/25 01:36:35.77
25.01.2017 1:10, Alex Rousskov wrote:
> On 01/24/2017 11:33 AM, Yuri Voinov wrote:
>
>>> 1485279884.648 0 - TCP_DENIED/403 3574 GET
>>> http://repository.certum.pl/ca.cer - HIER_NONE/- text/html;charset=utf-8
>
>> http_access deny !Safe_ports
> Probably does not match -- 80 is a safe port.
>
On 01/24/2017 11:33 AM, Yuri Voinov wrote:
>> 1485279884.648 0 - TCP_DENIED/403 3574 GET
>> http://repository.certum.pl/ca.cer - HIER_NONE/- text/html;charset=utf-8
> http_access deny !Safe_ports
Probably does not match -- 80 is a safe port.
> # Instant messengers include
> include "/usr
This is a working production server. I've checked the configuration twice and
see no problem.
Here:
# ----------------------------------------------------------------
# Access parameters
# ----------------------------------------------------------------
# Deny requests to unsafe ports
http_access deny !Safe_ports
# Instant messengers include
include "/usr/
On 01/24/2017 11:19 AM, Yuri Voinov wrote:
> It is downloads directly via proxy from localhost:
> As I understand, downloader also access via localhost, right?
This is incorrect. Downloader does not have a concept of an HTTP client
which sends the request to Squid so "via localhost" or "via any
Maybe this feature is mutually exclusive with the
sslproxy_foreign_intermediate_certs option?
25.01.2017 0:19, Yuri Voinov wrote:
> Mm, hardly.
>
> It is downloads directly via proxy from localhost:
>
> root @ khorne /patch # http_proxy=localhost:3128 curl
> http://repository.certum.pl/ca.cer
>
Mm, hardly.
It is downloads directly via proxy from localhost:
root @ khorne /patch # http_proxy=localhost:3128 curl
http://repository.certum.pl/ca.cer
[binary DER certificate output]
On 01/24/2017 10:48 AM, Yuri Voinov wrote:
> It seems 4.0.17 tries to download certs but gives deny somewhere.
> However, same URL with wget via same proxy works
> Why?
Most likely, your http_access or similar rules deny internal download
transactions but allow external ones. This is possible, fo
Hm. Another question.
It seems 4.0.17 tries to download certs:
1485279884.648 0 - TCP_DENIED/403 3574 GET
http://repository.certum.pl/ca.cer - HIER_NONE/- text/html;charset=utf-8
but it gets denied somewhere.
However, the same URL with wget via the same proxy works:
root @ khorne /patch # wget -S htt
On 01/23/2017 03:59 PM, Amos Jeffries wrote:
> On 24/01/2017 8:22 a.m., Yuri Voinov wrote:
>> 24.01.2017 0:06, Alex Rousskov wrote:
>>> FWIW, IMO, storing the generated fake certificates in the regular Squid
>>> cache would also be better than using an OpenSSL-administered database.
>> Exactly.
>
On 24/01/2017 7:06 a.m., Marcus Kool wrote:
>
>
> On 23/01/17 15:31, Alex Rousskov wrote:
>> On 01/23/2017 04:28 AM, Yuri wrote:
>>
>>> 1. How does it work?
>>
>> My response below and the following commit message might answer some of
>> your questions:
>>
>> http://bazaar.launchpad.net/~squi
On 24/01/2017 8:22 a.m., Yuri Voinov wrote:
>
>
> 24.01.2017 0:06, Alex Rousskov wrote:
>> On 01/23/2017 10:41 AM, Yuri Voinov wrote:
>>> 23.01.2017 23:31, Alex Rousskov wrote:
On 01/23/2017 04:28 AM, Yuri wrote:
>>
> 2. How this feature is related to sslproxy_foreign_intermediate_certs,
24.01.2017 2:25, Marcus Kool wrote:
>
>
> On 23/01/17 17:23, Yuri Voinov wrote:
> [snip]
>
>>> I created bug report http://bugs.squid-cache.org/show_bug.cgi?id=4659
>>> a week ago but there has not been any activity.
>>> Is there someone who has sslproxy_foreign_intermediate_certs
>>> working in
On 23/01/17 17:23, Yuri Voinov wrote:
[snip]
I created bug report http://bugs.squid-cache.org/show_bug.cgi?id=4659
a week ago but there has not been any activity.
Is there someone who has sslproxy_foreign_intermediate_certs
working in Squid 4.0.17 ?
Seems to work the same as in 3.5.x, as far as I can see
24.01.2017 0:06, Marcus Kool wrote:
>
>
> On 23/01/17 15:31, Alex Rousskov wrote:
>> On 01/23/2017 04:28 AM, Yuri wrote:
>>
>>> 1. How does it work?
>>
>> My response below and the following commit message might answer some of
>> your questions:
>>
>> http://bazaar.launchpad.net/~squid/squid/
24.01.2017 0:06, Alex Rousskov wrote:
> On 01/23/2017 10:41 AM, Yuri Voinov wrote:
>> 23.01.2017 23:31, Alex Rousskov wrote:
>>> On 01/23/2017 04:28 AM, Yuri wrote:
I.e., where downloaded certs stored, how it
handles, does it saves anywhere to disk?
>>> Missing certificates are fetched
On 01/23/2017 10:41 AM, Yuri Voinov wrote:
> 23.01.2017 23:31, Alex Rousskov wrote:
>> On 01/23/2017 04:28 AM, Yuri wrote:
>>> I.e., where downloaded certs stored, how it
>>> handles, does it saves anywhere to disk?
>> Missing certificates are fetched using HTTP[S]. Certificate responses
>> should
On 23/01/17 15:31, Alex Rousskov wrote:
On 01/23/2017 04:28 AM, Yuri wrote:
1. How does it work?
My response below and the following commit message might answer some of
your questions:
http://bazaar.launchpad.net/~squid/squid/5/revision/14769
It seems that the feature only goes to
23.01.2017 23:31, Alex Rousskov wrote:
> On 01/23/2017 04:28 AM, Yuri wrote:
>
>> 1. How does it work?
> My response below and the following commit message might answer some of
> your questions:
>
> http://bazaar.launchpad.net/~squid/squid/5/revision/14769
>
>> I.e., where downloaded certs s
On 01/23/2017 04:28 AM, Yuri wrote:
> 1. How does it work?
My response below and the following commit message might answer some of
your questions:
http://bazaar.launchpad.net/~squid/squid/5/revision/14769
> I.e., where downloaded certs stored, how it
> handles, does it saves anywhere to di
Hi, gents.
I have some stupid questions about the subject.
1. How does it work? I.e., where are downloaded certs stored, how are they
handled, are they saved anywhere on disk? Because this feature is
completely undocumented and it does not follow from the source code.
2. How is this feature related to ss
Also here is an example showing the issues when pushing to S3, as well as
the same error with some google urls.
2016/10/17 18:33:32 kid1| SECURITY ALERT: Host header forgery detected on
local=209.85.144.113:443 remote=x.x.x.x:62402 FD 49 flags=33 (local IP does
not match any domain IP)
2016/10/17
In response to it not being a false positive: maybe it's not specifically
the TTL, but in this other article on the mailing lists someone else had the
same issue.
Here is the response Amos gave; this is a known issue and apparently there
is no way to "ignore host header forgery issues" or bypass th
Hi
Replying to the list
Yes, I get that error on many different sites, the same exact error about host
headers.
Also, if you watch the TTL on the amazonaws url I provided, it changes from 3
to 5 to 10 seconds to 60 to 10 back and forth.
If you go to an online dns lookup site like kloth I see via kloth 5
Hi,
I have a constant problem with Host header forgery detection on squid doing
peek and splice.
I see this most commonly with CDNs, Amazon and Microsoft, due to the fact that
their TTL is only 5 seconds on certain dns entries I'm connecting to. So
when my client connects through my squid I get host hea
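The mismatch behind these alerts arises when Squid and the client resolve the short-TTL name at different moments or against different resolvers. A commonly suggested mitigation (not a complete fix, and not confirmed by this thread) is to point Squid and the clients at the same resolver; the address below is an example:

```
# have Squid query the same resolver the clients use, so both are more
# likely to see identical answers for short-TTL CDN names
dns_nameservers 192.0.2.53
```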
The latest tests show that Squid for unknown reasons does outgoing
connections using IPv6 only.
This leads to "Network unreachable" with my ISP - it does not support IPv6.
Full wireshark dumps for a single outgoing transaction are attached to the
bug already.
20.04.16 17:14, Eliezer Croitoru wrote:
He
Hey Yuri,
I think that the bug solution or identification is requiring a full
tcpdump trace for a single request as was mentioned on the bug
report:
http://bugs.squid-cache.org/show_bug.cgi?id=4497#c39
http://bugs.squid-cache.org/show_bug.cgi?id=4497#c40
18.04.16 22:11, Guy Helmer wrote:
>
>> On Apr 17, 2016, at 5:50 AM, Yuri Voinov wrote:
>>
>> *NIX means UNIX. Solaris is AT&T UNIX. Linux is not UNIX (C) Linus
Torvalds. :) We are not
> On Apr 17, 2016, at 5:50 AM, Yuri Voinov wrote:
>
> *NIX means UNIX. Solaris is AT&T UNIX. Linux is not UNIX (C) Linus Torvalds.
> :) We are not speaking about all possible OS'es. I suggest the matter is in
> SSL/TLS, not OS or hands
http://bugs.squid-cache.org/show_bug.cgi?id=4497
Debug logs are here:
https://drive.google.com/file/d/0B4nS4FYXsqTfdlpqeHJSRWtmcFE/view?usp=sharing
Here is one transaction done from wget on a separated testing setup.
17.04.16 20:41, Alex Rousskov
On 04/17/2016 06:59 AM, Yuri Voinov wrote:
> IDK whats happening.
The answer is probably in the ALL,9 log. Since you can reproduce this
problem on an isolated system with a single transaction, you may be able
to analyze that log to pinpoint the failure. If you cannot or will not
perform that analy
17.04.16 15:16, Amos Jeffries wrote:
> On 17/04/2016 4:55 a.m., Yuri Voinov wrote:
>>
>> So.
>>
>> Still no ideas?
>>
>
> Only things I assume you probably already looked at:
>
> Maybe churn in the CA certificates. Linux and Windows distros h
*NIX means UNIX. Solaris is AT&T UNIX. Linux is not UNIX (C) Linus
Torvalds. :) We are not speaking about all possible OS'es. I suggest
the matter is in SSL/TLS, not OS or hands or something similar.
The problem is in CF, I think. As a maximum in pe