Hi all
I was trying to configure the ssl-bump feature. I forgot to allow the
initial CONNECT (or the fake CONNECT, in case of intercepting proxy). This
led me to some strange results which I'd like to point out. I am using
CentOS 8 with squid 6.13 recompiled from the Fedora RPM.
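For reference, a minimal bump setup where the real (or intercept-generated) CONNECT is permitted by http_access before any ssl_bump step runs — just a sketch, with a placeholder cert path and network:

```
http_port 3128 ssl-bump tls-cert=/etc/squid/bump.pem
acl localnet src 192.168.0.0/16
# the (possibly fake) CONNECT must pass http_access first
http_access allow localnet
http_access deny all
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```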
First
ivate page? I just want to allow only
the internal IPs and cut everyone else off.
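A minimal sketch of that (assuming an internal range of 192.168.0.0/16):

```
acl internal src 192.168.0.0/16
http_access allow internal
http_access deny all
```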
I've tried taking out the deny_info, but that sends the user and tool to a
squid error page which basically fails the test as well since it's on the same
site.
I've also tried doing a TCP_RESET i
ibly be sent to .4
and .5 as well.
How do I ensure that www.example.com/tst/map1/. and map2 only go to .4 and
.5 while still being consistent with the domain as you suggested?
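One way to pin those paths to the two servers, sketched with hypothetical 10.0.0.x addresses and peer names:

```
acl maps urlpath_regex ^/tst/map[12]/
cache_peer 10.0.0.4 parent 80 0 no-query originserver name=web4
cache_peer 10.0.0.5 parent 80 0 no-query originserver name=web5
cache_peer_access web4 allow maps
cache_peer_access web5 allow maps
cache_peer_access web4 deny all
cache_peer_access web5 deny all
```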
Thanks.
On Fri, Aug 30, 2019, at 11:41 AM, Alex Rousskov wrote:
> On 8/30/19 11:44 AM, cred...@e
idden
Server: squid
Mime-Version: 1.0
Date: Wed, 27 Mar 2019 20:36:20 GMT
Content-Type: text/html
Content-Length: 5
X-Squid-Error: TCP_RESET 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from www.example.com
X-Cache-Lookup: NONE from www.example.com:80
Via: 1.0 www.example.com (
Slightly off topic, but am I correct in thinking TLS supersedes SSL?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Could you:
A – forward to different ports?
B – use network address translation?
Thoughts…
From: squid-users On Behalf Of
Patrick Chemla
Sent: 19 December 2018 18:29
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Multiple SSL certificates on same IP
Hi all,
Thanks for the great
> So, Squid is installed on an Ubuntu VM, which runs on your laptop?
Correct
> So, the phone is either - direct connection via mobile Internet access, or
> via Squid and your home Internet connection - no way for the phone to use the
> Internet connection without going via Squid?
Ye
Hi,
Re network diagram - mish-mash / blended / spaghetti, I think :p
Squid is installed on the Ubuntu virtual machine. Sorry, I forgot to draw that on.
The phone connects to mobile internet when out of the house, then reverts back
to going via squid proxy when my laptop wifi is turned on. The
hide / obscure actual sites visited?
Can anyone point out any flaws or issues?
Thanks
I asked this some time ago and am bringing it up again to see if there are any
suggestions since we haven't been able to fix it.
We are using squid as reverse proxy and we have disabled SSLv3 :
https_port XXX.XXX.XXX.XXX:443 accel defaultsite=www.example.com vhost
cert=/etc/cert.pe
Hello squid users,
I'm trying to understand a strange problem with requests to edge.apple.com,
which I think may be related to IPv6 DNS resolution.
To set the scene - we operate a large (1,000+) fleet of Squid 3.5.25 caches.
Each runs on a separate LAN, connected to the internet via an
does this require, and any other sslproxy_* options? Our goal is just to stop
Nessus from flagging for sslv3. Thanks
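For what it's worth, the port an external scanner like Nessus tests is governed by the options= on the https_port line itself; sslproxy_options only affects squid's own outgoing TLS. A sketch (squid-3.x directive names):

```
# listening side -- what Nessus scans -- is the https_port line:
#   https_port ... options=NO_SSLv2,NO_SSLv3,...
# outgoing TLS from squid to origin servers:
sslproxy_options NO_SSLv2,NO_SSLv3
```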
On Fri, Mar 30, 2018, at 8:29 PM, Amos Jeffries wrote:
> On 31/03/18 11:41, squid wrote:
> > We are using squid as reverse proxy and we have disabled SSLv3 :
> >
> &
We are using squid as reverse proxy and we have disabled SSLv3 :
https_port XXX.XXX.XXX.XXX:443 accel defaultsite=www.example.com vhost
cert=/etc/cert.pem key=/etc/privkey.pem
options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE,CIPHER_SERVER_PREFERENCE
cipher=ECDHE-ECDSA . . .. dhparams=/etc
Hi, I have just installed squid on Windows 10, opened port 3128 in the
firewall, and configured FF to use the proxy on localhost:3128 for all
requests, but every request ends with the following page
Hi Squid users,
I'm having some trouble understanding Squid's peer selection algorithms, in
a configuration where multiple cache_peer lines reference the same host.
The background to this is that we wish to present cache service using
multiple accounts at an upstream provider, wi
m getting the same issue a few times a day. I suspect it's
mainly due to clients accessing Windows Updates, but difficult to tell.
I am automatically restarting squid, but the delays for other users
while all this is happening can generate a poor browsing experience.
Thanks
Mark
Thanks Garry and Amos! My problem is solved.
Can anyone point out what I'm doing wrong in my config?
Squid config:
https://bpaste.net/show/796dda70860d
I'm trying to use ACLs to direct incoming traffic on assigned ports to
assigned outgoing addresses. But, squid uses the first IP address
assigned to the interface not listed in
efox x" ipv4-1
tcp_outgoing_address xxx.xxx.xxx.xxx ipv4-1
acl ipv4-2 myportname 3129 src xxx.xxx.xxx.xxx/24
http_access allow ipv4-2
request_header_replace User-Agent "Internet Explorer x" ipv4-2
tcp_outgoing_address xxx.xxx.xxx.xxx ipv4-2
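For reference, the per-port pattern for one listening port might look like this (the addresses and port name here are placeholders):

```
http_port 3128 name=port3128
acl ipv4-1 myportname port3128
http_access allow ipv4-1
tcp_outgoing_address 198.51.100.1 ipv4-1
```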
Thanks!
, it certainly
> creates problems for those who want to [ab]use http_reply_access as a
> delay hook. FWIW, Squid had this exception since 2007:
Thanks, makes sense. It would be great if there was a way to slow down 407
responses; at the moment the only workaround I can think of is to write a
l
ternally generated responses get http_reply_access applied to them.
> Yet no sign of that in your log.
>
> Is this a very old Squid version?
It's a recent Squid version - 3.5.20 on CentOS 6, built from the SRPM kindly
provided by Eliezer.
> Or are the "checking http_reply_access&qu
access#4
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checking manager
2016/10/04 22:37:18.160 kid1| 28,3| RegexData.cc(51) match:
aclRegexData::match: checking 'http://www.theage.com.au/'
2016/10/04 22:37:18.160 kid1| 28,3| RegexData.cc(62) match:
aclRegexData::match: loo
response. Running strace across the
pid of each child helper doesn't show any activity across those processes
either.
I also tried the approach suggested by Amos:
> The outcome of that was a 'ext_delayer_acl helper in Squid-3.5
>
> <http://www.squid-cache.org/Versions/v3
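The documented usage of that helper is roughly as follows (the wait value and helper path are illustrative and vary by distro):

```
# delay matching requests by ~500 ms via the ext_delayer_acl helper
external_acl_type delayer concurrency=10000 children-max=2 %URI \
    /usr/lib/squid/ext_delayer_acl -w 500
acl delay external delayer
http_access allow localnet !delay
```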
Thank you. Just want to make sure I understand before we dive in.
On Thu, Sep 22, 2016, at 09:03 PM, Amos Jeffries wrote:
> On 23/09/2016 12:45 p.m., creditu wrote:
> > We have been using squid in accelerator mode for a number of years. In
> > the current setup we have the squid
>
>
> > If you input http://www.yahoo.com/page.html, this will be transformed
> > to http://192.168.1.1/www.google.com/page.html.
>
> I got the impression that the OP wanted the rewrite to work the other way
> around.
My apologies, that does seem to be t
hoo.com
> the proxy can pick up the host "http://www.yahoo.com" from the URI, and
> retrieve the info for me,
> so it need to get the new $host from $location, and remove the $host from the
> $location before proxy pass it.
> is this doable via squid?
Yes, it is doable.
Hi Squid users,
Seeking advice on how to slow down 407 responses to broken Apple & MS
clients, which seem to retry at very short intervals and quickly fill the
access.log with garbage. The problem is very similar to this:
http://www.squid-cache.org/mail-archive/squid-users/201404/0326.
g else I'm missing? I'm sort of out of my
league here
so I may just quit and wait for v4. ;)
Thanks,
Jamie
>Sadly, that is kind of expected at present for any single client
>connection. We have some evidence that Squid is artificially lowering
>packet sizes in a few annoyin
My squid server has 1Gbps connectivity to the internet and it routinely gets
600 Mbps up/down to speedtest.net.
When a client computer on the same network has a direct connection to the
internet it, too, gets 600 Mbps up/down.
However, when that client computer connects through the squid
Hi,
Trying to use Squid 3.5 to filter a whitelist on a wifi hotspot. Got
http support working without issue.
Tried lots of things to get https to work, but it always kills http: all
the http requests time out.
So I am starting to think that maybe
http 3128 transparent
is not compatible with ssl_bump
is
> Thanks. The current maximum_object_size_in_memory is 19 MB.
>
>>
>> In summary, dealing with in-RAM objects significantly larger than 1MB
>> bigger the object, the longer Squid takes to scan its nodes.
>>
>> Short term, try limit
t_connections off
This option didn't fix the problem. The CPU usage went wild again after
about a day.
I've changed the maximum_object_size_in_memory setting as suggested by
Alex, and I'll report back on that.
Mark
On 2016-03-31 18:44, Alex Rousskov wrote:
>
> My working theory is that the longer you let your Squid run, the bigger
> objects it might store in RAM, increasing the severity of the linear
> search delays mentioned below. A similar pattern may also be caused by
> larger object
On 2016-03-31 16:07, Yuri Voinov wrote:
>
> Looks like permanently running clients, which have exhausted network
> resources and are then initiating connection aborts.
>
> Try to add
>
> client_persistent_connections off
>
> to squid.conf.
>
> Then observe.
Tha
Hi,
I'm running:
Squid Cache: Version 3.5.15 (including patches up to revision 14000)
on FreeBSD 9.3-STABLE (recently updated)
Every week or so I run into a problem where squid's CPU usage starts
growing slowly, reaching 100% over the course of a day or so. When
running normal
On 2016-03-15 09:40, sq...@peralex.com wrote:
> On 2016-03-15 09:05, Amos Jeffries wrote:
>> On 15/03/2016 7:34 p.m., squid wrote:
>>
>> This is bug 4447. Please update to a build from the 3.5 snapshot.
>>
>
> Thanks. I'll give that a try.
>
Looks like
Hi,
I've installed a Squid reverse proxy for an MS-Exchange test installation to
reach OWA from the outside.
My current environment is as follows:
Squid Version 3.4.8 with ssl on a Debian Jessie (self compiled)
The Squid and the exchange system are in the internal network with privat
On 2016-03-15 09:05, Amos Jeffries wrote:
> On 15/03/2016 7:34 p.m., squid wrote:
>>
>> I'm running FreeBSD 9.3-STABLE and Squid 3.5.15 and I'm getting regular
>> core dumps with the following stack. Note that I have disabled caching.
>> Any suggestions
I'm running FreeBSD 9.3-STABLE and Squid 3.5.15 and I'm getting regular
core dumps with the following stack. Note that I have disabled caching.
Any suggestions? I've logged a bug (4467):
#0 0x000801b8c96c in thr_kill () from /lib/libc.so.7
#1 0x000801c55fcb in abo
n alone, without
making changes to a live config.
Luke
Hi Squid users,
I'm seeking some guidance regarding the best way to debug the http_access
and http_reply_access configuration statements on a moderately busy Squid
3.5 cache. In cases where a number (say, 5 or more) of http_access lines
are present, the goal is to find which configur
Hello everyone,
I am using Squid 3.5.12 with Kerberos authentication only and ClamAV
on Debian Jessie.
My proxy is working very nicely, but now I've found an issue with just
one SSL website.
It would be nice to know if others can reproduce this issue.
Target website is: https://www
Dear Alex,
using squid 3.5.10 with the patch, the upload speed problem seems to be fixed.
Now I get 112 Mbit upload speed out of a possible maximum of 115 Mbit.
Squid 4.0.1 still has a performance problem on unencrypted POST upload ...
BR, Toni
(TSO off)
12:10:16.343559 IP 10.1.1.210.49388
Dear Alex,
unfortunately it is not really fixed.
The upload speed using squid 4.0.1 with this patch has improved
significantly, but it is still far from squid 3.4.x performance.
The test client used can reach a maximum upload speed of 115 Mbit if the
apache server is directly reachable.
If a SQUID 3.4.X
Dear squid team,
first of all thanks for developing such a great product!
Unfortunately, on uploading a big test file (unencrypted POST) to an
apache webserver through a squid proxy (v3.5.10 or 4.0.1), the upstream
packets get sliced into thousands of small 39-byte packets.
Excerpt from
Thanks for your valuable information Amos.
Regards,
Nithi
On Friday 26 June 2015 10:48 AM, Amos Jeffries wrote:
On 26/06/2015 4:36 p.m., Squid List wrote:
Hi,
Can Squid cache Microsoft Updates and iOS Updates?
If it can, please help me cache Chrome OS updates in
Hi,
Can Squid cache Microsoft Updates and iOS Updates?
If it can, please help me cache Chrome OS updates in the
latest squid version, installed on CentOS 6.6.
Thanks & Regards,
Nithi
Hi,
You can use the following in the squid configuration to keep the access
log for a long time.
logfile_rotate 10
It will keep the last 10 access logs of squid. If you wish to keep a month
of logs, use 30. You may rotate the squid log using crontab. The following
will rotate the log every morning at 6.
00
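The crontab line being quoted above presumably resembles the following (a sketch; the squid binary path may differ on your system):

```
# rotate squid's logs every morning at 06:00
0 6 * * * /usr/sbin/squid -k rotate
```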
website in a particular column in the db, you can store it
in a separate txt file and control the users' site access.
Squid supports user-defined helpers. If it is necessary to verify a site
against the db, you can create your own helper as per your requirement and
use it. If you need any cus
http_access deny google
but I suspect you might not actually like the results of what
you are asking for.
What's the best directive to use to make sure that google doesn't go
through the proxy at all?
acl google dstdom_regex -i google
?
___
How long have these problems with Google been happening?
For a couple of months now.
Also, is it possible you use collapsed forwarding, or something similar,
in your Squid configuration?
squid.conf is in the original email. Should I reattach?
Maybe you use HTTP header manipulation, which can confuse Google.
It's only since the upgrade of squid, so it must be something in the config.
an issue.
auth_param basic realm AAA proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32
Hi Amos,
The configuration I posted last time still cannot accomplish the task. So you
mean the "CONNECT" ACL must be paired with a normal "GET" ACL to be
evaluated by squid?
Best,
Kelvin Yip
-Original Message-
From: squid-users [mailto:squid-users-boun...@l
auth_param basic children 5
auth_param basic realm Welcome to Our Website!
auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/squid_user
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
acl my_auth proxy_auth REQUIRED
acl SSL_ports port 443
acl
Hi,
"Access to google maps (https://www.google.com/maps) should prevent any
authentication need"
I understand that all users should be able to access the google
maps link without any authentication. For this you could add the site
acl before the authentication part in
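A minimal sketch of that ordering (the acl name and domain list are illustrative; matching only the maps path over HTTPS is harder than matching the whole domain):

```
acl googlemaps dstdomain .google.com
http_access allow googlemaps
# authentication-protected rules follow
http_access allow my_auth
```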
e authentication information is not displayed.
It seems squid does not use the authentication information when matching this
rule: "http_access allow CONNECT google".
The "CONNECT" method succeeds. Then squid will continue to use no
authentication info
Hi,
I think the error is not with the squid version; it might be a connection
problem between the proxy server and the LDAP server. Please check the
reachability of the LDAP server from the proxy server, and also check
whether the DNS configuration on the proxy server is correct.
Hi,
Now the URL "/cgi-bin/swish-query.cgi" is reachable.
Please check it: http://www.squid-cache.org/cgi-bin/swish-query.cgi
Regards,
ViSolve Squid
On 10/17/2014 8:12 AM, James Harper wrote:
Doing a search on the main squid page gives me this:
The requeste
Hi,
Yes, we can redirect the ports to squid through our firewall rules.
Check the lines below to redirect the ports.
We have a few different methods:
1. First method:
First, we need the machine that squid will be running on. You do not
need iptables or any special kernel options on this
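When the redirect is done on a Linux router instead, it is typically a single NAT rule like this (the interface and ports are placeholders):

```
# redirect inbound HTTP arriving on eth0 to squid's intercept port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```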
Hi,
Check the acl rules below in your squid configuration file to block
particular domain URLs and also to block keywords.
# ACL block sites
acl blocksites dstdomain .youtube.com
# ACL block keywords
acl blockkeywords url_regex -i .youtube.com
#Deny access to block keywords ACL
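The deny lines that the comment above introduces would presumably be:

```
http_access deny blocksites
http_access deny blockkeywords
```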
Hi Team,
We have gone with the following commands in the squid configuration for
blocking particular sites (youtube.com) and also blocking keywords,
blocking both the website and the keyword.
# ACL block sites
acl blocksites dstdomain .youtube.com
# ACL block keywords
acl blockkeywords url_regex -i
Hi,
The http://www.squid-cache.org/ web site is working fine.
We accessed the site a minute ago.
Regards,
ViSolve Squid
On 9/30/2014 1:47 PM, Neddy, NH. Nam wrote:
Hi,
I "accidentally" accessed squid-cache.org and got a 403 Forbidden error,
and am wondering why NOT r
Hello Team,
We can inhibit X-Forwarded-For with "header_access X-Forwarded-For deny
all" in the squid configuration (squid.conf) file.
# Add the commands below in squid.conf:
via off
forwarded_for off
follow_x_forwarded_for deny all
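Note that header_access is the squid-2.x name; in squid-3.x the directive was split into request/reply variants, so the equivalent there is:

```
request_header_access X-Forwarded-For deny all
```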
Since need to build squid from source for these limi