[squid-users] Difference between cache manager http request count and access.log entries

2022-03-22 Thread admin
Hi!

 

I’ve recently stumbled across all the information that is returned by the 
Squid cache manager. After analysing the request count data in the cache 
manager responses a bit, I noticed that it does not match the number of 
entries in access.log. There might be 300 requests/s (as shown by the cache 
manager, by comparing the values and calculating the average per second) but 
only 100 access.log entries in roughly the same period (allowing a few 
seconds to make sure everything was written). 

I am wondering: what causes this, and how should I interpret these 
different numbers? Which of them can I trust?

I am aware that a CONNECT only shows up once and that requests routed through 
the tunnel do not show up in access.log; however, I thought that Squid cannot 
see the individual requests, since they are encrypted, and thus cannot count 
them either. That would be my only guess at what it could be.

 

Using squid 5 on Debian 11 and 99.9% of the traffic is HTTPS.

 

I would appreciate any help.

Thanks!  

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Difference between cache manager http request count and access.log entries

2022-03-26 Thread admin
I am looking at the "counters" report, i.e. the total request counts, which I 
then turn into rates with Prometheus/Grafana. I will double-check whether 
something is wrong there if there is no other explanation for this behaviour.
I do not run MITM decryption with a root certificate, so I cannot see the 
individual HTTP requests inside the encrypted traffic; that cannot really be it.
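For what it's worth, the arithmetic behind turning a cumulative cache-manager counter (such as client_http.requests from mgr:counters) into a per-second rate can be sketched like this; the numbers are made up, and this is roughly what Prometheus' rate() does between two scrapes:

```python
# Illustrative sketch: average requests/s between two snapshots of a
# monotonically increasing counter, as scraped from mgr:counters.
def per_second_rate(prev_count, prev_time, cur_count, cur_time):
    """Return the average rate between two (count, timestamp) snapshots."""
    elapsed = cur_time - prev_time
    if elapsed <= 0:
        raise ValueError("snapshots must be strictly increasing in time")
    return (cur_count - prev_count) / elapsed

# e.g. two scrapes 15 s apart showing 4500 new requests -> 300 req/s
print(per_second_rate(100_000, 0.0, 104_500, 15.0))  # → 300.0
```

If the counter used for the rate is cumulative across Squid's whole lifetime (or summed across workers), the computed rate will not match a tail of access.log over a short window.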

-----Original Message-----
From: squid-users  On Behalf Of 
Amos Jeffries
Sent: Wednesday, 23 March 2022 10:43
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Difference between cache manager http request count 
and access.log entries

On 23/03/22 09:04, admin wrote:
> Hi!
> 
> I’ve recently stumbled across all the information that is returned 
> by the Squid cache manager. After analysing the request count 
> data in the cache manager responses a bit, I noticed that 
> it does not match the number of entries in access.log. There 
> might be 300 requests/s (as shown by the cache manager, by comparing 
> the values and calculating the average per second) but only 100 
> access.log entries in roughly the same period (allowing a few seconds 
> to make sure everything was written).
> 
> I am wondering: what causes this, and how should I interpret 
> these different numbers? Which of them can I trust?

Which manager report are you looking at?

Some reports show average across the entire Squid lifetime across all workers.

> 
> I am aware that a CONNECT only shows up once and that requests routed 
> through the tunnel do not show up in access.log; however, I thought 
> that Squid cannot see the individual requests, since they are encrypted, 
> and thus cannot count them either.

That is normally correct, unless you are decrypting the traffic, in which case 
the decrypted requests, and the several transactions used to do the decryption, 
are logged (and counted).


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Squid 4.0.10 https intercept

2016-05-11 Thread admin

I create the cert:

openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 -keyout 
squidCA.pem -out squidCA.pem


And export it:

openssl x509 -in squidCA.pem -outform DER -out squidCA.crt

Wrong?
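One thing worth checking is whether the CA was created with an explicit CN. A sketch (the subject values are placeholders, and rsa:2048 is used in place of rsa:1024) of generating the combined PEM with a named CN and then inspecting it:

```shell
# Generate a self-signed CA with an explicit CN via -subj; key and cert
# both land in squidCA.pem (the combined file Squid expects).
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -subj "/O=Example Proxy/CN=Squid Proxy CA" \
  -keyout squidCA.pem -out squidCA.pem

# Export the certificate alone in DER form for browser import.
openssl x509 -in squidCA.pem -outform DER -out squidCA.crt

# Verify the CN actually made it into the subject field.
openssl x509 -in squidCA.pem -noout -subject
```

If the subject line printed at the end has an empty or odd CN, that would be consistent with a common-name error on the client side.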



Amos Jeffries wrote on 2016-05-11 17:18:


On 11/05/2016 11:59 p.m., admin wrote:


I just thought of something! I ran

openssl x509 -in squidCA.pem -outform DER -out squidCA.crt

imported the cert, and now I get ERR_CERT_COMMON_NAME_INVALID.

Where did I go wrong?


Hmm. I'm not sure that one is caused by you. If it is getting past the CA trust
check, then what you did earlier was okay.

This one sounds like either the CA was generated with something wrong in the
CN field, or the cert generated by Squid is broken in that way.

There are two reasons the Squid-generated cert might be broken, in this
order of relevance:

1) the server the client was trying to contact had a broken cert. The mimic
feature in Squid will copy cert breakages so the client can make its
security decisions on information that is as accurate as possible.

2) a bug in Squid.

Some more research, to find out what exactly is being identified as
invalid and where it comes from, will be needed to discover which case is
relevant.

Amos

Amos Jeffries wrote on 2016-05-11 16:43:

On 11/05/2016 6:35 p.m., Компания АйТи Крауд wrote:

hi!

I use squid 4.0.10 in INTERCEPT mode. If I deny some users
(IP addresses) with

acl users_no_inet src "/etc/squid/ip-groups/no-inet"
http_access deny users_no_inet

ERR_ACCESS_DENIED is displayed when they go to an HTTP site. If they go to an
HTTPS site, they first see the browser's NET::ERR_CERT_AUTHORITY_INVALID, and
only after clicking through the "unsecure" warning do they see ERR_ACCESS_DENIED.

How can I make Squid 4.0 display ERR_ACCESS_DENIED directly on HTTPS for a
denied user?

What you describe above is correct behaviour. The browser does not trust
your proxy's CA.

The only way to get around the browser warning about TLS security issue
is to install the CA used by the proxy into the browser trusted CA set.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Are there any distros with SSL Bump compiled by default?

2016-05-15 Thread admin
I have built debs of compiled Squid on Debian 8: 

3.5.8 

3.5.17 

4.0.10

Tim Bates wrote on 2016-05-14 14:36:

> Are there any Linux distros with pre-compiled versions of Squid with SSL Bump 
> support compiled in?
> 
> Alternatively, does anyone reputable do a 3rd party repo for Debian/Ubuntu 
> that includes SSL Bump?
> 
> TB
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 3.5.17 SSL-Bump Step1

2016-05-15 Thread admin

Hi!

Squid 3.5.17 with SSL, intercept.

I use SSL-Bump step 1 only, which gets the SNI and terminates HTTPS sites by 
domain name. The certificates are not replaced!


acl blocked_https ssl::server_name  "/etc/squid/urls/block-url"
https_port 3129 intercept ssl-bump options=ALL:NO_SSLv3:NO_SSLv2 
connection-auth=off cert=/etc/squid/squidCA.pem

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump terminate blocked_https

It works.

But if I use

acl users_no_inet src "/etc/squid/ip-groups/no-inet"
http_access deny users_no_inet

I see NET::ERR_CERT_AUTHORITY_INVALID in the browser. I imported my squid 
cert, but then I see NET::ERR_CERT_COMMON_NAME_INVALID.


Why is Squid trying to replace the certificate in this case?

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Are there any distros with SSL Bump compiled by default?

2016-05-16 Thread admin
Yes. 

I can send them by email if needed. 

Matus UHLAR - fantomas wrote on 2016-05-16 11:55:

> On 16.05.16 10:36, admin wrote: 
> 
>> I have built debs of compiled Squid on Debian 8:
>> 
>> 3.5.8
>> 
>> 3.5.17
>> 
>> 4.0.10
> 
> OpenSSL?
> 
> Tim Bates wrote on 2016-05-14 14:36:
> 
> Are there any Linux distros with pre-compiled versions of Squid with SSL Bump 
> support compiled in?
> 
> Alternatively, does anyone reputable do a 3rd party repo for Debian/Ubuntu 
> that includes SSL Bump?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Are there any distros with SSL Bump compiled by default?

2016-05-16 Thread admin

https://itcrowd72.ru/cloud/index.php/s/W4Sv8ojnf5dVKvc

squid 3.5.19 with SSL. Compiled and built as a deb on Debian 8. Enjoy :)



Amos Jeffries wrote on 2016-05-16 14:25:


Please update those to 3.5.19. A dozen CVEs went out in these past few
months. :-(

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Squid 3.5.17 SSL-Bump Step1

2016-05-16 Thread admin
Amos Jeffries wrote on 2016-05-16 13:34:

> Please upgrade to 3.5.19.

Upgraded to 3.5.19.

>> acl blocked_https ssl::server_name  "/etc/squid/urls/block-url"
>> https_port 3129 intercept ssl-bump options=ALL:NO_SSLv3:NO_SSLv2
>> connection-auth=off cert=/etc/squid/squidCA.pem
>> acl step1 at_step SslBump1
>> ssl_bump peek step1
>> ssl_bump terminate blocked_https
>> 
>> It works.
> 
> Obviously not. There is no instruction on what to do other than terminate.
> Squid is left to decide from other circumstances what is needed...

It works! :) You can check it on a virtual machine if you have the opportunity.

>> But if I use
>> 
>> acl users_no_inet src "/etc/squid/ip-groups/no-inet"
>> http_access deny users_no_inet
> 
> ... you force bumping to happen in order to deliver the HTTP error message.
> 
> Try adding this rule above the peek (and the ACL line too):
> ssl_bump terminate users_no_inet

Tried it, no success :(

I just do not understand the reason for this behaviour. Why, when access is
allowed, does everything work, but when access is denied via http_access, do I
first see a message that my certificate could not be matched, and only later
ERR_ACCESS_DENIED? Sorry for my English.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.5.17 SSL-Bump Step1

2016-05-16 Thread admin
Thanks for answer, Alex! 

Alex Rousskov wrote on 2016-05-17 00:24:

> When access is prohibited via http_access deny, Squid needs to send an
> "Access Denied" error response to the user (this is how http_access
> works). To send that error to the user, Squid needs to establish a
> secure connection with the user (this is how HTTPS works). To do that,
> Squid has to use its own SSL certificate (this is how SSL works).
> 
> If you want to use a splice-or-terminate design, do not deny access via
> http_access. Limit yourself to "ssl_bump terminate" rules.
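Alex's splice-or-terminate design could be sketched roughly like this (untested; adapted from the peek/terminate rules earlier in this thread):

```
# Terminate banned clients at TLS time instead of via http_access deny,
# so Squid never has to present its own certificate to send an error page.
acl users_no_inet src "/etc/squid/ip-groups/no-inet"
acl step1 at_step SslBump1
ssl_bump terminate users_no_inet
ssl_bump peek step1
ssl_bump splice all
```

The denied client then sees a dropped connection rather than an error page, which is the trade-off Alex describes.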

Is there a feature planned for Squid to serve ERR_ACCESS_DENIED and then terminate?

What are some other ways to deny HTTPS in intercept mode?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Peek and splice

2016-05-17 Thread admin
Please share your blocked_https.txt. 

Reet Vyas wrote on 2016-05-17 14:47:

> Hi 
> 
> Below is my squid configuration  
> 
> Squid : 3.5.13 
> OS ubuntu 14.04 
> 
> http_port 3128 
> http_port 3127 intercept 
> https_port 3129 intercept ssl-bump generate-host-certificates=on 
> dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_certs/squid.crt 
> key=/etc/squid/ssl_certs/squid.key 
> cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH
>  
> 
> always_direct allow all 
> sslproxy_cert_error allow all 
> sslproxy_flags DONT_VERIFY_PEER 
> acl blocked ssl::server_name  "/etc/squid/blocked_https.txt" 
> acl step1 at_step SslBump1 
> ssl_bump peek step1 
> ssl_bump terminate blocked 
> ssl_bump splice all 
> sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB 
> sslcrtd_children 16 startup=1 idle=1 
> sslproxy_capath /etc/ssl/certs 
> sslproxy_cert_error allow all 
> ssl_unclean_shutdown on 
> 
> I want to block facebook.com [1], so I have added the URL to the .txt file. 
> 
> It's not blocking anything. 
> 
> Please let me know what I have to change in this configuration. 
> 
> I am getting the below logs in squid: 
> 
> 1463478160.585551 192.168.0.66 TAG_NONE/200 0 CONNECT 107.170.47.181:443 
> [2] - HIER_NONE/- - 
> 1463478160.585550 192.168.0.66 TAG_NONE/503 0 CONNECT 
> freevideodownloader.co:443 [3] - HIER_NONE/- - 
> 1463478161.147562 192.168.0.66 TAG_NONE/200 0 CONNECT 107.170.47.181:443 
> [2] - HIER_NONE/- - 
> 1463478161.147561 192.168.0.66 TAG_NONE/503 0 CONNECT 
> freevideodownloader.co:443 [3] - HIER_NONE/- - 
> 1463478163.982553 192.168.0.66 TAG_NONE/200 0 CONNECT 107.170.47.181:443 
> [2] - HIER_NONE/- - 
> 1463478163.982552 192.168.0.66 TAG_NONE/503 0 CONNECT 
> freevideodownloader.co:443 [3] - HIER_NONE/- - 
> 1463478163.994565 192.168.0.66 TAG_NONE/200 0 CONNECT 107.170.47.181:443 
> [2] - HIER_NONE/- - 
> 1463478163.994564 192.168.0.66 TAG_NONE/503 0 CONNECT 
> freevideodownloader.co:443 [3] - HIER_NONE/- - 
> 1463478184.338 182900 192.168.0.66 TAG_NONE/200 0 CONNECT 106.10.137.175:443 
> [4] - HIER_NONE/- - 
> 1463478184.338 182898 192.168.0.66 TCP_TUNNEL/200 6040 CONNECT 
> geo.query.yahoo.com:443 [5] - ORIGINAL_DST/106.10.137.175 [6] - 
> 
> 1463478194.373 61 192.168.0.66 TCP_MISS/204 233 GET 
> http://www.gstatic.com/generate_204 - ORIGINAL_DST/216.58.199.163 [7] - 
> 1463478209.166 240232 192.168.0.66 TAG_NONE/200 0 CONNECT 74.125.200.239:443 
> [8] - HIER_NONE/- - 
> 1463478209.166 240231 192.168.0.66 TCP_TUNNEL/200 5603 CONNECT 
> translate.googleapis.com:443 [9] - ORIGINAL_DST/74.125.200.239 [10] - 
> 1463478209.200 240267 192.168.0.66 TAG_NONE/200 0 CONNECT 216.58.199.142:443 
> [11] - HIER_NONE/- - 
> 1463478209.200 240266 192.168.0.66 TCP_TUNNEL/200 4962 CONNECT 
> clients4.google.com:443 [12] - ORIGINAL_DST/216.58.199.142 [13] - 
> 1463478213.443 181611 192.168.0.66 TAG_NONE/200 0 CONNECT 31.13.79.246:443 
> [14] - HIER_NONE/- - 
> 1463478213.443 181611 192.168.0.66 TCP_TUNNEL/200 8547 CONNECT 
> graph.facebook.com:443 [15] - ORIGINAL_DST/31.13.79.246 [16] - 
> 1463478224.432 33 192.168.0.66 TCP_MISS/204 233 GET 
> http://www.gstatic.com/generate_204 - ORIGINAL_DST/216.58.199.131 [17] - 
> 1463478231.727555 192.168.0.66 TAG_NONE/200 0 CONNECT 107.170.47.181:443 
> [2] - HIER_NONE/- - 
> 1463478231.727555 192.168.0.66 TAG_NONE/503 0 CONNECT 
> freevideodownloader.co:443 [3] - HIER_NONE/- - 
> 1463478232.311572 192.168.0.66 TAG_NONE/200 0 CONNECT 107.170.47.181:443 
> [2] - HIER_NONE/- - 
> 1463478232.311571 192.168.0.66 TAG_NONE/503 0 CONNECT 
> freevideodownloader.co:443 [3] - HIER_NONE/- - 
> 1463478246.369  13073 192.168.0.66 TAG_NONE/200 0 CONNECT 74.125.200.189:443 
> [18] - HIER_NONE/- - 
> 1463478246.369  13072 192.168.0.66 TCP_TUNNEL/200 4546 CONNECT 
> 0.client-channel.google.com:443 [19] - ORIGINAL_DST/74.125.200.189 [20] - 
> 1463478246.369  13806 192.168.0.66 TAG_NONE/200 0 CONNECT 216.58.199.142:443 
> [11] - HIER_NONE/- - 
> 1463478246.369  13805 192.168.0.66 TCP_TUNNEL/200 4604 CONNECT 
> clients5.google.com:443 [21] - ORIGINAL_DST/216.58.199.142 [13] - 
> 1463478265.935 119576 192.168.0.66 TAG_NONE/200 0 CONNECT 106.10.199.11:443 
> [22] - HIER_NONE/- - 
> 1463478265.935 119576 192.168.0.66 TCP_TUNNEL/200 8586 CONNECT 
> geo.yahoo.com:443 [23] - ORIGINAL_DST/106.10.199.11 [24] - 
> 1463478327.555 41 192.168.0.66 TCP_MISS/200 2323 GET 
> http://www.gstatic.com/chrome/crlset/3006/crl-set-delta-3005-260733898557562236.crx.data
>  - ORIGINAL_DST/216.58.220.3 [25] text/html 
> 
> On Fri, May 13, 2016 at 4:37 PM, Amos Jeffries  wrote:
> 
>> On 13/05/2016 5:58 p.m., Reet Vyas wrote:
>>> Hi Amos/Yuri,
>>> 
>>> Currently my squid is configured with ssl bump, now I want to use peek and
>>> splice. I read in some forum that we don't need to install certificate on
>>> client's machine.
>>> 
>> 
>> Splice does not require it. But what you want to do w

Re: [squid-users] Squid Peek and splice

2016-05-17 Thread admin
0 177618 CONNECT 
> 216.58.199.129:443 [28] - ORIGINAL_DST/216.58.199.129 [29] - 
> 1463481762.241 276758 192.168.0.11 TCP_TUNNEL/200 1451680 CONNECT 
> 216.58.199.165:443 [24] - ORIGINAL_DST/216.58.199.16 [38] 
> 
> On Tue, May 17, 2016 at 3:33 PM, Reet Vyas  wrote:
> 
> Here is my txt file. As of now it is working, but I am getting "secure 
> connection failed"; I want to know if we can customize the error message, 
> e.g. to "Access Denied". 
> 
> In the logs I am not getting the full URL (PFA logs for same). What do I 
> have to change in the peek-and-splice ssl_bump config to get the full URL? 
> 
> On Tue, May 17, 2016 at 3:21 PM, admin  wrote:
> 
> Please share your blocked_https.txt. 
> 

Re: [squid-users] how to connect machine linux to squid proxy, not in browser?

2016-07-07 Thread admin
It is transparent (intercept) mode

james82 wrote on 2016-07-07 12:26:

> Normally, people connect to a squid proxy with a browser. But I want a method
> that works for the whole computer, like a VPN: connecting a Linux, Windows, or
> Mac machine to the squid proxy installed on it. How do I do that?
> 
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-connect-machine-linux-to-squid-proxy-not-in-browser-tp4678416.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] POST upload splits TCP stream into many small 39-byte packets

2015-10-20 Thread Squid admin

Dear squid team,

first of all thanks for developing such a great product!

Unfortunately, on uploading a big test file (unencrypted POST) to an Apache  
webserver through a squid proxy (v3.5.10 or 4.0.1), the upstream packets  
get sliced into thousands of small 39-byte packets.


Excerpt from cache.log:

2015/10/20 13:51:08.201 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 583:  
asynCall 0x244b670*1
2015/10/20 13:51:08.201 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz 583.
2015/10/20 13:51:08.203 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 16422:  
asynCall 0x2447d40*1
2015/10/20 13:51:08.203 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz  
16422.
2015/10/20 13:51:08.204 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 39:  
asynCall 0x2448ec0*1
2015/10/20 13:51:08.205 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz 39.
2015/10/20 13:51:08.206 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 39:  
asynCall 0x2464bb0*1
2015/10/20 13:51:08.207 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz 39.
2015/10/20 13:51:08.208 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 39:  
asynCall 0x2448ec0*1
2015/10/20 13:51:08.209 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz 39.

...



Attached you can find a tar file containing the squid configuration, the
test network topology, a network trace of the traffic from client to squid,
a network trace from squid to the webserver, and a full debug log from squid.

One incoming packet of ~1500 bytes gets sliced into more than 40 packets.
On the target webserver the squid upstream traffic therefore looks  
like a DoS attack.


The problem can be reproduced using squid 3.5.x and squid 4.0.x (32-bit  
and 64-bit variants).

There were no such problems using squid 3.2.x.

Hopefully you can help me fix this problem, as it is a showstopper  
for upgrading to squid 3.5.x and higher.


Best regards,

Toni



squid_upload_splits_tcp_traffic_into_39byte_packets.tar.gz
Description: application/compressed-tar
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] POST upload splits TCP stream into many small 39-byte packets

2015-10-21 Thread Squid admin
:24.967413 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
90027:91475, ack 1, win 229, options [nop,nop,TS val 104477843 ecr
1398719125], length 1448
11:28:24.967419 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
91475:92923, ack 1, win 229, options [nop,nop,TS val 104477843 ecr
1398719125], length 1448
11:28:24.967421 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
92923:94371, ack 1, win 229, options [nop,nop,TS val 104477843 ecr
1398719125], length 1448
11:28:24.967423 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
94371:95819, ack 1, win 229, options [nop,nop,TS val 104477843 ecr
1398719125], length 1448
11:28:24.967425 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
95819:97267, ack 1, win 229, options [nop,nop,TS val 104477843 ecr
1398719125], length 1448
11:28:24.967788 IP 10.1.1.19.81 > 10.1.1.210.49321: Flags [.], ack 97267,
win 192, options [nop,nop,TS val 1398719125 ecr 104477843], length 0
11:28:24.967812 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
97267:98715, ack 1, win 229, options [nop,nop,TS val 104477843 ecr
1398719125], length 1448
11:28:24.967815 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
98715:100163, ack 1, win 229, options [nop,nop,TS val 104477843 ecr
1398719125], length 1448
11:28:24.967816 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
100163:101611, ack 1, win 229, options [nop,nop,TS val 104477843 ecr
1398719125], length 1448

Today I will test also 3.5.10 with patch.

BR, Toni

Quoting Alex Rousskov:


On 10/20/2015 07:49 AM, Squid admin wrote:


Unfortunately, on uploading a big test file (unencrypted POST) to an Apache
webserver through a squid proxy (v3.5.10 or 4.0.1), the upstream packets
get sliced into thousands of small 39-byte packets.


Does bug 4353 patch help in your case?

http://bugs.squid-cache.org/show_bug.cgi?id=4353
Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] POST upload splits TCP stream into many small 39-byte packets

2015-10-21 Thread Squid admin
349378 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
68127:69536, ack 1, win 229, options [nop,nop,TS val 105105689 ecr  
1399346971], length 1409
12:10:16.349450 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
69536:69575, ack 1, win 229, options [nop,nop,TS val 105105689 ecr  
1399346971], length 39
12:10:16.349472 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
69575:70984, ack 1, win 229, options [nop,nop,TS val 105105689 ecr  
1399346971], length 1409
12:10:16.349540 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
70984:71023, ack 1, win 229, options [nop,nop,TS val 105105689 ecr  
1399346971], length 39
12:10:16.349566 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
71023:72432, ack 1, win 229, options [nop,nop,TS val 105105689 ecr  
1399346971], length 1409
12:10:16.350744 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
72432:72471, ack 1, win 229, options [nop,nop,TS val 105105689 ecr  
1399346971], length 39
12:10:16.350781 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
72471:73880, ack 1, win 229, options [nop,nop,TS val 105105689 ecr  
1399346971], length 1409
12:10:16.350846 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
73880:73919, ack 1, win 229, options [nop,nop,TS val 105105689 ecr  
1399346971], length 39
12:10:16.350870 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
73919:75328, ack 1, win 229, options [nop,nop,TS val 105105689 ecr  
1399346971], length 1409




Quoting Alex Rousskov:


On 10/20/2015 07:49 AM, Squid admin wrote:


Unfortunately, on uploading a big test file (unencrypted POST) to an Apache
webserver through a squid proxy (v3.5.10 or 4.0.1), the upstream packets
get sliced into thousands of small 39-byte packets.


Does bug 4353 patch help in your case?

  http://bugs.squid-cache.org/show_bug.cgi?id=4353

Alex.




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] how to cache youtube videos

2015-11-03 Thread linux admin
Can anyone please tell me how to cache YouTube videos?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] HTTPS Content Filtering without de-crypting traffic?

2016-01-26 Thread Panda Admin
Hello,

I am attempting to terminate HTTPS traffic based on ACLs using ssl_bump,
WITHOUT decrypting the traffic, in intercept/transparent mode. Has anyone
got this to work before? I have copied my configuration and my iptables
NAT rules below.

 I am using squid 3.5.13 with the following compile options:
Squid Cache: Version 3.5.12
Service Name: squid
configure options:  '--prefix=/usr' '--localstatedir=/var'
'--libexecdir=/lib/squid3' '--datadir=/share/squid3' '--sysconfdir=/etc/
squid3' '--with-default-user=proxy' '--with-logdir=/var/log/squid3'
'--with-pidfile=/var/run/squid3.pid' '--with-openssl' '-enable-ssl-crtd'
'--enable-icap-client' '--with-large-files' --enable-ltdl-convenience

squid.conf:
acl social dstdomain .google.com .facebook.com .reddit.com
acl step1 at_step SslBump1
acl step2 at_step SslBump2
ssl_bump stare step2 all
ssl_bump terminate social
acl localnet src 192.168.50.0/24
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access allow all
http_port 3128 transparent
https_port 3129 intercept ssl-bump cert=/etc/squid3/ssl_cert/squidSSL.pem
cache_dir ufs /cache/squid3/spool 100 16 256
access_log syslog:local5.info squid
coredump_dir /var/spool/squid3
url_rewrite_program /usr/bin/squidGuard -c
/cache/config/daemons/squidguard/squidGuard.conf
url_rewrite_children 15
url_rewrite_access allow all
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=1 icap://
127.0.0.1:1344/squidclamav
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=1 icap://
127.0.0.1:1344/squidclamav
adaptation_access service_resp allow all

iptables -L -v -t nat(only relevant rules):
Chain PREROUTING (policy ACCEPT 1083 packets, 233K bytes)
 pkts bytes target prot opt in out source
destination
  157  9420 DNAT   tcp  --  eth1   any anywhere
anywhere tcp dpt:https to:192.168.11.1:3129


Chain PREROUTING-daemon-tcp (1 references)
 pkts bytes target prot opt in out source
destination
  443 26580 DNAT   tcp  --  eth1   any anywhere
anywhere tcp dpt:http /* 7:PFD::CF-3128 */ to:192.168.11.1:3128
0 0 DNAT   tcp  --  eth2   any anywhere
anywhere tcp dpt:http /* 8:PFD::CF-3128 */ to:172.17.0.1:3128


Right now I can't get it to terminate ANY HTTPS traffic; it all just gets
allowed through.
Any and all help would be greatly appreciated!
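For comparison, the peek-then-terminate pattern used elsewhere on this list peeks at step 1 (reading the SNI without decrypting) before terminating, and matches domains with ssl::server_name rather than dstdomain. A rough, untested sketch of that ordering, reusing the domain list above:

```
# Peek at step 1 to learn the SNI, terminate the blocked domains,
# and splice (never decrypt) everything else.
acl social ssl::server_name .google.com .facebook.com .reddit.com
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump terminate social
ssl_bump splice all
```

With stare at step 2 and no splice rule, the terminate line may never get a chance to match, which would be consistent with everything being allowed through.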

~ Extremely Confused Squid User ~
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid -z not exiting?

2016-01-29 Thread Panda Admin
I'm running squid 3.5.13, and running the command 'squid -z' says it creates
the directories but doesn't exit. Ever.

Any idea what's going on with that?
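(For what it's worth, a commonly suggested workaround is to run the cache-dir creation in no-daemon mode so the command returns to the shell once the directories exist; flags as in Squid 3.x, the config path is an assumption:)

```
# -N = no-daemon mode, -z = create missing cache_dir structures
squid -N -z -f /etc/squid/squid.conf
```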

Thanks!
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid Crashing

2016-02-09 Thread Panda Admin
Hello,

I am running squid 3.5.13 and it crashes with these errors:

2016/02/09 15:43:24 kid1| Set Current Directory to /var/spool/squid3
2016/02/09 15:43:24 kid1| Starting Squid Cache version 3.5.13 for
x86_64-pc-linux-gnu...
2016/02/09 15:43:24 kid1| Service Name: squid
2016/02/09 15:43:24 kid1| Process ID 7279
2016/02/09 15:43:24 kid1| Process Roles: worker
2016/02/09 15:43:24 kid1| With 1024 file descriptors available
2016/02/09 15:43:24 kid1| Initializing IP Cache...
2016/02/09 15:43:24 kid1| DNS Socket created at [::], FD 6
2016/02/09 15:43:24 kid1| DNS Socket created at 0.0.0.0, FD 7
2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.78 from /etc/resolv.conf
2016/02/09 15:43:24 kid1| Adding nameserver 10.31.2.79 from /etc/resolv.conf
2016/02/09 15:43:24 kid1| Adding domain nuspire.com from /etc/resolv.conf
2016/02/09 15:43:24 kid1| helperOpenServers: Starting 5/10 'ssl_crtd'
processes
2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
process.
2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
process.
2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
process.
2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
process.
2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
2016/02/09 15:43:24 kid1| WARNING: Cannot run '/lib/squid3/ssl_crtd'
process.
2016/02/09 15:43:24 kid1| helperOpenServers: Starting 0/15 'squidGuard'
processes
2016/02/09 15:43:24 kid1| helperOpenServers: No 'squidGuard' processes
needed.
2016/02/09 15:43:24 kid1| Logfile: opening log syslog:local5.info
2016/02/09 15:43:24 kid1| ipcCreate: fork: (12) Cannot allocate memory
FATAL: Failed to create unlinkd subprocess
Squid Cache (Version 3.5.13): Terminated abnormally.
CPU Usage: 20.041 seconds = 19.115 user + 0.926 sys
Maximum Resident Size: 4019840 KB
Page faults with physical i/o: 0


Anybody have an idea why?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Crashing

2016-02-09 Thread Panda Admin
I see that, but that's not possible. I still have system memory available.
I just did a top while running squid, never went over 30% memory usage.  It
maxed out the CPU but not the memory. So, yeah...still confused.

On Tue, Feb 9, 2016 at 10:55 AM, Kinkie  wrote:

> Hi,
>   it's all in the logs you posted:
>
> ipcCreate: fork: (12) Cannot allocate memory
> WARNING: Cannot run '/lib/squid3/ssl_crtd' process.
> ...
> FATAL: Failed to create unlinkd subprocess
>
> You've run out of system memory during startup.
>
>
> On Tue, Feb 9, 2016 at 4:47 PM, Panda Admin 
> wrote:
> > Hello,
> >
> > I am running squid 3.5.13 and it crashes with these errors:
> >
> > [startup log snipped -- identical to the log in the original message above]
> >
> >
> > Anybody have an idea why?
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> >
>
>
>
> --
> Francesco
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Crashing

2016-02-09 Thread Panda Admin
Adding swap space fixed it for now. I think it's because my ACL files are
so large.

On Tue, Feb 9, 2016 at 11:00 AM, Panda Admin 
wrote:

> I see that, but that's not possible. I still have system memory available.
> I just did a top while running squid, never went over 30% memory usage.
> It maxed out the CPU but not the memory. So, yeah...still confused.
>
> On Tue, Feb 9, 2016 at 10:55 AM, Kinkie  wrote:
>
>> Hi,
>>   it's all in the logs you posted:
>>
>> ipcCreate: fork: (12) Cannot allocate memory
>> WARNING: Cannot run '/lib/squid3/ssl_crtd' process.
>> ...
>> FATAL: Failed to create unlinkd subprocess
>>
>> You've run out of system memory during startup.
>>
>>
>> On Tue, Feb 9, 2016 at 4:47 PM, Panda Admin 
>> wrote:
>> > Hello,
>> >
>> > I am running squid 3.5.13 and it crashes with these errors:
>> >
>> > [startup log snipped -- identical to the log in the original message above]
>> >
>> >
>> > Anybody have an idea why?
>> >
>> > ___
>> > squid-users mailing list
>> > squid-users@lists.squid-cache.org
>> > http://lists.squid-cache.org/listinfo/squid-users
>> >
>>
>>
>>
>> --
>> Francesco
>>
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Crashing

2016-02-09 Thread Panda Admin
The ACL files are up to 16M in size, and the machine has 4G of RAM.
Allocating 8G of swap space for the OS has fixed the crashing. The only
issue now is startup time: Squid takes several minutes to start. Is there
a better solution that I'm missing?

Thanks!
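(For the record, the "out of memory despite free RAM" symptom is consistent with Linux overcommit accounting: fork() of a large worker momentarily needs address space matching the parent's full virtual size, even though the child uses almost none of it. Besides adding swap, relaxing overcommit is sometimes suggested; Linux-specific sysctl, untested in this setup:)

```
# vm.overcommit_memory=1 lets fork() succeed without swap backing the
# parent's whole address space (trade-off: real exhaustion is then caught
# by the OOM killer rather than by a failing fork)
sysctl -w vm.overcommit_memory=1
```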

On Tue, Feb 9, 2016 at 12:42 PM, Amos Jeffries  wrote:

> On 10/02/2016 5:21 a.m., Kinkie wrote:
> > If you are swapping, performance will suffer terribly. How large are
> > these files and how much RAM do you have?
>
>
> NP: fork() which is used by Squid can require virtual memory in large
> amounts. Even though the processes do not actually use that much RAM.
>
> In your particular case with Squid worker using 30% (say 'N') of your
> RAM, the fork() for those 5 ssl_crtd helpers will require Nx5 of virtual
> memory to start, while only using ~4MB of real RAM.
>
> Some OS do it better than others. Some actually allocate swap space for
> all that virtual memory and never use it (yuck).
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Crashing

2016-02-09 Thread Panda Admin
I would love to use another tool, but can yours do SSL bumping, i.e.
filtering of HTTPS traffic, WITHOUT installing a certificate on the client
side? This is the only way I've found to do both HTTPS and HTTP content
filtering with Squid.

Thanks for all advice:)

On Tue, Feb 9, 2016 at 3:50 PM, Rafael Akchurin <
rafael.akchu...@diladele.com> wrote:

> Hello Panda Admin,
>
>
>
> If you do not mind looking at ICAP filtering instead of only URL filtering
> please take a look at our qlproxy (ICAP web filter for Squid).
>
> The Shalla-list category folders can be used as-is as a third-party
> blacklist provider, and I presume they take less time to process on
> startup.
>
>
>
> Please note we currently do not support regexes in the list of domain
> names.
>
>
>
> Best regards,
>
> Rafael
>
>
>
> *From:* squid-users [mailto:squid-users-boun...@lists.squid-cache.org] *On
> Behalf Of *Panda Admin
> *Sent:* Tuesday, February 9, 2016 5:01 PM
> *To:* Kinkie 
> *Cc:* squid-us...@squid-cache.org
> *Subject:* Re: [squid-users] Squid Crashing
>
>
>
> I see that, but that's not possible. I still have system memory available.
>
> I just did a top while running squid, never went over 30% memory usage.
> It maxed out the CPU but not the memory. So, yeah...still confused.
>
>
>
> On Tue, Feb 9, 2016 at 10:55 AM, Kinkie  wrote:
>
> Hi,
>   it's all in the logs you posted:
>
> ipcCreate: fork: (12) Cannot allocate memory
> WARNING: Cannot run '/lib/squid3/ssl_crtd' process.
> ...
> FATAL: Failed to create unlinkd subprocess
>
> You've run out of system memory during startup.
>
>
>
> On Tue, Feb 9, 2016 at 4:47 PM, Panda Admin 
> wrote:
> > Hello,
> >
> > I am running squid 3.5.13 and it crashes with these errors:
> >
> > [startup log snipped -- identical to the log in the original message above]
> >
> >
> > Anybody have an idea why?
> >
>
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> >
>
>
>
> --
> Francesco
>
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Filtering HTTPS URLs

2016-02-11 Thread Panda Admin
Try adding
acl step1 at_step SslBump1
ssl_bump peek step1 bump_sites

This worked for me.  Just a suggestion:)
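Spelled out against the config in the original post, the peek-then-bump rules would look roughly like this (sketch only; bump_sites is the ACL already defined there):

```
acl bump_sites ssl::server_name .reddit.com
acl step1 at_step SslBump1

ssl_bump peek step1        # step 1: read the client SNI first
ssl_bump bump bump_sites   # decrypt only the sites to be filtered
ssl_bump splice all        # tunnel everything else untouched
```

Peeking first gives Squid the server name from the ClientHello, so the bump/splice decision is made per-site instead of falling back to client-first bumping.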


On Thu, Feb 11, 2016 at 3:59 AM, Amos Jeffries  wrote:

> On 11/02/2016 1:05 p.m., Victor Hugo wrote:
> > Hi,
> >
> > I was wondering if it is possible to filter HTTPS URLs using squid (for
> > example to blacklist reddit.com but allow https://www.reddit.com/r/news/
> )?
> >
> > I thought this may be possible using ssl_bump and url_regex. I have been
> > trying this using squid 3.5.13 but with no success.
> >
> > Here is the squid configuration that I have tried but doesn't seem to
> work
> > (it works for http sites though):
> >
>
> [config snipped]
> >
> > acl whitelist-regex url_regex -i reddit.com/r/news
> > http_port 3129 ssl-bump
> cert=/opt/squid-3.5.13/etc/squid3/ssl_cert/myCA.pem
> > generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> > acl bump_sites ssl::server_name .reddit.com
> > ssl_bump bump bump_sites
> > ssl_bump splice !bump_sites
> > http_access allow whitelist-regex
> > http_access allow localhost
> > http_access deny all
>
> > Relevant access.log output (IP addresses redacted to x.x.x.x):
> > 1455145755.589  0 x.x.x.x TCP_DENIED/200 0 CONNECT
> www.reddit.com:443 -
> > HIER_NONE/- -
>
> So this is the bump happening, as you wanted.
>
> > 1455145755.669  0 x.x.x.x TAG_NONE/403 4011 GET
> > https://www.reddit.com/r/news - HIER_NONE/- text/html
>
> And something else has answered the request with 403 (Forbidden). Your
> ACL and http_access config look fine, so I don't think it's that.
>
>
> The first oddity is that your ssl_bump rules are doing bump without
> having fetched the clientHello details yet. So this is a "client-first"
> bumping situation in which Squid first negotiates TLS / HTTPS with the
> client, then completely separately negotiates TLS/HTTPS with the server.
>  - any errors in the server TLS might result in something like this 403
> (though it should be a 5xx status, it may not always be).
>  - the sslproxy_* settings are entirely what controls the server
> connection TLS.
>
>
> Second oddity is that its saying DENIED/200. 200 is 'allowed' in CONNECT
> actions. This could be a logging bug, or a sign of something going wrong
> in the bumping stage that alters the CONNECT logging as well.
>
>
> Are you able to experiment with the Squid-4.0.5 release? There are
> some bumping bug fixes that are only in that release series.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users