[squid-users] reverse proxy Squid 4

2020-06-24 Thread Vieri
Hi,

Today I just migrated from Squid 3 to Squid 4, and I found that a reverse proxy 
that was working fine before is now failing. The client browser sees this 
message:

[No Error] (TLS code: SQUID_ERR_SSL_HANDSHAKE)
Handshake with SSL server failed: [No Error]

This is how I configured the backend:

cache_peer 10.215.144.16 parent 443 0 no-query originserver login=PASS ssl sslcert=/etc/ssl/MY-CA/certs/W1_cert.cer sslkey=/etc/ssl/MY-CA/certs/W1_key_nopassphrase.pem sslcafile=/etc/ssl/MY-CA/cacert.pem ssloptions=NO_SSLv3,NO_SSLv2,NO_TLSv1_2,NO_TLSv1_1 sslflags=DONT_VERIFY_PEER front-end-https=on name=MyServer

The NO_TLSv* options are because the backend server is an old Windows 2003 
(which hasn't changed either).

How can I debug this?

Vieri


[squid-users] reverse proxy Squid 4

2020-06-24 Thread Vieri
This is what the squid cache log reports:

2020/06/25 00:29:05.467 kid1| 83,5| NegotiationHistory.cc(81) 
retrieveNegotiatedInfo: SSL connection info on FD 15 SSL version NONE/0.0 
negotiated cipher
2020/06/25 00:29:05.467 kid1| ERROR: negotiating TLS on FD 15: 
error::lib(0):func(0):reason(0) (5/-1/0)
2020/06/25 00:29:05.467 kid1| 83,5| BlindPeerConnector.cc(68) 
noteNegotiationDone: error=0x55cf5c9bb5b8
2020/06/25 00:29:05.467 kid1| TCP connection to 10.215.144.16/443 failed

Same old issue where openssl does not say why the handshake failed.

I'm having the same problem with an Apache reverse proxy, so for now I'm falling 
back to plain HTTP on the backend.

Thanks


Re: [squid-users] reverse proxy Squid 4

2020-06-25 Thread Vieri


On Thursday, June 25, 2020, 10:32:46 AM GMT+2, Amos Jeffries wrote:

>
>  tls-options=NO_SSLv3,NO_TLSv1_3 tls-min-version=1.0
>
>  tls_options=NO_SSLv3,NO_TLSv1_1,NO_TLSv1_2,NO_TLSv1_3
>
> removing the "sslflags=DONT_VERIFY_PEER"
>
> Then reduce the ssloptions= as much as you can. Remove if possible. 

Tried all of that, but still just getting this in the log:

kid1| 83,5| NegotiationHistory.cc(81) retrieveNegotiatedInfo: SSL connection 
info on FD 13 SSL version NONE/0.0 negotiated cipher
kid1| ERROR: negotiating TLS on FD 13: error::lib(0):func(0):reason(0) 
(5/-1/0)

> A packet trace of what is being attempted will be useful then.

Will try to save one.
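
Probably with something along these lines, capturing the Squid-to-peer leg (the interface name is a guess):

tcpdump -i eth0 -s 0 -w backend.pcap host 10.215.144.16 and port 443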

Thanks,

Vieri


[squid-users] Squid 4 and on_unsupported_protocol

2020-06-29 Thread Vieri
Hi,

I'd like to allow WhatsApp Web through a transparent tproxy sslbump Squid setup.

The target site is not loading:

wss://web.whatsapp.com/ws

I get TCP_MISS/400 305 GET https://web.whatsapp.com/ws in the Squid access log.

I'm not sure I know how to use the on_unsupported_protocol directive.

I have this in my config file:

acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ERR_TOO_BIG
acl serverTalksFirstProtocol squid_error ERR_REQUEST_START_TIMEOUT
on_unsupported_protocol tunnel foreignProtocol
on_unsupported_protocol tunnel serverTalksFirstProtocol
on_unsupported_protocol respond all

How can I change this to allow websockets through Squid, but preferably only 
for a specific SRC IP addr. acl?

Regards,

Vieri


Re: [squid-users] Squid 4 and on_unsupported_protocol

2020-06-29 Thread Vieri


On Monday, June 29, 2020, 6:41:41 PM GMT+2, Eliezer Croitoru wrote:
>
>
> I believe what you are looking for is at:
> https://wiki.squid-cache.org/ConfigExamples/Chat/Whatsapp
 
Thanks, but the article doesn't work for me.
I still see Firefox complaining (console) about not being able to connect to 
wss://web.whatsapp.com/ws.

Vieri


Re: [squid-users] Squid 4 and on_unsupported_protocol

2020-06-30 Thread Vieri
http_access allow limited_requested_mimetypes_1 privileged_extra1_src_ips limited_dst_domains_1
http_reply_access allow limited_replied_mimetypes_1 privileged_extra1_src_ips limited_dst_domains_1
http_access deny restricted_requested_mimetypes_1
http_reply_access deny restricted_replied_mimetypes_1
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes restricted_replied_mimetypes_1
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes restricted_requested_mimetypes_1
http_access deny limited_requested_mimetypes_1
http_reply_access deny limited_replied_mimetypes_1
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes limited_requested_mimetypes_1
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes limited_replied_mimetypes_1
http_access deny !privileged_src_ips bad_dst_domains
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_dst_domains bad_dst_domains
http_access deny bad_dst_ccn_domains
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_dst_ccn bad_dst_ccn_domains
http_access deny bad_dst_ccn_ips
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_dst_ccn bad_dst_ccn_ips
http_access allow privileged_extra1_src_ips limited_dst_domains_1
http_access deny limited_dst_domains_1
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=limited_dst_domains_1 limited_dst_domains_1
http_access deny bad_filetypes !good_dst_domains_with_any_filetype
http_reply_access deny bad_filetypes !good_dst_domains_with_any_filetype
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_filetypes bad_filetypes
http_access deny bad_requested_mimetypes !good_dst_domains_with_any_mimetype
http_reply_access deny bad_replied_mimetypes !good_dst_domains_with_any_mimetype
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes bad_requested_mimetypes
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_mimetypes bad_replied_mimetypes
http_access allow localnet bl_lookup
deny_info http://fwprox.domain.org/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=bad_dst_domains_bl all
debug_options rotate=1 ALL,1
append_domain .domain.org
reply_header_access Alternate-Protocol deny all
acl DiscoverSNIHost at_step SslBump1
acl NoSSLIntercept ssl::server_name_regex "/SAMBA/proxy-settings/allowed.direct"
ssl_bump peek DiscoverSNIHost
ssl_bump splice NoSSLIntercept
ssl_bump bump all
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service antivirus respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access antivirus allow all
include /etc/squid/squid.include.common
include /etc/squid/squid.include.hide
cache_mem 32 MB
max_filedescriptors 65536
icap_service_failure_limit -1
icap_persistent_connections off


Regards,

Vieri


Re: [squid-users] Squid 4 and on_unsupported_protocol

2020-06-30 Thread Vieri
On Tuesday, June 30, 2020, 1:41:57 PM GMT+2, Eliezer Croitoru wrote:

> ^(w[0-9]+|[a-z]+\.)?web\.whatsapp\.com$

Yes, it does. I should have seen that... Thanks for your help!

Vieri


[squid-users] Cannot access web servers with a specific browser

2020-09-14 Thread Vieri
Hi,

Before digging into the whole squid configuration, I'd like to know what the 
following line means:

NONE_ABORTED/200 0 CONNECT 216.58.211.36:443 - HIER_NONE/- -

I get this when trying to access a web page with a specific browser (Google 
Chrome).

However, from the exact same client host, any other browser works fine (IE, 
Firefox), and I get this in the access log:

NONE/200 0 CONNECT 216.58.211.36:443 - ORIGINAL_DST/216.58.211.36 -

along with many other log messages that follow.

So what does NONE_ABORTED mean and what should I search for to fix this so the 
client can use Chrome?

Thanks,

Vieri



Re: [squid-users] Cannot access web servers with a specific browser

2020-09-14 Thread Vieri

On Monday, September 14, 2020, 4:00:30 PM GMT+2, Walter H. wrote:


>> So what does NONE_ABORTED mean and what should I search for to fix this so 
>> the client can use Chrome?
>>
> What about Microsoft Edge?

The client is Windows 7, so no Edge.
So I got hold of a Windows 10 client and tried Edge there. I got the same 
NONE_ABORTED issue, while every other non-Chromium browser works fine.

> as I see you don't do SSL-bump,

I am. I could send the whole config here. I also set up an explicit proxy, but 
it seems I'm having issues with kerberos. As a side question, how can one test 
negotiate_kerberos_auth on the command line? I run:
# /usr/libexec/squid/negotiate_kerberos_auth -s HTTP/fqdn@DOMAIN
WRITE_SOMETHING
BH Invalid request

What is the format/syntax of WRITE_SOMETHING?

I'd like to try the explicit proxy instead of ssl-bump to see if there's a 
difference.
Still, the Firefox and Chrome clients are in the same conditions and only one 
is failing.

> could it be that the clients (Chrome) capability of useable ciphersuites 
> may not confirm to the ones offered by the server; the reason for 
> 'NONE_ABORTED'?

If I let the clients bypass the Squid proxy and connect directly to the 
servers, the web pages are properly accessed -- no issues.

Thanks,

Vieri


Re: [squid-users] Cannot access web servers with a specific browser

2020-09-14 Thread Vieri

On Monday, September 14, 2020, 6:01:43 PM GMT+2, Alex Rousskov wrote:


>> I get this when trying to access a web page with a specific browser (Google 
>> Chrome).
>
> What is your Squid version? Does it have a fix for GREASE support as
> detailed in https://github.com/squid-cache/squid/pull/663 ?

I have squid-4.12.



Re: [squid-users] Cannot access web servers with a specific browser

2020-09-15 Thread Vieri

On Monday, September 14, 2020, 9:22:52 PM GMT+2, Alex Rousskov wrote:


>> I have squid-4.12.
>
> .. which means that the answer to my second question is "no". You need
> to upgrade to Squid v4.13 (for several reasons).

As simple as that.
Thank you very much. I can confirm that fixed the issue.

Vieri


[squid-users] acl for urls without regex

2020-09-29 Thread Vieri
Hi,

Is it possible to create an ACL from a text file containing URLs without 
treating them as regular expressions?
Otherwise, I get errors of this kind:

 ERROR: invalid regular expression: 
'https://whatever.net/auth_hotmail/?user={email}&email={email}': Invalid 
content of \{\}

Regards,

Vieri


[squid-users] ACL matches when it shouldn't

2020-09-29 Thread Vieri
Hi,

I have a url_regex ACL loaded with this file:

https://drive.google.com/file/d/1C5aZqPfMD3qlVP8zvm67c9ZnXUfz-cEW/view?usp=sharing

Then I have an access denial like so:

http_access deny bad_dst_urls

The problem is that I am not expecting to block, e.g., https://www.google.com, 
but I am.
I know it's this ACL because if I remove the http_access deny line above, the 
browser can access the site just fine.

I've been looking through this file for possible matches for google.com, but 
there shouldn't be any.

Can anyone please let me know if there's a match, or how to enable debugging 
to see which record in this ACL is actually triggering the denial?

I'm trying with:
debug_options rotate=1 ALL,1 85,2 88,2

Then I grep the log for bad_dst_urls and DENIED, but I can't seem to find a 
clear match.

Regards,

Vieri


[squid-users] ACL matches when it shouldn't

2020-09-29 Thread Vieri
> None of the file entries are anchored regex. So any one of them could match.

>> Can anyone please let me know if there's a match, or how to enable debugging 
>>  to see which record in this ACL is actually triggering the denial?
>
> To do that we will need to see the complete and exact URL which is being 
> blocked incorrectly.

One of them is https://www.google.com/.

> NP: a large number of that files entries can be far more efficiently blocked 
> using the dstdomain ACL type. For example:
>
>  acl blacklist dstdomain .appspot.com

Agreed. However, this file is generated by an external process I don't control 
(SOC). It's like a "threat feed" I need to load in Squid.
The easiest way for me would be to tell Squid that it's just a list of exact 
URLs, not a list of regexps. I understand that's not possible.

This list comes with entries such as:

https://domain.org/?something={whatever}&other=(this)

So, if I don't want Squid to complain, I preprocess the list a little before 
feeding it to Squid, and the above line becomes:

https://domain.org/\?something=\{whatever}&other=\(this)

You mention anchoring them... So now I adjusted the processing and the above 
becomes:

^https://domain.org/\?something=\{whatever}&other=\(this)$
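
(For reference, the preprocessing is roughly this sed pass -- a sketch with placeholder file names, not my exact script:)

# escape the regex metacharacters Squid chokes on, then anchor each entry
sed -e 's/[?{(]/\\&/g' -e 's/.*/^&$/' raw_urls.txt > anchored_urls.txt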

I'm still getting the same denial when a client tries to access 
https://www.google.com/.

This is what I can see in cache.log:

client_side_request.cc(751) clientAccessCheckDone: The request GET 
https://www.google.com/ is DENIED; last ACL checked: bad_dst_urls

I'm also seeing other denials such as:

 client_side_request.cc(751) clientAccessCheckDone: The request GET 
http://www.microsoft.com/pki/certs/MicRooCerAut2011_2011_03_22.crt is DENIED; 
last ACL checked: bad_dst_urls

If I grep http://www.microsoft.com/pki/certs in the ACL file I get no results 
at all.
That's why I'm puzzled.

So here's the new anchored regex file in case you have the chance to test it 
and reproduce the issue:

https://drive.google.com/file/d/1ZUP9eRAqLzMG162xHfYRV9vx_47kWuXs/view?usp=sharing

Squid doesn't complain about syntax errors so I'm assuming the ACL is as 
expected.

Thanks,

Vieri


[squid-users] ACL matches when it shouldn't

2020-10-01 Thread Vieri
Thank you very much.
I will try to set up an external ACL so I don't have to worry about regular 
expressions.

Vieri


[squid-users] ACL matches when it shouldn't

2020-10-02 Thread Vieri

Regarding the use of an external ACL, I quickly implemented a Perl script that 
"does the job", but it seems to be somewhat sluggish.

This is how it's configured in squid.conf:
external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 children-startup=10 children-idle=3 concurrency=8 %PROTO %DST %PORT %PATH /opt/custom/scripts/squid/ext_txt_blwl_acl.pl --categories=adv,aggressive,alcohol,anonvpn,automobile_bikes,automobile_boats,automobile_cars,automobile_planes,chat,costtraps,dating,drugs,dynamic,finance_insurance,finance_moneylending,finance_other,finance_realestate,finance_trading,fortunetelling,forum,gamble,hacking,hobby_cooking,hobby_games-misc,hobby_games-online,hobby_gardening,hobby_pets,homestyle,ibs,imagehosting,isp,jobsearch,military,models,movies,music,podcasts,politics,porn,radiotv,recreation_humor,recreation_martialarts,recreation_restaurants,recreation_sports,recreation_travel,recreation_wellness,redirector,religion,remotecontrol,ringtones,science_astronomy,science_chemistry,sex_education,sex_lingerie,shopping,socialnet,spyware,tracker,updatesites,urlshortener,violence,warez,weapons,webphone,webradio,webtv
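
With concurrency=8, Squid prepends a channel ID to each request line, and the helper must echo that ID back in its reply. An exchange looks roughly like this (values made up):

0 https www.example.com 443 /some/path
0 ERR message="URL www.example.com in BL porn (line 123)"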

I'd like to avoid the use of a DB if possible, but maybe someone here has an 
idea to share on flat file text searches.

Currently the dir structure of my blacklists is:

topdir
category1 ... categoryN
domains urls

So basically one example file to search in is topdir/category8/urls, etc.

The helper Perl script contains this code to decide whether to block access or 
not:

foreach( @categories )
{
    # Exact match of host+path against this category's "urls" list.
    chomp($s_urls = qx{grep -nwx '$uri_dst$uri_path' $cats_where/$_/urls | head -n 1 | cut -f1 -d:});

    if (length($s_urls) > 0) {
        if ($whitelist == 0) {
            $status = $cid." ERR message=\"URL ".$uri_dst." in BL ".$_." (line ".$s_urls.")\"";
        } else {
            $status = $cid." ERR message=\"URL ".$uri_dst." not in WL ".$_." (line ".$s_urls.")\"";
        }
        next;
    }

    # Fall back to an exact match of the host against the "domains" list.
    chomp($s_urls = qx{grep -nwx '$uri_dst' $cats_where/$_/domains | head -n 1 | cut -f1 -d:});

    if (length($s_urls) > 0) {
        if ($whitelist == 0) {
            $status = $cid." ERR message=\"Domain ".$uri_dst." in BL ".$_." (line ".$s_urls.")\"";
        } else {
            $status = $cid." ERR message=\"Domain ".$uri_dst." not in WL ".$_." (line ".$s_urls.")\"";
        }
        next;
    }
}

There are currently 66 "categories" with around 50MB of text data in all.
So that's a lot to go through each time there's an HTTP request.
Apart from placing these blacklists on a ramdisk (they are currently on an M.2 
SSD, so I'm not sure I'd notice anything), what else can I try?
Should I reindex the lists and group them all alphabetically?
For instance should I process the lists in order to generate a dir structure as 
follows?

topdir
a b c d e f ... x y z 0 1 2 3 ... 7 8 9
domains urls

An example for a client requesting https://www.google.com/ would lead to 
searching only 2 files:
topdir/w/domains
topdir/w/urls

An example for a client requesting https://01.whatever.com/x would also lead to 
searching only 2 files:
topdir/0/domains
topdir/0/urls

An example for a client requesting https://8.8.8.8/xyz would also lead to 
searching only 2 files:
topdir/8/domains
topdir/8/urls
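
A rough sketch of a preprocessor for that layout (the paths are placeholders, not my real ones):

#!/usr/bin/perl
# Split each category's "domains" and "urls" lists into
# per-first-character buckets (topdir/a/domains, topdir/0/urls, ...).
use strict;
use warnings;
use File::Path qw(make_path);

my $src = '/opt/blacklists/topdir';   # existing per-category layout
my $dst = '/opt/blacklists/bychar';   # new per-first-character layout

for my $kind ('domains', 'urls') {
    my %out;    # bucket character -> open output filehandle
    for my $file (glob "$src/*/$kind") {
        open my $in, '<', $file or die "$file: $!";
        while (my $line = <$in>) {
            # Bucket on the first character of the host part; URL lists
            # start with a scheme, so strip it before picking the key.
            (my $key = $line) =~ s{^https?://}{};
            $key = lc substr($key, 0, 1);
            $key = '_' unless $key =~ /^[a-z0-9]$/;
            if (!$out{$key}) {
                make_path("$dst/$key");
                open $out{$key}, '>>', "$dst/$key/$kind" or die "$dst/$key/$kind: $!";
            }
            print { $out{$key} } $line;
        }
        close $in;
    }
    close $_ for values %out;
}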

Any ideas or links to scripts that already prepare lists for this?

Thanks,

Vieri


[squid-users] websockets through Squid

2020-10-07 Thread Vieri
Hi,

I'd like to allow websockets from specific domains through Squid in intercept 
sslbump mode.

One of the clients reports:

Firefox can’t establish a connection to the server at
wss://ed1lncb62202.webex.com/direct?type=websocket&dtype=binary&rand=1602057495268&uuidtag=C99EG7B6-G550-43CG-AD72-7EA5F2CA80B0&gatewayip=X.X.X.X.

This is what I have in my squid configuration:

acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ERR_TOO_BIG
acl serverTalksFirstProtocol squid_error ERR_REQUEST_START_TIMEOUT
on_unsupported_protocol tunnel foreignProtocol
on_unsupported_protocol tunnel serverTalksFirstProtocol
on_unsupported_protocol respond all

I am obviously not using on_unsupported_protocol properly.

Any suggestions?

Regards,

Vieri



[squid-users] websockets through Squid

2020-10-07 Thread Vieri
I also tried:

on_unsupported_protocol tunnel all

on Squid v. 4.13.

I don't see any denials in the access log.
The only thing I see regarding the URL I mentioned earlier is:

TCP_MISS/200 673 GET https://ed1lncb62202.webex.com/direct? - 
ORIGINAL_DST/62.109.225.31 text/html

It is easy to reproduce by going to the webex test site:

https://www.webex.com/test-meeting.html


[squid-users] websockets through Squid

2020-10-07 Thread Vieri
Hi,

Using Google Chrome instead of Firefox gives me the same result:

Error during WebSocket handshake: Unexpected response code: 200

I'm not sure what to look for in cache.log.


[squid-users] websockets through Squid

2020-10-07 Thread Vieri
> To allow WebSocket tunnels, you need http_upgrade_request_protocols available 
> since v5.0.4

Thanks for the info.
My distro does not include v. 5 yet as it's still beta, although I could try 
compiling it.

Just a thought, though: what would be the easiest way to allow websockets 
through in v4? That is, for trusted domains, maybe allow a direct connection?

eg. 
acl direct_dst_domains dstdomain "/opt/custom/proxy-settings/allowed.direct"
# or:
# acl direct_dst_domains ssl::server_name_regex "/opt/custom/proxy-settings/allowed.direct"
always_direct allow direct_dst_domains

Thanks

Vieri


[squid-users] websockets through Squid

2020-10-08 Thread Vieri
OK, so I'm now trying to compile Squid 5 instead of backporting to V 4, but I'm 
getting this silly error:

cp ../../src/tests/stub_fd.cc tests/stub_fd.cc
cp: cannot create regular file 'tests/stub_fd.cc': No such file or directory
make[3]: *** [Makefile:1452: tests/stub_fd.cc] Error 1

I guess it may be because the script is not in the right subdir.

Is this a known issue?
Can I simply disable building the tests?


[squid-users] websockets through Squid

2020-10-08 Thread Vieri
> As a workaround, try sequential build ("make" instead of "make -j...")

I removed -j, but I'm still getting a similar error:

cp ../../src/tests/stub_fd.cc tests/stub_fd.cc
cp: cannot create regular file 'tests/stub_fd.cc': No such file or directory
make[3]: *** [Makefile:1402: tests/stub_fd.cc] Error 1
make[3]: Leaving directory 
'/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4-20200825-rf4ade365f/src/icmp'
make[2]: *** [Makefile:6667: all-recursive] Error 1
make[2]: Leaving directory 
'/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4-20200825-rf4ade365f/src'
make[1]: *** [Makefile:5662: all] Error 2
make[1]: Leaving directory 
'/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4-20200825-rf4ade365f/src'
make: *** [Makefile:591: all-recursive] Error 1

Thanks for the suggestion. I'll try a few other things. Which version of 
automake do you use?


Re: [squid-users] websockets through Squid

2020-10-10 Thread Vieri
On Friday, October 9, 2020, 3:28:01 AM GMT+2, Amos Jeffries wrote:

 > I advise explicitly using -j1 for the workaround build.


Well, I'm running with -j1, but I'm still getting the same error message.

Here's a snippet of the build log:

make -j1
Making all in compat
make[1]: Entering directory 
'/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4-20200825-rf4ade365f/compat'
/bin/sh ../libtool  --tag=CXX   --mode=compile x86_64-pc-linux-gnu-g++ 
-DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\" 
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\" 
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib -I../src 
-I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow 
-Woverloaded-virtual -pipe -D_REENTRANT -O2 -pipe -c -o assert.lo assert.cc

It finally ends with:

cp ../../src/tests/stub_fd.cc tests/stub_fd.cc
cp: cannot create regular file 'tests/stub_fd.cc': No such file or directory

Would you like to review the full build log?

Regards,

Vieri


[squid-users] websockets through Squid

2020-10-10 Thread Vieri
I'm also getting this other file that can't be copied:

cp ../../src/tests/stub_debug.cc tests/stub_debug.cc
cp: cannot create regular file 'tests/stub_debug.cc': No such file or directory
make[3]: *** [Makefile:1490: tests/stub_debug.cc] Error 1

Tried "make" and "make -j1", but the error message is the same.

Are you using a specific version of automake?




Re: [squid-users] websockets through Squid

2020-10-11 Thread Vieri

Just a quick test and question.

If I manually create the tests subdirs and run make then I get an error such as:

/bin/sh ../../libtool  --tag=CXX   --mode=link x86_64-pc-linux-gnu-g++ -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Woverloaded-virtual -pipe 
-D_REENTRANT -O2 -pipe  -Wl,-O1 -Wl,--as-needed -o libdiskio.la  
DiskIOModule.lo ReadRequest.lo WriteRequest.lo libtests.la AIO/libAIO.la -lrt 
Blocking/libBlocking.la DiskDaemon/libDiskDaemon.la 
DiskThreads/libDiskThreads.la -lpthread IpcIo/libIpcIo.la Mmapped/libMmapped.la
libtool:   error: cannot find the library 'libtests.la' or unhandled argument 
'libtests.la'
make[4]: *** [Makefile:868: libdiskio.la] Error 1
make[4]: Leaving directory 
'/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4/src/DiskIO'


This may be a dumb question, but where are the build instructions for 
libtests.la?


[squid-users] websockets through Squid

2020-10-12 Thread Vieri
I'm compiling on a Gentoo Linux system the tarball taken from 
http://www.squid-cache.org/Versions/v5/squid-5.0.4.tar.gz.

The build log (failed) is here (notice the call to make -j1):

https://drive.google.com/file/d/1no0uV3Ti1ILZavAaiOyFIY9W0eLRv87q/view?usp=sharing

If I build from git f4ade36 all's well:

https://drive.google.com/file/d/1y-3wlDT_OrwSp7epvDq63xpkYv8gu9Pq/view?usp=sharing

So now I'm just going to have to spot the difference.

Thanks,

Vieri


Re: [squid-users] websockets through Squid

2020-10-13 Thread Vieri

On Tuesday, October 13, 2020, 3:55:56 PM GMT+2, Alex Rousskov wrote:

> The beginning of the above log appears to show some unofficial bootstrapping 
> steps.


Yes, I was looking into this today, and I saw that the actual difference between 
a manual build and a Gentoo Linux build comes down to the following:

1) the build fails as mentioned earlier in this thread when running 
Gentoo-specific "configure" scripts. Bootstrapping makes no real difference.

econf: updating squid-5.0.4-20200825-rf4ade365f/cfgaux/config.sub with 
/usr/share/gnuconfig/config.sub
econf: updating squid-5.0.4-20200825-rf4ade365f/cfgaux/config.guess with 
/usr/share/gnuconfig/config.guess
./configure --prefix=/usr --build=x86_64-pc-linux-gnu 
--host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info 
--datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib 
--disable-dependency-tracking --disable-silent-rules 
--docdir=/usr/share/doc/squid-5.0.4 --htmldir=/usr/share/doc/squid-5.0.4/html 
--with-sysroot=/ --libdir=/usr/lib64

Correct me if I'm wrong, but I don't see anything wrong with the third line and 
the parameters passed to configure (unless --disable-dependency-tracking could 
have some side-effects).
So I guess the problem might be with the first and second lines where some 
config scripts seem to be replaced.
The timestamps in /usr/share/gnuconfig/config.{sub,guess} are more recent than 
the ones distributed in the Squid tarball.

2) the build succeeds even when using the Gentoo build environment just as long 
as I do not run the Gentoo-specific econf (configure) script but "./configure" 
instead.

I guess I will need to bring this up on the Gentoo forum to see what's going 
on. I am not instructing the build system to "patch" cfgaux so I guess "econf" 
automatically detects something in the squid tarball that makes it patch the 
config.* files.

Thanks for your time.

Vieri


Re: [squid-users] websockets through Squid

2020-10-15 Thread Vieri
On Tuesday, October 13, 2020, 6:14:18 PM GMT+2, Alex Rousskov wrote:

> You should probably follow up with Gentoo folks responsible for this Squid 
> customization.

Squid 5 now builds and installs perfectly on Gentoo Linux with a few custom 
changes to the distro's package installation script. I hope the devs will 
include these changes so Squid 5 can be readily available to everyone.
BTW it builds fine in parallel with -jN where N > 1, so no issues there either.

So, coming back to the original post: websockets.

I added this to Squid 5:

http_upgrade_request_protocols OTHER allow all

Am I right in stating that this should allow any protocol, not just WebSockets?
In other words, I do not need to be specific with 
'http_upgrade_request_protocols WebSocket allow all' unless I want to, right?

Unfortunately, after reloading Squid 5 the client browser still states the same:

The connection to 
wss://ed1lncb65702.webex.com/direct?type=websocket&dtype=binary&rand=1602769907574&uuidtag=9E73C14G-1580-43B4-B8D4-91453FCF1939&gatewayip=MY_IP_ADDR
 was interrupted while the page was loading.

And in access.log I can see this:

[Thu Oct 15 15:52:27 2020].411  29846 10.215.144.48 TCP_TUNNEL/101 0 GET 
https://ed1lncb65702.webex.com/direct? - ORIGINAL_DST/62.109.225.174 -
[Thu Oct 15 15:52:27 2020].831    125 10.215.144.48 NONE_NONE/000 0 CONNECT 
62.109.225.174:443 - ORIGINAL_DST/62.109.225.174 -
[Thu Oct 15 15:52:28 2020].786 11 10.215.111.210 NONE_NONE_ABORTED/000 0 
CONNECT 44.233.111.149:443 - HIER_NONE/- -
[Thu Oct 15 15:52:37 2020].414  29870 10.215.144.48 TCP_TUNNEL/101 0 GET 
https://ed1lncb65702.webex.com/direct? - ORIGINAL_DST/62.109.225.174 -
[Thu Oct 15 15:52:37 2020].919    107 10.215.144.48 NONE_NONE/000 0 CONNECT 
62.109.225.174:443 - ORIGINAL_DST/62.109.225.174 -

What does NONE_NONE/000 mean?

Where can I go from here?
What can I try to debug this further?

Vieri


Re: [squid-users] websockets through Squid

2020-10-16 Thread Vieri

On Thursday, October 15, 2020, 5:28:03 PM GMT+2, Alex Rousskov wrote:

>> In other words, I do not need to be specific with
>> 'http_upgrade_request_protocols WebSocket allow all' unless I want
>> to, right?
>
> Just in case somebody else starts copy-pasting the above rule into their
> configurations: The standard (RFC 6455) WebSocket protocol name in HTTP
> Upgrade requests is "websocket". Squid uses case-sensitive comparison
> for those names so you should use "websocket" in squid.conf.

OK, good to know because:

squid-5.0.4-20200825-rf4ade365f/src/cf.data.pre contains:
    Usage: http_upgrade_request_protocols  allow|deny [!]acl ...

    The required "protocol" parameter is either an all-caps word OTHER or an
    explicit protocol name (e.g. "WebSocket") optionally followed by a slash
    and a version token (e.g. "HTTP/3"). Explicit protocol names and
    versions are case sensitive.

That's why I used "WebSocket" instead of "websocket" in my example. To avoid 
confusion, cf.data.pre could be updated to be clearer.
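
So the rule I actually want is simply:

http_upgrade_request_protocols websocket allow all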


> The important part here is the existence of those extra transactions.
> They may be related to SslBump if you are bumbing this traffic, but then
> I would expect a slightly different access.log composition.

Hmm, I'm supposed to be sslbumping, yes. I can share my full squid config & 
iptables redirection entries if you wish.

> https://wiki.squid-cache.org/SquidFaq/BugReporting#Debugging_a_single_transaction

 I enabled debugging on a test system where I was the only client (one Firefox 
instance).

The access log is here:

https://drive.google.com/file/d/1jryX5BW4yxLTSBe0QDavPSiKLBpOvtnV/view?usp=sharing

The only odd thing I see is a few ABORTED entries, but they are all WOFF fonts, 
which should be unimportant, except for 
https://join-test.webex.com/mw3300/mywebex/header.do which is only a TCP 
refresh "abort".

The overwhelming cache log is here (I've sed'ed a few strings for privacy 
reasons):

https://drive.google.com/file/d/1QYRr-0F-DGnCZtyuuAw8RsEgcHICN_0c/view?usp=sharing

I can see the upgrade messages are parsed:

HttpHeader.cc(1548) parse: parsed HttpHeaderEntry: 'Upgrade: WebSocket'

I suppose that adding the "Upgrade[66]" entry is as expected.

Then I get lost. I can see that Squid is trying to open ed1lncb62801.webex.com 
with https, but it is unclear to me why the client complains that the 
connection to the wss:// site is being interrupted:

The connection to 
wss://ed1lncb62801.webex.com/direct?type=websocket&dtype=binary&rand=1602830016480&uuidtag=5659FGE6-DF29-47A7-859A-G4D5FDC937A2&gatewayip=PUB_IPv4_ADDR_2
 was interrupted while the page was loading.

Thanks for all the help you can give me.

Vieri



[squid-users] websockets through Squid

2020-10-16 Thread Vieri
Hi,

I think I found something in the cache.log I posted before.

sendRequest: HTTP Server conn* local=PUB_IPv4_ADDR_3
...
sendRequest: HTTP Server conn* local=PUB_IPv4_ADDR_2

It seems that Squid sometimes connects to the remote HTTP server from either 
one of the addresses available on the Squid box (e.g. PUB_IPv4_ADDR_2, 
PUB_IPv4_ADDR_3, etc.). These addresses are on ppp interfaces. In fact, I 
noticed that if the Firefox client shows this error message in its console as 
in my previous post:

The connection to 
wss://ed1lncb62801.webex.com/direct?type=websocket&dtype=binary&rand=1602830016480&uuidtag=5659FGE6-DF29-47A7-859A-G4D5FDC937A2&gatewayip=PUB_IPv4_ADDR_2
 was interrupted while the page was loading.

then I see a corresponding 'sendRequest: HTTP Server conn* 
local=PUB_IPv4_ADDR_3' when trying to connect to the same origin. So I'm 
deducing that the remote websocket server is expecting a client connection from 
PUB_IPv4_ADDR_2 when in fact Squid is trying to connect from PUB_IPv4_ADDR_3 -- 
hence the "interruption" message.

My test Squid instance is running on a multi-ISP router, so I guess I have to 
figure out how to either force connections out one interface only for the Squid 
cache or tell Squid to only bind to one interface.

It's only a wild guess though.

Vieri



[squid-users] websockets through Squid

2020-10-16 Thread Vieri
BTW how does Squid decide which IP address to use for "local" here below?

sendRequest: HTTP Server conn* local=

I tried specifying a bind address in http_port and https_port as well as 
routing traffic from that address out through just one ppp interface, but that 
doesn't seem to change the way "local" is assigned an address.

Is there a way to keep "local" always the same?

Vieri


Re: [squid-users] websockets through Squid

2020-10-16 Thread Vieri

On Friday, October 16, 2020, 4:48:55 PM GMT+2, Alex Rousskov wrote:

> tcp_outgoing_address.


OK, I fixed the "local" address issue, but I'm still seeing the same behavior.
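
(For the record, the fix was a single squid.conf line; the address below is a placeholder for my real outgoing one:)

tcp_outgoing_address 192.0.2.10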

I pinpointed one particular request that's failing:

2020/10/16 16:56:37.250 kid1| 85,2| client_side_request.cc(745) 
clientAccessCheckDone: The request GET 
https://ed1lncb62601.webex.com/direct?type=websocket&dtype=binary&rand=1602860196950&uuidtag=G7609603-81A2-4B8D-A1C0-C379CC9B12G9&gatewayip=PUB_IPv4_ADDR_2
 is ALLOWED; last ACL checked: all

It is in this log:

https://drive.google.com/file/d/1OrB42Cvom2PNmV-dnfLVrnMY5IhJkcpS/view?usp=sharing

I see a lot of '101 Switching Protocols' and references to upgrade to 
websockets, but I'm not sure where it is actually failing.

I don't know how to narrow this down further, but if someone could give it 
another peek I'd be most grateful.

Vieri



Re: [squid-users] websockets through Squid

2020-10-18 Thread Vieri

On Saturday, October 17, 2020, 10:36:47 PM GMT+2, Alex Rousskov wrote:

> or due to some TLS error.
> I filed bug #5084 

 Hi again,

Thanks for opening a bug report.

I don't want to add anything there myself because I wouldn't want to confuse 
whoever picks up the issue, but I'd like to mention on this list that I've 
captured the traffic between Squid and the destination server.
It's here:

https://drive.google.com/file/d/1WS7Y62Fng5ggXryzKGW1JOsJ16cyR0mg/view?usp=sharing

I can see a Client Hello, Server Hello Done, Change Cipher Spec, etc., but then 
the handshake starts over and over again.
So, could it be a TLS issue, as you hinted?

I also captured the client console regarding the wss messages (Firefox).
It won't reveal much, but here it is anyway:

https://drive.google.com/file/d/1u4uXW0TCTwClE2kt2nbJSGt5VLdKC03t/view?usp=sharing
NB: the destination server is not the same one as in the packet trace, but 
that's what the client gets each time (it keeps showing '101 Switching 
Protocols' over and over).

Please let me know if I should add something to the bug report, or if you see 
anything of interest in the data I've sent.

Thanks,

Vieri



[squid-users] sslbump https intercepted or tproxy

2020-10-19 Thread Vieri
Hi,

It's unclear to me if I can use TPROXY for HTTPS traffic.

If I divert traffic and use tproxy in the Linux kernel and then set this in 
squid:

https_port 3130 tproxy ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem

it seems to be working fine, just as if I were to REDIRECT https traffic and 
then use this in Squid:

https_port 3130 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem
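
For context, the kernel-side diversion in the tproxy case is the usual TPROXY recipe, roughly like this (the mark and table numbers are the conventional values, not necessarily my exact ones):

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 443 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3130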

So, does anyone know if it's not recommended / not supported to use tproxy with 
https traffic?
I'm asking because I don't see any issues with tproxy, and it has the added 
advantage of letting the gateway route per source IP address (in intercept 
mode, the source is always Squid).

Are there any reasons for which one would not use TPROXY with HTTPS?

Vieri


[squid-users] squid restart

2020-10-31 Thread Vieri
Size:   15.07 KB
    Requests given to unlinkd:  4657
Median Service Times (seconds)  5 min    60 min:
    HTTP Requests (All):   0.05046  0.05046
    Cache Misses:  0.06286  0.06286
    Cache Hits:    0.0  0.0
    Near Hits: 0.15048  0.15048
    Not-Modified Replies:  0.0  0.0
    DNS Lookups:   0.0  0.0
    ICP Queries:   0.0  0.0
Resource usage for squid:
    UP Time:    108.639 seconds
    CPU Time:   10.588 seconds
    CPU Usage:  9.75%
    CPU Usage, 5 minute avg:    12.90%
    CPU Usage, 60 minute avg:   12.90%
    Maximum Resident Size: 462736 KB
    Page faults with physical i/o: 0
Memory accounted for:
    Total accounted:    37879 KB
    memPoolAlloc calls:   1256976
    memPoolFree calls:    1307898
File descriptor usage for squid:
    Maximum number of file descriptors:   4096
    Largest file desc currently in use:    567
    Number of file desc currently in use:  559
    Files queued for open:   0
    Available number of file descriptors: 3537
    Reserved number of file descriptors:   100
    Store Disk files open:   0
Internal Data Structures:
   997 StoreEntries
   997 StoreEntries with MemObjects
   683 Hot Object Cache Items
   683 on-disk objects

This did not happen with Squid 4, or maybe it wasn't as obvious.


I guess the reason could be this:

    Maximum number of file descriptors:   4096
    Largest file desc currently in use:   4009
    Number of file desc currently in use: 3997

However, I set the following directive in squid.conf:

max_filedescriptors 65536

It doesn't seem to be honored here unless I stop and restart the squid service 
again (/etc/init.d/squid restart from command line):

File descriptor usage for squid:
    Maximum number of file descriptors:   65535

It seems that if I run the same command (/etc/init.d/squid restart) from 
crontab, that ulimit is not honored. I guess that's the root cause of my issue 
because I am asking cron to restart Squid once daily. I'll try not to, but I 
was hoping to see if there was a reliable way to fully restart the Squid 
process.

Vieri





Re: [squid-users] squid restart

2020-11-02 Thread Vieri


On Saturday, October 31, 2020, 4:08:23 PM GMT+1, Amos Jeffries wrote:

>> However, I set the following directive in squid.conf:
>> 
>> max_filedescriptors 65536
>> 
> Are you using systemd, SysV or another init ?

I'm using SysV on Gentoo Linux.

> It doesn't seem to be honored here unless I stop and restart the squid 
> service again (/etc/init.d/squid restart from command line):
> 
> File descriptor usage for squid:
>      Maximum number of file descriptors:   65535
> 
> It seems that if I run the same command (/etc/init.d/squid restart) from 
> crontab, that ulimit is not honored. I guess that's the root cause of my 
> issue because I am asking cron to restart Squid once daily. I'll try not to, 
> but I was hoping to see if there was a reliable way to fully restart the 
> Squid process.
> 
> Vieri

> The init system restart command is the preferred one - it handles any
> system details that need updating. Alternatively, "squid -k restart" can
> be used.

The SysV init script works fine when run from command line or at boot time (and 
probably from a custom inittab script -- cannot confirm it yet). The problem 
shows up when running it from cron (I have cronie-1.5.4).
I'll take a look at the '-k restart' alternative.

Thanks,

Vieri


[squid-users] squid restart

2020-11-02 Thread Vieri
Just in case anyone else has this problem, or if anyone would like to comment 
on this, here's the solution I've found.

Running '/etc/init.d/squid restart' from cron (setting it up in crontab) does 
not honor ulimits.

Configuring /etc/crontab with something like 'bash -l /etc/init.d/squid 
restart' does not work either (it doesn't seem to run at all).

However, creating a custom.sh script somewhere which calls /etc/init.d/squid 
restart, and then configuring crontab with 'bash -l -c /somewhere/custom.sh' 
actually works. I now see:

# squidclient mgr:info
[...]
File descriptor usage for squid:
    Maximum number of file descriptors:   65535
    Largest file desc currently in use:   1583
    Number of file desc currently in use: 1576
    Files queued for open:   0
    Available number of file descriptors: 63959
    Reserved number of file descriptors:   100
    Store Disk files open:       0

I'm not sure why, but it works.
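
For the record, the pieces are trivial (path and schedule are just examples):

#!/bin/sh
# /somewhere/custom.sh -- just wraps the init script
/etc/init.d/squid restart

and the crontab entry:

0 4 * * * root bash -l -c /somewhere/custom.sh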

Vieri


Re: [squid-users] websockets through Squid

2020-11-19 Thread Vieri

On Wednesday, November 4, 2020, 3:27:25 AM GMT+1, Alex Rousskov wrote:
>   https://bugs.squid-cache.org/show_bug.cgi?id=5084

Hi,

I added a comment to that bug report.
I cannot reproduce the problem anymore, at least not with the latest version of 
Squid 5.

Thanks,

Vieri


[squid-users] Squid 5 service stops after assertion failure

2021-01-24 Thread Vieri
Hi,

My Squid web proxy crashed as shown in this log:

2021/01/24 13:18:13 kid1| helperHandleRead: unexpected reply on channel 0 from 
bllookup #Hlpr21 '43 ERR message=[...]
    current master transaction: master65
2021/01/24 13:18:13 kid1| assertion failed: helper.cc:1066: "skip == 0 && eom 
== NULL"
    current master transaction: master65
2021/01/24 13:18:13 kid1| Set Current Directory to /var/cache/squid
2021/01/24 13:18:13 kid1| Starting Squid Cache version 
5.0.4-20201125-r5fadc09ee for x86_64-pc-linux-gnu...
2021/01/24 13:18:13 kid1| Service Name: squid
[...]
REPEATS (assertion failure & squid restart)
[...]
2021/01/24 13:18:27 kid1| helperHandleRead: unexpected reply on channel 0 from 
bllookup #Hlpr21 '2 ERR message=[...]
    current master transaction: master76
2021/01/24 13:18:27 kid1| assertion failed: helper.cc:1066: "skip == 0 && eom 
== NULL"
    current master transaction: master76
2021/01/24 13:18:27| Removing PID file (/run/squid.pid)
2021/01/24 13:18:34| Pinger exiting.
2021/01/24 13:18:37| Pinger exiting.

After the assertion failure Squid tries to restart a few times (assertion 
failures seen again) and finally exits.
A manual restart works, but I don't know for how long.

The external script "bllookup" is probably responsible for bad output, but 
maybe Squid could handle it without crashing.

Regards,

Vieri


Re: [squid-users] Squid 5 service stops after assertion failure

2021-01-25 Thread Vieri


On Sunday, January 24, 2021, 11:03:19 PM GMT+1, Amos Jeffries wrote:

>> The external script "bllookup" is probably responsible for bad output,
>
> That is a certainty.
>
>> but maybe Squid could handle it without crashing.
> 
> As you noticed, Squid halts service only after the helper fails 10 
> multiple times in a row. Before that Squid is restarting the helper to 
> see if it was a temporary issue.

OK, the external script is definitely guilty. However, it is buggy and triggers 
the Squid assertion failure only in specific circumstances, so it's 
transaction-specific. In my use case I would definitely prefer that only the 
affected transactions were "killed" and that the proxy service as a whole kept 
working.
Of course, I would still need to identify these cases and fix them, but in the 
meantime I would not get a general crash.
On the other hand, a general failure forces me to look into this issue with 
greater celerity. ;-) 

Thanks,

Vieri



Re: [squid-users] Squid 5 service stops after assertion failure

2021-01-25 Thread Vieri

On Sunday, January 24, 2021, 11:08:49 PM GMT+1, Alex Rousskov wrote:

> Filing a bug report with Squid Bugzilla may increase chances of this problem 
> getting fixed.

Done here:

https://bugs.squid-cache.org/show_bug.cgi?id=5100

Thanks,

Vieri


[squid-users] c-icap, clamav and squid

2021-02-12 Thread Vieri
Hi,

I don't know whether this question should be asked here or on the c-icap or 
clamav lists.

I've had a c-icap/squid failure and noticed that it was because my tmpfs on 
/var/tmp was full (12 GB).

It was filled with files such as these:

# lsof +D /var/tmp/
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF       NODE NAME
c-icap     773 root   31u   REG   0,48     1204 2169779504 /var/tmp/CI_TMP_xqWE8B
c-icap    3080 root   29u   REG   0,48     1204 2169784571 /var/tmp/CI_TMP_pE6B76

The fact that these files build up and are not deleted might be a side-effect 
of something that's failing.

Do you think that the c-icap process is the only one responsible for cleaning 
these files up?
Or is there some Squid configuration option or a cache log event I should check 
regarding this?

Thanks,

Vieri



[squid-users] Why some traffic is TCP_DENIED

2021-02-16 Thread Vieri
Hi,

I'm trying to understand why Squid denies access to some sites, eg:

[Tue Feb 16 10:15:36 2021].044  0 - TCP_DENIED/302 0 GET 
http://www.microsoft.com/pki/certs/MicRooCerAut2011_2011_03_22.crt - 
HIER_NONE/- text/html
[Tue Feb 16 10:15:36 2021].050 46 10.215.248.160 TCP_DENIED/403 3352 - 
52.109.12.25:443 - HIER_NONE/- text/html
[Tue Feb 16 10:15:36 2021].050  0 10.215.248.160 NONE_NONE/000 0 - 
error:transaction-end-before-headers - HIER_NONE/- -
[Tue Feb 16 10:15:36 2021].052    140 10.215.246.144 TCP_MISS/200 193311 GET 
https://outlook.office.com/mail/ - ORIGINAL_DST/52.97.168.210 text/html
[Tue Feb 16 10:15:36 2021].053 49 10.215.248.74 TCP_MISS/200 2037 GET 
https://puk1-collabhubrtc.officeapps.live.com/rtc2/signalr/negotiate? - 
ORIGINAL_DST/52.108.88.1 application/json
[Tue Feb 16 10:15:36 2021].057  0 10.215.247.159 NONE_NONE/000 0 - 
error:invalid-request - HIER_NONE/- -
[Tue Feb 16 10:15:36 2021].057  0 10.215.247.159 TCP_DENIED/403 3353 - 
40.67.251.132:443 - HIER_NONE/- text/html
[Tue Feb 16 10:15:36 2021].057  0 10.215.247.159 NONE_NONE/000 0 - 
error:transaction-end-before-headers - HIER_NONE/- -


If I take the first line in the log and open the URL from one of my clients, 
the site opens as expected, and the corresponding Squid log is:

[Tue Feb 16 10:45:50 2021].546    628 10.215.111.210 TCP_MISS/200 2134 GET 
https://www.microsoft.com/pki/certs/MicRooCerAut2011_2011_03_22.crt - 
ORIGINAL_DST/23.210.36.30 application/octet-stream
[Tue Feb 16 10:45:52 2021].668 49 10.215.111.210 NONE_NONE/000 0 CONNECT 
216.58.215.138:443 - ORIGINAL_DST/216.58.215.138 -

In this log I see my host's IP addr. 10.215.111.210.
However, in the first log I do not see a source IP address. Why?

Other clients seem to be denied access with errors in the log such as 
"NONE_NONE/000"  followed by error:invalid-request or 
error:transaction-end-before-headers. How can I find out why I get "invalid 
requests"? Would a tcpdump on the server or client help? Or should I enable 
verbose debugging in Squid?

BTW, this might be irrelevant, but these messages seem to come up when 
accessing Office 365 sites.

Thanks,

Vieri



[squid-users] kswapd0 and memory usage

2021-03-29 Thread Vieri
Hi,

I've been running squid & c-icap for years, and only recently have I had a 
severe system slowdown.

My kswapd0 process was at a constant 100% CPU usage level until I forced 
restarting of both squid and c-icap.

I've been using several Squid versions over the years, but the only differences 
I know of between my previous setup that worked and the current setup that has 
"failed" once (for now) are:

- upgraded from 5.0.4-20201125-r5fadc09ee to Version 5.0.5-20210223-r4af19cc24

- set cgroups for both squid and c-icap services with just one setting: 
cpu.shares 512

- upgraded to c-icap 0.5.8

Given the stressful situation I only had time to notice that kswapd0 was at 
100%, that top reported that all swap space was being used, and that the whole 
server was very sluggish. The additional problem is that the system is a router 
and uses TPROXY with squid sslbump so I don't think I can virtualize the web 
proxying services. Hence the use of cgroups to try to contain squid, c-icap and 
clamav. I have yet to define a cgroup for memory usage.

Restarting Squid and c-icap alone (not clamd) immediately solved the kswapd0 
"gone wild" issue.
Mem usage went back to something like:

# free -h
  total    used    free  shared  buff/cache   available
Mem:   31Gi   9.2Gi    21Gi    48Mi   1.0Gi    21Gi
Swap:  35Gi   1.7Gi    33Gi

I only have debug_options rotate=1 ALL,1 in my squid config file, and sifting 
through cache.log doesn't give me any clues.

If this were to happen again (not sure when or if) what should I try to search 
for?

Regards,

Vieri


Re: [squid-users] kswapd0 and memory usage

2021-03-31 Thread Vieri
On Tuesday, March 30, 2021, 8:01:30 AM GMT+2, Amos Jeffries wrote:

>> If this were to happen again (not sure when or if) what should I try to 
>> search for?
>
> Output of the "squidclient mgr:mem", "top" and "ps waux" commands would 
> be good.
>
> Those will show how Squid is using the memory it has, what processes are 
> using the most memory, and what processes are running. Most memory 
> issues can be triaged with that info.

Will do, thanks. I have a script that tries to "predict" when these problems 
are about to happen. It runs something like
timeout 30 squidclient mgr:info
and if it actually times out then it restarts both squid and c-icap.
So I'm afraid I might not get anything out of "squidclient mgr:mem", but I will 
run top -b -n 1 and ps waux.
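
(The watchdog is essentially this, sketched -- real paths and service names may differ:)

#!/bin/sh
# restart squid and c-icap if the cache manager stops answering
if ! timeout 30 squidclient mgr:info > /dev/null 2>&1; then
    /etc/init.d/squid restart
    /etc/init.d/c-icap restart
fi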

Thanks,

Vieri





[squid-users] SSL handshake

2021-07-27 Thread Vieri
Hi,

Just recently I've noticed that LAN clients going through Squid with sslbump 
are all of a sudden unable to access certain HTTPS sites such as 
login.yahoo.com.
The squid log has lines like:

kid1| 4,3| Error.cc(22) update: recent: 
ERR_SECURE_CONNECT_FAIL/SQUID_ERR_SSL_HANDSHAKE+TLS_LIB_ERR=1423506E+TLS_IO_ERR=1

and the client error page shows a line like this:

SQUID_TLS_ERR_CONNECT+TLS_LIB_ERR=14094410+TLS_IO_ERR=1

I'm not sure why the lib error code is different. I might not have tracked down 
the right connection in the log.

I have not changed anything in the OS, so it might be due to a change in the 
remote web service.
Could it be that my openssl version is already too old (1.1.1g) and that the 
web site forces the use of an unsupported cipher?

Regards,

Vieri


Re: [squid-users] SSL handshake

2021-07-28 Thread Vieri
Hi,

I don't know if my situation is like Nishant's, but today my issues went away 
without any intervention on my part.
I'm guessing the cause was on the remote server's side or in some in-between 
SSL inspection...
inspection...

Thanks,

Vieri


Re: [squid-users] TCP out of memory

2018-01-16 Thread Vieri
Hi,

Just a quick follow-up on this.

I dropped squidclamav so I could test c-icap-modules's clamd service instead.
The only difference between the two is that squidclamav was using unix sockets 
while c-icap-modules is using clamd.

At first, the results were good. The open fd numbers fluctuated, but stayed 
within the 1k-2k range during the first days. However, today I'm getting 4k, 
and it's only day 5. I suspect I'll be getting 10k+ numbers within another week 
or two. That's when I'll have to restart squid if I don't want the network to 
slow to a crawl.

I'm posting info and filedescriptors here:

https://drive.google.com/file/d/1V7Horvvak62U-HjSh5pVEBvVnZhu-iQY/view?usp=sharing

https://drive.google.com/file/d/1P1DAX-dOfW0fzt1sAeyT35brQyoPVodX/view?usp=sharing

By the way, what does "Largest file desc currently in use" mean exactly? Should 
this value also drop (eventually) under sane conditions?

So I guess moving from squidclamav to c-icap-modules did improve things, but 
I'm still facing something wrong. I could try moving back to squidclamav in 
"clamd mode" instead of unix sockets just to see if I get the same partial 
improvement as the one I've witnessed this week.

Vieri


Re: [squid-users] TCP out of memory

2018-01-18 Thread Vieri

From: Amos Jeffries

> Sorry I have a bit of a distraction going on ATM so have not got to that
> detailed check yet. Good to hear you found a slightly better situation
> though.
[...]
> In normal network conditions it should rise and fall with your peak vs 
> off-peak traffic times. I expect with your particular trouble it will 
> mostly just go upwards.


No worries. I can confirm that I'm still seeing the same issue with 
c-icap-modules, even though it's slightly better in that the FD numbers grow 
more slowly, at least at first.
I must say the growth seems to be speeding up now. I had 4k two days ago; now 
I have:
Largest file desc currently in use:   6664
Number of file desc currently in use: 6270
So it seems that the more days go by, the faster the FD numbers rise.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-27 Thread Vieri
Hi,

I just wanted to add some information to this topic, although I'm not sure if 
it's related.


I noticed that if I set bypass=1 on the ICAP service in squid.conf, and then 
stop the local clamd service (not the c-icap service), clients still see 
Squid's ERR_ICAP_FAILURE page.
Is this expected?
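
For reference, the ICAP service line in question looks like this (service 
name and URL as in my earlier posted configs; bypass=1 is the only change 
from my usual bypass=0):

icap_service squidclamav respmod_precache bypass=1 icap://127.0.0.1:1344/clamav
adaptation_access squidclamav allow all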

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-29 Thread Vieri
Hi,

I reproduced the problem, and saw that the c-icap server (or its squidclamav 
module) reports a 500 internal server error when clamd is down. I guess that's 
not bypassable?


The c-icap server log reports:

Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(1934) dconnect: Mon 
Jan 29 08:30:35 2018, 5134/1290311424, entering.
Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(2015) connectINET: Mon 
Jan 29 08:30:35 2018, 5134/1290311424, ERROR Can't connect on 127.0.0.1:3310.
Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(2015) connectINET: Mon 
Jan 29 08:30:35 2018, 5134/1290311424, ERROR Can't connect on 127.0.0.1:3310.
Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(744) 
squidclamav_end_of_data_handler: Mon Jan 29 08:30:35 2018, 5134/1290311424, 
ERROR Can't connect to Clamd daemon.
Mon Jan 29 08:30:35 2018, 5134/1290311424, An error occured in end-of-data 
handler !return code : -1, req->allow204=1, req->allow206=0


Here's Squid's log:

https://drive.google.com/file/d/18HmM8pOuDQmE4W_vwmSncXEeJSvgDjDo/view?usp=sharing

I was hoping I could relate this to the original topic, but I'm afraid they are 
two different issues.


Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ICAP 500 is not bypassed

2018-01-30 Thread Vieri
Alex, thanks for your time.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] installing Squid: /run dir creation

2019-01-29 Thread Vieri
Hi,

My Linux distro warns me that, when trying to install Squid, an attempt is 
made to write to a "volatile" dir.

The Makefile in the src subdir contains:

    $(mkinstalldirs) $(DESTDIR)`dirname $(DEFAULT_PID_FILE)`

Since the default PID file is /run/squid.pid, the above tries to create the /run dir.

Is it necessary to keep this in the Makefile?

Shouldn't the /run/* files be created at runtime anyway?

The /run dir is also created by the OS.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] installing Squid: /run dir creation

2019-01-29 Thread Vieri

On Tuesday, January 29, 2019, 1:06:22 PM GMT+1, Amos Jeffries 
 wrote: 
>>
>> Is it necessary to keep this in the Makefile?
>> 
>
> Yes. The path is configurable with --with-pidfile=PATH, so it can be
> absolutely anywhere.
>
> It would help to have a hint about what OS you are using and what
> /configure parameters you used.

I'm using Gentoo and the ebuild (package manager) hardcodes the PID file name 
when calling the configure script:

--with-pidfile=/run/squid.pid

So, if that's the case, maybe it would make sense to remove that 
mkinstalldirs line from the Makefile, or at least to do so downstream as a 
patch the Gentoo devs apply before configuring/compiling.
Makefiles might change in the future, but keeping such a patch current would 
be up to the Gentoo devs.

I don't know for sure yet if this is why Gentoo "warns" me that the Squid 
installation is trying to write to /run, or if there are other parts of the 
installation code that might do so too.

I'll run a few tests first, but correct me if I'm wrong when I say that if 
one *always* passes the same PID file path to the configure script, then that 
mkinstalldirs line can be safely removed from the Makefile.
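
If it comes to that, the downstream patch could be as small as this (a 
sketch only, assuming the mkinstalldirs call appears verbatim; whether to 
target src/Makefile or src/Makefile.in depends on where in the build the 
patch is applied):

sed -i '/mkinstalldirs.*DEFAULT_PID_FILE/d' src/Makefile.in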

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] installing Squid: /run dir creation

2019-01-29 Thread Vieri
I can add the following info to my previous e-mail. Here's the configure 
command (the pid file name is always the same -- other options may vary 
according to user preferences or system deps):

$ ./configure --prefix=/usr --build=x86_64-pc-linux-gnu 
--host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info 
--datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib 
--disable-dependency-tracking --disable-silent-rules 
--docdir=/usr/share/doc/squid-4.5 --htmldir=/usr/share/doc/squid-4.5/html 
--with-sysroot=/ --libdir=/usr/lib64 --sysconfdir=/etc/squid 
--libexecdir=/usr/libexec/squid --localstatedir=/var 
--with-pidfile=/run/squid.pid --datadir=/usr/share/squid 
--with-logdir=/var/log/squid --with-default-user=squid 
--enable-removal-policies=lru,heap --enable-storeio=aufs,diskd,rock,ufs 
--enable-disk-io 
--enable-auth-basic=NCSA,POP3,getpwnam,SMB,SMB_LM,LDAP,PAM,RADIUS 
--enable-auth-digest=file,LDAP,eDirectory --enable-auth-ntlm=SMB_LM 
--enable-auth-negotiate=kerberos,wrapper 
--enable-external-acl-helpers=file_userip,session,unix_group,delayer,time_quota,wbinfo_group,LDAP_group,eDirectory_userip,kerberos_ldap_group
 --enable-log-daemon-helpers --enable-url-rewrite-helpers 
--enable-cache-digests --enable-delay-pools --enable-eui --enable-icmp 
--enable-follow-x-forwarded-for --with-large-files 
--with-build-environment=default --disable-strict-error-checking 
--disable-arch-native --with-included-ltdl=/usr/include 
--with-ltdl-libdir=/usr/lib64 --with-libcap --enable-ipv6 --disable-snmp 
--with-openssl --with-nettle --with-gnutls --enable-ssl-crtd --disable-ecap 
--disable-esi --enable-htcp --enable-wccp --enable-wccpv2 
--enable-linux-netfilter --enable-zph-qos --with-netfilter-conntrack 
--with-mit-krb5 --without-heimdal-krb5

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] daily releases

2019-01-30 Thread Vieri
Hi,

Does anyone know of a convenient one-liner to get the latest daily release 
tarball, eg. 
http://www.squid-cache.org/Versions/v4/squid-4.5-20190128-r568e66b7c.tar.gz, 
without having to search for it manually on the web?

Either that or a symlink that would always point to the "latest daily".

Thanks,

Vieri

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] daily releases

2019-01-31 Thread Vieri
 
On Wednesday, January 30, 2019, 9:12:51 PM GMT+1, Amos Jeffries 
 wrote: 
>> Does anyone know of a convenient one-liner to get the latest daily
>> release tarball, eg.
>> http://www.squid-cache.org/Versions/v4/squid-4.5-20190128-r568e66b7c.tar.gz,
>> without having to search for it manually on the web?
>
> The contents of the tarball are provided by rsync to optimize update
> bandwidth:
> 
> <https://wiki.squid-cache.org/DeveloperResources#Bootstrapped_sources_via_rsync>

rsync lets me sync the latest source for a particular major version (eg. 
Squid 4 or Squid 5).
However, it does not let me pull in the Squid 4 source as published on Jan 
28th 2019, i.e. exactly what I would get by downloading 
squid-4.5-20190128-r568e66b7c.tar.gz.
Furthermore, I'm guessing that the "daily" tarballs published on the web 
site's download page are hand-picked because they are known to solve bugs, 
and are considered somewhat "stable". For instance, if I were to rsync today, 
would I get the same code as that of the above-mentioned tarball?

Another simple solution would be to be able to list the files in the 
/Versions/v4/ directory, but it is not allowed by the server.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] external helper

2020-03-05 Thread Vieri
Hi,

I'm using a perl helper script in Squid, and I'm migrating from Squid 3 to 
Squid 4. It seems that there's an extra field in the string Squid passes to 
the helper program.

I'd like to know what the character "-" means at the end of the passed string 
as in this message:

external_acl.cc(1085) Start: externalAclLookup: will wait for the result of 
'http www.fltk.org 80 / -' in 'bllookup' (ch=0x5633eaab2118).

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external helper

2020-03-05 Thread Vieri

On Thursday, March 5, 2020, 11:37:28 AM GMT+1, Amos Jeffries 
 wrote: 

>
> It means the 'acl' line in squid.conf did not contain any value to pass as 
> extra parameter(s) to that helper lookup.
>
> See
> 

Thanks!
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] debug a failure connection

2020-03-12 Thread Vieri
Hi,

I'm trying to understand what could cause Squid not to connect to the following 
site:

2020/03/12 11:48:24.115 kid1| 17,4| AsyncCallQueue.cc(55) fireNext: entering 
FwdState::ConnectedToPeer(0x561b8b5c7918, local=10.215.144.48:51303 
remote=1.2.3.4:443 FD 784 flags=25, 0x561b8a7ee5b8/0x561b8a7ee5b8)
2020/03/12 11:48:24.115 kid1| 17,4| AsyncCall.cc(37) make: make call 
FwdState::ConnectedToPeer [call219229]
2020/03/12 11:48:24.115 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x561b8b5c7918
2020/03/12 11:48:24.115 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x561b8b5c7918
2020/03/12 11:48:24.115 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x561b8a7ee5b8
2020/03/12 11:48:24.115 kid1| 17,3| FwdState.cc(447) fail: 
ERR_SECURE_CONNECT_FAIL "Service Unavailable"
    1.2.3.4:443


A direct connection by-passing Squid shows that the https site opens fine, 
albeit with a 3DES cipher. In my Squid 4 test I set these temporary values 
just in case:


I don't know how to interpret the messages preceding the 
ERR_SECURE_CONNECT_FAIL line. Do I need to send them all? Which debug options 
would be most useful?
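
In the meantime, I'm guessing something like the following would be less 
noisy than ALL,9 (the section numbers are my assumption, judging from the 
traces: 17 seems to cover forwarding, 83 the TLS code):

debug_options rotate=1 ALL,1 83,5 17,4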

Regards,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] dynamic ACLs

2020-04-16 Thread Vieri
Hi,

In sslbump tproxy "mode" one cannot authenticate users in order to 
limit/allow their access to web content.

I was thinking, however, of putting a web form with auth inside a custom 
Squid error page. This way a user could "automatically" whitelist a web site 
and gain access to it, while the IT dep. would still know which user accessed 
what despite the site being blacklisted.

From the error page I can tell which ACL is blocking that site so I could 
create an "exception" ACL for that ACL.
My question is: can this whitelist or graylist ACL be dynamic without needing 
to reload Squid, a bit like ipsets with iptables/nftables without the need to 
reload rules?
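
One idea I'd like to explore (an untested sketch; the helper path and name 
are made up) is an external ACL, since helpers are consulted at runtime and 
their verdicts are cached only for the configured TTL, so no reconfigure is 
needed when the helper's backing list changes:

external_acl_type dyn_graylist ttl=10 negative_ttl=10 children-max=5 %DST /usr/local/bin/check_graylist.pl
acl graylisted external dyn_graylist
http_access allow graylisted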

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] tproxy sslbump and user authentication

2020-04-20 Thread Vieri
Hi,

Is it possible to somehow combine the filtering capabilities of tproxy ssl-bump 
for access to https sites and the access control flexibility of proxy_auth (eg. 
kerberos)?

Is having two proxy servers in sequence an acceptable approach, or can it be 
done within the same instance with the CONNECT method?

My first approach would be to configure clients to send their user credentials 
to an explicit proxy (Squid #1) which would then proxy_auth via Kerberos to a 
PDC. ACL rules would be applied here based on users, domains, IP addr., etc.

The http/https traffic would then go forcibly through a tproxy ssl-bump host 
(Squid #2) which would basically analyze/filter traffic via ICAP.

Has anyone already dealt with this problem, and how?

Regards,

Vieri

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] tproxy sslbump and user authentication

2020-04-21 Thread Vieri

On Tuesday, April 21, 2020, 8:29:28 AM GMT+2, Amos Jeffries 
 wrote: 
>
> Please see the FAQ:
> <https://wiki.squid-cache.org/SquidFaq/InterceptionProxy#Why_can.27t_I_use_authentication_together_with_interception_proxying.3F>
>
> Why bother with the second proxy at all? The explicit proxy has access
> to all the details the interception one does (and more - such as
> credentials). It should be able to do all filtering necessary.

Can the explicit proxy ssl-bump HTTPS traffic and thus analyze traffic with 
ICAP + squidclamav, for instance?
Simply put, will I be able to block, eg. https://secure.eicar.org/eicarcom2.zip 
not by mimetype, file extension, url matching, etc., but by analyzing its 
content with clamav via ICAP?

> TPROXY and NAT are for proxying traffic of clients which do not support
> HTTP proxies. They are hugely limited in what they can do. If you have
> ability to use explicit-proxy, do so.

Unfortunately, some programs don't support proxies, or we simply don't care and 
want to force-filter traffic anyway.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] tproxy sslbump and user authentication

2020-04-24 Thread Vieri

On Tuesday, April 21, 2020, 2:41:02 PM GMT+2, Matus UHLAR - fantomas 
 wrote: 

>>On Tuesday, April 21, 2020, 8:29:28 AM GMT+2, Amos Jeffries 
>> wrote:
>>>
>>> Please see the FAQ:
>>> <https://wiki.squid-cache.org/SquidFaq/InterceptionProxy#Why_can.27t_I_use_authentication_together_with_interception_proxying.3F>
>>>
>>> Why bother with the second proxy at all? The explicit proxy has access
>>> to all the details the interception one does (and more - such as
>>> credentials). It should be able to do all filtering necessary.
>
> On 21.04.20 12:33, Vieri wrote:
>>Can the explicit proxy ssl-bump HTTPS traffic and thus analyze traffic with 
>>ICAP + squidclamav, for instance?
>
> yes.
>
>>Simply put, will I be able to block, eg. 
>> https://secure.eicar.org/eicarcom2.zip not by mimetype, file extension,
>> url matching, etc., but by analyzing its content with clamav via ICAP?
>
> without bumping, you won't be able to block by anything, only by 
> secure.eicar.org hostname.

Hi,

I'm not sure I understand how that should be configured.

I whipped up a test instance with the configuration I'm showing below.

My browser can authenticate via kerberos and access several web sites (http & 
https) if I explicitly set it to proxy everything to squid10.mydomain.org on 
port 3228.
However, icap/clamav filtering is "not working" for either http or https.
My cache log shows a lot of "icap" messages when I try to download an eicar 
test file. So something is triggered, but before sending a huge log to the 
mailing list, what should I be looking for exactly, or is there a specific 
log level I should set?

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 901 # SWAT
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localhost manager
http_access deny manager

pid_filename /run/squid.testexplicit.pid
access_log daemon:/var/log/squid/access.test.log squid
cache_log /var/log/squid/cache.test.log

acl explicit myportname 3227
acl explicitbump myportname 3228
acl interceptedssl myportname 3229

http_port 3227
# http_port 3228 tproxy
http_port 3228 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem 
sslflags=NO_DEFAULT_CA
https_port 3229 tproxy ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem 
sslflags=NO_DEFAULT_CA
sslproxy_flags DONT_VERIFY_PEER

sslcrtd_program /usr/libexec/squid/ssl_crtd -s /var/lib/squid/ssl_db_test -M 
16MB
sslcrtd_children 40 startup=20 idle=10

cache_dir diskd /var/cache/squid.test 32 16 256

external_acl_type nt_group ttl=0 children-max=50 %LOGIN 
/usr/libexec/squid/ext_wbinfo_group_acl -K

auth_param negotiate program /usr/libexec/squid/negotiate_kerberos_auth -s 
HTTP/squid10.mydomain.org@MYREALNAME
auth_param negotiate children 60
auth_param negotiate keep_alive on

acl localnet src 10.0.0.0/8
acl localnet src 192.168.0.0/16
acl localnet src 172.16.0.1
acl localnet src fc00::/7

acl ORG_all proxy_auth REQUIRED

http_access deny explicit !ORG_all
#http_access deny explicit SSL_ports
http_access deny explicitbump !localnet
http_access deny explicitbump !ORG_all
http_access deny interceptedssl !localnet
http_access deny interceptedssl !ORG_all

http_access allow CONNECT interceptedssl SSL_ports

http_access allow localnet
http_reply_access allow localnet

http_access allow ORG_all

debug_options rotate=1 ALL,9
# debug_options rotate=1 ALL,1

append_domain .mydomain.org

ssl_bump stare all
ssl_bump bump all

http_access allow localhost

http_access deny all

coredump_dir /var/cache/squid

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service antivirus respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access antivirus allow all
icap_service_failure_limit -1
icap_persistent_connections off


--
Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] explicit proxy and iptables

2020-04-27 Thread Vieri
Hi,

I've been using Squid + TPROXY in transparent sslbump mode for quite a while 
now, but I'd like to use an explicit proxy with user authentication instead.

I have Squid on my first firewall/gateway node, and then I have another gateway 
(node 2) where all the HTTP requests go through, with multiple ISPs.

In transparent tproxy mode, I can obviously mark packets according to the 
"real" client src IP addresses and then use, eg., different ISPs based on 
client src addr.

In the explicit setup, the gateway (node 2) only sees one IP address as the 
HTTP source -- the one on the "first node" with the explicit Squid proxy. I 
presume that in this case there is NO WAY I can somehow inform the gateway on 
node 2 of the "real" client IP addresses?

I can imagine the answer to this silly question, but nonetheless I prefer to 
ask just to make sure. ;-)

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-19 Thread Vieri
Hi,

I'm trying to set up Squid as a reverse proxy on a host with IP address 
10.215.144.91 so that web browsers can connect to it on port 443 and request 
pages from an OWA server at 10.215.144.21:443.

I have this in my squid.conf:

https_port 10.215.144.91:443 accel cert=/etc/ssl/squid/owa_cert.cer 
key=/etc/ssl/squid/owa_key.pem defaultsite=webmail2.mydomain.org

cache_peer 10.215.144.21 parent 443 0 no-query originserver login=PASS ssl 
sslcert=/etc/ssl/squid/client.cer sslkey=/etc/ssl/squid/client_key.pem 
ssloptions=ALL sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=owaServer
# cache_peer 10.215.144.21 parent 80 0 no-query originserver login=PASS 
front-end-https=on name=owaServer

acl OWA dstdomain webmail2.mydomain.org
cache_peer_access owaServer allow OWA
never_direct allow OWA

http_access allow OWA
http_access deny all
miss_access allow OWA
miss_access deny all

Note that if I comment out the "cache_peer parent 443" line above and uncomment 
the "cache_peer parent 80" line then the web browser client successfully 
connects and can view the OWA pages after logging in.

However, the connection fails if I use 443 between squid at 10.215.144.91 and 
the OWA backend at 10.215.144.21. The client views a Squid error page with an 
SSL handshake error.

Here's the cache log when I try to connect with a client:

2017/01/20 00:10:42.284 kid1| Error negotiating SSL on FD 16: 
error::lib(0):func(0):reason(0) (5/0/0)
2017/01/20 00:10:42.284 kid1| TCP connection to 10.215.144.21/443 failed
2017/01/20 00:10:42.285 kid1| 5,5| comm.cc(1038) comm_remove_close_handler: 
comm_remove_close_handler: FD 16, AsyncCall=0x80d93a00*2
2017/01/20 00:10:42.285 kid1| 9,5| AsyncCall.cc(56) cancel: will not call 
Ssl::PeerConnector::commCloseHandler [call453] because comm_remove_close_handler
2017/01/20 00:10:42.285 kid1| 17,4| AsyncCall.cc(93) ScheduleCall: 
PeerConnector.cc(742) will call FwdState::ConnectedToPeer(0x80d8b9f0, 
local=10.215.144.91:55948 remote=10.215.144.21:443 FD 16 flags=1, 
0x809d49a0/0x809d49a0) [call451]
2017/01/20 00:10:42.285 kid1| 93,5| AsyncJob.cc(137) callEnd: 
Ssl::PeerConnector::negotiateSsl() ends job [ FD 16 job42]
2017/01/20 00:10:42.285 kid1| 83,5| PeerConnector.cc(58) ~PeerConnector: Peer 
connector 0x80d8b590 gone
2017/01/20 00:10:42.285 kid1| 93,5| AsyncJob.cc(40) ~AsyncJob: AsyncJob 
destructed, this=0x80d8b5b4 type=Ssl::PeerConnector [job42]
2017/01/20 00:10:42.285 kid1| 17,4| AsyncCallQueue.cc(55) fireNext: entering 
FwdState::ConnectedToPeer(0x80d8b9f0, local=10.215.144.91:55948 
remote=10.215.144.21:443 FD 16 flags=1, 0x809d49a0/0x809d49a0)
2017/01/20 00:10:42.285 kid1| 17,4| AsyncCall.cc(38) make: make call 
FwdState::ConnectedToPeer [call451]
2017/01/20 00:10:42.285 kid1| 17,3| FwdState.cc(415) fail: 
ERR_SECURE_CONNECT_FAIL "Service Unavailable"
https://webmail2.mydomain.org/Exchange2/
2017/01/20 00:10:42.285 kid1| TCP connection to 10.215.144.21/443 failed

I don't understand the "Service Unavailable" bit above.
I can connect just fine from the command line on the squid server at 
10.215.144.91 as you can see below.

# wget --no-check-certificate -O -  https://10.215.144.21 
--2017-01-20 00:41:10--  https://10.215.144.21/
Connecting to 10.215.144.21:443... connected.
WARNING: cannot verify 10.215.144.21's certificate, issued by 
'/C=xx/ST=xx/O=xx/OU=xx/CN=xxx/emailAddress=x...@xx.xxx':
Unable to locally verify the issuer's authority.
WARNING: certificate common name 'XYZ' doesn't match requested host name 
'10.215.144.21'.
HTTP request sent, awaiting response... 200 OK
Length: 1546 (1.5K) [text/html]
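
I suppose I could also probe the handshake directly with openssl's test 
client to see what actually gets negotiated:

# openssl s_client -connect 10.215.144.21:443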

What can I try?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-20 Thread Vieri




- Original Message -
From: Amos Jeffries 

> Firstly remove the ssloptions=ALL from your config.
> 

> Traffic should be able to go through at that point.

Thanks for the feedback.

I tried it again, but this time with a non-OWA IIS HTTPS server.

Here's the squid.conf:

https_port 10.215.144.91:35443 accel cert=/etc/ssl/squid/cert.cer 
key=/etc/ssl/squid/key.pem defaultsite=www.mydomain.org

cache_peer 10.215.144.66 parent 443 0 no-query originserver login=PASS ssl 
sslcert=/etc/ssl/squid/client.cer sslkey=/etc/ssl/squid/client_key.pem 
front-end-https=on name=httpsServer

acl HTTPSACL dstdomain www.mydomain.org
cache_peer_access httpsServer allow HTTPSACL
never_direct allow HTTPSACL

http_access allow HTTPSACL
http_access deny all

And here's the log when trying to connect from a web browser:

2017/01/20 10:31:06.724 kid1| 5,3| comm.cc(553) commSetConnTimeout: 
local=10.215.144.91:57753 remote=10.215.144.66:443 FD 14 flags=1 timeout 30
2017/01/20 10:31:06.724 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 14, type=1, 
handler=1, client_data=0x80cb86e0, timeout=0
2017/01/20 10:31:06.724 kid1| 93,5| AsyncJob.cc(152) callEnd: 
Ssl::PeerConnector status out: [ FD 14 job16]
2017/01/20 10:31:06.724 kid1| 93,5| AsyncCallQueue.cc(57) fireNext: leaving 
AsyncJob::start()
2017/01/20 10:31:06.724 kid1| 83,5| bio.cc(118) read: FD 14 read 0 <= 7
2017/01/20 10:31:06.724 kid1| Error negotiating SSL on FD 14: 
error::lib(0):func(0):reason(0) (5/0/0)
2017/01/20 10:31:06.724 kid1| TCP connection to 10.215.144.66/443 failed
2017/01/20 10:31:06.724 kid1| 5,5| comm.cc(1038) comm_remove_close_handler: 
comm_remove_close_handler: FD 14, AsyncCall=0x80cd0ff8*2
2017/01/20 10:31:06.724 kid1| 9,5| AsyncCall.cc(56) cancel: will not call 
Ssl::PeerConnector::commCloseHandler [call117] because comm_remove_close_handler
2017/01/20 10:31:06.724 kid1| 17,4| AsyncCall.cc(93) ScheduleCall: 
PeerConnector.cc(742) will call FwdState::ConnectedToPeer(0x80cae868, 
local=10.215.144.91:57753 remote=10.215.144.66:443 FD 14 flags=1, 
0x80cd0ed0/0x80cd0ed0) [call115]
2017/01/20 10:31:06.724 kid1| 93,5| AsyncJob.cc(137) callEnd: 
Ssl::PeerConnector::negotiateSsl() ends job [ FD 14 job16]
2017/01/20 10:31:06.724 kid1| 83,5| PeerConnector.cc(58) ~PeerConnector: Peer 
connector 0x80cb86e0 gone
2017/01/20 10:31:06.724 kid1| 93,5| AsyncJob.cc(40) ~AsyncJob: AsyncJob 
destructed, this=0x80cb8704 type=Ssl::PeerConnector [job16]
2017/01/20 10:31:06.725 kid1| 17,4| AsyncCallQueue.cc(55) fireNext: entering 
FwdState::ConnectedToPeer(0x80cae868, local=10.215.144.91:57753 
remote=10.215.144.66:443 FD 14 flags=1, 0x80cd0ed0/0x80cd0ed0)
2017/01/20 10:31:06.725 kid1| 17,4| AsyncCall.cc(38) make: make call 
FwdState::ConnectedToPeer [call115]
2017/01/20 10:31:06.725 kid1| 17,3| FwdState.cc(415) fail: 
ERR_SECURE_CONNECT_FAIL "Service Unavailable"

I'm not getting any useful debug information, at least none that I can 
understand.

Maybe I should rebuild Squid?

# squid -v
Squid Cache: Version 3.5.14
Service Name: squid
configure options:  '--prefix=/usr' '--build=i686-pc-linux-gnu' 
'--host=i686-pc-linux-gnu' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--datadir=/usr/share' '--sysconfdir=/etc' 
'--localstatedir=/var/lib' '--disable-dependency-tracking' 
'--disable-silent-rules' '--libdir=/usr/lib' '--sysconfdir=/etc/squid' 
'--libexecdir=/usr/libexec/squid' '--localstatedir=/var' 
'--with-pidfile=/run/squid.pid' '--datadir=/usr/share/squid' 
'--with-logdir=/var/log/squid' '--with-default-user=squid' 
'--enable-removal-policies=lru,heap' '--enable-storeio=aufs,diskd,rock,ufs' 
'--enable-disk-io' 
'--enable-auth-basic=MSNT-multi-domain,NCSA,POP3,getpwnam,SMB,LDAP,PAM,RADIUS' 
'--enable-auth-digest=file,LDAP,eDirectory' '--enable-auth-ntlm=smb_lm' 
'--enable-auth-negotiate=kerberos,wrapper' 
'--enable-external-acl-helpers=file_userip,session,unix_group,wbinfo_group,LDAP_group,eDirectory_userip,kerberos_ldap_group'
 '--enable-log-daemon-helpers' '--enable-url-rewrite-helpers' 
'--enable-cache-digests' '--enable-delay-pools' '--enable-eui' '--enable-icmp' 
'--enable-follow-x-forwarded-for' '--with-large-files' 
'--disable-strict-error-checking' '--disable-arch-native' 
'--with-ltdl-includedir=/usr/include' '--with-ltdl-libdir=/usr/lib' 
'--with-libcap' '--enable-ipv6' '--disable-snmp' '--with-openssl' 
'--with-nettle' '--with-gnutls' '--enable-ssl-crtd' '--disable-ecap' 
'--disable-esi' '--enable-htcp' '--enable-wccp' '--enable-wccpv2' 
'--enable-linux-netfilter' '--with-mit-krb5' '--without-heimdal-krb5' 
'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu' 
'CC=i686-pc-linux-gnu-gcc' 'CFLAGS=-O2 -march=i686 -pipe' 'LDFLAGS=-Wl,-O1 
-Wl,--as-needed' 'CXXFLAGS=-O2 -march=i686 -pipe' 
'PKG_CONFIG_PATH=/usr/lib/pkgconfig'

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-22 Thread Vieri




- Original Message -
From: Amos Jeffries 
>
> You could try with a newer Squid version since the bio.cc code might be

> making something else happen in 3.5.23. If that still fails the 4.0 beta
> has different logic and far better debug info in this area.

I tried 3.5.23 and I finally got a clear hint.
Basically, I was missing sslcafile.
My setup works now.
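
For the record, the working cache_peer line now looks roughly like this (the 
sslcafile path is a placeholder; the key addition is sslcafile pointing at 
the CA that issued the backend certificate):

cache_peer 10.215.144.66 parent 443 0 no-query originserver login=PASS ssl sslcert=/etc/ssl/squid/client.cer sslkey=/etc/ssl/squid/client_key.pem sslcafile=/etc/ssl/squid/cacert.pem front-end-https=on name=httpsServer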

Thanks

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-24 Thread Vieri
[...] '--enable-htcp' '--enable-wccp' '--enable-wccpv2' 
'--enable-linux-netfilter' '--with-mit-krb5' '--without-heimdal-krb5' 
'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu' 
'CC=i686-pc-linux-gnu-gcc' 'CFLAGS=-O2 -march=i686 -pipe' 'LDFLAGS=-Wl,-O1 
-Wl,--as-needed' 'CXXFLAGS=-O2 -march=i686 -pipe' 
'PKG_CONFIG_PATH=/usr/lib/pkgconfig'

# openssl version
OpenSSL 1.0.2j  26 Sep 2016

Unfortunately, Squid's or OpenSSL's log message isn't too informative, even in 
Squid 4.
Also, I'm not sure why the SSL version isn't picked up (NONE/0.0) but I don't 
think it changes anything.

What else can I try?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-24 Thread Vieri




- Original Message -
From: Alex Rousskov 
>
> The peer at 10.215.144.21:443 accepted Squid connection and then closed

> it, probably before sending anything to Squid

Thanks Alex.

I was lucky enough to try the following options in cache_peer:
ssloptions=NO_SSLv3,NO_SSLv2,NO_TLSv1_2,NO_TLSv1_1

This solves the issue. I understand it forces using TLS 1.0. In fact, the OWA 
origin server is a Windows server 2003 and only supports SSLv{2,3} and TLS 1.0.

It seems that Squid delegates SSL to OpenSSL and it's really too bad the latter 
can't be a little bit more verbose. I know this isn't the right list for this 
but couldn't OpenSSL simply have logged something regarding "unsupported 
TLS/SSL versions"? I'm only supposing that without the ssloptions I posted 
above, openssl will try TLS 1.2 and silently fail if that doesn't succeed.

Regardless, it all seems to be working now, even with Squid 3.5.14.

Thanks again,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-26 Thread Vieri
[...] NegotiationHistory.cc(83) retrieveNegotiatedInfo: SSL connection info 
on FD 18 SSL version NONE/0.0 negotiated cipher 
2017/01/24 17:20:28.997 kid1| Error negotiating SSL on FD 18: 
error::lib(0):func(0):reason(0) (5/0/0)
2017/01/24 17:20:28.997 kid1| TCP connection to 10.215.144.21/443 failed


However, I haven't found any hint of what the client side (the cache_peer 
connection) actually offered.

Maybe if Squid gets an SSL negotiation error with no apparent reason, it 
could retry the connection with explicit protocol versions, just like in my 
cURL and openssl binary examples above.


I used the latest Squid 4 beta by the way.

I would have understood the reason for the connection failure much earlier if 
Squid/OpenSSL had logged what they were actually offering to the server.

Anyway, it's not a big deal now that I know what to do if this kind of 
connection issue comes back up. It could still be useful to others if the 
logging were a tad more verbose, or if Squid could retry connections while 
explicitly specifying protocols (and logging them).

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-01-27 Thread Vieri




- Original Message -
From: Alex Rousskov 

>> It's interesting to note that the following actually DOES give more 
>> information (unsupported 

>> protocol):>
> * If the server sent nothing, then Curl gave you potentially incorrect
> information (i.e., Curl is just _guessing_ what went wrong).


I never tried telling Squid to use TLS 1.1 ONLY, so I never got to see 
Squid's log when using that protocol. I suppose I would have seen the same 
thing in Squid as I saw with cURL.
So I'm sure Squid would log useful information for the sys admin, but... (see 
below).

>> Maybe if Squid gets an SSL negotiation error with no apparent reason
>> then it might need to retry connecting by being more explicit, just
>> like in my cURL and openssl binary examples above.
>
> Sorry, I do not know what "retry connecting by being more explicit"
> means. AFAICT, neither Curl nor s_client tried reconnecting in your
> examples. Also, an appropriate default for a command-line client is
> often a bad default for a proxy. It is complicated.


Let me rephrase my point, but please keep in mind that I have no idea how 
Squid actually behaves. Simply put, when Squid tries to connect for the first 
time, it will probably (I'm guessing here) try the most secure protocol known 
today (ie. TLS 1.2), or let OpenSSL decide by default, which is probably the 
same. In my case, the server replies with nothing. That would be like running:

# curl -k -v https://10.215.144.21
or
# openssl s_client -connect 10.215.144.21:443

They give me the same information as Squid's log... almost nothing.

So my point is: if that first connection fails and gives me nothing for TLS 
1.2 (or whatever the default is), two things can have happened: either the 
remote site is failing, or it doesn't support the protocol. Why not "try 
again", but this time being more specific? It would be like doing something 
like this:

# openssl s_client -connect 10.215.144.21:443 || openssl s_client -connect 
10.215.144.21:443 -tls1_1 || openssl s_client -connect 10.215.144.21:443 -tls1
 

Of course, this shouldn't be done on each and every connection attempt, 
because it would probably cause performance issues. If Squid successfully 
connects with TLS 1.0, it could "remember" that for later connections to the 
same peer. It could also forget it after a sensible timeout, in case the 
remote peer starts supporting a safer protocol.

> Agreed in general, but the devil is in the details. Improving this is
> difficult, and nobody is working on it at the moment AFAIK.


I can imagine it must be difficult...


Instead of improving the source code, maybe there could be a FAQ or some doc 
on "squid error negotiating SSL" describing what to try when the error 
message is a mere "handshake failure". In the end, it's as simple as setting 
ssloptions correctly (in my case, NO_SSLv3,NO_SSLv2,NO_TLSv1_2,NO_TLSv1_1). I 
know there could be many other reasons for such a failure, but at least that 
would be a good starting point.


Or even better... if Squid detects an SSL handshake failure with no extra info 
like in my case, can't it simply log an extra string that would look something 
like "Failed to negotiate SSL for unknown reason. Try setting ssloptions 
(cache_peer) or options (https_port) with a combination of NO_SSLv2 NO_SSLv3 
NO_TLSv1 NO_TLSv1_1 NO_TLSv1_2. Find out which SSL protocol is supported by the 
remote peer. If the connection still fails then you will need to analyze 
traffic with the peer to find out the reason."

In my case, that would have been enough info in Squid's log to fix the issue.

Thanks again.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-02-02 Thread Vieri




- Original Message -
From: Amos Jeffries 
>
> Reason #1 is that the TLS protocol is a security protocol for securing a
> single 'hop' (just one TCP connection). So ideally TLS details would not
> be remembered at all, it's a dangerous thing in security to remember
> details in the middleware.
>
> Reason #2 is that Squid has passed on the 'terminate' signal to the
> client (curl).
> 
> As far as Squid is concerned, there is no "again" connection. There is a

> connection, which fails. The end.
>

>> # openssl s_client -connect 10.215.144.21:443 || openssl s_client
>> -connect 10.215.144.21:443 -tls1_1 || openssl s_client -connect
>> 10.215.144.21:443 -tls1> 
> Which brings us to reason #3; downgrade attacks.

> You may have heard of the POODLE attack.
>
> Squid (mostly) avoids the whole class of vulnerabilities by leaving the
> fallback decisions to the client whenever it can.


Thank you very much for explaining all this. It's quite clear now.
There's just one little thing that might be useful in the log, though.
I might be wrong, or maybe everyone already knows what to try when they get a 
non-informative SSL handshake error, but it would have helped me to get a 
"hint" from Squid telling me to try fiddling with the ssloptions and options 
flags. I realize now it's great that Squid follows the secure logic of "There 
is a connection, which fails. The end.", but whenever that happens (and the 
only info is "handshake error"), wouldn't it be safe to just print a hint 
line in the server's log?


Anyway, as I said before, I know what to do from now on so it's not a big deal. 
;-)

Thanks again,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] renegotiation

2017-02-02 Thread Vieri
Hi,

I'm running Squid 4 beta.

# squid -v
Squid Cache: Version 4.0.17-20170122-r14968

I tested the following where Squid is listening on port 443 in accel mode.

# echo "R" | openssl s_client -connect 192.168.101.2:443 2>&1 3>&1 | grep 
RENEGOTIATING
RENEGOTIATING

How can I disable client renegotiation?

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] renegotiation

2017-02-02 Thread Vieri




- Original Message -
From: Amos Jeffries 
> Renegotiating to an insecure version or cipher set is an issue to be
> fixed by configuring tls-min-version=1.Y and tls-options= disabling
> unwanted ciphers etc.
> 
> The potential DoS related to renegotiation is now prevented by rate
> limiting.
> 
> The current generation of OpenSSL libraries (1.0+) all contain built-in
> protection from older forms of renegotiate that had other CVE issues.
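
So, applied to my accel listener, I suppose it boils down to something like 
this (a sketch only, using the Squid-4 option spellings suggested above; the 
cert path is from my own setup):

https_port 192.168.101.2:443 accel tls-cert=/etc/ssl/squid/proxyserver.pem tls-min-version=1.2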


Thanks again, Amos!
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] choose TLS version

2017-02-02 Thread Vieri
Hi,

Are the following two lines equivalent?

https_port ... options=NO_SSLv3,NO_SSLv2,NO_TLSv1_1,NO_TLSv1

https_port ... tls-min-version=1.2

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Cannot access https site

2017-05-15 Thread Vieri
Hi,



My goal is to set up Squid so it can act as a transparent proxy for local 
clients browsing the web. It should "deny all" except traffic to the 
destination domains included in an ACL file.

This is my squid config:

http_port 3129 tproxy
https_port 3130 tproxy ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range

acl intercepted myportname 3129
acl interceptedssl myportname 3130

acl allowed_domains dstdomain "/usr/local/share/proxy-settings/allowed.domains"

http_access deny intercepted !localnet
http_access deny interceptedssl !localnet
http_access deny !allowed_domains
http_access allow localnet

sslcrtd_program /usr/libexec/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 16MB
sslcrtd_children 10
ssl_bump stare all
ssl_bump bump all
sslproxy_cert_error allow all
always_direct allow all

The ACL file allowed.domains contains:
.squid-cache.org
.stackexchange.com

When a client in localnet tries to access http://www.squid-cache.org, 
everything works fine, as expected.

However, when the same client tries to access https://stackexchange.com, the 
first SQUID error page says that access is denied to https://151.101.1.69/* 
(that's one of stackexchange's IP addresses).
How can I avoid this?

If I add 151.101.1.69 to allowed.domains I get a SQUID SSL handshake error page 
with https://*.stackexchange.com/* (bad write retry).

What am I doing wrong?

Also, would I have performance issues if the "allowed.domains" ACL file becomes 
very big over time?
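
Reading the wiki's SslBump examples, I suspect the fix is to peek at step 1 
so that Squid learns the TLS client's SNI before http_access evaluates 
dstdomain, instead of matching on the raw server IP. A sketch of what I'd try 
in place of my current ssl_bump lines:

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all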

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Cannot access https site

2017-05-16 Thread Vieri
[...]
http_access deny interceptedsslnormal !localnet
http_access deny interceptednormal !localnet
http_access allow CONNECT SSL_ports
http_access deny !allowed_domains
cache_mgr i...@mydomain.org
email_err_data on
error_directory /usr/share/squid/errors/ORG
append_domain .mydomain.org
http_access allow localnet
sslcrtd_program /usr/libexec/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 16MB
sslcrtd_children 10
ssl_bump stare all
ssl_bump bump all
sslproxy_cert_error allow all
always_direct allow all
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service squidclamav respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access squidclamav allow all
include /etc/squid/squid.custom.common
include /etc/squid/squid.custom.hide
cache_dir diskd /var/cache/squid 100 16 256

# grep -v "^#" squid.custom.hide | grep -v "^$"
httpd_suppress_version_string on
dns_v4_first on
via off
forwarded_for off
request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Cache-Control allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Title allow all
request_header_access Connection allow all
request_header_access Proxy-Connection allow all
request_header_access User-Agent allow all
request_header_access Cookie allow all
request_header_access All deny all

So this setup is a mixed explicit/transparent proxy. Right now I'm just 
trying to focus on the transparent part.
The goal is to allow http/https traffic to allowed_domains only, and to force 
content analysis via ICAP (clamav) of both http and https content.

The above config now seems to work, and I can only access sites listed in 
allowed_domains. I just hope I've got it all sorted out.

BTW I've seen the example at 
http://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit which 
suggests using:

acl step1 at_step SslBump1
ssl_bump peek step1

Should I be using that instead of "ssl_bump stare all"?

Which "other configuration aspects are wrong", as you say?

Are you referring to "sslproxy_cert_error allow all" or are there more?

# squid -version
Squid Cache: Version 3.5.14
Service Name: squid
configure options:  '--prefix=/usr' '--build=i686-pc-linux-gnu' 
'--host=i686-pc-linux-gnu' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--datadir=/usr/share' '--sysconfdir=/etc' 
'--localstatedir=/var/lib' '--disable-dependency-tracking' 
'--disable-silent-rules' '--libdir=/usr/lib' '--sysconfdir=/etc/squid' 
'--libexecdir=/usr/libexec/squid' '--localstatedir=/var' 
'--with-pidfile=/run/squid.pid' '--datadir=/usr/share/squid' 
'--with-logdir=/var/log/squid' '--with-default-user=squid' 
'--enable-removal-policies=lru,heap' '--enable-storeio=aufs,diskd,rock,ufs' 
'--enable-disk-io' 
'--enable-auth-basic=MSNT-multi-domain,NCSA,POP3,getpwnam,SMB,LDAP,PAM,RADIUS' 
'--enable-auth-digest=file,LDAP,eDirectory' '--enable-auth-ntlm=smb_lm' 
'--enable-auth-negotiate=kerberos,wrapper' 
'--enable-external-acl-helpers=file_userip,session,unix_group,wbinfo_group,LDAP_group,eDirectory_userip,kerberos_ldap_group'
 '--enable-log-daemon-helpers' '--enable-url-rewrite-helpers' 
'--enable-cache-digests' '--enable-delay-pools' '--enable-eui' '--enable-icmp' 
'--enable-follow-x-forwarded-for' '--with-large-files' 
'--disable-strict-error-checking' '--disable-arch-native' 
'--with-ltdl-includedir=/usr/include' '--with-ltdl-libdir=/usr/lib' 
'--with-libcap' '--enable-ipv6' '--disable-snmp' '--with-openssl' 
'--with-nettle' '--with-gnutls' '--enable-ssl-crtd' '--disable-ecap' 
'--disable-esi' '--enable-htcp' '--enable-wccp' '--enable-wccpv2' 
'--enable-linux-netfilter' '--with-mit-krb5' '--without-heimdal-krb5' 
'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu' 
'CC=i686-pc-linux-gnu-gcc' 'CFLAGS=-O2 -march=i686 -pipe' 'LDFLAGS=-Wl,-O1 
-Wl,--as-needed' 'CXXFLAGS=-O2 -march=i686 -pipe' 
'PKG_CONFIG_PATH=/usr/lib/pkgconfig'

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid and c-icap ERR_ICAP_FAILURE

2017-05-23 Thread Vieri
Hi,

I know I have an "old" version of Squid (3.5.14), but I'd like to know if the 
issue I'm seeing has been fixed in newer versions before I upgrade.

I can't easily reproduce the failure. The Squid process uses a c-icap module to 
scan content (squidclamav).
It's all fine, in general, but at times clients get the ERR_ICAP_FAILURE Squid 
error page.
When this happens (usually when there's a lot of traffic), users can't browse 
the web and a c-icap restart on the Squid server (c-icap server and Squid on 
same proxy server) does not solve the issue.
I need to run at least "squid -k reconfigure" to make everything work again 
(restart not necessary).

Any idea why?

I'm not posting the squid log files because I couldn't find anything relevant 
(debug not enabled).

Maybe I shouldn't waste anyone's time if this is a known issue, and should 
just update to the latest version first, right?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid TPROXY issues with Google sites

2017-05-26 Thread Vieri
Hi,

I'd like to block access to Google Mail but allow it to Google Drive. I also 
need to intercept Google Drive traffic (https) and scan its content via c-icap 
modules for threats (with clamav and other tools which would block potentially 
harmful files).

I've failed so far.

I added mail.google.com to a custom file named "denied.domains" and loaded it 
as the denied_domains ACL in Squid. I know that in intercepted TLS traffic 
there are only IP addresses to go on at first, so I created the "server_name" 
ACL as seen below.
[...]
acl denied_domains dstdomain "/usr/local/share/proxy-settings/denied.domains"
http_access deny denied_domains !allowed_groups !allowed_ips
http_access deny CONNECT denied_domains !allowed_groups !allowed_ips
[...]
reply_header_access Alternate-Protocol deny all
acl AllowTroublesome ssl::server_name .google.com .gmail.com
acl DenyTroublesome ssl::server_name mail.google.com
http_access deny DenyTroublesome
ssl_bump peek all
ssl_bump splice AllowTroublesome
ssl_bump bump all

First of all, I was expecting that if a client tried to open 
https://mail.google.com, the connection would be blocked by Squid 
(DenyTroublesome ACL). It isn't. Why?

Second, I am unable to scan content since Squid is splicing all Google 
traffic. However, if I "bump AllowTroublesome", I can enter my username at 
https://accounts.google.com, but trying to get to the next step (user 
password) fails with an unreported error.

Any suggestions?

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid TPROXY issues with Google sites

2017-05-26 Thread Vieri
[...]
http_access deny interceptedsslnormal !localnet
http_access deny interceptednormal !localnet
cache_mgr i...@mydomain.org
email_err_data on
error_directory /usr/share/squid/errors/MYORG
append_domain .mydomain.org
sslcrtd_program /usr/libexec/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 16MB
sslcrtd_children 10
reply_header_access Alternate-Protocol deny all
ssl_bump stare all
ssl_bump bump all
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service squidclamav respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access squidclamav allow all
include /etc/squid/squid.custom.common
include /etc/squid/squid.custom.hide
cache_dir diskd /var/cache/squid 100 16 256
http_access allow localnet

# grep -v ^# squid.custom.common  | grep -v "^\$"
cache_mgr i...@mydomain.org
email_err_data on
error_directory /usr/share/squid/errors/MYORG

# grep -v ^# squid.custom.hide  | grep -v "^\$"
httpd_suppress_version_string on
dns_v4_first on
via off
forwarded_for off
request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access Cache-Control allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Connection allow all
request_header_access User-Agent allow all
request_header_access Cookie allow all
request_header_access All deny all

Do you require the full ACLs too?

# grep google /usr/local/share/proxy-settings/*
/usr/local/share/proxy-settings/denied.domains:play.google.com
/usr/local/share/proxy-settings/denied.domains:mail.google.com

Note that the above configuration correctly blocks access to 
https://mail.google.com.
It also allows access to https://accounts.google.com and I can enter my Google 
username. However, I cannot press "the Next button" to enter the password. I 
could try to study the web page's source code but at a first glance:
1) Google login works fine if I by-pass the Squid proxy or if I use "ssl_bump 
splice".
2) I am not denying access to any Google service except for "play" and "mail".

Not being able to press "the Next button" is what I meant by "unreported error" 
in my previous e-mail. It is easy to reproduce with my squid.conf.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid TPROXY issues with Google sites

2017-05-28 Thread Vieri
[...] I could splice accounts.google.com *ONLY*, and bump everything else 
(although I'd prefer to understand why bumping isn't "working" for this site).

I've tried this:

acl GoogleAccounts ssl::server_name accounts.google.com
#acl GoogleAccounts dstdomain accounts.google.com
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice GoogleAccounts
ssl_bump bump all

However, traffic to accounts.google.com is not spliced, it's bumped like the 
rest.

Can FQDNs be used in ACLs as in the example above even when peeking at step 1?
If I need to peek at step 2 for GoogleAccounts to splice then I take it I won't 
be able to "bump all" (the rest).
Likewise, if I need to stare at step 2, then I'll never be able to splice 
GoogleAccounts.

Please let me know if I'm totally off course.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid block by Content-Type or Content-Disposition

2017-05-29 Thread Vieri
Hi,

I'm unable to block specific file downloads in http/https traffic. For example, 
I'd like to block .cab files from being downloaded.

Here's what I have:

# grep cab /usr/local/proxy-settings/denied.filetypes
\.cab(\?.*)?$

# grep -v ^# squid.test.conf | grep -v ^$
http_access allow localhost manager
http_access deny manager
http_port 3228 tproxy
https_port 3229 tproxy ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl interceptedhttp myportname 3228
acl interceptedhttps myportname 3229
acl denied_filetypes urlpath_regex -i 
"/usr/local/proxy-settings/denied.filetypes"
acl denied_mimetypes_req req_mime_type -i application/x-cab
acl denied_mimetypes_rep rep_mime_type -i application/x-cab
http_access deny denied_mimetypes_req
http_access deny denied_mimetypes_rep
http_access deny denied_filetypes
http_access deny interceptedhttp !localnet
http_access deny interceptedhttps !localnet
sslcrtd_program /usr/libexec/squid/ssl_crtd -s /var/lib/squid/ssl_db_test -M 
16MB
sslcrtd_children 10
reply_header_access Alternate-Protocol deny all
ssl_bump stare all
ssl_bump bump all
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service squidclamav respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access squidclamav allow all
cache_dir diskd /var/cache/squid.test 100 16 256
http_access allow localnet
http_access allow localhost
http_access deny all
coredump_dir /var/cache/squid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
pid_filename /run/squid.test.pid
access_log daemon:/var/log/squid/access.test.log squid
cache_log /var/log/squid/cache.test.log
debug_options rotate=1 ALL,5

In cache.log I see:

Content-Type: application/x-cab
Content-Disposition: attachment;filename="fake.cab";filename*=UTF-8''fake.cab

BTW if I replace the following:

acl denied_mimetypes_req req_mime_type -i application/x-cab
acl denied_mimetypes_rep rep_mime_type -i application/x-cab

with

acl denied_mimetypes_req req_mime_type -i application/x-
acl denied_mimetypes_rep rep_mime_type -i application/x-

then the cab file downloads are correctly blocked. This is obviously too 
restrictive.

This must be a dumb mistake on my part.
What am I missing?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid sslbump and certificates

2017-05-29 Thread Vieri
Hi,

When a client browser gets the Squid error page as shown below, what does it 
mean?
Does it mean that Squid doesn't trust the CA mentioned below?
If I wanted to allow the connection anyway, what options would I have?


The system returned:

(71) Protocol error (TLS code: X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY)

SSL Certficate error: certificate issuer (CA) not known: /C=US/O=GeoTrust, 
Inc./OU=Domain Validated SSL/CN=Secure Site Starter DV SSL CA - G2


Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid block by Content-Type or Content-Disposition

2017-05-29 Thread Vieri

From: Amos Jeffries 
>
> 1) http_access is tested only for requests.
>
> response/reply messages are controlled through http_reply_access.


I knew it was going to be a dumb question. Thanks Amos! It works now.

I suppose it's preferable to be more specific with ACL entries such as:
^application/x-cab$
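
For anyone finding this in the archives, the working directives now look
roughly like this (a sketch based on my config above):

acl denied_mimetypes_req req_mime_type -i ^application/x-cab$
acl denied_mimetypes_rep rep_mime_type -i ^application/x-cab$
http_access deny denied_mimetypes_req
http_reply_access deny denied_mimetypes_rep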

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid sslbump and certificates

2017-05-29 Thread Vieri


From: Rafael Akchurin 
>
> This article tries to explain why it happens.
> https://docs.diladele.com/faq/squid/fix_unable_to_get_issuer_cert_locally.html#ssl-certificate-test-tool-in-web-safety-5
> 

> To fix it - better use what Yuri recommended in 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Howto-fix-X509-V-ERR-UNABLE-
> TO-GET-ISSUER-CERT-LOCALLY-Squid-error-td4682015.html

Thanks Raf. That really helped.

I successfully installed the intermediate certificate as a trusted CA 
system-wide with openssl (used 'update-ca-certificates').
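
Roughly like this (illustrative paths, assuming a Debian-style
ca-certificates layout):

cp intermediate.crt /usr/local/share/ca-certificates/intermediate.crt
update-ca-certificates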

However, I tried using the Squid config directive for intermediate certs 
instead, but failed.

This is what I did:

# wget http://somewhere/intermediate.crt -O intermediate.der
# openssl x509 -inform der -in intermediate.der -out intermediate.crt
# cat intermediate.crt >> /usr/local/share/proxy-settings/allowed.certs
In squid.conf:

sslproxy_foreign_intermediate_certs "/usr/local/share/proxy-settings/allowed.certs"
Restarted Squid but still had the same error page.

I guess I can stick to the system-wide openssl solution for now.

Thanks again,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid TPROXY issues with Google sites

2017-05-31 Thread Vieri


From: Alex Rousskov 
>
> You need to figure out why. Two common reasons are SSL-level errors and
> http_access denials. Both should be reflected in access.log and
> debugging cache.log.


I finally found out it was an http_access deny on an ACL match with url_regex.

Thanks Alex.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid sslbump and certificates

2017-05-31 Thread Vieri


From: Amos Jeffries 
>
> Which version of Squid are you using now?


I still haven't found the time to update my systems but the squid version I was 
running this on was/is 3.5.14.
I probably need to catch up for this feature to work correctly.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] failed to bump Twitter

2017-05-31 Thread Vieri
Hi,

I can't seem to bump Twitter.

Whenever a client tries to browse https://twitter.com there's a connection 
refusal error page (111).

Any clue as to what I could try?

# grep -v ^# squid.test.conf | grep -v ^$
http_access allow localhost manager
http_access deny manager
http_port 3227
http_port 3228 tproxy
https_port 3229 tproxy ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl interceptedhttp myportname 3228
acl interceptedhttps myportname 3229
http_access deny interceptedhttp !localnet
http_access deny interceptedhttps !localnet
sslcrtd_program /usr/libexec/squid/ssl_crtd -s /var/lib/squid/ssl_db_test -M 16MB
sslcrtd_children 10
reply_header_access Alternate-Protocol deny all
ssl_bump stare all
ssl_bump bump all
cache_dir diskd /var/cache/squid.test 100 16 256
http_access allow localnet
http_access allow localhost
http_access deny all
coredump_dir /var/cache/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
pid_filename /run/squid.test.pid
access_log daemon:/var/log/squid/access.test.log squid
cache_log /var/log/squid/cache.test.log
debug_options rotate=1 ALL,5

# cat /var/log/squid/access.test.log

1496266616.296    200 10.215.144.48 TAG_NONE/200 0 CONNECT 199.16.156.6:443 - ORIGINAL_DST/199.16.156.6 -
1496266616.322      2 10.215.144.48 TAG_NONE/503 3902 GET https://twitter.com/ - HIER_NONE/- text/html

# cat /var/log/squid/cache.test.log

2017/05/31 23:36:55.778 kid1| 41,5| AsyncCall.cc(38) make: make call 
logfileFlush [call140]
2017/05/31 23:36:55.778 kid1| 41,5| AsyncCallQueue.cc(57) fireNext: leaving 
logfileFlush(0x80945048*?)
2017/05/31 23:36:56.093 kid1| 5,2| TcpAcceptor.cc(220) doAccept: New connection 
on FD 33
2017/05/31 23:36:56.093 kid1| 5,2| TcpAcceptor.cc(295) acceptNext: connection 
on local=[::]:3229 remote=[::] FD 33 flags=25
2017/05/31 23:36:56.093 kid1| 51,3| fd.cc(198) fd_open: fd_open() FD 13 HTTP 
Request
2017/05/31 23:36:56.093 kid1| 89,5| Intercept.cc(375) Lookup: address BEGIN: 
me/client= 199.16.156.6:443, destination/me= 10.215.144.48:42597
2017/05/31 23:36:56.093 kid1| 89,5| Intercept.cc(169) TproxyTransparent: 
address TPROXY: local=199.16.156.6:443 remote=10.215.144.48 FD 13 flags=17
2017/05/31 23:36:56.093 kid1| 28,4| Eui48.cc(178) lookup: id=0x80cefb18 query 
ARP table
2017/05/31 23:36:56.093 kid1| 28,4| Eui48.cc(221) lookup: id=0x80cefb18 query 
ARP on each interface (512 found)
2017/05/31 23:36:56.093 kid1| 28,4| Eui48.cc(227) lookup: id=0x80cefb18 found 
interface lo
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(227) lookup: id=0x80cefb18 found 
interface enp1s7
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(236) lookup: id=0x80cefb18 looking 
up ARP address for 10.215.144.48 on enp1s7
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(227) lookup: id=0x80cefb18 found 
interface enp1s7
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(236) lookup: id=0x80cefb18 looking 
up ARP address for 10.215.144.48 on enp1s7
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(227) lookup: id=0x80cefb18 found 
interface enp1s8
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(236) lookup: id=0x80cefb18 looking 
up ARP address for 10.215.144.48 on enp1s8
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(227) lookup: id=0x80cefb18 found 
interface enp2s0f0
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(236) lookup: id=0x80cefb18 looking 
up ARP address for 10.215.144.48 on enp2s0f0
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(227) lookup: id=0x80cefb18 found 
interface enp2s0f1
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(236) lookup: id=0x80cefb18 looking 
up ARP address for 10.215.144.48 on enp2s0f1
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(227) lookup: id=0x80cefb18 found 
interface enp0s8
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(236) lookup: id=0x80cefb18 looking 
up ARP address for 10.215.144.48 on enp0s8
2017/05/31 23:36:56.094 kid1| 28,4| Eui48.cc(279) lookup: id=0x80cefb18 got 
address 64:31:50:17:9a:fd on enp0s8
2017/05/31 23:36:56.094 kid1| 5,5| TcpAcceptor.cc(287) acceptOne: Listener: 
local=[::]:3229 remote=[::] FD 33 flags=25 accepted new connection 
local=199.16.156.6:443 remote=10.215.144.48 FD 13 flags=17 handler 
Subscription: 0x80cef9d8*1
2017/05/31 23:36:56.094 kid1| 5,5| AsyncCall.cc(26) AsyncCall: The AsyncCall 
httpsAccept constructed, this=0x80d66d88 [call141]
2017/05/31 23:36:56.094 kid1| 5,5| AsyncCall.cc(93) ScheduleCall: 
TcpAcceptor.cc(317) will call httpsAccept(local=199.16.156.6:443 
remote=10.215.144.48 FD 13 flags=17, MXID_2) [call141]
2017/05/31 23:36:56.094 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 33, type=1, 
handler=1, client

Re: [squid-users] squid sslbump and certificates

2017-06-01 Thread Vieri


From: Eliezer Croitoru 
>
> What OS?


Linux 4.8.17-hardened
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] failed to bump Twitter

2017-06-01 Thread Vieri


From: Amos Jeffries 
>
> Squid is simply not able to make outbound TCP connections to twitter.com
> (which according to your OS is hosted by 199.16.156.6).


It seems to be a DNS issue.
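
For anyone else hitting this, an illustrative check is to resolve the name
from the proxy box itself, with whatever resolver Squid is configured to use
(dns_nameservers in squid.conf, or /etc/resolv.conf by default):

host twitter.com
dig twitter.com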

Thanks

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid ssl bump and Adobe Connect

2017-06-05 Thread Vieri
376) mayStartSwapOut: 
already allowed
2017/06/05 14:18:06.175 kid1| 20,5| store_swapout.cc(47) storeSwapOutStart: 
storeSwapOutStart: Begin SwapOut 
'https://emeacmsd.acms.com/common/intro/test.swf' to dirno -1, fileno 
2017/06/05 14:18:06.175 kid1| 73,3| HttpRequest.cc(689) storeId: sent back 
canonicalUrl:https://emeacmsd.acms.com/common/intro/test.swf
2017/06/05 14:18:06.175 kid1| 20,3| store_swapmeta.cc(54) storeSwapMetaBuild: 
storeSwapMetaBuild URL: https://emeacmsd.acms.com/common/intro/test.swf
2017/06/05 14:18:06.175 kid1| 20,2| store_io.cc(42) storeCreate: storeCreate: 
Selected dir 0 for e:=w1p2DV/0x80d8d2e8*4
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStrategy.cc(100) create: fileno 
002D
2017/06/05 14:18:06.175 kid1| 79,3| DiskIO/DiskDaemon/DiskdFile.cc(40) 
DiskdFile: DiskdFile::DiskdFile: /var/cache/squid.test/00/00/002D
2017/06/05 14:18:06.175 kid1| 79,3| DiskIO/DiskDaemon/DiskdFile.cc(86) create: 
DiskdFile::create: 0x80e08bf0 creating for 0x80e0585c
2017/06/05 14:18:06.175 kid1| 47,4| ufs/UFSSwapDir.cc(1206) replacementAdd: 
added node 0x80d8d2e8 to dir 0
2017/06/05 14:18:06.175 kid1| 20,3| store.cc(484) lock: storeSwapOutStart 
locked key B04729E56EF0FD97349B176C475C4F1B e:=w1p2DV/0x80d8d2e8*5
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(161) write: 
UFSStoreState::write: dirn 0, fileno 002D
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(469) queueWrite: 
0x80e05828 UFSStoreState::queueWrite: queueing write of size 125
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(184) doWrite: 
0x80e05828 UFSStoreState::doWrite
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(219) doWrite: 
0x80e05828 calling theFile->write(125)
2017/06/05 14:18:06.175 kid1| 79,3| DiskIO/DiskDaemon/DiskdFile.cc(278) write: 
DiskdFile::write: this 0x80e08bf0, buf 0x80e05480, off 0, len 125
2017/06/05 14:18:06.175 kid1| 20,3| store_swapout.cc(132) doPages: 
storeSwapOut: swap_buf_len = 4096
2017/06/05 14:18:06.175 kid1| 20,3| store_swapout.cc(136) doPages: 
storeSwapOut: swapping out 4096 bytes from 0
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(161) write: 
UFSStoreState::write: dirn 0, fileno 002D
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(469) queueWrite: 
0x80e05828 UFSStoreState::queueWrite: queueing write of size 4096
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(184) doWrite: 
0x80e05828 UFSStoreState::doWrite
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(219) doWrite: 
0x80e05828 calling theFile->write(4096)
2017/06/05 14:18:06.175 kid1| 79,3| DiskIO/DiskDaemon/DiskdFile.cc(278) write: 
DiskdFile::write: this 0x80e08bf0, buf 0x80daa56c, off -1, len 4096
2017/06/05 14:18:06.175 kid1| 20,3| store_swapout.cc(132) doPages: 
storeSwapOut: swap_buf_len = 4096
2017/06/05 14:18:06.175 kid1| 20,3| store_swapout.cc(136) doPages: 
storeSwapOut: swapping out 4096 bytes from 4096
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(161) write: 
UFSStoreState::write: dirn 0, fileno 002D
2017/06/05 14:18:06.175 kid1| 79,3| ufs/UFSStoreState.cc(469) queueWrite: 
0x80e05828 UFSStoreState::queueWrite: queueing write of size 4096
2017/06/05 14:18:06.176 kid1| 79,3| ufs/UFSStoreState.cc(184) doWrite: 
0x80e05828 UFSStoreState::doWrite
2017/06/05 14:18:06.176 kid1| 79,3| ufs/UFSStoreState.cc(219) doWrite: 
0x80e05828 calling theFile->write(4096)
2017/06/05 14:18:06.176 kid1| 79,3| DiskIO/DiskDaemon/DiskdFile.cc(278) write: 
DiskdFile::write: this 0x80e08bf0, buf 0x80da954c, off -1, len 4096
2017/06/05 14:18:06.176 kid1| 90,3| store_client.cc(732) invokeHandlers: 
InvokeHandlers: B04729E56EF0FD97349B176C475C4F1B
2017/06/05 14:18:06.176 kid1| 90,3| store_client.cc(738) invokeHandlers: 
StoreEntry::InvokeHandlers: checking client #0
2017/06/05 14:18:06.176 kid1| 11,3| http.cc(1054) persistentConnStatus: 
local=10.215.145.187:60291 remote=54.247.125.57:443 FD 15 flags=25 eof=0
2017/06/05 14:18:06.176 kid1| 11,5| http.cc(1074) persistentConnStatus: 
persistentConnStatus: content_length=492826
2017/06/05 14:18:06.176 kid1| 11,5| http.cc(1078) persistentConnStatus: 
persistentConnStatus: clen=492826
2017/06/05 14:18:06.176 kid1| 11,5| http.cc(1091) persistentConnStatus: 
persistentConnStatus: body_bytes_read=8192 content_length=492826
2017/06/05 14:18:06.176 kid1| 11,5| http.cc(1428) processReplyBody: 
processReplyBody: INCOMPLETE_MSG from local=10.215.145.187:60291 
remote=54.247.125.57:443 FD 15 flags=25

Any ideas?

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid ssl bump and Adobe Connect

2017-06-05 Thread Vieri

From: Alex Rousskov 
>

>> 1496665088.143  6 10.215.145.187 TAG_NONE/400 4428 NONE
>> error:invalid-request - HIER_NONE/- text/html
>
> I recommend finding the place in the debugging cache.log where Squid
> generates the above error response and then going backwards to find the
> cause.

OK Alex, got it.
In the meantime, I searched for the events around the time this happened.
As a side question, I'd like to know whether I can change the timestamp format 
in cache.log so that it prints unix time, as access.log does.

In any case, here's the relevant part:

[Mon Jun  5 14:18:08 2017].143  6 10.215.145.187 TAG_NONE/400 4428 NONE 
error:invalid-request - HIER_NONE/- text/html

cache.log within 14:18:08:

2017/06/05 14:18:08 kid1| hold write on SSL connection on FD 30
2017/06/05 14:18:08.000 kid1| 28,3| Checklist.cc(70) preCheck: 0x80d21df8 
checking slow rules
2017/06/05 14:18:08.000 kid1| 28,5| Acl.cc(138) matches: checking (ssl_bump 
rules)
2017/06/05 14:18:08.000 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/4' is banned
2017/06/05 14:18:08.000 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/5' is not banned
2017/06/05 14:18:08.000 kid1| 28,5| Acl.cc(138) matches: checking (ssl_bump 
rule)
2017/06/05 14:18:08.000 kid1| 28,5| Acl.cc(138) matches: checking all
2017/06/05 14:18:08.000 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: 
'10.215.145.187' found
2017/06/05 14:18:08.000 kid1| 28,3| Acl.cc(158) matches: checked: all = 1
2017/06/05 14:18:08.000 kid1| 28,3| Acl.cc(158) matches: checked: (ssl_bump 
rule) = 1
2017/06/05 14:18:08.000 kid1| 28,3| Acl.cc(158) matches: checked: (ssl_bump 
rules) = 1
2017/06/05 14:18:08.000 kid1| 28,3| Checklist.cc(63) markFinished: 0x80d21df8 
answer ALLOWED for match
2017/06/05 14:18:08.000 kid1| 28,3| Checklist.cc(163) checkCallback: 
ACLChecklist::checkCallback: 0x80d21df8 answer=ALLOWED
2017/06/05 14:18:08.000 kid1| 28,4| FilledChecklist.cc(66) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x80d21df8
2017/06/05 14:18:08.000 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x80d21df8
2017/06/05 14:18:08.000 kid1| 83,5| PeerConnector.cc(418) 
checkForPeekAndSpliceMatched: Will check for peek and splice on FD 30
2017/06/05 14:18:08.000 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 30, type=2, 
handler=1, client_data=0x80d36e50, timeout=0
2017/06/05 14:18:08.000 kid1| 83,5| PeerConnector.cc(436) 
checkForPeekAndSpliceMatched: Retry the fwdNegotiateSSL on FD 30
2017/06/05 14:18:08.000 kid1| 83,5| bio.cc(95) write: FD 30 wrote 150 <= 150
2017/06/05 14:18:08.000 kid1| 83,5| bio.cc(576) squid_bio_ctrl: 0x80e60900 
11(0, 0)
2017/06/05 14:18:08.000 kid1| 83,5| bio.cc(118) read: FD 30 read -1 <= 5
2017/06/05 14:18:08.000 kid1| 83,5| bio.cc(123) read: error: 11 ignored: 1
2017/06/05 14:18:08.000 kid1| 5,3| comm.cc(553) commSetConnTimeout: 
local=10.215.145.187:39368 remote=46.51.187.18:443 FD 30 flags=25 timeout 59
2017/06/05 14:18:08.000 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 30, type=1, 
handler=1, client_data=0x80d36e50, timeout=0
2017/06/05 14:18:08.000 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 30, type=2, 
handler=0, client_data=0, timeout=0
2017/06/05 14:18:08.096 kid1| 83,5| bio.cc(118) read: FD 30 read 5 <= 5
2017/06/05 14:18:08.096 kid1| 83,5| bio.cc(118) read: FD 30 read 1 <= 1
2017/06/05 14:18:08.096 kid1| 83,5| bio.cc(118) read: FD 30 read 5 <= 5
2017/06/05 14:18:08.096 kid1| 83,5| bio.cc(118) read: FD 30 read 64 <= 64
2017/06/05 14:18:08.096 kid1| 83,5| bio.cc(576) squid_bio_ctrl: 0x80e60900 7(0, 
0x80e6f868)
2017/06/05 14:18:08.096 kid1| 83,5| PeerConnector.cc(307) 
serverCertificateVerified: HTTPS server CN: *.acms.com bumped: 
local=10.215.145.187:39368 remote=46.51.187.18:443 FD 30 flags=25
2017/06/05 14:18:08.096 kid1| 5,5| comm.cc(1038) comm_remove_close_handler: 
comm_remove_close_handler: FD 30, AsyncCall=0x80e60820*2
2017/06/05 14:18:08.096 kid1| 9,5| AsyncCall.cc(56) cancel: will not call 
Ssl::PeerConnector::commCloseHandler [call2554] because 
comm_remove_close_handler
2017/06/05 14:18:08.096 kid1| 17,4| AsyncCall.cc(93) ScheduleCall: 
PeerConnector.cc(742) will call FwdState::ConnectedToPeer(0x80d4b730, 
local=10.215.145.187:39368 remote=46.51.187.18:443 FD 30 flags=25, 0/0) 
[call2552]
2017/06/05 14:18:08.096 kid1| 93,5| AsyncJob.cc(137) callEnd: 
Ssl::PeerConnector::negotiateSsl() ends job [ FD 30 job59]
2017/06/05 14:18:08.096 kid1| 83,5| PeerConnector.cc(58) ~PeerConnector: Peer 
connector 0x80d36e50 gone
2017/06/05 14:18:08.096 kid1| 93,5| AsyncJob.cc(40) ~AsyncJob: AsyncJob 
destructed, this=0x80d36e74 type=Ssl::PeerConnector [job59]
2017/06/05 14:18:08.096 kid1| 17,4| AsyncCallQueue.cc(55) fireNext: entering 
FwdState::ConnectedToPeer(0x80d4b730, local=10.215.145.187:39368 
remote=46.51.187.18:443 FD 30 flags=25, 0/0)
2017/06/05 14:18:08.096 kid1| 17,4| AsyncCall.cc(38) make: make call 
FwdState::ConnectedToPeer [call2552]
2017/06/05 14:18:08.096 kid1| 17

[squid-users] ACLs allow/deny logic

2017-06-26 Thread Vieri
 kid1| 90,3| store_client.cc(297) storeClientCopy2: 
storeClientCopy2: CCEA5776796B6352934736B5664CDAEA
2017/06/26 09:51:24.483 kid1| 33,5| store_client.cc(329) doCopy: 
store_client::doCopy: co: 0, hi: 3960
2017/06/26 09:51:24.483 kid1| 90,3| store_client.cc(433) scheduleMemRead: 
store_client::doCopy: Copying normal from memory
2017/06/26 09:51:24.483 kid1| 88,5| client_side_reply.cc(2154) sendMoreData: 
clientReplyContext::sendMoreData: http://149.154.165.120/api, 3960 bytes (3960 
new bytes)
2017/06/26 09:51:24.483 kid1| 88,5| client_side_reply.cc(2158) sendMoreData: 
clientReplyContext::sendMoreData:local=149.154.165.120:80 remote=10.215.144.237 
FD 56 flags=17 'http://149.154.165.120/api' out.offset=0
2017/06/26 09:51:24.483 kid1| 88,2| client_side_reply.cc(2001) 
processReplyAccessResult: The reply for POST http://149.154.165.120/api is 
ALLOWED, because it matched allowed_restricted1_ips
2017/06/26 09:51:24.483 kid1| 20,3| store.cc(484) lock: 
ClientHttpRequest::loggingEntry locked key CCEA5776796B6352934736B5664CDAEA 
e:=XIV/0x80ba5460*3
2017/06/26 09:51:24.483 kid1| 88,3| client_side_reply.cc(2039) 
processReplyAccessResult: clientReplyContext::sendMoreData: Appending 3711 
bytes after 249 bytes of headers
2017/06/26 09:51:24.484 kid1| 87,3| clientStream.cc(162) clientStreamCallback: 
clientStreamCallback: Calling 1 with cbdata 0x8172e184 from node 0x80b74508
2017/06/26 09:51:24.484 kid1| 11,2| client_side.cc(1391) sendStartOfMessage: 
HTTP Client local=149.154.165.120:80 remote=10.215.144.237 FD 56 flags=17
2017/06/26 09:51:24.484 kid1| 11,2| client_side.cc(1392) sendStartOfMessage: 
HTTP Client REPLY:

I see two apparently contradictory log messages (contradictory to me, at 
least -- I'm still learning how to read the log):
The reply for POST http://149.154.165.120/api is DENIED, because it matched 
allowed_restricted1_ips
The reply for POST http://149.154.165.120/api is ALLOWED, because it matched 
allowed_restricted1_ips

Why is this happening?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ACLs allow/deny logic

2017-06-26 Thread Vieri

From: Amos Jeffries 
>> I'd like to allow by default and deny only according to the ACLs I define.
>>
>> Here's an example with Telegram. I'd like to deny all
>> application/octet-stream mime types in requests
>> and replies except for a set of IP addresses or domains.
>
> Er, deny is the opposite of allow. So your "example" is to demonstrate
> the _opposite_ of what you want?
>
> Not to mention that what you want is the opposite of a well-known
> Security Best-Practice. Well, your call, but when things go terribly
> wrong don't say you weren't warned.

My sentence was misleading, I suppose.
My squid.conf has the following structure (which I believe is close to the 
default for a caching http proxy):

ACL definitions

http_access deny ...
http_reply_access deny ...

http_access deny intercepted !localnet

http_access allow localnet
http_access deny all
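
Spelled out as a skeleton (ACL names here are placeholders, just to show the
ordering I mean):

acl blocked_reply_types rep_mime_type -i ^application/octet-stream$
acl allowed_dst dst "/usr/local/proxy-settings/allowed.ips"

http_reply_access deny blocked_reply_types !allowed_dst

http_access deny intercepted !localnet
http_access allow localnet
http_access deny all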

Is there anything wrong with this?

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ACLs allow/deny logic

2017-06-26 Thread Vieri
Please bear with me because I still don't quite grasp the AND logic with ACLs.

Let's consider the logic "http_access deny (if) X (and) Y (and) Z" and the 
following squid configuration section:

[squid.conf - start]
acl denied_restricted1_mimetypes_req req_mime_type -i "/usr/local/proxy-settings/denied.restricted1.mimetypes"
acl denied_restricted1_mimetypes_rep rep_mime_type -i "/usr/local/proxy-settings/denied.restricted1.mimetypes"
acl allowed_restricted1_domains dstdomain -i "/usr/local/proxy-settings/allowed.restricted1.domains"
acl allowed_restricted1_ips dst "/usr/local/proxy-settings/allowed.restricted1.ips"

http_access deny denied_restricted1_mimetypes_req !allowed_restricted1_domains !allowed_restricted1_ips
http_reply_access deny denied_restricted1_mimetypes_rep !allowed_restricted1_domains !allowed_restricted1_ips

http_access deny intercepted !localnet

http_access allow localnet

http_access deny all
[squid.conf - finish]

In particular:

http_reply_access deny (if) denied_restricted1_mimetypes_rep (and not) allowed_restricted1_domains (and not) allowed_restricted1_ips

where 

denied_restricted1_mimetypes_rep: matches mime type application/octet-stream
allowed_restricted1_domains: matches DESTINATION domain .telegram.org
allowed_restricted1_ips: matches DESTINATION IP addresses (any one of 149.154.167.91 or 149.154.165.120)

So, it should translate to something like this:

http_reply_access deny (if) (mime type is application/octet-stream) (and) 
(DESTINATION domain is NOT .telegram.org) (and) (DESTINATION IP address is NOT 
any of 149.154.167.91 or 149.154.165.120)

Correct?
If so, then I'm still struggling to understand the first message in the log:

"The reply for POST http://149.154.165.120/api is DENIED, because it matched 
allowed_restricted1_ips"

I don't think "the server's reply (application/octet-stream) should be denied" 
if it comes from one of 149.154.167.91 or 149.154.165.120.
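
Worked through for the logged request, step by step (my own illustration, to
check the AND logic):

denied_restricted1_mimetypes_rep -> true  (the reply is application/octet-stream)
allowed_restricted1_ips  -> true  (dst is 149.154.165.120)
!allowed_restricted1_ips -> false

One term on the deny line is false, so the whole line should not match, and
the reply should fall through to the remaining rules.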

Anyway, I'll try out the configuration directives you suggested and see if that 
logic applies correctly (at least to my understanding ;-) ).

Thanks for your valuable help,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid custom error pages and javascript/css url sources

2017-08-16 Thread Vieri
Hi,

I've created custom error pages with something like this in the header tag:


<link rel="stylesheet" href="http://%h/common/jquery.mobile-1.4.5.min.css">
<script src="http://%h/common/jquery-1.11.3.min.js"></script>
<script src="http://%h/common/jquery.mobile-1.4.5.min.js"></script>

The page displays fine when the client requested an http site.
However, for https sites the css and js files do not load.

What alternatives do I have? Should I always redirect with deny_info instead? 
Is there a "catch-all" for deny_info?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid custom error pages and javascript/css url sources

2017-08-16 Thread Vieri


From: Eliezer Croitoru 

>
> //%h/

It works great. Thanks Eliezer.
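
For the archives: the trick is protocol-relative URLs, so the assets inherit
http or https from the error page itself. The header block now looks roughly
like this (sketch):

<link rel="stylesheet" href="//%h/common/jquery.mobile-1.4.5.min.css">
<script src="//%h/common/jquery-1.11.3.min.js"></script>
<script src="//%h/common/jquery.mobile-1.4.5.min.js"></script>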

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid stops replying

2017-08-23 Thread Vieri
Hi,

After a long time working correctly, Squid stops working all of a sudden.

No new requests/replies show up in the logs. Complete silence.

If I issue "squid -k reconfigure" I get this message in cache.log:
Set Current Directory to /var/cache/squid

If I set "debug_options rotate=1 ALL,9" and run "squid -k reconfigure" twice 
then I get this in cache.log:

2017/08/23 07:54:32.676| 21,3| tools.cc(610) enter_suid: enter_suid: PID 17797 
taking root privileges
2017/08/23 07:54:32.676| 13,3| mem.cc(473) Report: Memory pools are 'on'; 
limit: 5.000 MB
2017/08/23 07:54:32.676| Set Current Directory to /var/cache/squid
2017/08/23 07:54:32.676| 21,3| tools.cc(543) leave_suid: leave_suid: PID 17797 
called
2017/08/23 07:54:32.676| 21,3| tools.cc(565) leave_suid: leave_suid: PID 17797 
giving up root, becoming 'squid'
2017/08/23 07:55:01.605| 21,3| tools.cc(610) enter_suid: enter_suid: PID 17927 
taking root privileges
2017/08/23 07:55:01.605| 13,3| mem.cc(473) Report: Memory pools are 'on'; 
limit: 5.000 MB
2017/08/23 07:55:01.605| Set Current Directory to /var/cache/squid
2017/08/23 07:55:01.605| 21,3| tools.cc(543) leave_suid: leave_suid: PID 17927 
called
2017/08/23 07:55:01.605| 21,3| tools.cc(565) leave_suid: leave_suid: PID 17927 
giving up root, becoming 'squid'

However, any attempt to browse the web leads to nothing new in the logs.
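
(If I understand the -k options correctly, next time I could also toggle full
debugging on the running process without editing squid.conf:

squid -k debug

and watch cache.log while a client tries to browse.)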

Finally, stopping the squid service fails. I can list the squid processes: 

# ps -ae | grep squid
 4439 ?        05:29:57 squid
 9059 ?        00:00:00 squid
 9160 ?        00:00:00 squid
 9162 ?        00:11:42 squid
 9206 ?        00:00:00 squid
 9208 ?        00:02:04 squid
 9254 ?        00:00:00 squid
 9257 ?        00:00:28 squid
 9313 ?        00:00:00 squid
 9315 ?        00:00:55 squid

I have to kill these processes in order to start squid again.
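
(Concretely, something like this -- illustrative, from memory:

pkill -9 squid

followed by starting the service again the usual way.)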

# squid -version
Squid Cache: Version 3.5.26
Service Name: squid

This binary uses OpenSSL 1.0.2k  26 Jan 2017. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html

configure options:  '--prefix=/usr' '--build=x86_64-pc-linux-gnu' 
'--host=x86_64-pc-linux-gnu' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--datadir=/usr/share' '--sysconfdir=/etc' 
'--localstatedir=/var/lib' '--disable-dependency-tracking' 
'--disable-silent-rules' '--docdir=/usr/share/doc/squid-3.5.26' 
'--htmldir=/usr/share/doc/squid-3.5.26/html' '--libdir=/usr/lib64' 
'--sysconfdir=/etc/squid' '--libexecdir=/usr/libexec/squid' 
'--localstatedir=/var' '--with-pidfile=/run/squid.pid' 
'--datadir=/usr/share/squid' '--with-logdir=/var/log/squid' 
'--with-default-user=squid' '--enable-removal-policies=lru,heap' 
'--enable-storeio=aufs,diskd,rock,ufs' '--enable-disk-io' 
'--enable-auth-basic=MSNT-multi-domain,NCSA,POP3,getpwnam,SMB,LDAP,PAM,RADIUS' 
'--enable-auth-digest=file,LDAP,eDirectory' '--enable-auth-ntlm=smb_lm' 
'--enable-auth-negotiate=kerberos,wrapper' 
'--enable-external-acl-helpers=file_userip,session,unix_group,wbinfo_group,LDAP_group,eDirectory_userip,kerberos_ldap_group'
 '--enable-log-daemon-helpers' '--enable-url-rewrite-helpers' 
'--enable-cache-digests' '--enable-delay-pools' '--enable-eui' '--enable-icmp' 
'--enable-follow-x-forwarded-for' '--with-large-files' 
'--disable-strict-error-checking' '--disable-arch-native' 
'--with-ltdl-includedir=/usr/include' '--with-ltdl-libdir=/usr/lib64' 
'--with-libcap' '--enable-ipv6' '--disable-snmp' '--with-openssl' 
'--with-nettle' '--with-gnutls' '--enable-ssl-crtd' '--disable-ecap' 
'--disable-esi' '--enable-htcp' '--enable-wccp' '--enable-wccpv2' 
'--enable-linux-netfilter' '--with-mit-krb5' '--without-heimdal-krb5' 
'build_alias=x86_64-pc-linux-gnu' 'host_alias=x86_64-pc-linux-gnu' 
'CC=x86_64-pc-linux-gnu-gcc' 'CFLAGS=-O2 -pipe' 'LDFLAGS=-Wl,-O1 
-Wl,--as-needed' 'CXXFLAGS=-O2 -pipe' 'PKG_CONFIG_PATH=/usr/lib64/pkgconfig'

What can I try if this happens again?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid stops replying

2017-08-25 Thread Vieri
0  0.0  86908  5188 ?        Ss   Aug24   0:00 /usr/sbin/squid -YC -f /etc/squid/squid.owa2.conf
squid    28109  0.0  0.2 141024 66372 ?        S    Aug24   0:34 (squid-1) -YC -f /etc/squid/squid.owa2.conf
squid    28113  0.0  0.0  19852  1716 ?        S    Aug24   0:04 (pinger)


> If you run "squid -k shutdown ; squid -k shutdown" do they all fully stop?
> (exactly that command, shutdown twice in a row)
> 
> Once Squid is fully stopped, start it again. Is the problem resolved 
> when it comes back up?


I'd have to wait for Squid to stop replying. That usually takes several days, 
maybe more than a week.
The failing squid process is currently set up with:
debug_options rotate=1 ALL,1

Should I set a different level BEFORE it "stops working", i.e. "now"?
I'm asking because it's going to take a long while to reproduce this issue, and 
I just want to make sure I'll have enough info when it happens.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

