Re: [squid-users] Squid as an education tool

2024-02-09 Thread Marcus Kool

Hi Eliezer,

I am not aware of a tool that has all the functionality you seek, so you
probably have to build it yourself.
I know that you are already familiar with ufdbGuard for Squid to block access, but you can also use ufdbGuard to grant temporary access by including a time-restricted whitelist in the configuration file
and reloading the ufdbGuard configuration.  The reload does not interrupt the web proxy or ufdbGuard itself.
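
A minimal sketch of the idea in ufdbGuard.conf (the time window, category name and file path are examples, not from a real configuration):

   time tempaccess {
      weekly mtwhf 14:00 - 15:00         # the temporary access window
   }

   category tempwhitelist {
      domainlist tempwhitelist/domains   # the temporarily allowed sites
   }

   acl {
      allSystems within tempaccess {
         pass tempwhitelist !adult any
      } else {
         pass !adult any
      }
   }

After editing the whitelist, reload the configuration and the new window takes effect without dropping client connections.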


Marcus

On 09/02/2024 03:41, ngtech1...@gmail.com wrote:

Hey Everybody,

I am just releasing the latest 6.7 RPMs and binaries while running a couple of
tests, and I was wondering if this has been done.
Looking at the proxy, in most cases it is used as a policy enforcer
rather than an education tool.
I believe education should be one of the top priorities compared to enforcing
policies.
The nature of a policy depends on the environment and the risks, but ultimately,
understanding the meaning of the policy
contributes a lot to the cooperation of the user or employee.

I have yet to see a solution like the following:
each user has a profile which, on receiving a policy block, offers
an option to temporarily allow the specific site or domain.
Also, I have not seen an implementation that allows the user to disable or
lower the policy strictness for a short period of time.

I am looking for such implementations, if they already exist, to run education
sessions with teenagers.

Thanks,
Eliezer


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] urlfilterdb.com

2024-05-30 Thread Marcus Kool

Not sure if this message was meant for the Squid mailing list but for those who 
are interested, the DNS provider had an issue with DNSSEC resigning and all is 
well now.

Marcus


On 28/05/2024 15:23, Anton Kornexl wrote:

Hello,

For two days the domain urlfilterdb.com has not resolved to an IP address.  We
get no updates to the urlfilter DB and the homepage cannot be opened.

Does someone know the reason?

Kind regards

Anton



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] urlfilterdb.com

2024-06-01 Thread Marcus Kool

I am not :-)

On 01/06/2024 06:24, Jonathan Lee wrote:

Marcus are you the same guy that does the pfSense Squid GUI package 
interference code??
Sent from my iPhone


On May 30, 2024, at 01:38, Marcus Kool  wrote:

Not sure if this message was meant for the Squid mailing list but for those 
who are interested, the DNS provider had an issue with DNSSEC resigning and all 
is well now.

Marcus



On 28/05/2024 15:23, Anton Kornexl wrote:
Hello,

For two days the domain urlfilterdb.com has not resolved to an IP address.  We
get no updates to the urlfilter DB and the homepage cannot be opened.

Does someone know the reason?

Kind regards

Anton



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] filter NONE/000 NONE error:transaction-end-before-headers

2020-07-28 Thread Marcus Kool

bugs.squid-cache.org is not working now, but I think this is bug 4906.

Marcus



On 2020-07-28 15:01, Alex Rousskov wrote:

On 7/28/20 5:38 AM, ama...@tin.it wrote:

thanks for your suggestion.

That specific suggestion was not mine :-)

For free Squid support, please keep the conversation on squid-users.



I have tried with:
acl noTransactionLvs src 10.xxx.xxx.xxx/32
acl noTransactionLvs src 10.xxx.xxx.xxx/32

acl hasRequest has request
acl dontLog all-of !hasRequest noTransactionLvs
access_log none dontLog
and with
acl noTransactionLvs src 10.xxx.xxx.xxx/32
acl noTransactionLvs src 10.xxx.xxx.xxx/32

access_log none noTransactionLvs
access_log /var/log/squid4/access.log combined !noTransactionLvs
but without result.


What is your Squid version?


None of the configs below is the right long-term solution, but just for
testing purposes, please try these three tests:

* Test 1 (should log nothing):

   access_log none all
   # and no other access_log lines


* Test 2 (should also log nothing):

   acl hasRequest has request
   access_log none !hasRequest
   # and no other access_log lines


* Test 3 (should only log regular transactions):

   acl hasRequest has request
   access_log none !hasRequest
   access_log /var/log/squid4/access.log combined
   # and no other access_log lines

For each of the tests, please report whether regular transactions are
logged to /var/log/squid4/access.log _and_ whether the loadbalancer
probes are logged to /var/log/squid4/access.log


Thank you,

Alex.




 Original message
From: rousskov@measurement-factory.com
Date: 27-Jul-2020 15:19
To: "ama...@tin.it"
Subject: Re: [squid-users] filter NONE/000 NONE error:transaction-end-before-headers

On 7/27/20 6:30 AM, ama...@tin.it wrote:

I would like to filter the message

NONE/000 NONE error:transaction-end-before-headers - HIER_NONE/- - - HTTP/0.0 "-" 0 0

which arrives from the loadbalancer (keepalived).

I have read that it was/is a bug.

Those records are not a bug if your loadbalancer does open connections to
Squid's http_port.

Could you please give me a practical example of how this works:
# acl aclname note [-m[=delimiters]] name [value ...] ?

The "note" ACL tests prior annotations. It is unlikely to help in your use case
because nothing will be able to annotate these half-baked short-lived
transactions until they are logged.

Please see whether Amos' recent suggestion works for you:

http://lists.squid-cache.org/pipermail/squid-users/2020-July/022461.html


HTH,

Alex.




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ACL matches when it shouldn't

2020-10-02 Thread Marcus Kool

Of course this script is sluggish, since it reads many category files and forks
at least 3-6 times per request.

If you *really* want to implement this with a Perl script, it should read all
files at startup and do each lookup using Perl data structures, as sketched below.
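
A rough sketch of that approach (untested; the paths, category names and the request format mirror the config quoted below but are assumptions):

   #!/usr/bin/perl
   # Load all domain lists into a hash once at startup and answer each
   # external_acl lookup from memory instead of forking grep per request.
   use strict;
   use warnings;

   $| = 1;    # unbuffered output so Squid sees each answer immediately

   my $cats_where = "/path/to/topdir";     # example path
   my @categories = ("adv", "porn");       # example categories
   my %domains;                            # domain -> category name

   foreach my $cat (@categories) {
       open(my $fh, '<', "$cats_where/$cat/domains") or next;
       while (my $line = <$fh>) {
           chomp $line;
           $domains{$line} = $cat if length $line;
       }
       close($fh);
   }

   # Main loop: with concurrency enabled, Squid prefixes each request
   # with a channel id, followed by the configured format tokens.
   while (my $req = <STDIN>) {
       chomp $req;
       my ($cid, $proto, $dst, $port, $path) = split ' ', $req;
       if (defined $dst && defined $domains{$dst}) {
           print "$cid ERR message=\"Domain $dst in BL $domains{$dst}\"\n";
       } else {
           print "$cid OK\n";
       }
   }

URL lookups can use a second hash keyed on domain/path in the same way; 50MB of text comfortably fits in memory as a hash.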

But I suggest looking at ufdbGuard, a URL filter that is much faster and has
all the functionality that you need.

Marcus


On 2020-10-02 10:08, Vieri wrote:

Regarding the use of an external ACL I quickly implemented a perl script that "does 
the job", but it seems to be somewhat sluggish.

This is how it's configured in squid.conf:
external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 
children-startup=10 children-idle=3 concurrency=8 %PROTO %DST %PORT %PATH 
/opt/custom/scripts/squid/ext_txt_blwl_acl.pl 
--categories=adv,aggressive,alcohol,anonvpn,automobile_bikes,automobile_boats,automobile_cars,automobile_planes,chat,costtraps,dating,drugs,dynamic,finance_insurance,finance_moneylending,finance_other,finance_realestate,finance_trading,fortunetelling,forum,gamble,hacking,hobby_cooking,hobby_games-misc,hobby_games-online,hobby_gardening,hobby_pets,homestyle,ibs,imagehosting,isp,jobsearch,military,models,movies,music,podcasts,politics,porn,radiotv,recreation_humor,recreation_martialarts,recreation_restaurants,recreation_sports,recreation_travel,recreation_wellness,redirector,religion,remotecontrol,ringtones,science_astronomy,science_chemistry,sex_education,sex_lingerie,shopping,socialnet,spyware,tracker,updatesites,urlshortener,violence,warez,weapons,webphone,webradio,webtv

I'd like to avoid the use of a DB if possible, but maybe someone here has an 
idea to share on flat file text searches.

Currently the dir structure of my blacklists is:

topdir
category1 ... categoryN
domains urls

So basically one example file to search in is topdir/category8/urls, etc.

The helper perl script contains this code to decide whether to block access or 
not:

foreach( @categories )
{
    chomp($s_urls = qx{grep -nwx '$uri_dst$uri_path' $cats_where/$_/urls | head -n 1 | cut -f1 -d:});

    if (length($s_urls) > 0) {
        if ($whitelist == 0) {
            $status = $cid." ERR message=\"URL ".$uri_dst." in BL ".$_." (line ".$s_urls.")\"";
        } else {
            $status = $cid." ERR message=\"URL ".$uri_dst." not in WL ".$_." (line ".$s_urls.")\"";
        }
        next;
    }

    chomp($s_urls = qx{grep -nwx '$uri_dst' $cats_where/$_/domains | head -n 1 | cut -f1 -d:});

    if (length($s_urls) > 0) {
        if ($whitelist == 0) {
            $status = $cid." ERR message=\"Domain ".$uri_dst." in BL ".$_." (line ".$s_urls.")\"";
        } else {
            $status = $cid." ERR message=\"Domain ".$uri_dst." not in WL ".$_." (line ".$s_urls.")\"";
        }
        next;
    }
}

There are currently 66 "categories" with around 50MB of text data in all.
So that's a lot to go through each time there's an HTTP request.
Apart from placing these blacklists on a ramdisk (currently on an M.2 SSD disk 
so I'm not sure I'll notice anything) what else can I try?
Should I reindex the lists and group them all alphabetically?
For instance should I process the lists in order to generate a dir structure as 
follows?

topdir
a b c d e f ... x y z 0 1 2 3 ... 7 8 9
domains urls

An example for a client requesting https://www.google.com/ would lead to 
searching only 2 files:
topdir/w/domains
topdir/w/urls

An example for a client requesting https://01.whatever.com/x would also lead to 
searching only 2 files:
topdir/0/domains
topdir/0/urls

An example for a client requesting https://8.8.8.8/xyz would also lead to 
searching only 2 files:
topdir/8/domains
topdir/8/urls

Any ideas or links to scripts that already prepare lists for this?

Thanks,

Vieri

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid domain block feature is at DNS level ?

2021-07-20 Thread Marcus Kool

DNS over HTTPS is used for privacy and also to circumvent filters.

If one wants to filter websites, one must block *all* filter circumvention
techniques as well (or the filter is useless).

shameless plug: the URL database of URLfilterDB has a category dnsoverhttps 
which can be used to block DNS over HTTPS.
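
Without a URL database, a minimal squid.conf sketch of the same idea for an explicit proxy (a hand-maintained and by nature incomplete list; the resolver names are just well-known examples):

   # block well-known DoH resolvers
   acl doh_hosts dstdomain .dns.google .cloudflare-dns.com .dns.quad9.net
   http_access deny doh_hosts

Applications that reach a DoH resolver by raw IP address still slip through, which is why a maintained category is easier to keep effective.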

Marcus


On 20/07/2021 06:45, Fennex wrote:
Hello, I'm looking to block some pages. I tried to block domains with a feature of my router, but it only works at the DNS level. I can bypass it using a secure DNS in a browser like Firefox or Brave,
which accepts this "new" feature. I want to know if Squid blocks domains at the DNS level, or if it does a DNS lookup and blocks by IP or something similar. Thank you.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to pass TeamViewer traffic

2021-10-23 Thread Marcus Kool

sslbump can be used in peek+splice and peek+bump modes.

Depending on what Squid finds in the peek (e.g. a TeamViewer FQDN), Squid can
decide to splice (not interfere with) the connection.

Below is an example.

Marcus



# TLS/SSL bumping definitions

acl tls_s1_connect at_step SslBump1

# define acls for sites that must not be bumped

acl tls_server_is_bank ssl::server_name .abnamro.nl
acl tls_server_is_bank ssl::server_name .abnamro.com
acl tls_server_is_teamviewer ssl::server_name .teamviewer.com
acl tls_to_splice any-of tls_server_is_teamviewer tls_server_is_bank

# TLS/SSL bumping steps

ssl_bump peek tls_s1_connect    # peek at TLS/SSL connect data
ssl_bump splice tls_to_splice   # splice some: no active bump
ssl_bump stare all              # stare (peek) at the server
ssl_bump bump all               # bump if we can (if the stare succeeded)


On 23/10/2021 17:41, Andrea Venturoli wrote:

On 10/22/21 17:24, Alex Rousskov wrote:


I do not know much about TeamViewer, ...
You do not need SslBump and https_port for this.


AFAIK you *cannot* use SslBump, as TeamViewer pins its certificates.
If someone can prove me wrong, I'd be curious to know how they manage this.

 bye
av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] problem in squid log

2021-11-09 Thread Marcus Kool

Hi, I am the author of ufdbGuard and ufdbGuard supports Squid 5.x

The SARG error in access.log has nothing to do with ufdbGuard.


On 09/11/2021 08:45, Majed Zouhairy wrote:

hmmm, this started happening after the last squid update.. i just noticed it is 
now version 5.2
i have ufdbguard but i don't think i have smp..

the last line of squid conf are

url_rewrite_extras "%>a/%>A %un %>rm bump_mode=%ssl::bump_mode sni=\"%ssl::>sni\" referer=\"%{Referer}>h\""
url_rewrite_program /usr/local/ufdbguard/bin/ufdbgclient -m 4 -l /var/log/squid/
url_rewrite_children 16 startup=8 idle=2 concurrency=4 queue-size=64

i think ufdbguard does not support squid version 5 yet, which might be the 
problem

On 11/8/21 10:42 PM, Alex Rousskov wrote:

On 11/8/21 5:30 AM, Majed Zouhairy wrote:

when i run sarg

SARG: sarg version: 2.4.0 Jan-16-2020
SARG: Reading access log file: /var/log/squid/access.log
SARG: Log format identified as "squid log format" for
/var/log/squid/access.log
SARG: The following line read from /var/log/squid/access.log could not
be parsed and is ignored
1636349341.484 12 10.184.0.2 NONE_NONE/400 20417 GET https://zen.yandex.by/lz5XeGt8f/ir4w02684/13f5fd2qrAJ2/p_CMhOoMLrxy4M2QFtQI-HLBvD5tHT6JdGbykwp9eDzBNcrpN2RIqcyiFH9pWekXwFsAEtIMz3_5FVo5y8zXIrAwGER6-e4cM0VckNJR_CjjEd2OObzKrHDSM2ZrfFzJ9CELTSJAeFt45wBcaGm_VqdcIXKVKFp7THc-uX7PdjLGAUpRv63aKSdE2OOnMXyOt0SJK0vNXql0thIirh9cGORGu31DYR9cCKZAW9gYjiGgfTFlxfgLOitwTohOyMZzx3ZNcK_K-rk2Vb_UPVydoTW
1636349696.714    629 10.106.0.2 NONE_NONE/200 0 CONNECT azscus1-client-s.gateway.messenger.live.com:443 - HIER_DIRECT/40.74.219.49 -
SARG: 4 consecutive errors found in the input log file
/var/log/squid/access.log

so i think the solution would be to exclude zen.yandex.by from processing ?


The correct solution would depend on what you are trying to accomplish
(with sarg), but that solution is unlikely to include disabling logging
of requests to any domains IMHO.

Based on the above output (that could have been changed by multiple mail
agents), it is difficult for me to guess what sarg did not like, but if
you are suffering from Squid SMP workers corrupting each-other
access.log entries, then please see Bug 5173:
https://bugs.squid-cache.org/show_bug.cgi?id=5173


HTH,

Alex.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] The status of AIA ie: TLS code: X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY ?

2022-02-05 Thread Marcus Kool



I would have expected that the remote host ip:port and sni would be logged
as well in the above mentioned line.



SNI is one of the details TLS/1.3 encrypts now  :(


To prevent misunderstandings, TLS 1.3 does not encrypt the SNI.

See https://datatracker.ietf.org/doc/html/draft-ietf-tls-esni :
   Although TLS 1.3 [RFC8446] encrypts most of the handshake, including
   the server certificate, there are several ways in which an on-path
   attacker can learn private information about the connection.  The
   plaintext Server Name Indication (SNI) extension in ClientHello
   messages, which leaks the target domain for a given connection, is
   perhaps the most sensitive, unencrypted information in TLS 1.3.

However, there is an optional TLS 1.3 extension that encrypts the SNI,
referred to as ESNI.

Marcus


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance recommendation

2022-09-21 Thread Marcus Kool


On 20/09/2022 20:52, Pintér Szabolcs wrote:


Hi squid community,

I need to find the best and most sustainable way to build a stable High
Availability Squid cluster/solution for about 40k users.

Parameters: I need HA, caching (small objects only, not big Windows
updates), scaling (secondary), and I want to use and modify (in production,
during working hours) complex black- and whitelists

[snip]


To modify the Squid config in production during working hours is a requirement 
that needs careful thought since the web proxy is unavailable when it reloads 
its configuration.

HA can resolve this with:
1. change config squid node 1
2. load balancer stops new connections to node 1
3. wait X minutes, maybe 15 minutes, for most connections to node 1 to disappear
4. reload the config on node 1 - existing connections are closed
5. wait until Squid on node 1 is operational again
6. load balancer allows new connections to node 1 and stops new connections to 
node 2
7. change config squid node 2
8. wait X minutes, maybe 15 minutes, for most connections to node 2 to disappear
9. reload the config on node 2 - existing connections are closed
10. wait until Squid on node 2 is operational again
11. load balancer allows new connections to node 2
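
A sketch of steps 2-6 for node 1, assuming an HAProxy load balancer with a runtime admin socket (the backend/server names and socket path are examples):

   # stop new connections to node 1; "drain" keeps existing ones alive
   echo "set server squid_pool/squid1 state drain" | socat stdio /var/run/haproxy.sock

   # wait ~15 minutes for most connections to node 1 to disappear
   sleep 900

   # reload the Squid config on node 1; remaining connections are closed
   ssh squid1 squid -k reconfigure

   # once node 1 is operational again, allow new connections to it
   echo "set server squid_pool/squid1 state ready" | socat stdio /var/run/haproxy.sock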

Depending on what your requirements are, you may consider using ufdbGuard for 
Squid since ufdbGuard can reload its configuration without interrupting clients 
of the web proxy.

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Counting unique devices connected to squid proxy

2023-01-19 Thread Marcus Kool

The Squid log file contains the IP address of each client and is a good field
to use for counting users.  But a NAT shows one IP for all users behind the NAT...
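
With the default native access.log format, where the client address is the third field, a rough count of distinct client IPs is one pipeline away (a sketch, not a true device count):

   awk '{print $3}' /var/log/squid/access.log | sort -u | wc -l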

Marcus


On 19/01/2023 15:48, Ben Goz wrote:

By the help of God.

Hello,
I have a task to count the number of unique devices connected (possibly
transparently) to a Squid proxy server, while the users can be on different
networks and behind NAT.
Is it possible?
What is the best approach to implement it?

Thanks.
Ben



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL intercept in explicit mode

2018-03-13 Thread Marcus Kool

"SSL bump" is the name of a complex Squid feature.
With ssl_bump ACLs one can decide which domains can be 'spliced' (go through 
the proxy untouched) or can be 'bumped' (decrypted).

Interception is not a requirement for SSL bump.
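
A minimal sketch of SSL bump on an explicit (forward) proxy port, with no interception involved (the certificate paths are examples):

   http_port 3128 ssl-bump \
       generate-host-certificates=on \
       cert=/etc/squid/bumpCA.pem key=/etc/squid/bumpCA.key
   acl step1 at_step SslBump1
   ssl_bump peek step1
   ssl_bump bump all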

Marcus

On 13/03/18 11:44, Danilo V wrote:

I mean SSL bump in explicit mode.
So is intercept an essential requirement for running SSL bump?

On Tue, 13 Mar 2018 at 11:10, Matus UHLAR - fantomas  wrote:

On 13.03.18 13:44, Danilo V wrote:
 >Is it possible/feasible to configure squid in explicit mode with ssl
 >intercept?

explicit is not intercept, intercept is not explicit.

explicit is where browser is configured (manually or automatically via WPAD)
to use the proxy.

intercept is where a network device forcibly redirects http/https connections
to the proxy.
to the proxy.

maybe you mean SSL bump in explicit mode?

 >Due to architecture of my network it is not possible to implement
 >transparent proxy.

excuse me?
by "transparent" people mean what we usually call "intercept".

 >What would be the behavior of applications that don't support a proxy - i.e.
 >don't forward requests to the proxy?

they must be intercepted.

--
Matus UHLAR - fantomas, uh...@fantomas.sk  ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Micro$oft random number generator: 0, 0, 0, 4.33e+67, 0, 0, 0...


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid + SquidGuard : static block page not working

2018-03-14 Thread Marcus Kool

ufdbGuard is the tool that you need.
It is an old fork of squidGuard with many new features, very good performance,
and regular maintenance.
If you have a question, you can ask the support desk at www.urlfilterdb.com.
You will get an answer from me or a colleague.

Marcus


On 14/03/18 09:39, Nicolas Kovacs wrote:

On 14/03/2018 13:33, Amos Jeffries wrote:

You do not need SG or any fancy redirector helpers at all for that.


Yes, I do. Because this is part of a step-by-step course about
SquidGuard, which worked perfectly under Slackware Linux. And my
filtering rules are becoming increasingly complex.

Niki



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid + SquidGuard : static block page not working

2018-03-14 Thread Marcus Kool


On 14/03/18 10:55, Nicolas Kovacs wrote:

On 14/03/2018 14:46, Marcus Kool wrote:

ufdbGuard is the tool that you need.
It is an old fork of squidGuard with many new features, very good
performance, and regular maintenance.
If you have a question, you can ask the support desk at
www.urlfilterdb.com.
You will get an answer from me or a colleague.


Thanks for the heads-up.

On the school server running SquidGuard, I'm using the blacklist
collection of the University of Toulouse, which has several million (!)
URLs/domains in about a hundred different categories.

Will I be able to use these blacklists with ufdbGuard ?

Niki


yes, no problem.

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP FIN,ACK after ServerHelloDone with pcmag.com

2018-05-15 Thread Marcus Kool

pcmag.com also does not load here, although my config parameters are slightly 
different.
The certificate is indeed huge...
Do you have
   ERROR: negotiating TLS on FD NNN: error:14090086:SSL 
routines:ssl3_get_server_certificate:certificate verify failed (1/-1/0)
or other errors in cache.log ?

Marcus

On 15/05/18 10:15, Ahmad, Sarfaraz wrote:

Hi Folks,

I am using Squid as a HTTPS interception proxy. When I try to access 
https://www.pcmag.com , (which is supposed to be bumped in my environment ), I 
get

“unable to forward request at this time” even though the website is perfectly 
accessible outside of the proxy.

A packet capture suggests that after Client Hello -> ServerHello -> ServerCertificate,Server Key Exchange, ServerHelloDone, the remote server just sends a FIN,ACK packet, killing off the TCP 
connection. Nothing else looks out of the ordinary.  ( Without squid, firefox successfully opens the site and the negotiation is TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS1.2)


The only weird thing that stands out about that website is that the list of
SubjectAlternativeNames is huge. Could this be a possible bug in Squid?

My TLS options in Squid.conf :

tls_outgoing_options cafile=/etc/pki/tls/certs/ca-bundle.crt \
     options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE \
     cipher=HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!EXPORT:!DES:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

https_port:

https_port 23129 intercept ssl-bump \
     generate-host-certificates=on \
     dynamic_cert_mem_cache_size=4MB \
     cert=/etc/squid/InternetCA/InternetCA.pem \
     key=/etc/squid/InternetCA/InternetCA.key \
     tls-cafile=/etc/squid/InternetCA/InternetCA.chain.pem \
     capath=/etc/pki/tls/certs/certs.d \
     options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE \
     tls-dh=prime256v1:/etc/squid/dhparam.pem

Please advise.

Regards,

Sarfaraz





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP FIN,ACK after ServerHelloDone with pcmag.com

2018-05-15 Thread Marcus Kool

The proxies that I used for the test have Squid 4.0.22 and Squid 4.0.23.

Marcus


On 15/05/18 15:40, Amos Jeffries wrote:

On 16/05/18 01:32, Marcus Kool wrote:

pcmag.com also does not load here, although my config parameters are
slightly different.
The certificate is indeed huge...
Do you have
    ERROR: negotiating TLS on FD NNN: error:14090086:SSL
routines:ssl3_get_server_certificate:certificate verify failed (1/-1/0)
or other errors in cache.log ?

Marcus



Are these Squid-4.0.24 ? There is a regression[1] in the cafile=
parameter handling in the latest release.
  <https://bugs.squid-cache.org/show_bug.cgi?id=4831>

Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] kaspersky and ufdbguard

2018-05-17 Thread Marcus Kool

195.122.177.165 is an IP address of Kaspersky (see: whois 195.122.177.165).
ufdbguardd blocks this IP address because it is configured to do so, which is
indicated by 'https-option'; most likely the config has
   option enforce-https-with-hostname on    # default is off
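
If blocking the Kaspersky update servers is not desired, the simplest sketch of a fix in ufdbGuard.conf is to turn the option off (note this relaxes the HTTPS-with-hostname requirement for all clients, not just Kaspersky):

   option enforce-https-with-hostname off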

Marcus


On 17/05/18 08:03, Vacheslav wrote:

I have this:
acl {
   allSystems {
      ### EDIT THE NEXT LINE FOR LOCAL CONFIGURATION:
      pass
         alwaysallow
         # !always-block
         !ms-data-collection
         !adult !security
         !proxies !malware !warez
         !gambling !violence !drugs
         !phishtank !spyware
         chat dating !games religion finance jobs shops sports travel news
         webmail forum socialnet youtube
         !webtv webradio audiovideo
         !ads
         searchengine
         # with "logall on" or "logpass on" it makes sense to have the category "checked" in the ACL.
         any
      # NOTE: ALL categories are part of the ACL for logging purposes.
      # Only when logall is off, one can remove the allowed categories from the ACL.
   }
}

I don't have a similar config acl.

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Thursday, May 17, 2018 1:56 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] kaspersky and ufdbguard

On 17/05/18 17:45, Vacheslav wrote:

Peace,

When I configured Kaspersky to use proxy, I started getting as an example:

BLOCK -10.96.0.104 config https-option 195.122.177.165:443 CONNECT

I have require https hostname. Kaspersky is updating fine.

Does anyone have an idea what Kaspersky is connecting to?



That is a custom log format, you have not provided any info about what each 
field is. So no, we don't have much of a clue what it means.

Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] kaspersky and ufdbguard

2018-05-17 Thread Marcus Kool

I do not block my Kaspersky AV.
Do you want the Kaspersky software to contact the servers of Kaspersky?

On 17/05/18 09:30, Vacheslav wrote:

Yeah, all that I know. The million-dollar question is: should I continue
blocking it?

-Original Message-
From: squid-users  On Behalf Of 
Marcus Kool
Sent: Thursday, May 17, 2018 3:22 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] kaspersky and ufdbguard

195.122.177.165 is an IP address of Kaspersky (see whois 195.122.177.165).
ufdbguardd blocks this IP address since it is configured to do so which is 
indicated by 'https-option', most likely because the config has
 option enforce-https-with-hostname on # default is off.

Marcus


On 17/05/18 08:03, Vacheslav wrote:

I have this:
acl {
   allSystems {
      ### EDIT THE NEXT LINE FOR LOCAL CONFIGURATION:
      pass
         alwaysallow
         # !always-block
         !ms-data-collection
         !adult !security
         !proxies !malware !warez
         !gambling !violence !drugs
         !phishtank !spyware
         chat dating !games religion finance jobs shops sports travel news
         webmail forum socialnet youtube
         !webtv webradio audiovideo
         !ads
         searchengine
         # with "logall on" or "logpass on" it makes sense to have the category "checked" in the ACL.
         any
      # NOTE: ALL categories are part of the ACL for logging purposes.
      # Only when logall is off, one can remove the allowed categories from the ACL.
   }
}

I don't have a similar config acl.

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Thursday, May 17, 2018 1:56 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] kaspersky and ufdbguard

On 17/05/18 17:45, Vacheslav wrote:

Peace,

When I configured Kaspersky to use proxy, I started getting as an example:

BLOCK -10.96.0.104 config https-option 195.122.177.165:443 CONNECT

I have require https hostname. Kaspersky is updating fine.

Anyone has an idea what Kaspersky is connecting ?



That is a custom log format, you have not provided any info about what each 
field is. So no, we don't have much of a clue what it means.

Amos





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and systemd

2018-06-13 Thread Marcus Kool

I have seen systemd kill daemons when it times out waiting for the pid file
to appear.
I suggest double-checking that the pid filename in the service file and in
squid.conf are the same.
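
For example (a sketch assuming Squid under /opt/squid as in the log below; the two paths must match):

   # squid.conf
   pid_filename /opt/squid/var/run/squid.pid

   # squid.service, [Service] section
   PIDFile=/opt/squid/var/run/squid.pid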

Marcus

On 13/06/18 09:27, James Lay wrote:

Well, I'll just say up front that systemd is not my friend. When running
squid via the CLI (sudo /opt/squid/sbin/squid) it runs like a champ. But using the
service file at:

https://raw.githubusercontent.com/squid-cache/squid/master/tools/systemd/squid.service

it times out after a few:

06:20:11 gateway squid[3669]: Created PID file (/opt/squid/var/run/squid.pid)
06:20:11 gateway squid[3669]: Squid Parent: will start 1 kids
06:20:11 gateway squid[3669]: Squid Parent: (squid-1) process 3678 started
06:20:11 gateway squid[3678]: Set Current Directory to /opt/squid/var
06:20:11 gateway squid[3678]: Starting Squid Cache version 4.0.24 for 
x86_64-pc-linux-gnu...
06:20:11 gateway squid[3678]: Service Name: squid
06:20:11 gateway squid[3678]: Process ID 3678
06:20:11 gateway squid[3678]: Process Roles: worker
06:20:11 gateway squid[3678]: With 1024 file descriptors available
06:20:11 gateway squid[3678]: Initializing IP Cache...
06:20:11 gateway squid[3678]: DNS Socket created at [::], FD 5
06:20:11 gateway squid[3678]: DNS Socket created at 0.0.0.0, FD 10
06:20:11 gateway squid[3678]: Adding nameserver 192.168.1.253 from 
/etc/resolv.conf
06:20:11 gateway squid[3678]: Adding nameserver 205.171.3.65 from 
/etc/resolv.conf
06:20:11 gateway squid[3678]: Adding nameserver 205.171.2.65 from 
/etc/resolv.conf
06:20:11 gateway squid[3678]: Adding domain slave-tothe-box.net from 
/etc/resolv.conf
06:20:11 gateway squid[3678]: Adding domain slave-tothe-box.net from 
/etc/resolv.conf
06:20:11 gateway squid[3678]: helperOpenServers: Starting 5/5 
'security_file_certgen' processes
06:20:11 gateway squid[3678]: Logfile: opening log syslog:daemon.info
06:20:11 gateway squid[3678]: Store logging disabled
06:20:11 gateway squid[3678]: Swap maxSize 0 + 262144 KB, estimated 20164 
objects
06:20:11 gateway squid[3678]: Target number of buckets: 1008
06:20:11 gateway squid[3678]: Using 8192 Store buckets
06:20:11 gateway squid[3678]: Max Mem  size: 262144 KB
06:20:11 gateway squid[3678]: Max Swap size: 0 KB
06:20:11 gateway squid[3678]: Using Least Load store dir selection
06:20:11 gateway squid[3678]: Set Current Directory to /opt/squid/var
06:20:11 gateway squid[3678]: Finished loading MIME types and icons.
06:20:11 gateway squid[3678]: HTCP Disabled.
06:20:11 gateway squid[3678]: Squid plugin modules loaded: 0
06:20:11 gateway squid[3678]: Adaptation support is off.
06:20:11 gateway squid[3678]: Accepting HTTP Socket connections at 
local=x.x.x.x:3127 remote=[::] FD 21 flags=9
06:20:11 gateway squid[3678]: Accepting NAT intercepted HTTP Socket connections 
at local=x.x.x.x:3128 remote=[::] FD 22 flags=41
06:20:11 gateway squid[3678]: Accepting NAT intercepted SSL bumped HTTPS Socket 
connections at local=x.x.x.x:3129 remote=[::] FD 23 flags=41
06:20:12 gateway squid[3678]: storeLateRelease: released 0 objects
06:21:41 gateway systemd[1]: squid.service: Start operation timed out. 
Terminating.
06:21:41 gateway systemd[1]: squid.service: Killing process 3669 (squid) with 
signal SIGKILL.
06:21:41 gateway sudo: pam_unix(sudo:session): session closed for user root
06:21:41 gateway systemd[1]: squid.service: Killing process 3678 (squid) with 
signal SIGKILL.
06:21:41 gateway jlay[2415] 192.168.1.2 46692 192.168.1.252 22: sudo systemctl 
start squid
06:21:41 gateway systemd[1]: squid.service: Killing process 3680 
(security_file_c) with signal SIGKILL.
06:21:41 gateway systemd[1]: squid.service: Killing process 3682 
(security_file_c) with signal SIGKILL.
06:21:41 gateway systemd[1]: squid.service: Killing process 3683 
(security_file_c) with signal SIGKILL.
06:21:41 gateway systemd[1]: squid.service: Killing process 3684 
(security_file_c) with signal SIGKILL.
06:21:41 gateway systemd[1]: squid.service: Killing process 3685 
(security_file_c) with signal SIGKILL.
06:21:41 gateway systemd[1]: squid.service: Failed with result 'timeout'.
06:21:41 gateway systemd[1]: Failed to start Squid Web Proxy Server.

I've modded the service file to reflect the different binary location, but that's
about it. Thank you.

James




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 4.1 default queue-size should consider concurrency

2018-07-03 Thread Marcus Kool
The original intention of this default value is to have a queue that is twice the size of the number of messages being processed, so for helpers with concurrency=NCONC and num_children=NCHILD it makes a lot of
sense to set the default queue length to 2*NCONC*NCHILD.

I do not understand why "compatibility" with the wrong calculation is a
good thing.

Marcus


On 03/07/18 05:16, Amos Jeffries wrote:

On 03/07/18 20:00, Amish wrote:

Hello,

In squid 4.1 new option "queue-size" was introduced.

In most (or all) cases default "queue-size" is set to children-max*2.

But I believe it should be higher of (children-max*2) OR (concurrency*2)

Or it can be some better formula but the point I am trying to make is
that, "concurrency" should be taken in to account for calculating
default value of "queue-size".

Please consider.


FYI; When we add a directive or option to control some behaviour that
already happens the default is usually set to the value all existing
Squid are using so nobody gets an unexpected surprise with upgrade.

That can change later once more people have experimented to find what a
better value actually is.

Cheers
Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 4.1 default queue-size should consider concurrency

2018-07-03 Thread Marcus Kool

If an admin finds it necessary to configure
   url_rewrite_children 16 concurrency=4
the helper subsystem is theoretically capable of processing 64 messages
simultaneously.
It does not make sense to use max(2*4, 2*16)=32 for queue-size; it should be
_at least_ 64.

Since Squid (before introducing the concurrency parameter) wanted to have a queue that is twice the capacity of the helper subsystem (most likely since this was found to be a good value), the logical
thing to do is what I proposed and use 2*NCONC*NCHILD, which in the above example is 2*4*16 = 128.
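
Until the default changes, the queue can of course be sized explicitly (a sketch; whether 128 is a good value depends on the helper):

   url_rewrite_children 16 startup=8 idle=2 concurrency=4 queue-size=128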


There are many messages on this mailing list about problems with helpers.
One refers to
   external_acl_type session concurrency=100 ttl=3 negative_ttl=0 children-max=1 %LOGIN /usr/lib64/squid/ext_session_acl -a -T 60 -b /var/lib/squid/sessions/
with NCONC=100, NCHILD=1; it is not hard to imagine that a default queue-size
of 2 is problematic.
Another one has
   store_id_children 100 startup=0 idle=1 concurrency=1000
with NCONC=1000 and NCHILD=100: the admin wants to process up to 100,000
requests simultaneously.
*I have a suspicion that admins use these large numbers in an attempt to get
good performance but never reach it since the queue-size stays small.*
So the impact of queue-size should be very clearly documented; better still is
to use a sane default.

Marcus


On 03/07/18 12:07, Amish wrote:

2*NCONC*NCHILD will possibly lead to too high a value as a default, and the
busy-ness will never be logged.

My proposal of the higher of (2*NCONC) and (2*NCHILD) would mean that the load
is now regularly high enough that at least 2 more children are needed.

We can start with that and then find a better formula.

Amish


On Tuesday 03 July 2018 07:49 PM, Marcus Kool wrote:
The original intention of this default value is to have a queue that is twice the size of the number of messages being processed, so for helpers with concurrency=NCONC and num_children=NCHILD it makes a lot of
sense to set the default queue length to 2*NCONC*NCHILD.

I do not understand why "compatibility" with the wrong calculation is a
good thing.

Marcus


On 03/07/18 05:16, Amos Jeffries wrote:

On 03/07/18 20:00, Amish wrote:

Hello,

In squid 4.1 new option "queue-size" was introduced.

In most (or all) cases default "queue-size" is set to children-max*2.

But I believe it should be higher of (children-max*2) OR (concurrency*2)

Or it can be some better formula but the point I am trying to make is
that, "concurrency" should be taken in to account for calculating
default value of "queue-size".

Please consider.


FYI; When we add a directive or option to control some behaviour that
already happens the default is usually set to the value all existing
Squid are using so nobody gets an unexpected surprise with upgrade.

That can change later once more people have experimented to find what a
better value actually is.

Cheers
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 4.1 default queue-size should consider concurrency

2018-07-03 Thread Marcus Kool

Thanks for the clarification.  The squid.conf.documented file says
   The queue-size=N option sets the maximum number of queued requests to N.
which, for me at least, is hard to translate into
   the maximum number of requests buffered because no helper can accept them.


On 03/07/18 13:09, Alex Rousskov wrote:

Marcus,

 Based on your examples, I suspect that you are misinterpreting what
the queue is. The request is queued only when no helper can accept it.
The queue is not used for requests sent to helpers.

Alex.




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 4.1 default queue-size should consider concurrency

2018-07-03 Thread Marcus Kool



On 03/07/18 12:54, Alex Rousskov wrote:

On 07/03/2018 08:19 AM, Marcus Kool wrote:



If you think Squid should use a different default for all or some helper
categories, please post a proposal that documents pros and cons and
justifies the change. The URL above can be used as your guide to helper
categories.


With your clarification of what the queue is used for, I no longer consider my 
proposal for a different default queue size valid.

I would like to see better documentation for the new queue-size option.
Including your one-liner in squid.conf.documented is enough for me.

Marcus



Thank you,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 4.1 default queue-size should consider concurrency

2018-07-03 Thread Marcus Kool

I read the changes and like them.

I also looked at the error messages that Squid produces when helpers are
overloaded.
It would be nice if, in external_acl.cc, helper.cc and redirect.cc, the debugs(
... DBG_IMPORTANT ... ) messages had additional text like
   #children, concurrency or queue-size may need adjustment

Thanks
Marcus

On 03/07/18 17:50, Alex Rousskov wrote:

On 07/03/2018 10:52 AM, Marcus Kool wrote:


I would like to see better documentation for the new queue-size option.
Including your one-liner in squid.conf.documented is enough for me.


I wish it were that simple! For starters, there are at least six
independent and slightly different contexts where this queuing should be
documented.

Please proof read: https://github.com/squid-cache/squid/pull/238

Alex.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid + Squidguard Youtube URL video filtering

2018-08-16 Thread Marcus Kool

yes, with ufdbGuard you put
   youtube.com/watch?v=VIDEOID
in a urls file and create a URL table with ufdbGenTable.
ufdbGenTable adds many URLs automagically, e.g.
   youtube.com/embed/VIDEOID
   youtube.com/get_video_info?video_id=VIDEOID
   ytimg.googleusercontent.com/vi/VIDEOID
and many more.

Marcus

On 16/08/18 11:01, Vacheslav wrote:

Wouldn't it be better to try it in ufdbguard?

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Thursday, August 16, 2018 4:18 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid + Squidguard Youtube URL video filtering

On 17/08/18 00:43, Roberto Carna wrote:

Dear, I have Squid + Squidguard working OK.

Squidguard is filtering the entire www.youtube.com website.

But now I have to permit just one video from Youtube:

https://www.youtube.com/embed/ff9sDLGtnK8?rel=0&showinfo=0

I have added the below URL as an exception in Squidguard:

www.youtube.com/embed/ff9sDLGtnK8?rel=0&showinfo=0

but after that I can't see it, still blocked.

How can I enable just this URL from Squidguard while preferably blocking
the rest of Youtube ???



Unfortunately only with a great deal of difficulty.



The "?v=..." and "/embed/..." URLs are just public identifiers to access the 
YouTube APIs. At the HTTP level they result in a quite long series of sub-requests, redirections 
and the like bouncing all over the

youtube.* and googlevideos.* and googleapis.* domains.
  Yes all of them are involved multiple times. So whitelisting is an 
all-or-nothing prospect, with other G services being implicitly whitelisted as 
side effects.



Also, whenever the way to decipher the above maze of traffic gets published so
we can do things like what you ask, YT shortly afterwards changes how it
operates - usually towards even more complexity. This has happened too many
times to be a coincidence IMO.




Amos



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid + Squidguard Youtube URL video filtering

2018-08-17 Thread Marcus Kool

OP asked about blocking Youtube but allowing a single Youtube video.
How would you do that with a couple of DNS entries ?

Marcus

On 16/08/18 22:11, SQUIDBLACKLIST.ORG wrote:

This might be painfully obvious to some who are in the know, but filtering
youtube video content can be done with a lot less effort by simply adding a
couple of DNS entries for Google's SafeSearch servers.

#justsayin



Signed,

Benjamin E. Nichols
Founder &  Chief Architect
http://www.squidblacklist.org
1-405-301-9516

 Original message 
From: Marcus Kool 
Date: 8/16/18 7:53 PM (GMT-06:00)
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid + Squidguard Youtube URL video filtering

yes, with ufdbguard you put
     youtube.com/watch?v=VIDEOID
in a urls file and create a URL table with ufdbGenTable.
ufdbGenTable adds many URLs automagically, i.e.
     youtube.com/embed/VIDEOID
     youtube.com/get_video_info?video_id=VIDEOID
     ytimg.googleusercontent.com/vi/VIDEOID
and many more.

Marcus

On 16/08/18 11:01, Vacheslav wrote:
 > Wouldn't it be better to try it in ufdbguard?
 >
 > -Original Message-
 > From: squid-users  On Behalf Of 
Amos Jeffries
 > Sent: Thursday, August 16, 2018 4:18 PM
 > To: squid-users@lists.squid-cache.org
 > Subject: Re: [squid-users] Squid + Squidguard Youtube URL video filtering
 >
 > On 17/08/18 00:43, Roberto Carna wrote:
 >> Dear, I have Squid + Squidguard working OK.
 >>
 >> Squidguard is filtering the entire www.youtube.com website.
 >>
 >> But now I have to permit just one video from Youtube:
 >>
 >> https://www.youtube.com/embed/ff9sDLGtnK8?rel=0&showinfo=0
 >>
 >> I have added the below URL as an exception in Squidguard:
 >>
 >> www.youtube.com/embed/ff9sDLGtnK8?rel=0&showinfo=0
 >>
 >> but after that I can't see it, still blocked.
 >>
 >> How can I enable just this URL from Squidguard preferently blocking
 >> the rest of Youtube ???
 >
 >> Unfortunately only with a great deal of difficulty.
 >
 >> The "?v=..." and "/embed/..." URLs are just public identifiers to access the YouTube APIs. At the HTTP level they result in a quite long series of sub-requests, redirections and the like bouncing 
all over the

 > youtube.* and googlevideos.* and googleapis.* domains.
 >   Yes all of them are involved multiple times. So whitelisting is an 
all-or-nothing prospect, with other G services being implicitly whitelisted as 
side effects.
 >
 >
 >> Also, whenever the way to decipher the above maze of traffic gets published so we can do things like what you ask. YT shortly afterwards change how it operates - usually towards even more 
complexity. This has happened too many times to be coincidence IMO.

 >
 >
 >> Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid + Squidguard Youtube URL video filtering

2018-08-17 Thread Marcus Kool

I cannot tell you how to do it with DNS entries since I think it is impossible 
and therefore I asked Benjamin to explain.

Allowing one single video and blocking all other videos on Youtube is not easy.
One cannot block by domain but must filter by full URL.
When HTTPS is used, full URLs can only be obtained/filtered using ssl_bump in 
peek+bump mode which is doable but not easy.
Once you have peek+bump working you can make two categories in ufdbGuard:
   category youtube with
      youtube.com/watch
   category allowedyoutubevideos with
      youtube.com/watch?v=ff9sDLGtnK8
and an acl like
   acl {
      allSystems {
         pass allowedyoutubevideos !youtube ...
      }
      ...

The above allows access to www.youtube.com but not to the blocked videos.
This is necessary since the youtube site also uses a set of URLs like
https://www.youtube.com/sw.js
https://www.youtube.com/service_ajax?name=signalServiceEndpoint
etc.
which all must be allowed to be able to display/allow your single video.

Marcus


On 17/08/18 11:27, Roberto Carna wrote:

Dear Marcus, can you please tell me how to do what you suggest?

Suppose I want to block youtube.com but enable only one video URL:
"https://www.youtube.com/embed/ff9sDLGtnK8?rel=0&showinfo=0".

How should I set the DNS entries, please?

Regards,

2018-08-17 9:51 GMT-03:00 Marcus Kool :

OP asked about blocking Youtube but allowing a single Youtube video.
How would you do that with a couple of DNS entries ?

Marcus

On 16/08/18 22:11, SQUIDBLACKLIST.ORG wrote:


This might be painfully obvious to some who are in the know, but,
filtering youtube video content can be done with a lot less effort by simply
adding a couple dns entries for Googles safesearch servers.

#justsayin



Signed,

Benjamin E. Nichols
Founder &  Chief Architect
http://www.squidblacklist.org
1-405-301-9516

 Original message ----
From: Marcus Kool 
Date: 8/16/18 7:53 PM (GMT-06:00)
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid + Squidguard Youtube URL video filtering

yes, with ufdbguard you put
  youtube.com/watch?v=VIDEOID
in a urls file and create a URL table with ufdbGenTable.
ufdbGenTable adds many URLs automagically, i.e.
  youtube.com/embed/VIDEOID
  youtube.com/get_video_info?video_id=VIDEOID
  ytimg.googleusercontent.com/vi/VIDEOID
and many more.

Marcus

On 16/08/18 11:01, Vacheslav wrote:
  > Wouldn't it be better to try it in ufdbguard?
  >
  > -Original Message-
  > From: squid-users  On Behalf
Of Amos Jeffries
  > Sent: Thursday, August 16, 2018 4:18 PM
  > To: squid-users@lists.squid-cache.org
  > Subject: Re: [squid-users] Squid + Squidguard Youtube URL video
filtering
  >
  > On 17/08/18 00:43, Roberto Carna wrote:
  >> Dear, I have Squid + Squidguard working OK.
  >>
  >> Squidguard is filtering the entire www.youtube.com website.
  >>
  >> But now I have to permit just one video from Youtube:
  >>
  >> https://www.youtube.com/embed/ff9sDLGtnK8?rel=0&showinfo=0
  >>
  >> I have added the below URL as an exception in Squidguard:
  >>
  >> www.youtube.com/embed/ff9sDLGtnK8?rel=0&showinfo=0
  >>
  >> but after that I can't see it, still blocked.
  >>
  >> How can I enable just this URL from Squidguard preferently blocking
  >> the rest of Youtube ???
  >
  >> Unfortunately only with a great deal of difficulty.
  >
  >> The "?v=..." and "/embed/..." URLs are just public identifiers to
access the YouTube APIs. At the HTTP level they result in a quite long
series of sub-requests, redirections and the like bouncing all over the
  > youtube.* and googlevideos.* and googleapis.* domains.
  >   Yes all of them are involved multiple times. So whitelisting is an
all-or-nothing prospect, with other G services being implicitly whitelisted
as side effects.
  >
  >
  >> Also, whenever the way to decipher the above maze of traffic gets
published so we can do things like what you ask. YT shortly afterwards
change how it operates - usually towards even more complexity. This has
happened too many times to be coincidence IMO.
  >
  >
  >> Amos




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-04 Thread Marcus Kool



On 04/09/18 11:20, Amos Jeffries wrote:

On 4/09/18 7:33 PM, Ahmad, Sarfaraz wrote:

With debug_options ALL,9 and retrieving just this page, I found the following 
relevant loglines (this is with an explicit CONNECT request) ,



... skip TLS/1.2 clientHello arriving



Later on after about 10 secs

2018/09/04 12:45:58.124 kid1| 83,7| AsyncJob.cc(123) callStart: 
Ssl::PeekingPeerConnector status in: [ FD 12 job194686]
2018/09/04 12:45:58.124 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf67698
2018/09/04 12:45:58.124 kid1| 83,5| PeerConnector.cc(187) negotiate: 
SSL_connect session=0x122c430...
2018/09/04 12:45:58.124 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555830 
memAlloc: requested=82887, received=82887
2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6002798 new store 
capacity: 82887
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(139) rawAppendStart: SBuf6002798 
start appending up to 65535 bytes
2018/09/04 12:45:58.124 kid1| 83,5| bio.cc(140) read: FD 12 read 0 <= 65535
2018/09/04 12:45:58.124 kid1| 83,5| NegotiationHistory.cc(83) 
retrieveNegotiatedInfo: SSL connection info on FD 12 SSL version NONE/0.0 
negotiated cipher
2018/09/04 12:45:58.124 kid1| ERROR: negotiating TLS on FD 12: 
error::lib(0):func(0):reason(0) (5/0/0)


... the server delivered 82KB of something which was not TLS/SSL syntax
according to OpenSSL.


I ran 'ufdbpeek', an OpenSSL-based utility that I wrote that peeks at the TLS certificate of a website; it displays a large, correct certificate and shows that (in my case) cipher
ECDHE-RSA-AES256-GCM-SHA384 is used.

OpenSSL 1.0.2k and 1.1.0g have no issues with the certificate or the handshake.

Also sslLabs shows that all is well and that all popular modern browsers and 
OpenSSL 0.9.8 and 1.0.1 can connect to the site:
https://www.ssllabs.com/ssltest/analyze.html?d=www.extremetech.com

Marcus

[...]
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-19 Thread Marcus Kool



On 18/09/18 23:03, Amos Jeffries wrote:

On 19/09/18 1:54 AM, neok wrote:

Thank you very much Amos for putting me in the right direction.
I successfully carried out the modifications you indicated to me.
Regarding ufdbGuard, if I understood correctly, what you recommend is to use
the ufdbConvertDB tool to convert my blacklists in plain text to the
ufdbGuard database format? And then use that/those databases in normal squid
ACL's?


No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
replacement which works better while you improve your config.

You should work towards less complexity. Squid / squid.conf is where
HTTP access control takes place. The helper is about re-writing the URL
(only) - which is a complex and destructive process.


ufdbGuard is a simple tool that has the same syntax in its configuration file
as squidGuard has.
It is far from complex, and has a great Reference Manual, an example config file
and a responsive support desk.
Amos, I have never seen you call a URL rewriter a complex and
destructive process.  What do you mean?

URL rewriters have been used for decades for HTTP access control, but you state
"squid.conf is where HTTP access control takes place".
Are you saying that it should be the _only_ place for HTTP access control?

Marcus



Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Marcus Kool



On 20/09/18 08:46, Amos Jeffries wrote:

On 19/09/18 11:49 PM, Marcus Kool wrote:


On 18/09/18 23:03, Amos Jeffries wrote:

On 19/09/18 1:54 AM, neok wrote:

Thank you very much Amos for putting me in the right direction.
I successfully carried out the modifications you indicated to me.
Regarding ufdbGuard, if I understood correctly, what you recommend is
to use
the ufdbConvertDB tool to convert my blacklists in plain text to the
ufdbGuard database format? And then use that/those databases in
normal squid
ACL's?


No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
replacement which works better while you improve your config.

You should work towards less complexity. Squid / squid.conf is where
HTTP access control takes place. The helper is about re-writing the URL
(only) - which is a complex and destructive process.


ufdbGuard is a simple tool that has the same syntax in its configuration
file as squidGuard has.
It is far from complex, has a great Reference Manual, example config
file and a responsive support desk.
Amos, I have never before seen you call a URL rewriter a complex and
destructive process.  What do you mean?


Re-writing requires Squid to:
  * fork external helpers, and
  * maintain queues of lookups to those helpers, and
  * maintain cache of helper responses, and
  * maintain a whole extra copy of HTTP-request state, and
  * copy some (not all) of that state info between the two "client" requests.

  ... lots of complexity, memory, CPU time, traffic latency, etc.


Squid itself is complex and for any feature of Squid one can make a list like 
above to say that it is complex.
The fact that one can make such a list does not mean much to me.
One can make the same or a similar list for external acl helpers and even 
native acls.


Also when used for access control (re-write to an "error" URL) the
re-write helper needs extra complexity in itself to act as the altered
origin server for error pages, or have some fourth-party web server.


Squid cannot do everything that a URL rewriter, and specifically ufdbGuard, can.
For example, Squid must restart and break all open connections when a tiny 
detail of the configuration changes.  With ufdbGuard this does not happen.
ufdbGuard supports dynamic lists of users, domains and source ip addresses 
which are updated every X minutes without any service interruption.
When other parameters change, ufdbGuard resets itself with zero service 
interruption for Squid and its users.
ufdbGuard can decide to probe a site to make a decision, and hence detect 
Skype, Teamviewer and other types of sites that an admin might want to block.  
Squid cannot.
ufdbGuard can decide to do a reverse IP lookup to make a decision.  
Squid cannot.
ufdbGuard supports complex time restrictions for access. Squid supports simple 
time restrictions.
ufdbGuard supports flat file domain/url lists and a commercial URL database.  
Squid does not.
And the list goes on.

So when you state on the mailing list that users should unconditionally stop using a URL rewriter in favor of using Squid acls, you may be causing trouble for admins who do not know the implications of 
your advice.




URL rewriters have been used for decades for HTTP access control but you
state "squid.conf is where HTTP access control takes place".


Once upon a time, back at the dawn of the WWW (before the 1990s) Squid
lacked external_acl_type and modular ACLs.

That persisted for the first decade or so of Squid's life, with only the
re-write API for admin to use for complicated permissions.

Then one day about 2 decades or so ago, external ACL was added and the
ACLs were also made much easier to implement and plug in new checks.
Today we have hundreds of native ACLs and even a selection of custom ACL
helpers, removing the need for these abuses of the poor re-writers.

Old habits and online tutorials however are hard to get rid of.


If you want to get rid of habits that in your view are old/obsolete, then why 
not start a discussion?
And in the event that at the end of the discussion, the decision is made that a 
particular interface should be removed, why not phase it out?


Are you saying that you want it to be the _only_ place for HTTP access
control?



I'm saying the purpose of the url_rewrite_* API in Squid is to tell
Squid whether the URL (only) needs some mangling in order for the
server/origin to understand it.
  It can re-write transparently with all the problems that causes to
security scopes and URL sync between the endpoints. Or redirect the
client to the "correct" URL.


The Squid http_access and similar *access controls* are the place for
access control - hint is in the naming. With external ACL type for
anything Squid does not support natively or well. As Flashdown mentioned
even calls to SquidGuard etc. can be wrapped and used as external ACLs.
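
(For illustration, a hedged sketch of such a wrapper hookup in squid.conf — the
helper name and path are hypothetical:

   external_acl_type urlcheck children-max=8 %SRC %URI /usr/local/bin/sg-wrapper.sh
   acl url_allowed external urlcheck
   http_access deny !url_allowed

The wrapper reads one "SRC URI" line per request on stdin and must answer OK or
ERR, following Squid's external ACL helper protocol.)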


Wrapping and external ACLs add the same complexity, memory, CPU time, traffic latency, etc.

Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-24 Thread Marcus Kool

The sub-thread starts with "do not use the url rewriter helper because of 
complexity"
and ends with the claim that the (not less complex) external acl helpers are fine to use.
And in between there is an attempt to kill the URL rewriter interface.

It would be a lot less confusing if you had started with something like
   "I do not like the URL rewriter interface, use the external acl one."

>> ufdbGuard supports dynamic lists of users, domains and source ip
>> addresses which are updated every X minutes without any service
>> interruption.
>
> So does Squid, via external ACL and/or authentication.

Aren't you confusing what Squid itself and what Squid+helpers can do?

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Is this the next step of SSL encryption? Fwd: Encrypted SNI

2018-10-19 Thread Marcus Kool



On 19/10/18 14:09, Alex Rousskov wrote:

On 10/19/2018 10:47 AM, Matus UHLAR - fantomas wrote:

On 10/19/2018 02:01 AM, Amish wrote:

Looks like ssl_bump is going to break once ESNI and Encrypted DNS are
universal. (Ofcourse it may be few years away)

Probably only way out to detect the domain name would be by implementing
CONNECT proxy instead of transparent one.



On 19.10.18 09:51, Alex Rousskov wrote:

Using forward proxies may not help as much: A CONNECT request that uses
an IP address (instead of a domain name) is pretty much as uninformative
as a TCP connection intercepted by a transparent proxy.



disabling DNS in the internal network could help that a bit.


... until the browser starts using DNS over HTTPS (with a pinned
certificate of the "resolving" HTTPS server)?
  Alex.


It is relatively easy to block DNS over HTTPS and I think there will be demand 
for that.
And I predict that Squid will have a feature to selectively block connections 
with ESNI to force clients to use the plain text SNI.

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] bank blocked

2018-10-31 Thread Marcus Kool

When there is an issue with a certificate, it is good practice to go to ssllabs 
to verify what is going on.

https://www.ssllabs.com/ssltest/analyze.html?d=i.bps%2dsberbank.by&hideResults=on&latest
shows that there is an incomplete certificate chain issue (in orange) which 
means that the server of the bank does not send all (intermediate) certificates.
Click on the blue '+' of certification paths and it shows that the 'GeoTrust 
RSA CA 2018' (intermediate certificate) had to be downloaded.

The messages are not from Squid but from ufdbGuard which apparently is 
configured with an option to block the URL in case of a certificate issue.
Since Squid already checks for valid certificate chains, I suggest turning this 
option off in ufdbGuard.
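
A quick local check for an incomplete chain (a sketch, using the hostname from this thread):

   openssl s_client -connect i.bps-sberbank.by:443 -servername i.bps-sberbank.by </dev/null
   # an incomplete chain typically ends with:
   #   Verify return code: 21 (unable to verify the first certificate)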

Marcus


On 31/10/2018 11:48, Vacheslav wrote:

I do not use bump or splice if that is what you mean. I do not import 
certificates... it works without the proxy.

-Original Message-
From: squid-users  On Behalf Of 
Matus UHLAR - fantomas
Sent: Wednesday, October 31, 2018 5:46 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] bank blocked

On 31.10.18 17:41, Vacheslav wrote:

2018-10-31 17:34:45 [4270] TLSv1.2 certificate for i.bps-sberbank.by:443: 
UNRECOGNISED ISSUER  (maybe a certificate chain issue)  *
2018-10-31 17:34:45 [4270]issuer: /C=US/O=DigiCert 
Inc/OU=www.digicert.com/CN=GeoTrust RSA CA 2018


does your system recognize this authority? Do you have an up-to-date list of CAs?


2018-10-31 17:34:45 [4270]subject: /C=BY/L=Minsk/O=BPS-Sberbank OAO/OU=Head 
Office/CN=*.bps-sberbank.by
2018-10-31 17:34:45 [4270] TLSv1.2 connection to i.bps-sberbank.by:443 has 
error code 12. It is marked as a TLS/SSL certificate issue
2018-10-31 17:34:45 [4270] BLOCK -10.17.10.17 config 
https-option  i.bps-sberbank.by:443 CONNECT

What is wrong?



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] access_log acls

2018-11-27 Thread Marcus Kool

I have an issue with access_log acls when a load balancer sends a TCP probe.

The goal is to not log errors caused by the TCP probes of the load balancer.  
All other errors must be logged.

I did a test with the following acls on one of our test systems to illustrate 
the issue:

logformat combha %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh %>ha
acl src_lb src 10.2.2.254/32
acl src_lb src 10.2.2.107/32
access_log stdio:/local/squid4/logs/lbaccess.log combha src_lb
access_log stdio:/local/squid4/logs/access.log   combha !src_lb


The logging is almost as expected: all HTTP(S) traffic from 10.2.2.107 goes to 
lbaccess.log and all other traffic to access.log,
*but* imitating the TCP probe of the LB with a telnet session from 10.2.2.107 to the squid server which is immediately terminated or sends garbage, is logged with transaction-end-before-headers to 
access.log, not lbaccess.log.


It seems that Squid, at the moment that it logs the 
transaction-end-before-headers error, does not consider the access_log acls or 
maybe has not yet processed the source IP to make the right decision.

Should the above acls send the errors to lbaccess.log ?  If not, what set of 
acls can do it?

Thanks,

Marcus




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] access_log acls

2018-11-27 Thread Marcus Kool


On 27/11/2018 13:58, Alex Rousskov wrote:

On 11/27/18 5:21 AM, Marcus Kool wrote:


logformat combha %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh %>ha
acl src_lb src 10.2.2.254/32
acl src_lb src 10.2.2.107/32
access_log stdio:/local/squid4/logs/lbaccess.log combha src_lb
access_log stdio:/local/squid4/logs/access.log   combha !src_lb
The logging is almost as expected: all HTTP(S) traffic from 10.2.2.107
goes to lbaccess.log and all other traffic to access.log,
*but* imitating the TCP probe of the LB with a telnet session from
10.2.2.107 to the squid server which is immediately terminated or sends
garbage, is logged with transaction-end-before-headers to access.log,
not lbaccess.log.
Should the above acls send the errors to lbaccess.log?

Yes, src ACLs should work for all transactions associated with to-Squid
connections, including transaction-end-before-headers errors. If they do
not work, it is a Squid bug.

Alex.


Thanks, I filed bug 4906: https://bugs.squid-cache.org/show_bug.cgi?id=4906

Is it serious enough to get a fix in Squid 4?

Marcus


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] access_log acls

2018-11-27 Thread Marcus Kool

4.5 would be nice.  4.6 would also be nice.

On 27/11/2018 14:47, Matus UHLAR - fantomas wrote:

On 11/27/18 5:21 AM, Marcus Kool wrote:

logformat combha %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh %>ha
acl src_lb src 10.2.2.254/32
acl src_lb src 10.2.2.107/32
access_log stdio:/local/squid4/logs/lbaccess.log combha src_lb
access_log stdio:/local/squid4/logs/access.log   combha !src_lb
The logging is almost as expected: all HTTP(S) traffic from 10.2.2.107
goes to lbaccess.log and all other traffic to access.log,
*but* imitating the TCP probe of the LB with a telnet session from
10.2.2.107 to the squid server which is immediately terminated or sends
garbage, is logged with transaction-end-before-headers to access.log,
not lbaccess.log.
Should the above acls send the errors to lbaccess.log?



On 27/11/2018 13:58, Alex Rousskov wrote:

Yes, src ACLs should work for all transactions associated with to-Squid
connections, including transaction-end-before-headers errors. If they do
not work, it is a Squid bug.


On 27.11.18 14:42, Marcus Kool wrote:

Thanks, I filed bug 4906: https://bugs.squid-cache.org/show_bug.cgi?id=4906

Is it serious enough to get a fix in Squid 4?


which "squid 4" exactly?


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] access_log acls

2018-11-28 Thread Marcus Kool
On Wed, Nov 28, 2018 at 12:24:30PM +0100, Matus UHLAR - fantomas wrote:
> On 27.11.18 15:04, Marcus Kool wrote:
> > 4.5 would be nice.  4.6 would also be nice.
> 
> OK, I will rephrase my question: which squid version do you find this in?

This issue was found in Squid 4.3

> 
> > On 27/11/2018 14:47, Matus UHLAR - fantomas wrote:
> > > > > On 11/27/18 5:21 AM, Marcus Kool wrote:
> > > > > > logformat combha %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh %>ha
> > > > > > acl src_lb src 10.2.2.254/32
> > > > > > acl src_lb src 10.2.2.107/32
> > > > > > access_log stdio:/local/squid4/logs/lbaccess.log combha src_lb
> > > > > > access_log stdio:/local/squid4/logs/access.log   combha !src_lb
> > > > > > The logging is almost as expected: all HTTP(S) traffic from 
> > > > > > 10.2.2.107
> > > > > > goes to lbaccess.log and all other traffic to access.log,
> > > > > > *but* imitating the TCP probe of the LB with a telnet session from
> > > > > > 10.2.2.107 to the squid server which is immediately terminated or 
> > > > > > sends
> > > > > > garbage, is logged with transaction-end-before-headers to 
> > > > > > access.log,
> > > > > > not lbaccess.log.
> > > > > > Should the above acls send the errors to lbaccess.log?
> > > 
> > > > On 27/11/2018 13:58, Alex Rousskov wrote:
> > > > > Yes, src ACLs should work for all transactions associated with 
> > > > > to-Squid
> > > > > connections, including transaction-end-before-headers errors. If they 
> > > > > do
> > > > > not work, it is a Squid bug.
> > > 
> > > On 27.11.18 14:42, Marcus Kool wrote:
> > > > Thanks, I filed bug 4906: 
> > > > https://bugs.squid-cache.org/show_bug.cgi?id=4906
> > > > 
> > > > Is it serious enough to get a fix in Squid 4?
> > > 
> > > which "squid 4" exactly?
> 
> 
> -- 
> Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> - Holmes, what kind of school did you study to be a detective?
> - Elementary, Watson.  -- Daffy Duck & Porky Pig
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Whitelisting youtube

2018-12-28 Thread Marcus Kool

Wolfgang, why don't you stop using squidguard which has no support for 5+ years 
and switch to ufdbGuard?

ufdbGuard is regularly maintained and has a Reference Manual that explains what 
and how to whitelist domains.

Marcus


On 28/12/2018 07:18, Wolfgang Paul Rauchholz wrote:

Problem statement: can't whitelist youtube.com 

I run squid 3.5 and squidguard on a CENTOS 7 home linux server.
The blacklist database is created by a publicly available script called 
getlists.sh. This script downloads and compiles blacklists from several sites 
(e.g. the squidguard website).
To whitelist youtube, which is blocked too, I created the directory 'white' 
within 'blacklist'. The squidguard config looks like this:

dest white {
        domainlist      white/domains
        urllist         white/urls
}

acl {
        default {
                pass    white !adv !porn !warez all
                redirect http://localhost/block.html
                }
}

the domain file within 'white' has these entries:
.2mdn.net:443 
.accounts.google.com 
.accounts.youtube.com 
.dnld.googlevideo.com 
.gmail.com:443-
.googleads4.g.doubleclick.net 
.googlevideo.com 
.i.ytimg.com 
.nek.googlevideo.com 
.play.google.com 
.sb.scorecardresearch.com 
.s.ytimg.com 
.youtube.com 
.ytimg.com 

The entry I find in the access.log file reads like this:
1545988674.026      0 10.5.2.96 TAG_NONE/503 0 CONNECT www.youtube.com:443 
 - HIER_NONE/- -


I still cannot unblock youtube.
I'd appreciate your help in resolving this.

Wolfgang




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Whitelisting youtube

2018-12-29 Thread Marcus Kool

Hi Eliezer,

If you mean compiler errors on Debian 9, which has OpenSSL 1.1 ...

We will release ufdbGuard 1.34 soon which supports OpenSSL 1.1 since OpenSSL 
1.1 is not compatible with OpenSSL 1.0.

Marcus


On 29/12/2018 15:22, elie...@ngtech.co.il wrote:


Marcus,

Does ufdbGuard have a Debian package or build instructions?
The last time I tried to compile it on both Debian and Ubuntu I 
encountered a couple of issues.

Thanks,

Eliezer



Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il

From: squid-users On Behalf Of Marcus Kool
Sent: Friday, December 28, 2018 12:14
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Whitelisting youtube

Wolfgang, why don't you stop using squidguard which has no support for 5+ years 
and switch to ufdbGuard?

ufdbGuard is regularly maintained and has a Reference Manual that explains what 
and how to whitelist domains.

Marcus

On 28/12/2018 07:18, Wolfgang Paul Rauchholz wrote:

Problem statement: can't whitelist youtube.com

I run squid 3.5 and squidguard on a CENTOS 7 home linux server.

The blacklist database is created by a publicly available script called 
getlists.sh. This script downloads and compiles blacklists from several sites 
(e.g. the squidguard website).

To whitelist youtube, which is blocked too, I created the directory 'white' 
within 'blacklist'. The squidguard config looks like this:

dest white {
      domainlist      white/domains
      urllist         white/urls
}

acl {
      default {
              pass    white !adv !porn !warez all
              redirect http://localhost/block.html
              }
}

the domain file within 'white' has these entries:

.2mdn.net:443
.accounts.google.com
.accounts.youtube.com
.dnld.googlevideo.com
.gmail.com:443-
.googleads4.g.doubleclick.net
.googlevideo.com
.i.ytimg.com
.nek.googlevideo.com
.play.google.com
.sb.scorecardresearch.com
.s.ytimg.com
.youtube.com
.ytimg.com

The entry I find in the access.log file reads like this:

1545988674.026     0 10.5.2.96 TAG_NONE/503 0 CONNECT www.youtube.com:443 - HIER_NONE/- -

I still cannot unblock youtube.

I'd appreciate your help in resolving this.

Wolfgang



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sslbump with multiple users and multiple ACLs for each

2019-01-03 Thread Marcus Kool

For those who do not know it yet: ufdbGuard is free.

ufdbGuard supports user-defined URL databases, 3rd party plain-text URL 
databases, and a commercial database from www.urlfilterdb.com.

Marcus


On 03/01/2019 13:45, Benjamin E. Nichols wrote:

Why are you asking support questions about a commercial product, on the squid 
proxy email users list?

On 1/3/2019 9:40 AM, stressedtux wrote:

With ufdbguard is it possible to allow one user to have an acl and another user a
different acl? I'm trying to completely block access to the internet except for what I
should allow.



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sslbump with multiple users and multiple ACLs for each

2019-01-03 Thread Marcus Kool

ufdbGuard supports blacklists, whitelists, large numbers of whitelists, users 
and acls.

The configuration file is intuitive and if the Reference Manual does not 
explain everything, one can also write to the support desk of URLfilterDB or 
the ufdbguard mailing list.
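
For example, a minimal per-user whitelist setup might look like this (a sketch
in the squidGuard-compatible syntax; usernames and list names are hypothetical):

src alice_src {
    user alice
}
src bob_src {
    user bob
}
dest white_all {
    domainlist white_all/domains
}
dest white_alice {
    domainlist white_alice/domains
}
dest white_bob {
    domainlist white_bob/domains
}
acl {
    alice_src {
        pass white_all white_alice none
    }
    bob_src {
        pass white_all white_bob none
    }
    default {
        pass white_all none
    }
}

Here 'none' blocks everything that is not explicitly whitelisted.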

Just for the record, I am biased since I am the author of ufdbGuard.

Marcus



On 03/01/2019 14:05, stressedtux wrote:

Sorry guys, I'm not trying to start a witch hunt, I'm just trying to understand
if squid alone or with squidguard or another plugin is able to do this:

- Blacklist all websites
- Allow a whitelist for "user1"
- Allow a different whitelist for "user2" and so on (whitelist3 for user3,
whitelist4 for user4...)
- And have a whitelist for everyone, logged users and not logged ones.
(I have to block all URLs, http and https)

Don't care about paid products... just trying to understand if I'm on the
correct path or whether trying to configure squid with this kind of rules is
impossible. I'm new at squid and I have been trying for 3 days already to
configure it this way with no success.

Thanks in advance
Tux



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] icap not answering

2019-03-03 Thread Marcus Kool

Squid is an ICAP client, not an ICAP server, and does not respond on port 1344.
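
If you want to experiment with REQMOD/RESPMOD you need a separate ICAP server;
a common choice is c-icap, which ships with an 'echo' test service. A sketch
(Debian/Ubuntu package and service names assumed):

   apt-get install c-icap
   service c-icap start
   # then point squid.conf at the running server:
   #   icap_service service_req reqmod_precache bypass=1 icap://127.0.0.1:1344/echo
   #   adaptation_access service_req allow all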
Marcus


On 02/03/2019 22:29, steven wrote:

Hi,


I would like to do modifications on https connections and therefore enabled ssl 
bump in squid 4.4; now I would like to see the real traffic, and icap looks like 
a way to watch and change that traffic.

but squid is not answering to icap://127.0.0.1:1344 when using pyicap or telnet.

the telnet error is:

telnet 127.0.0.1 1344
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused

which is imho good because it tells me that something is answering on that port 
after all.

did i misconfigure something?



config:

debug_options 28,9
#icap
icap_enable on
icap_service service_req reqmod_precache bypass=1 icap://127.0.0.1:1344/reqmod
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=0 
icap://127.0.0.1:1344/respmod
adaptation_access service_resp allow all
acl localnet src 127.0.0.1/32 192.168.10.0/24
http_access allow localnet
acl SSL_ports port 443
acl CONNECT method CONNECT
#http_access deny !Safe_ports
#http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
include /etc/squid/conf.d/*
http_access allow localhost
coredump_dir /var/spool/squid
refresh_pattern ^ftp:        1440    20%    10080
refresh_pattern ^gopher:    1440    0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
refresh_pattern .        0    20%    4320
# default end
# my config
http_port 3128 accel ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem
https_port 3129 ssl-bump intercept generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/ssl_db -M 4MB
acl step1 at_step SslBump1

ssl_bump peek step1
ssl_bump bump all

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] attempting to disable (or mute) logs

2019-03-13 Thread Marcus Kool

I think you are suffering from this bug: 
https://bugs.squid-cache.org/show_bug.cgi?id=4906

Marcus


On 13/03/2019 10:09, Joey Officer wrote:


I’m running a squid instance in AWS behind a network load balancer.  As part of the health checks, at least that’s what I believe, we’re seeing this log entry spamming which is hiding the rest of 
the relevant log data. Sample log entry (repeating countless times)


1552419269.039 0 172.34.33.137 NONE/000 0 NONE 
error:transaction-end-before-headers - HIER_NONE/- -

I’ve added the following:

acl dontLog http_status 000 # tcp_denied (due to auth)

cache_store_log none

cache_log /dev/null

access_log stdio:/var/log/squid/access.log !dontLog

Any help on hiding that log entry so I can get back to useful data would be 
great.

Thanks,

Joey


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Replace SquidGuard with ufdbguard : configuration examples ?

2019-03-18 Thread Marcus Kool

The ufdbGuard source files and packages have an example config file.

If you have a ufdbGuard-specific issue I suggest using the ufdbGuard mailing list 
or going directly to the support desk of URLfilterDB.

Marcus


On 18/03/2019 06:39, Nicolas Kovacs wrote:

Hi,

I've been running the Squid + SquidGuard combination for quite some time
in our local school. I'm also filtering HTTPS connections using the
Squid SSL Bump functionality.

I'd like to test ufdbguard, since SquidGuard doesn't seem to be
maintained anymore, and it's also quite RAM-consuming.

I've read the PDF manual of ufdbguard, but before going any further, I'd
like to ask. Do any of you guys here use the Squid + ufdbguard
combination ? And if this is the case, can you eventually send me a few
working configuration files ? I'm currently fiddling with a local
sandbox installation, and I have some trouble putting the pieces together.

Cheers from the sunny South of France,

Niki Kovacs

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to limit bandwidth (squid 4.7.2 )

2019-07-31 Thread Marcus Kool

On Linux you can use iptables to do QoS and make sure that a single connection 
does not consume all bandwidth.
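
A sketch of that approach with iptables marking plus an HTB class (interface
name, mark values and the 15 Mbit/s cap are illustrative):

   # mark reply traffic leaving the proxy port towards the LAN interface
   iptables -t mangle -A POSTROUTING -o eth1 -p tcp --sport 3128 -j MARK --set-mark 10
   # send marked traffic into a 15 Mbit/s HTB class
   tc qdisc add dev eth1 root handle 1: htb default 20
   tc class add dev eth1 parent 1: classid 1:10 htb rate 15mbit ceil 15mbit
   tc class add dev eth1 parent 1: classid 1:20 htb rate 85mbit ceil 100mbit
   tc filter add dev eth1 parent 1: protocol ip handle 10 fw flowid 1:10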

Marcus


On 30/07/2019 10:22, Service MV wrote:

Just to explain clearly, my goal is that no user of my LAN can download more 
than 15 megabits/s, because some downloads consume the full 100 megabits/s, leaving 
the rest of the users offline.
Since squid calculates in bytes, the limit that I want to establish for any 
user of my LAN would be 1966080 bytes/s.
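(For reference, the conversion: 15 Mbit/s = 15 × 1024 × 1024 / 8 = 1,966,080 bytes/s, 
which matches the delay_parameters figures below.)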
Thank you very much for your help.

On Tue, Jul 30, 2019 at 09:57, Service MV (service...@gmail.com) wrote:

Thanks for patience.

I modify the line:
# All-net setting: first 15MB of file download at full speed, then continue at 10MB/s
# Individual client setting: first 10MB of file download at full speed, then continue at 7MB/s
delay_parameters 1  1310720/1966080 917504/1310720

In this way I can make the Delay Pool work.
But I'm still not sure if I'm using my symmetrical 100Mb/s bandwidth 
correctly.

Any comments on that?


On Mon, Jul 29, 2019 at 16:58, Service MV (service...@gmail.com) wrote:

Hello everyone!
I have a 100/100 Mbit/s internet link and I am trying unsuccessfully to 
limit downloads to a maximum of 15Mb/s of any IP on my network. Some downloads 
consume the entire link.
I copy my settings to help me see where I'm going wrong. Thank you very 
much!
Gabriel

PS.: squid -v '--enable-delay-pools'

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
#acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl localnet src 10.10.8.0/22  # (My LAN)
#acl largefiledown src 10.10.8.0/22  # Limit download and upload to 10Mbps
#acl localnet src 10.0.0.0/8  # RFC 1918 local 
private network (LAN)
#acl localnet src 100.64.0.0/10  # RFC 6598 
shared address space (CGN)
#acl localnet src 169.254.0.0/16  # RFC 3927 
link-local (directly plugged) machines
#acl localnet src 172.16.0.0/12  # RFC 1918 local 
private network (LAN)
#acl localnet src 192.168.0.0/16  # RFC 1918 
local private network (LAN)
#acl localnet src fc00::/7       # RFC 4193 local private network range
#acl localnet src fe80::/10       # RFC 4291 link-local (directly 
plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

acl LS_whitedomains dstdomain "/etc/squid/acl/whitedomains.txt"
acl LS_blackdomains dstdomain "/etc/squid/acl/blackdomains.txt"
acl LS_malicius dstdomain "/etc/squid/acl/malicius.txt"
acl LS_ads-tracking dstdomain "/etc/squid/acl/ads-tracking.txt"

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

delay_pools 1
delay_class 1 2
delay_parameters 1 103809024/103809024 15728640/15728640 # (98/98 
megabytes in bytes and 15/15 megabytes in bytes)
delay_access 1 allow localnet

http_access deny LS_blackdomains
http_access allow LS_whitedomains
http_access deny LS_malicius
http_access deny LS_ads-tracking


# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed


http_access allow localnet
http_access allow localhost

 

Re: [squid-users] [ext] Re: Squid and DoH

2020-03-02 Thread Marcus Kool

On 02/03/2020 08:46, Ralf Hildebrandt wrote:

* Andrea Venturoli :

On 2020-02-29 14:17, Matus UHLAR - fantomas wrote:


I guess DoH means dns over https and thus needs sslbump enabled.  the easy
but limited way would be to disable connections to publicly available DoH
servers.

Thanks.
Is someone maintaining such a list?

There's one in the wikipedia entry.

Ralf Hildebrandt
Charité - Universitätsmedizin Berlin
Geschäftsbereich IT | Abteilung Netzwerk

One can also use the URL database of URLfilterDB which includes the 
dnsoverhttps category.
See also https://www.urlfilterdb.com/suggestentries/lookup_url.html for an 
online database query.
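
For those who keep such a list as a flat file, a minimal squid.conf sketch to
refuse those destinations could be (file path hypothetical):

   acl doh_servers dstdomain "/etc/squid/doh_domains.txt"
   http_access deny doh_servers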

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-announce] [ADVISORY] SQUID-2019:4 Multiple Issues in HTTP Request processing

2020-04-18 Thread Marcus Kool

Amos,
The latest version of Squid is 4.10.  Do you mean "fixed in 4.10" instead of "fixed 
in 4.8"?

Thanks,
Marcus

On 18/04/2020 14:10, Amos Jeffries wrote:

__

 Squid Proxy Cache Security Update Advisory SQUID-2019:4
__

Advisory ID:SQUID-2019:4
Date:   April 18, 2020
Summary:Multiple Issues
 in HTTP Request processing.
Affected versions:  Squid 3.5.18 -> 3.5.28
 Squid 4.0.10 -> 4.7
Fixed in version:   Squid 4.8
__

 http://www.squid-cache.org/Advisories/SQUID-2019_4.txt
 http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12520
 http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12524
__

Problem Description:

  Due to incorrect URL handling Squid is vulnerable to access
  control bypass, cache poisoning and cross-site scripting attacks
  when processing HTTP Request messages.

__

Severity:

  A remote client can deliver crafted URLs to bypass cache manager
  security controls and retrieve confidential details about the
  proxy and traffic it is handling.

  A remote client can deliver crafted URLs which cause arbitrary
  content from one origin server to be stored in cache as URLs
  within another origin. This opens a window of opportunity for
  clients to be tricked into fetching and XSS execution of that
  content via side channels.

__

Updated Packages:

  This bug is fixed by Squid version 4.8.

  In addition, patches addressing this problem for the stable
  releases can be found in our patch archives:

Squid 4:
  

  If you are using a prepackaged version of Squid then please refer
  to the package vendor for availability information on updated
  packages.

__

Determining if your version is vulnerable:

  All Squid-2.x are not vulnerable.

  All Squid-3.x up to and including 3.5.17 are not vulnerable.

  All Squid-3.5.18 up to and including 3.5.28 are vulnerable.

  All Squid-4.x up to and including 4.0.9 are not vulnerable.

  All Squid-4.x up to and including 4.7 without HTTPS support are
  not vulnerable.

  All Squid-4.0.10 up to and including 4.7 with HTTPS support are
  vulnerable.

__

Workarounds:

  There are no workarounds for Squid-3.5.

  For Squid-4 build using --without-openssl --without-gnutls


__

Contact details for the Squid project:

  For installation / upgrade support on binary packaged versions
  of Squid: Your first point of contact should be your binary
  package vendor.

  If your install and build Squid from the original Squid sources
  then the squid-users@lists.squid-cache.org mailing list is your
  primary support point. For subscription details see
  .

  For reporting of non-security bugs in the latest STABLE release
  the squid bugzilla database should be used
  .

  For reporting of security sensitive bugs send an email to the
  squid-b...@lists.squid-cache.org mailing list. It's a closed
  list (though anyone can post) and security related bug reports
  are treated in confidence until the impact has been established.

__

Credits:

  This vulnerability was discovered by Jeriko One
  .

  Fixed by Amos Jeffries of Treehouse Networks Ltd.

__

Revision history:

  2019-05-14 14:56:49 UTC Initial Report
  2019-06-23 15:15:56 UTC Patches Released
  2019-06-05 15:52:17 UTC CVE Assignment
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and cross-signed certificates

2020-05-31 Thread Marcus Kool
Yes, I have seen this with Squid _with_ ssl_bump.  In trying to resolve the issue I also upgraded to Squid 4.11, removed the certificate cache and still had messages that the certificate expired on 
May 30 2020.  I double-checked all certificates but none has this expiry date.


We have a wildcard certificate from Sectigo that we use for *.urlfilterdb.com.
The really strange thing is that the issue does not appear for all subdomains:

'www' subdomain is OK

'files' subdomain has expired certificate

www.sectigo.com also has an expiration issue when used with the Squid proxy and 
sslbump (peek+bump mode).

My *guess* is that the certificate checking code used by ssl_bump does not 
check all certificate signing paths.
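
One way to inspect the chain that the server actually presents (a sketch; the
AddTrust External CA Root, which expired on May 30 2020, is the usual suspect in
this incident):

   openssl s_client -connect files.urlfilterdb.com:443 \
           -servername files.urlfilterdb.com -showcerts </dev/null 2>/dev/null \
     | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > chain.pem
   # print subject and expiry of the first certificate in the dump;
   # split chain.pem to inspect the others
   openssl x509 -in chain.pem -noout -subject -enddate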

Marcus


On 2020-05-31 00:58, Garbacik, Joe wrote:

Has anyone else noticed any issues with the expiration of the Sectigo 
certificates today that appear to be related to this issue:
https://support.sectigo.com/Com_KnowledgeDetailPage?Id=kA03l0117LT
https://support.sectigo.com/Com_KnowledgeDetailPage?Id=kA01N00rgSZ

I started seeing this in my logs today for a site that has always worked.

... cert_errors="X509_V_ERR_CERT_HAS_EXPIRED@depth=3" ...

I also noticed that with a browser, bypassing the proxy,  the certificate is 
fine.
I also noticed that testing with openssl, it indicates expired as well.

    Verify return code: 10 (certificate has expired)


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Ubiquiti: Anyone interested in instructions how to route traffic to a squid box?

2016-11-20 Thread Marcus Kool

Is it an EdgeRouter?
I am interested since Ubiquiti has poor documentation.

Marcus


On 11/20/2016 05:31 PM, Eliezer Croitoru wrote:

I have a tiny Ubiquiti edge router here and I can publish the rules for
routing ports 80 and 443 and 53 into the squid\dns box.
Any interest in such a guide in the wiki?

Eliezer


Eliezer Croitoru 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-23 Thread Marcus Kool



On 23/01/17 15:31, Alex Rousskov wrote:

On 01/23/2017 04:28 AM, Yuri wrote:


1. How does it work?


My response below and the following commit message might answer some of
your questions:

http://bazaar.launchpad.net/~squid/squid/5/revision/14769


This suggests that the feature only goes to Squid 5.  Will it be ported to 
Squid 4?


I.e., where are the downloaded certs stored, how are they
handled, are they saved anywhere on disk?


Missing certificates are fetched using HTTP[S]. Certificate responses
should be treated as any other HTTP[S] responses with regard to caching.
For example, if you have disk caching enabled and your caching rules
(including defaults) allow certificate response caching, then the
response should be cached. Similarly, the cached certificate will
eventually be evicted from the cache following regular cache maintenance
rules. When that happens, Squid will try to fetch the certificate again
(if it becomes needed again).



2. How is this feature related to sslproxy_foreign_intermediate_certs,
and how can it interfere with it?


AFAICT by looking at the code, Squid only downloads certificates that
Squid is missing when trying to build a complete certificate chain for a
given server connection. Any sslproxy_foreign_intermediate_certs are
used as needed during the chain building process (i.e., they are _not_
"missing").


I created bug report http://bugs.squid-cache.org/show_bug.cgi?id=4659
a week ago but there has not been any activity.
Is there someone who has sslproxy_foreign_intermediate_certs
working in Squid 4.0.17 ?

Thanks,
Marcus

[snip]


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-23 Thread Marcus Kool



On 23/01/17 17:23, Yuri Voinov wrote:
[snip]


I created bug report http://bugs.squid-cache.org/show_bug.cgi?id=4659
a week ago but there has not been any activity.
Is there someone who has sslproxy_foreign_intermediate_certs
working in Squid 4.0.17 ?

Seems to work the same as in 3.5.x, as far as I can see.


3.5.x works fine but 4.0.17 fails on my servers.



Thanks,
Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL_bump and source IP

2017-02-02 Thread Marcus Kool

The terminology may be confusing:
ssl_bump means more or less "looking at HTTPS traffic"
ssl_bump splice  means "do not bump/intercept HTTPS traffic. No fake CA certificates 
are used"
ssl_bump bump    means "bump/intercept HTTPS traffic and use a fake CA 
certificate"

So the question is not about ssl_bump but about "ssl_bump bump".
To prevent the active bump, you need an acl to splice (leave the connection 
alone)
Something like this:

acl tls_s1_connect  at_step SslBump1

acl tls_vip_users   fill-in-your-details

ssl_bump splice tls_vip_users   # do not peek/bump vip users
ssl_bump peek   tls_s1_connect  # peek at connections of other users
ssl_bump stare  all             # peek/stare at the server side of connections of other users
ssl_bump bump   all             # bump connections of other users

Marcus


On 11/01/17 09:50, Matus UHLAR - fantomas wrote:

On 11.01.17 11:37, FredB wrote:

I'm searching a way to exclude an user (account) or an IP from my lan
I can exclude a destination domain to decryption with SSL_bump


simply define an ACL and deny bumping it.


but not all requests from a specific source


what do you mean here?


, maybe because I'm using x-forwarded ?


x-forwarded-for has nothing to do with this

Maybe you should rephrase the question so we understant you better.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] URL encoding in squid

2017-02-21 Thread Marcus Kool



On 21/02/17 17:17, Amos Jeffries wrote:


Is it possible to pass a %-encoded URL to squidGuard?


Not with Squid-3.4. The 3.5 releases have a url_rewrite_extras directive
which takes logformat codes. You could use that to send an extra
%-encoded copy of the URL to the helper in addition to the normal URL
input. (sorry there is no package yet in Debian 8 for 3.5).

Amos
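
(A sketch of what that could look like in squid.conf, assuming the '#' logformat
flag, which URL-encodes the expanded value:

   url_rewrite_extras "%>a/%>A %un %>rm myip=%la myport=%lp %#ru"

The helper then receives a URL-quoted copy of the request URL as an extra field
after the standard ones.)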


ufdbGuard has a database format that supports UTF8 characters but
only the latest beta (ufdbguard 1.32.5beta9) fully supports it.
I can send you a link to the beta software if you are interested.

how it works:
ufdbGuard has a utility to convert domains+urls files into a
database file, which decodes all %-encoded characters.
The URLs that Squid sends to ufdbGuard are also all converted
which means that URLs with %-encoded URLs and URLs without %-encoding
match.

Marcus


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Data usage reported in log files

2017-03-10 Thread Marcus Kool



On 10/03/17 16:27, Yosi Greenfield wrote:

Thanks!

Netflow is much larger.

I really want to know exactly what site is costing my users data. Many of
our users are on metered connections and are paying for overage, but I can't
tell where that overage is being used. Are they using youtube, webmail,
wetransfer? I see only a fraction of their actual proxy usage in my squid
logs.

Data compression would give the opposite result, so that's not what I'm
seeing.

Any other ideas?


Is there any traffic that is not directed to Squid?

Do you use ssl-bump in bump mode?
If not, Squid has no idea how many bytes go through the (HTTPS) tunnels.

Marcus



-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
Behalf Of Antony Stone
Sent: Friday, March 10, 2017 2:21 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Data usage reported in log files

On Friday 10 March 2017 at 20:14:36, Yosi Greenfield wrote:


Hello all,

I'm analyzing my squid logs with sarg, and I see that the number of
bytes reported as used by any particular user are often nowhere near
the bytes reported by netflow and tcpdump.


Which is larger?


I'm trying to trace my users' data usage by site, but I'm unable to do
so from the log files because of this.


Well, what is it you really want to know?

netflow / tcpdump will give you accurate numbers for the quantity of data on
your Internet link - I assume this is what you're most interested in?

Squid will show you what quantity of data goes to/from the clients, but is
that really important?


Can someone please explain to me what I might be missing? Why does
squid log report one thing and netflow and tcpdump show something
else?


Data compression?

HTTP responses are often gzipped, so if tcpdump is showing you smaller
numbers of bytes than Squid reports, that's what I'd look at first.


Antony.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] ufdbGuard 1.33.1 is released

2017-03-17 Thread Marcus Kool

ufdbGuard, the free URL filter for Squid, has a new release.
The highlights of this release are:
+ full UTF8 support for URLs
+ IPv6 support for sources
+ performance improvement for large systems
+ all reported issues have been fixed.

ufdbGuard was forked from squidGuard in 2005 and is actively maintained,
uses fewer resources and has more features than squidGuard.

ufdbGuard can be downloaded from https://sourceforge.net and 
https://www.urlfilterdb.com

Marcus Kool
author of ufdbGuard
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SMP and AUFS

2017-03-19 Thread Marcus Kool

The root cause of why admins configure SMP + [A]UFS is the lack of good 
documentation.
A few lines in the wiki and squid.conf.documented should be enough.
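
Those few lines could, for instance, show how to give each worker its own disk
cache with the ${process_number} macro and squid.conf conditionals (a sketch —
subject to the consistency caveats discussed below):

   workers 2
   if ${process_number} = 1
   cache_dir aufs /var/spool/squid/w1 10000 16 256
   endif
   if ${process_number} = 2
   cache_dir aufs /var/spool/squid/w2 10000 16 256
   endif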

Marcus


On 19/03/17 06:11, Eliezer  Croitoru wrote:

I think that some warning message like "WARNING: be sure you know that UFS\AUFS 
doesn't support SMP\MultiWorkers" should be added to the stderr or cache.log.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of senor
Sent: Sunday, March 19, 2017 7:12 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SMP and AUFS

On 3/17/2017 7:45, Alex Rousskov wrote:

On 03/16/2017 10:42 PM, senor wrote:


I understand that AUFS is not SMP aware but if each worker has its own
AUFS cache is there any problem other than the inefficiencies of
duplicate cache?


Yes. Clients may get stale cached entries, possibly breaking advanced
HTTP transactions that rely on a more-or-less compliant proxy cache.

Also, I do not know exactly how local and shared cache indexes interact
when SMP-unaware store updates its local index without updating the
shared one. Most likely, such partial updates lead to bugs. You may
reduce bugs probability by not mixing shared and ufs-based stores in SMP
mode, but I doubt you can eliminate all problems that way.



I'm pretty sure that AUFS is used with squid running in SMP mode a lot.


I can think of many examples where a lot of people do things they should
not be doing and do not do things they should be doing. Just because
many use X to solve some problem, does not make using X a good idea and
certainly does not make it the best solution available.



The squid wiki even has a CARP configuration example for this combination.


I hope there are no official examples advertising SMP AUFS
configurations. If there are, they should be removed IMO.

Alex.


There are many references in the squid wiki, FAQ and Knowledgebase about
SMP but I don't see any of them reflecting the concerns you have brought
up. My point in mentioning that there are a lot of installations using
SMP and AUFS is that something widely used but buggy tends to be brought
up on this email list and I haven't seen it.

I'm not trying to claim there are no problems. I'm just making sure my
expectations are realistic. Your comments were the first I became aware
anyone thought poorly about the combination of AUFS with SMP. Rock is of
course preferred but it comes with more baggage than AUFS. My own
experience has been pretty good. Maybe just lucky.

Senor
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] blocking or allowing specific youtube videos

2017-03-21 Thread Marcus Kool

ufdbGuard is a URL filter which given the input
   www.youtube.com/watch?v=XX
blocks the following URLs:
   www.youtube.com/watch?v=XX
   www.youtube.com/embed/XX
   www.youtube.com/get_video_info?video_id=XX
   ytimg.googleusercontent.com/vi/XX/
   i.ytimg.com/vi/XX/
   ...
ufdbGuard also blocks users who try to circumvent the URL filter with URLs like
   www.youtube.com/watch?foo=1&v=XX&bar=2

The acls of ufdbGuard can block or allow any set of URLs.
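
A minimal configuration sketch (squidGuard-compatible syntax; the list name is
hypothetical, and blockedvideos/urls holds one www.youtube.com/watch?v=XX line
per video):

   dest blockedvideos {
       urllist blockedvideos/urls
   }
   acl {
       default {
           pass !blockedvideos all
       }
   }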

Marcus


On 21/03/17 04:05, Sohan Wijetunga wrote:

The project subject is blocking or allowing specific youtube videos. For that 
research I hope to add more features, but currently I'm stuck on getting full urls 
from clients. According to my project,
the environment should be a client-server environment. All the clients' youtube 
traffic should be managed through the gateway. I am currently following the squid 
helper program examples; they seem to fulfil my requirement,
but those examples are not enough for testing. Using a squid helper program will let me 
do some development later in my research. I really need to do that project 
using squid.



 I look forward to hearing from you soon.

Thank you.

Best Regards,

Sohan.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] URL sometimes reurns empty response

2017-05-02 Thread Marcus Kool

Looks like MS uses multiple servers for msftconnecttest.com and that they send 
different content.

On 02/05/17 08:59, Ralf Hildebrandt wrote:

In some cases, our proxies (got 4 of them) return an empty result when
querying "http://www.msftconnecttest.com/ncsi.txt" (which is used by
Microsoft browsers to check if they're online).

I'm using this incantation to check the URL:

watch -d curl --silent -v -x "http://proxy-cvk-1.charite.de:8080" 
http://www.msftconnecttest.com/ncsi.txt

Usually, the URL should just return "Microsoft NCSI".
In some cases I get an empty response, but curl reports:

< Age: 5
< X-Cache: HIT from proxy-cvk-1
< Via: 1.1 proxy-cvk-1 (squid/5.0.0-20170421-r15126)
< Connection: keep-alive
<
* Excess found in a non pipelined read: excess = 14 url = /ncsi.txt 
(zero-length body)
* Curl_http_done: called premature == 0
* Connection #0 to host (nil) left intact

As you can see, something is producing an excess of 14 Bytes (which
coincides with the 14 bytes length of "Microsoft NCSI").

< Cache-Control: max-age=30,must-revalidate

Immediately after revalidating, the problem occurs.

I tried this with 5.0.0-20170421-r15126 as well as 4.0.19 - same result.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl bump and url_rewrite_program (like squidguard)

2017-05-04 Thread Marcus Kool

Hi Edouard,

To block GET https://www.example.com/foo.html and to pass CONNECT 
www.example.com you need
a) squid with ssl-bump in peek+bump mode
b) ufdbGuard

ufdbGuard can skip the CONNECT and wait for the GET request,
which can be blocked without browser errors.

Since ssl-bump is not easy, it is recommended to do this in two steps:
a) make sure that Squid with ssl-bump works fine,
b) then add ufdbGuard.
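
A minimal sketch of the combination in squid.conf (paths hypothetical;
ufdbgclient is ufdbGuard's helper program for Squid, options omitted):

   # step a: peek-and-bump
   acl step1 at_step SslBump1
   ssl_bump peek step1
   ssl_bump stare all
   ssl_bump bump all
   # step b: hand the decrypted requests to ufdbGuard
   url_rewrite_program /usr/local/ufdbguard/bin/ufdbgclient
   url_rewrite_children 16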

Marcus


On 04/05/17 06:03, Edouard Gaulué wrote:

Hi community,

Any news about this?

I've tried 3.5.25 but still observe this behaviour.

I understand it well since I read: 
https://serverfault.com/questions/727262/how-to-redirect-https-connect-request-with-squid-explicit-proxy

But how can I let the CONNECT request succeed and later block/redirect the next HTTP 
request coming through this established connection tunnel?

Best Regards,

On 03/11/2015 at 23:48, Edouard Gaulué wrote:

Hi community,

I've followed
http://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit  to
set up my server. It looks really interesting and it's said to be the most
common configuration.

I often observe (example here with www.youtube.com):
***
The following error was encountered while trying to retrieve the URL:
https://http/*

*Unable to determine IP address from host name "http"*

The DNS server returned:

Name Error: The domain name does not exist.


This happens while the navigator (Mozilla) is trying to get a frame at
https://ad.doubleclick.net/N4061/adi/com.ythome/_default;sz=970x250;tile=1;ssl=1;dc_yt=1;kbsg=HPFR151103;kga=-1;kgg=-1;klg=fr;kmyd=ad_creative_1;ytexp=9406852,9408210,9408502,9417689,9419444,9419802,9420440,9420473,9421645,9421711,9422141,9422865,9423510,9423563,9423789;ord=968558538238386?


That's ads so I'm not so fond of it...

But this leads me to the fact I get this behavior each time the site is
banned by squidguard.

Is there something to do to avoid this behavior? I mean, squidguard
should send:

*
  Access denied

Supplementary info :
Client address = 192.168.XXX.XXX
Client name = 192.168.XXX.XXX
User ident =
Client group = XXX
URL = https://ad.doubleclick.net/
Target class = ads

If this is wrong, contact your administrator
**

squidguard is an url_rewrite_program that looks to respect squid
requirements. Redirect looks like this :
http://proxyweb.myserver.mydomain/cgi-bin/squidGuard-simple.cgi?clientaddr=...

I've played around trying to change the redirect URL and it leads me to
the idea that ssl_bump tries to analyse the part until the ":". Is there a way
to avoid this? Is this just a configuration matter?

Could putting in an ssl_bump rule saying "every server whose name matches "http" or
"https" should splice" solve the problem?

Regards, EG


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid works with ssl bump in intercept mode and root certificate in browser, but apps does not work

2017-05-18 Thread Marcus Kool

You have not stated which version of Squid you are using but my guess is that 
it is 3.5.x.

facebook app and other apps use port 443 but do not use HTTPS, and therefore 
Squid does not know how to bump it; consequently the app does not work.

What you need is the not-yet-stable Squid 4.0 with the option
   on_unsupported_protocol tunnel all
so that the non-HTTPS protocols get through without being bumped.

Marcus


On 18/05/17 07:26, arun.xavier wrote:

I have configured squid with ssl-bump (intercept mode) and it works as
expected while accessing secure sites from browsers.

What I have done so far.

 - Configured squid.
 - Created a root & intermediate certificate for dynamic cert generation in
squid.
 - Installed the same root certificate in the mobile device (iPhone 6, iOS 10).
 - Every website works on chrome/safari.

But apps like facebook,twitter are not working(showing network error).

When checking cache log of squid, I found the below log.

Error negotiating SSL connection on FD 12: error:14094418:SSL
routines:ssl3_read_bytes:tlsv1 alert unknown ca (1/0)
It looks like initial CONNECT/Handshake is not working.

what I have changed in squid.conf
-
acl localnet src 172.16.0.0/12
acl localnet src fe80::/10
acl allow localnet
ssl_bump bump all
always_direct allow all
http_port localhost:3128
http_port localhost:3129 intercept
https_port localhost:3130 intercept ssl-bump generate-host-certificates=on
cert=/etc/squid/cert/cert.pem
key=/etc/squid/cert/key.pem
strip_query_terms off


Any idea how to fix this? or where to check? What might be my mistake ?
PS:
I use squid to get logs of all internet traffic from mobile devices.
Overview of my intented system is like this:
SmartPhone>VPN--->Squid--->Internet



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-works-with-ssl-bump-in-intercept-mode-and-root-certificate-in-browser-but-apps-does-not-work-tp4682451.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] SSL bump, SSL intercept, explicit, secure proxy, what is it called?

2017-05-25 Thread Marcus Kool

If you use foxyproxy for firefox, you can use switchysharp for Chrome.

Marcus


On 25/05/17 09:00, j m wrote:

Thought I'd try getting this to work in Chrome too.  NOTHING I try makes it 
work in Chrome.  Isn't running this from the Windows command line supposed to 
work?

chrome --proxy-server=https://mydomain:myport

When I do this, it runs Chrome, but it's still not going through the proxy 
despite Firefox on the same computer working just fine!


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] YouTube Videos rating lists

2017-07-08 Thread Marcus Kool

Hi Eliezer,
What is the analyzer looking at?
Does it detect gambling, and does it support languages other than English?
Thanks
Marcus

On 08/07/17 18:47, Eliezer Croitoru wrote:

Hey All,

I have been working for quite some time on a basic YouTube videos filtering
integration into SquidBlocker.
I have a video and images analysis and categorizing system that I can use to
rate the videos and the images but I am lacking one thing:
YouTube URLS feeds.

I have a running server that is dedicated to receive youtube videos urls for
analysis and then que them for testing.
For this to work I added a feature to the external_acl helper I wrote,
called a "feeder" mode, which first answers the request with an ERR
and in the background sends the URL to the remote system.
The end result would be publicly available rating lists, categorized in a
similar way to how Netflix rates, i.e.:
https://help.netflix.com/en/node/2064

i.e.:
Movies and TV:
   Little Kids    Older Kids    Teens    Adults
   All            7+            13+      16+

I found that Netflix sometimes misses the exact match, with adult content
being rated "7+"; I hope that I will not have this issue.
As a first step I will have the API set up and the helper released with its
sources.
When these will be ready I hope to start analyzing and categorizing youtube
videos for white and black listing.
Once I have a baseline of black and white lists I will move on to
weight-based categorizing, which will also return the minimum age at which the
video is allowed to be watched.

I need some help from anyone who is willing to send only specific url
patterns and leave the analysis and categorizing to the automated system.

Thanks In Advance,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




[squid-users] debugging ssl-bump

2017-07-18 Thread Marcus Kool


I am trying to debug ssl-bump and am looking specifically for decisions that 
Squid takes with regard to bumping, splicing and unsupported protocol.

The config file for Squid 4.0.21 has

debug_options ALL,1 33,9 83,9

http_port 10.10.10.1:3230 ssl-bump ...

acl tls_is_skype ssl::server_name "/var/ufdbguard/blacklists/chat/skype/iplist"
acl tls_is_skype ssl::server_name .skype.com
acl tls_allowed_hsts ssl::server_name www.google.com
acl tls_urlfilterdb ssl::server_name www.urlfilterdb.com
acl tls_server_is_bank ssl::server_name .abnamro.nl
acl tls_server_is_bank ssl::server_name .abnamro.com
acl tls_to_splice any-of tls_allowed_hsts tls_urlfilterdb tls_server_is_bank 
tls_is_skype

ssl_bump splice tls_to_splice
ssl_bump stare  all
ssl_bump bump   all

on_unsupported_protocol tunnel all

But I fail to see in cache.log anything that gives a clue about
- squid decided to splice
- squid decided to bump
- squid decided to treat a connection as "unsupported protocol".

Are there other debug sections than 33 and 83 that need an increased debug
level?
What strings do I have to look for in cache.log to understand the above
decisions that Squid takes?

Thanks
Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Upper limit on the number of regular expressions in url_regex?

2017-08-09 Thread Marcus Kool



On 09/08/17 05:15, Ralf Hildebrandt wrote:

* Marcus Kool :

I have only seen regex failing with such short RE on AIX.
What is your OS, distro, CPU and lib version?


Ubuntu Linux LTS 16.04 (xenial)
x86_64 (amd64)

I guess you mean libc:
ii  libc6:amd642.23-0ubuntu9


I see no issues with the optimised RE so my first guess is a libc bug.

The RE optimisation in Squid is inspired by the RE optimisation in ufdbGuard.
ufdbGuard optimises the RE a bit different and it looks like this:
zizicamarda.com/7fg3g|zizzhaida.com/3m6ij|zizzhaida.com/98g4ubq|...
I have tested this optimised RE on Ubuntu 16.04 and it works so maybe it is not 
a libc bug but a Squid bug.
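
A quick way to check whether libc's regex engine itself chokes on such a
pattern is a minimal regcomp() test (a sketch; extend the pattern to the full
optimised alternation you generated):

   /* sketch: does POSIX regcomp() accept a long alternation RE ? */
   #include <regex.h>
   #include <stdio.h>

   int main(void)
   {
       regex_t re;
       const char *pattern =
           "zizicamarda\\.com/7fg3g|zizzhaida\\.com/3m6ij|zizzhaida\\.com/98g4ubq";
       int rc = regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB);
       if (rc != 0) {
           char msg[256];
           regerror(rc, &re, msg, sizeof msg);
           fprintf(stderr, "regcomp failed: %s\n", msg);
           return 1;
       }
       puts("pattern compiled OK");
       regfree(&re);
       return 0;
   }

If this compiles the pattern fine while Squid fails on the same pattern, that
points at Squid rather than libc.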


BTW: why use regular expressions for a list of 1+ _fixed_ URLs ?


What is the alternative?


ufdbGuard is a URL filter that converts a file with 1 URLs to a database 
file that is optimised for fast lookups.
So all you need to do is configure a URL rewriter and you can filter those 
URLs, using fixed URLs not REs.

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Compiling with OpenSSL Support

2017-10-13 Thread Marcus Kool

Debian 9 has openssl 1.1.x while most platforms have older versions.

I noticed myself when I ported ufdbGuard to Debian 9 that openssl 1.1.x has 
many changes in the API.

Marcus

On 13/10/17 13:19, Sérgio Abrantes Junior wrote:

Hello,

I installed this package to resolve this: libssl1.0-dev

2017-10-13 12:18 GMT-03:00 Tyn Li mailto:ty...@yahoo.com>>:

Hello,

I am trying to compile squid on Debian 9 and include OpenSSL support.  Here 
are the configure options I am using:

./configure --with-openssl --enable-disk-io --enable-storeio --enable-icmp 
--enable-delay-pools --enable-linux-netfilter --enable-log-daemon-helpers 
--enable-external-acl-helpers
--enable-url-rewrite-helpers --enable-storeid-rewrite-helpers

The error that I'm getting during make is this:

../../src/ssl/gadgets.h:83:45: error: ‘CRYPTO_LOCK_X509’ was not declared
in this scope
  typedef LockingPointer<X509, X509_free_cpp, CRYPTO_LOCK_X509> X509_Pointer;

I cannot find the CRYPTO_LOCK_X509 macro defined anywhere in the OpenSSL 
headers I've installed with libssl-dev (which is probably why I'm getting the 
error).

How do I get around this particular compilation error?  What additional 
software/steps do I need?

Thanks!


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users






Re: [squid-users] Compiling with OpenSSL Support

2017-10-15 Thread Marcus Kool



On 15/10/17 14:17, Matus UHLAR - fantomas wrote:

2017-10-13 12:18 GMT-03:00 Tyn Li mailto:ty...@yahoo.com>>:
   ../../src/ssl/gadgets.h:83:45: error: ‘CRYPTO_LOCK_X509’ was not declared in
this scope
 typedef LockingPointer<X509, X509_free_cpp, CRYPTO_LOCK_X509> X509_Pointer;



On 13/10/17 13:19, Sérgio Abrantes Junior wrote:

I installed this package to resolve this: libssl1.0-dev


why not libssl-dev?

On 13.10.17 15:16, Marcus Kool wrote:

Debian 9 has openssl 1.1.x while most platforms have older versions.


that means, you should use libssl-dev unless you know squid can't compile
with openssl-1.1


Openssl 1.1.x is not backwards compatible and does not have the symbol 
CRYPTO_LOCK_X509 while openssl 1.0.2 has.
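
So on Debian 9 the practical options are roughly (a sketch):

   # build against OpenSSL 1.0.2, which still provides CRYPTO_LOCK_X509
   apt-get install libssl1.0-dev
   ./configure --with-openssl ...
   make

or use a Squid version that has been updated for the OpenSSL 1.1 API.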
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] can't block streaming

2017-11-03 Thread Marcus Kool

It is not clear what exactly you want to achieve.
Block everything from youtube ?

Amos told you that squidGuard has not been maintained for many years but forgot to
mention that ufdbGuard does the same thing and has regular updates.
ufdbGuard has a feature to block a set of Youtube videos identified by the 
video ID and automagically block all related images too.

Marcus


On 03/11/17 07:42, Vacheslav wrote:



-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Wednesday, November 1, 2017 3:52 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] can't block streaming

On 01/11/17 21:54, Vacheslav wrote:

Thanks for your time,

-Original Message-
From: Amos Jeffries
Sent: Tuesday, October 31, 2017 5:45 PM

On 31/10/17 22:05, Vacheslav wrote:

Peace,

I tried searching and debugging but I couldn’t find a solution;
whatever I do, youtube keeps working.

Here is my configuration:

...

# Media Streams

## MediaPlayer MMS Protocol

acl media rep_mime_type mms

acl mediapr url_regex dvrplayer mediastream ^mms://

## (Squid does not yet handle the URI as a known proto type.)



Unsupported URI schemes should result in the client receiving an HTTP
error page instead of Squid handling the traffic.



Which also explains your problems: the Browser is either not using
the proxy at all for this traffic, or sending the traffic through a
CONNECT tunnel that is allowed to be created for other reasons.


Well I tried unchecking automatically detect proxy settings. There are
2 network cards on the squid, one with a gateway, the same  is used as
the proxy ip port 3128 and youtube is not in the bypass proxylist. I
tried using opera, the same result.



Things like YT do not have to be on any bypass list to avoid the proxy.
It just has to have a URL scheme for some protocol the browser detects as not able to go 
through the HTTP-only proxy. eg "mms:"



Since mms:// means a non-HTTP protocol and it is not commonly supported by HTTP 
proxies, the browsers usually send it directly to the mms protocol port(s)
AFAIK.


Well, I tried switching the IP of the PC to one that can't do http and https at
all without the proxy. I tested it with the proxy disabled and internet sites don't
open; I switched the proxy back on and youtube works even though it is forbidden.



What do you mean by a connect tunnel?



Things like this:


"
   >CONNECT r1---sn-ntqe6n76.googlevideo.com:443 HTTP/1.1

   >... non-HTTP data stream.
"


Which tells Squid to open a TCP connection to the named server and port.

That is how a YouTube video I'm watching right now is currently going through a 
test Squid. The browser of course shows it as a GET request for some https: 
URI, but the proxy only sees that CONNECT.

To see what is inside that particular port 443 tunnel one has to use SSL_Bump 
feature to decrypt the HTTPS protocol that is supposed to be on that port.
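
A minimal configuration for that looks roughly like this (a sketch; the
certificate path is a placeholder):

   # intercept port-443 traffic and bump it so the URLs inside the
   # TLS tunnels become visible to http_access / url_rewrite rules
   https_port 3130 intercept ssl-bump \
       generate-host-certificates=on \
       cert=/etc/squid/bump-ca.pem
   acl step1 at_step SslBump1
   ssl_bump peek step1
   ssl_bump bump all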



...


# We strongly recommend the following be uncommented to protect
innocent

# web applications running on the proxy server who think the only

# one who can access services on "localhost" is a local user

#http_access deny to_localhost

# Deny all blocked extension

error_directory /usr/share/squid/errors/en

deny_info ERR_BLOCKED_FILES blockfiles

http_access deny blockfiles

#

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS




Please read the above line, and consider all the custom rules you
placed above it.

I moved the below text to under
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

http_access deny mediapr
http_access deny mediapr1
http_access deny mediapr2
http_access deny mediapr3
http_reply_access deny media
...


#url_rewrite_program /usr/sbin/squidGuard

#url_rewrite_children 5

#debug_options ALL,1 33,2 28,9

And where must I place the two lines before the last one in order for
squidGuard to work?




Right there where they are in your config will do.



What do you expect SquidGuard to do?


At first I thought squidGuard was needed to block file extensions;
then I discovered that it blocks URLs, so it is not a bad idea to block
porn sites and porn search terms.



Ah, I see. Well, if you are new to it I advise to try using squid.conf ACLs 
first. Sending things to helpers is quite I/O and memory intensive and most of 
what SG does can be done better by modern Squid.


Also, SquidGuard specifically is very outdated software and no longer 
maintained. If you have to do access control in a helper at all it is better to 
use the external_acl_type interface and other helpers that meet the more 
specific need.

Well then, I'll go with your advice and not use prehistoric software.




If Squid itself cannot identify any URLs with "mms://" scheme there
is no hope of SG being passed the non-existent URLs.


This I didn't digest!




See above with the CONNECT example. *If* the request is actually going through the proxy, 
the URI as far as Squid can see would be somethin

Re: [squid-users] SQUID memory error after vm.swappines changed from 60 to 10

2017-11-08 Thread Marcus Kool

There is definitely a problem with available memory because Squid cannot fork.
So start with looking at how much memory Squid and its helpers use.
Do you have other processes on this system that consume a lot of memory?

Also note that ufdbGuard uses less memory than squidGuard.
If there are 30 helpers, squidGuard uses 300% more memory than ufdbGuard.

Look at the wiki for more information about memory usage:
https://wiki.squid-cache.org/SquidFaq/SquidMemory   (currently has an expired 
certificate but it is safe to go ahead)

Marcus


On 08/11/17 07:26, Bike dernikov1 wrote:

Hi, I hope that someone can explain what happened, why squid stopped working.
The problem is related to  memory/swap handling.

After we changed vm.swappiness parameter from 60 to 10 (tuning
attempt, to lower a disk usage, because we have only 4 disks in a
RAID10, so disk subsystem  is a weak link), we got a lot of errors in
cache.log.
The problems started after scheduled logrotate after  2AM.
Squid ran out of memory, auth helpers stopped working.
It's weird because we didn't disable swap, but the behavior is as if we did.
After an error, we increased parameter from 10 to 40.

The server has 24GB DDR3 memory,  disk swap set to 24GB, 12 CPU (24HT cores).
We have 2800 users, using  kerberos authentication, squidguard for
filtering, ldap authorization.
When problem appeared memory was still 3GB free (free column), ram
(caching) was filled to 15GB, so 21 GB ram filled, 3GB free.

Thanks for help,


errors from cache.log.

2017/11/08 02:55:27| Set Current Directory to /var/log/squid/
2017/11/08 02:55:27 kid1| storeDirWriteCleanLogs: Starting...
2017/11/08 02:55:27 kid1|   Finished.  Wrote 0 entries.
2017/11/08 02:55:27 kid1|   Took 0.00 seconds (  0.00 entries/sec).
2017/11/08 02:55:27 kid1| logfileRotate: daemon:/var/log/squid/access.log
2017/11/08 02:55:27 kid1| logfileRotate: daemon:/var/log/squid/access.log
2017/11/08 02:55:28 kid1| Pinger socket opened on FD 30
2017/11/08 02:55:28 kid1| helperOpenServers: Starting 1/1000
'squidGuard' processes
2017/11/08 02:55:28 kid1| ipcCreate: fork: (12) Cannot allocate memory
2017/11/08 02:55:28 kid1| WARNING: Cannot run '/usr/bin/squidGuard' process.
2017/11/08 02:55:28 kid1| helperOpenServers: Starting 300/3000
'negotiate_kerberos_auth' processes
2017/11/08 02:55:28 kid1| ipcCreate: fork: (12) Cannot allocate memory
2017/11/08 02:55:28 kid1| WARNING: Cannot run
'/usr/lib/squid/negotiate_kerberos_auth' process.
2017/11/08 02:55:28 kid1| ipcCreate: fork: (12) Cannot allocate memory
2017/11/08 02:55:28 kid1| WARNING: Cannot run
'/usr/lib/squid/negotiate_kerberos_auth' process.
2017/11/08 02:55:28 kid1| ipcCreate: fork: (12) Cannot allocate memory
2017/11/08 02:55:28 kid1| WARNING: Cannot run
'/usr/lib/squid/negotiate_kerberos_auth' process.

external ACL 'memberof' queue overload. Using stale result.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] SQUID memory error after vm.swappines changed from 60 to 10

2017-11-08 Thread Marcus Kool


On 08/11/17 11:36, Bike dernikov1 wrote:

Hi,

We stumbled on ufdbGuard, but the licence/price was a problem; we didn't
read the documentation carefully.

yes, ufdbguard is free.


We will definitely try ufdbGuard, but we are now in the process of moving
squid/squidguard to production, so we can't test on production (angry
users, Internet must work :)).

Memory consumption: squid uses the largest part of memory (12GB now;
the second process uses 300MB), with 14GB used by all processes. So squid
uses over 80% of the total used memory.
So no, there are no other problematic processes. But we changed the swappiness
settings.

Did you monitor Squid for growth (it can start with 12 GB and grow slowly) ?

Squid cannot fork and higher swappiness increases the amount of memory that the 
OS can use to copy processes.
It makes me think that you have the memory overcommit set to 2 (no overcommit).
What is the output of the following command ?
   sysctl  -a | grep overcommit


Advice for some settings:
We have an absolute max peak of 2500 users using squid (of 2800);
what are the recommended settings for:
negotiate_kerberos_children start/idle
squidguard helpers.


I have little experience with kerberos, but most likely this is not the issue.
When Squid cannot fork the helpers, helper settings do not matter much.

For 2500 users you probably need 32-64 squidguard helpers.
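
As a squid.conf starting point that could be something like this (the numbers
are a guess; tune them while monitoring):

   url_rewrite_children 32 startup=16 idle=8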

Marcus


Thanks for help,

On Wed, Nov 8, 2017 at 10:53 AM, Marcus Kool
 wrote:

There is definitely a problem with available memory because Squid cannot
fork.
So start with looking at how much memory Squid and its helpers use.
Do do have other processes on this system that consume a lot of memory ?

Also note that ufdbGuard uses less memory that squidGuard.
If there are 30 helpers squidguard uses 300% more memory than ufdbGuard.

Look at the wiki for more information about memory usage:
https://wiki.squid-cache.org/SquidFaq/SquidMemory   (currently has an
expired certificate but it is safe to go ahead)

Marcus



On 08/11/17 07:26, Bike dernikov1 wrote:


Hi, I hope that someone can explain what happened, why squid stopped
working.
The problem is related to  memory/swap handling.

After we changed vm.swappiness parameter from 60 to 10 (tuning
attempt, to lower a disk usage, because we have only 4 disks in a
RAID10, so disk subsystem  is a weak link), we got a lot of errors in
cache.log.
The problems started after scheduled logrotate after  2AM.
Squid ran out of memory, auth helpers stopped working.
It's weird because we didn't disable swap, but behavior is like we did.
After an error, we increased parameter from 10 to 40.

The server has 24GB DDR3 memory,  disk swap set to 24GB, 12 CPU (24HT
cores).
We have 2800 users, using  kerberos authentication, squidguard for
filtering, ldap authorization.
When problem appeared memory was still 3GB free (free column), ram
(caching) was filled to 15GB, so 21 GB ram filled, 3GB free.

Thanks for help,


errors from cache.log.

2017/11/08 02:55:27| Set Current Directory to /var/log/squid/
2017/11/08 02:55:27 kid1| storeDirWriteCleanLogs: Starting...
2017/11/08 02:55:27 kid1|   Finished.  Wrote 0 entries.
2017/11/08 02:55:27 kid1|   Took 0.00 seconds (  0.00 entries/sec).
2017/11/08 02:55:27 kid1| logfileRotate: daemon:/var/log/squid/access.log
2017/11/08 02:55:27 kid1| logfileRotate: daemon:/var/log/squid/access.log
2017/11/08 02:55:28 kid1| Pinger socket opened on FD 30
2017/11/08 02:55:28 kid1| helperOpenServers: Starting 1/1000
'squidGuard' processes
2017/11/08 02:55:28 kid1| ipcCreate: fork: (12) Cannot allocate memory
2017/11/08 02:55:28 kid1| WARNING: Cannot run '/usr/bin/squidGuard'
process.
2017/11/08 02:55:28 kid1| helperOpenServers: Starting 300/3000
'negotiate_kerberos_auth' processes
2017/11/08 02:55:28 kid1| ipcCreate: fork: (12) Cannot allocate memory
2017/11/08 02:55:28 kid1| WARNING: Cannot run
'/usr/lib/squid/negotiate_kerberos_auth' process.
2017/11/08 02:55:28 kid1| ipcCreate: fork: (12) Cannot allocate memory
2017/11/08 02:55:28 kid1| WARNING: Cannot run
'/usr/lib/squid/negotiate_kerberos_auth' process.
2017/11/08 02:55:28 kid1| ipcCreate: fork: (12) Cannot allocate memory
2017/11/08 02:55:28 kid1| WARNING: Cannot run
'/usr/lib/squid/negotiate_kerberos_auth' process.

external ACL 'memberof' queue overload. Using stale result.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] squid and squidGuard redirect

2017-11-08 Thread Marcus Kool

Hi Vieri,

I suggest to replace squidGuard with ufdbGuard.
Then you can set
   ufdb-debug-filter 1
or
   ufdb-debug-filter 2  # very verbose
in ufdbGuard.conf and see exactly what happens.

Note that squidguard has no maintenance for over 5 years and ufdbGuard has 
regular maintenance.

Marcus


On 08/11/17 12:23, Vieri wrote:

Hi,

I have this in my SG config:

acl {
default {
pass allowed !disallowed all
redirect http://squidserver/proxy-error/
}
}

 From a LAN client browser I can access and display the page at 
http://squidserver/proxy-error/ (direct access).

However, when SG is triggered and should send that redirect to the client 
browser, the client times out after a while, and displays Squid's 
ERR_CONNECT_FAIL with squidserver's IP address in the details.

I don't see anything useful in both Squid and SquidGuard's logs.

What could I try?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID memory error after vm.swappines changed from 60 to 10

2017-11-09 Thread Marcus Kool



On 09/11/17 11:04, Bike dernikov1 wrote:
[snip]

Memory compsumption:squid use largest part of memory  (12GB now,
second proces use 300MB memory), 14GB used by all process. So squid
use over 80% of total used memory.
So no there are not any problematic process. But we changed swappiness
settings.


Did you monitor Squid for growth (it can start with 12 GB and grow slowly) ?


Yes, we are monitoring continuously.
Now:
Output from free -m.

              total    used    free  shared  buff/cache  available
Mem:          24101   20507     256     146        3337       3034
Swap:         24561    5040   19521

vm.swappiness=40

Memory by process:
         VIRT    RES     SHR    MEM%
squid    22.9G   18.7G   8164   79.6


Hmm. Squid grew from 12 GB to 18.7 GB (23 GB virtual).

With vm.swappiness=40 Linux starts to page out parts of processes when they 
occupy more than 60% of the memory.
This is a potential bottleneck and I would have also decreased vm.swappiness to 
10 as you did.

My guess is that Squid starts too many helpers in a short time frame and that 
because of paging there are too many forks in progress simultaneously which 
causes the memory exhaustion.

I suggest to reduce the memory cache of Squid by 50% and set vm.swappiness to 
20.
And then observe:
- total memory use
- total swap usage (should be lower than the 5 GB that you have now)
- number of helper processes that are started in short time frames
And then in small steps increase the memory cache and maybe further reduce 
vm.swappiness to 10.
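
For the swappiness part, something like this (sketch):

   # apply immediately and persist across reboots
   sysctl -w vm.swappiness=20
   echo 'vm.swappiness = 20' >> /etc/sysctl.conf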


squidguard: two processes, 300MB both.

CPU 0.33 0.37 0.43


Squid cannot fork and higher swappiness increases the amount of memory that
the OS can use to copy processes.
It makes me think that you have the memory overcommit set to 2 (no
overcommit).
What is the output of the following command ?
sysctl  -a | grep overcommit


Command output:

vm.nr_overcommit_hugepages = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50

cat /proc/sys/vm/overcommit_memory
0


The overcommit settings look fine.




Advice for some settings:
We have absolute max peak of  2500 users which user squid (of 2800),
what are recomended settings for:
negotiate_kerberos_children start/idle
squidguard helpers.



I have little experience with kerberos, but most likely this is not the
issue.
When Squid cannot fork the helpers, helper settings do not matter much.



For 2500 users you probably need 32-64 squidguard helpers.


Can you confirm: For 2500 users:

url_rewrite_children X (squidguard): will 32-64 be OK? We have set a
much larger number.


Did I understand it correctly that earlier in this reply you said that there
are two squidguard processes (300 MB each)?
ufdbGuard is faster than squidGuard and has multithreaded helpers.  ufdbGuard
needs fewer helpers than squidGuard.

If you have a much larger number than 64 url rewrite helpers then I suggest
switching to ufdbGuard as soon as possible since the memory usage is then at least
600% less.


For the helper:
negotiate_kerberos_auth

auth_param negotiate children X startup Y idle Z. What X, Y, Z are
best for our user number ?

We disabled the kerberos replay cache because of disk performance (4 SAS
15K disks, RAID 10); iowait jumped high, and CPU load jumped to min
40, max 200.
We don't use disk caching.

Thanks for help,


Marcus



Thanks for help,

On Wed, Nov 8, 2017 at 10:53 AM, Marcus Kool
 wrote:


There is definitely a problem with available memory because Squid cannot
fork.
So start with looking at how much memory Squid and its helpers use.
Do do have other processes on this system that consume a lot of memory ?

Also note that ufdbGuard uses less memory that squidGuard.
If there are 30 helpers squidguard uses 300% more memory than ufdbGuard.

Look at the wiki for more information about memory usage:
https://wiki.squid-cache.org/SquidFaq/SquidMemory   (currently has an
expired certificate but it is safe to go ahead)

Marcus



On 08/11/17 07:26, Bike dernikov1 wrote:



Hi, I hope that someone can explain what happened, why squid stopped
working.
The problem is related to  memory/swap handling.

After we changed vm.swappiness parameter from 60 to 10 (tuning
attempt, to lower a disk usage, because we have only 4 disks in a
RAID10, so disk subsystem  is a weak link), we got a lot of errors in
cache.log.
The problems started after scheduled logrotate after  2AM.
Squid ran out of memory, auth helpers stopped working.
It's weird because we didn't disable swap, but behavior is like we did.
After an error, we increased parameter from 10 to 40.

The server has 24GB DDR3 memory,  disk swap set to 24GB, 12 CPU (24HT
cores).
We have 2800 users, using  kerberos authentication, squidguard for
filtering, ldap authorization.
When problem appeared memory was still 3GB free (free column), ram
(caching) was filled to 15GB, so 21 GB ram filled, 3GB free.

Thanks for help,


errors from cache.log.

2017/11/08 02:55:27| Set Current Directory to /var/log/squid

Re: [squid-users] SQUID memory error after vm.swappines changed from 60 to 10

2017-11-10 Thread Marcus Kool



On 10/11/17 12:11, Bike dernikov1 wrote:

On Thu, Nov 9, 2017 at 5:13 PM, Marcus Kool  wrote:



On 09/11/17 11:04, Bike dernikov1 wrote:
[snip]


Memory compsumption:squid use largest part of memory  (12GB now,
second proces use 300MB memory), 14GB used by all process. So squid
use over 80% of total used memory.
So no there are not any problematic process. But we changed swappiness
settings.



Did you monitor Squid for growth (it can start with 12 GB and grow
slowly) ?



Yes we are monitoring continuosly.
Now:
Output from free -m.

              total    used    free  shared  buff/cache  available
Mem:          24101   20507     256     146        3337       3034
Swap:         24561    5040   19521

vm.swappiness=40

Memory by process:
         VIRT    RES     SHR    MEM%
squid    22.9G   18.7G   8164   79.6



Hmm. Squid grew from 12 GB to 18.7 GB (23 GB virtual).


Today the problem appeared again after logrotate, at 2:56AM.
Used memory was at a peak of 23.7GB.


OK, it is clear that Squid grows too much.
On a 24GB system with many helpers and a URL filter I think the maximum size 
should be 14GB.


Before logrotate started, cache was at 2GB and buffers at 1.5GB.
After logrotate started, cache jumped to 3.7GB and buffers stayed at 1.5GB.

Fork errors stopped after 1 minute, at 2:57.
Cache memory dropped by 500MB to 3.2GB and stayed at the same level
till morning; buffers stayed at 1.5GB.

After 4 minutes, at 3:00, a new WARNING appeared: external ACL queue
overload. Using stale results.

We have a night shift and they told us that the Internet worked OK.

After a restart at around 7:00AM, used memory dropped from 22GB to 7GB;
cache and buffers remained at the same levels.


How come Squid uses 7 GB at startup when there is no disk cache ?


With vm.swappiness=40 Linux starts to page out parts of processes when they
occupy more than 60% of the memory.
This is a potential bottleneck and I would have also decreased vm.swappiness
to 10 as you did.

My guess is that Squid starts too many helpers in a short time frame and
that because of paging there are too many forks in progress simultaneously
which causes the memory exhaustion.


We are now testing with 100 helpers for negotiate_kerberos_auth.
vm.swappiness returned to 60.


I suggest to reduce the memory cache of Squid by 50% and set vm.swappiness
to 20.


Squid cache memory is set at 14GB, reduced from 20GB via 16GB in two steps.


are you saying that you have
   cache_mem 14G
If yes, you should read the memory FAQ and reduce this.
'cache_mem 14G' explains that Squid starts 'small' and grows over time.


And then observe:
- total memory use
- total swap usage (should be lower than the 5 GB that you have now)
- number of helper processes that are started in short time frames
And then in small steps increase the memory cache and maybe further reduce
vm.swappiness to 10.


If we survive with the actual setup, we will continue reducing as you suggest.
The last extreme will be disabling swap with swapoff, but just for a test, with 6
eyes on the monitoring :)


squidguard two process  300MB boths,.

CPU 0.33 0.37 0.43


Squid cannot fork and higher swappiness increases the amount of memory
that
the OS can use to copy processes.
It makes me think that you have the memory overcommit set to 2 (no
overcommit).
What is the output of the following command ?
 sysctl  -a | grep overcommit



Command output:

vm.nr_overcommit_hugepages = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50

cat /proc/sys/vm/overcommit_memory
0



The overcommit settings look fine.


At least something right :)




Advice for some settings:
We have absolute max peak of  2500 users which user squid (of 2800),
what are recomended settings for:
negotiate_kerberos_children start/idle
squidguard helpers.




I have little experience with kerberos, but most likely this is not the
issue.
When Squid cannot fork the helpers, helper settings do not matter much.




For 2500 users you probably need 32-64 squidguard helpers.



Can you confirm: For 2500 users:

url_rewrite children X (squidguard)  32-64 will be ok ? We have set
much larger number.


squidGuard url_rewrite_children was set to 64.


Did I understand it correctly that earlier in this reply you said that there
are two squidguard processes (300 MB each).


Yes (the first two processes in htop, two rewrite children); the others were at 0.0%.


ufdbGuard is faster than squidGuard and has multithreaded helpers.
ufdbGuard needs less helpers than squidGuard.
If you have a much larger number than 64 url rewrite helpers than I suggest
to switch to ufdbGuard as soon as possible since the memory usage is then at
least 600% less.


ufdbGuard has a few strong features: active development, kerberos support,
concurrency/multithreading.
As I wrote, if we had read the documentation more carefully we wouldn't
have passed it over.
Does ufdbGuard support secure LDAP auth? We tried secure LDAP with
squidGuard without success.


ufdbGuard supports any user database with the "execuserlist" feature.
See the Referenc

Re: [squid-users] SQUID memory error after vm.swappines changed from 60 to 10

2017-11-13 Thread Marcus Kool



On 13/11/17 07:46, Bike dernikov1 wrote:


are you saying that you have
cache_mem 14G
If yes, you should read the memory FAQ and reduce this.
'cache_mem 14G' explains that Squid starts 'small' and grows over time.


For our case, what do you recommend: 10GB or even lower?
I plan to read it today; I hope that I will have peace to concentrate.


cache_mem does NOT define the total memory use of Squid; the FAQ explains this.
On a 24G system you can start with 7 GB, and only after 3 days of running without
issues, verifying that the cache is 100% utilised (if not, Squid can still grow) and
that there is sufficient free memory, should you increase it.
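
In squid.conf terms (sketch):

   # conservative start on a 24 GB machine; grow only after observation
   cache_mem 7 GB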


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID memory error after vm.swappines changed from 60 to 10

2017-11-13 Thread Marcus Kool



On 13/11/17 10:46, Bike dernikov1 wrote:

On Mon, Nov 13, 2017 at 12:15 PM, Marcus Kool
 wrote:



On 13/11/17 07:46, Bike dernikov1 wrote:


are you saying that you have
 cache_mem 14G
If yes, you should read the memory FAQ and reduce this.
'cache_mem 14G' explains that Squid starts 'small' and grows over time.



For our case, what do you recomend.  10GB or even lower ?
Plan reading today, i hope that I will have peace, to concentrate.



cache_mem does NOT define the total memory use of Squid.
The FAQ explains it.
On a 24G system you can start with 7 GB and only after 3 days of running
without issues and verifying that the cache is 100% utilised (if not, Squid
can grow) and there is sufficient free memory, you can increase it.



I read the FAQ.
Now trying to work through the squid-internal-mgr/ reports/statistics.
For now we will stay at cache_mem 14GB, because we are modifying too
many settings at the same time. It is now at 99% used of the 14GB, if I read correctly.
Thanks for the suggestions and help.


squid-internal-mgr/info output:
Cache information for squid:
Hits as % of all requests: 5min: 5.8%, 60min: 6.4%
Hits as % of bytes sent: 5min: 16.8%, 60min: 18.7%
Memory hits as % of hit requests: 5min: 64.1%, 60min: 64.9%
Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.1%
Storage Swap size: 0 KB
Storage Swap capacity: 0.0% used,  0.0% free
Storage Mem size: 14195856 KB
Storage Mem capacity: 99.0% used,  1.0% free
Mean Object Size: 0.00 KB
Requests given to unlinkd: 0


Beware that the storage mem size is not the same as the total memory used.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Survey on assertions: When the impossible happens

2016-02-29 Thread Marcus Kool



* Choices.

Overall, there are three options for handling an impossible situation:

1. Quit Squid process. This is what Squid does today in most cases.
When the impossible happens, you get a crash. Very predictable.
No malformed/corrupted/misleading HTTP messages (some are truncated).
No memory leaks.

2. Quit the current processing sequence but keep Squid process running,
assuming that [most] other processing sequences are not affected.
[If you are familiar with programming, this is done by throwing
exceptions instead of asserting and catching those exceptions at
"processing sequence" boundaries].

3. Keep executing the current processing sequence, assuming that the
assertion was wrong or unimportant. This is what you might be
suggesting above. When the impossible happens, you may get a crash,
memory leaks, malformed/corrupted/misleading HTTP messages, or normal
behavior, depending on the assertion and traffic.

IMO, we should make #2 the default, but make the choice between all
three options configurable by the admin (without recompiling Squid).


Let me suggest #4 :

immediately execute an external program that calls gdb or any other debugger
which produces a stack trace of all squid processes and then do #1 or #2.

The stack dumps will be saved in an assertion failure log file which admins
can send to Squid developers.

This will speed up the debugging and fixing procedure.
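
A sketch of such an external program (assuming gdb is installed and the log
path is writable):

   #!/bin/sh
   # dump stack traces of all running squid processes
   LOG=/var/log/squid/assert_traces.log
   date >> "$LOG"
   for pid in $(pgrep -x squid); do
       echo "=== squid pid $pid ===" >> "$LOG"
       gdb -batch -p "$pid" -ex 'thread apply all bt' >> "$LOG" 2>&1
   done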

Finally, there must be a mechanism that warns admins that an assertion failure
happened.  This is not trivial since an admin does not look at the squid log
file every day.

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Survey on assertions: When the impossible happens

2016-02-29 Thread Marcus Kool



This is not really #4. It is an enhancement for any of the three
options. IIRC, Squid even supported gdb stack tracing natively on some
platforms (but a script would arguably be better, except for busy
proxies that cannot be blocked for the 2-4 seconds it takes to run that script).




This already exists. Squid does it *right now*.

You never received a "The Squid Cache (version %s) died." email ?


Nope.


When mail [1] is working on the proxy Squid will use it to send an email
to the configured administrator email address [2], root@ address for the
proxies private hostname [3], or root@ address for the proxies public
hostname [4] - in that order or preference.
  - Of course, far too many people dont use FQDN for those config settings...


[1] http://www.squid-cache.org/Doc/config/mail_program/
[2] http://www.squid-cache.org/Doc/config/mail_from/
[3] http://www.squid-cache.org/Doc/config/unique_hostname/
[4] http://www.squid-cache.org/Doc/config/visible_hostname/


I learned something today :-)  (does not happen every day)


The stack dumps will be saved in an assertion failure log file which admins
can send to Squid developers.




If Squid is also built with --enable-stacktraces a stack trace will be
recorded in cache.log after FATAL messages.
- Of course. Speed at any cost "needs" prohibit doing anything that
might slow down the Squid restart process. So that gets disabled.


Hmm. Are you suggesting that this wonderful feature is not widely used?

If not, then calling gdb is preferred.
gdb also prints parameters, local variables, contents of data structures etc.,
hence it is superior to backtrace().

Marcus



Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] runing squid on second processor

2016-05-01 Thread Marcus Kool



On 04/29/2016 07:17 PM, joe wrote:

Hi, I have 2 CPUs with 4 cores each.
I need to leave the first processor alone and use the second one for squid and
its helpers.
Will this do it?   taskset 0x00f0 squid -YC -f /etc/squid/squid.conf
Or is it the other way around?
I want to keep the kernel and other programs running on the first CPU so they
don't interfere with squid,
because when I run Calamaris log analysis on a couple of large logs it takes
very high CPU % and slows down squid's performance until it finishes :(

tks


If you use Linux, I suggest to use numactl, e.g.
   numactl -m 1 -N 1 /full/path/to/squid ...
this makes sure that squid and all children run on CPU cores of node 1 only and 
use memory from node 1 only
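
For example (sketch; the squid path may differ on your system):

   # inspect the NUMA layout first
   numactl --hardware
   # pin squid (and the helpers it forks) to node 1
   numactl -m 1 -N 1 /usr/sbin/squid -YC -f /etc/squid/squid.conf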

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid high memory usage

2016-06-06 Thread Marcus Kool



On 06/06/2016 04:27 AM, FredB wrote:

Hello all,

I'm trying to use a server with 64 GB of RAM, but I'm faced with a problem:
squid can't work with more than 50% of the memory.


What is cache_mem ?
See also http://wiki.squid-cache.org/SquidFaq/SquidMemory


After that the swap is totally full and the kswapd process goes mad ...
I tried with vm.swappiness = 0 but had the same problem (perhaps a little better); I
also tried memory_pools off without any change.


I recommend vm.swappiness = 5 to have 5% of the memory be used for the file 
system cache and maintain good disk I/O.


As you can see in this picture, Linux is using 22 GB of cached memory
http://image.noelshack.com/fichiers/2016/22/1464965449-capture-squid.png


The values are too high (1024 times).  I think that you incorrectly set 
cache_mem.
Start with setting  cache_mem to 16 GB

Marcus


I'm using two caches (133 GB each) with a dedicated SATA (15k) disk for each
cache.
Any advice will be much appreciated.

OS Debian Jessie 64 bits and squid 3.5.19

Fred
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] Squid high memory usage

2016-06-06 Thread Marcus Kool



On 06/06/2016 07:27 AM, FredB wrote:

Thanks for your answer


What is cache_mem ?
See also http://wiki.squid-cache.org/SquidFaq/SquidMemory



Actually 25 GB.
I tried different values, but I guess it doesn't matter; the problem is that the
squid limit is only 50% of RAM.


After that the swap is totally full and kswap process gone mad ...
I tried with vm.swappiness = 0 but same problem, perhaps a little
better, I also tried memory_pool off without any change.


I recommend vm.swappiness = 5 to have 5% of the memory be used for
the file system cache and maintain good disk I/O.


The more I increase vm.swappiness, the more I swap and the more problems I
have, but I will try your value.



The values are too high (1024 times).  I think that you incorrectly
set cache_mem.


ah, I misread the values. I interpreted the comma as a thousand separator.


Start with setting  cache_mem to 16 GB



Maybe I misunderstand your point, but when I reduce cache_mem, yes, there is no
problem, but Squid uses only 20/30 GB max.


As you can read in the memory FAQ, the value of cache_mem is a part (often 35%) 
of total memory use.
When you start with a clean install, you will see a "low memory use" since the 
disk cache is not yet fully populated
and perhaps because the number of connections is low.


With cache_mem 15 GB squid eats 36% of memory.
htop and other tools report 30 GB of free memory:

free -h
  total   used   free sharedbuffers cached
Mem:   63G62G   425M   122M   1,7G27G
-/+ buffers/cache:33G30G
Swap: 1,9G   102M   1,8G

All my RAM is consumed by cache/buffers and it seems not to be freed when it is
needed by Squid.


This is a healthy start.
When your disk cache is fully populated and when there is still room (e.g. 
'cached' column of free shows many gigabytes), you may increase cache_mem.
Note that connections and Squid buffers occupy memory so always be a bit 
conservative to prevent swapping.

2 GB swap for a 64 GB memory system is a bit small. If you only have Squid and 
no other applications on this system it may be sufficient.

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Somewhat OT: Content Filter with https

2016-06-08 Thread Marcus Kool



On 06/08/2016 05:05 PM, Sergio Belkin wrote:

Hi,

A few years ago I used squid+dansguardian. But nowadays DG is not
maintained anymore. I know that squidGuard, ufdbGuard, and e2guardian exist.

Features should be:

- Blocking https url's


Blocking HTTPS URLs is easy.
However, providing an understandable message to the end user is a challenge.
This is because HTTPS is designed not to be interfered with; if a proxy interferes,
a browser will display errors like "wrong certificate for this site".
If you want user-friendly error messages like "This site is blocked because 
..." instead of the certificate errors,
one needs sslbump with peek+bump for all blocked sites. This is doable but not 
straightforward.
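
A sketch of that peek+bump-only-what-you-block approach (the file path and ACL
names are made up):

   acl step1 at_step SslBump1
   acl blocked_tls ssl::server_name "/etc/squid/blocked_sites.txt"
   ssl_bump peek step1
   ssl_bump bump blocked_tls
   ssl_bump splice all

Bumped (blocked) sites get a readable error page; everything else is spliced
and passes through untouched.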


- Not need of interception. is that possible?


It depends.  If you support smartphones, you most likely need interception 
since not all apps can be configured to use a proxy.
With only desktops, interception is not required but you may need to install 
the Squid CA certificate on all desktops.


- Simple to configure and good performance


squidGuard is also not maintained for a long time so not recommendable.
ufdbGuard has regular updates, can be used with free and commercial URL 
databases, and is 3x faster than squidGuard.

Note that I am the author of ufdbGuard so you may find me biased :-)

Marcus


What do you recommend me?

Thanks in advance!

--
--
Sergio Belkin
LPIC-2 Certified - http://www.lpi.org

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Somewhat OT: Content Filter with https

2016-06-08 Thread Marcus Kool



On 06/08/2016 05:54 PM, Sergio Belkin wrote:


- Not need of interception. is that possible?

It depends.  If you support smartphones, you most likely need interception 
since not all apps can be configured to use a proxy.
With only desktops, interception is not required but you may need to 
install the Squid CA certificate on all desktops.

And what about authentication? Can a user authenticate to Active Directory at 
logon time to use squid?


With interception or regular proxy mode ?

Amos may correct me if I am wrong, but I understood that authentication is not 
compatible with interception.

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Peek'n Splice (ssl_bump) and authentication Somewhat OT: Content Filter with https

2016-06-08 Thread Marcus Kool



On 06/08/2016 07:53 PM, Sergio Belkin wrote:


Thanks Eliezer, good summary. I've changed the subject to reflect better the 
issue. As far I undestand from documention one can bump https only by 
interception.


No.  ssl-bump works very well with regular proxy mode, i.e. the browsers 
configure the address and port of the proxy or use PAC.


But what about if one Windows user login against an Active Directory, will the 
authenticacion work to use the proxy?

I mean, what I'd want is:

- Only users of an Active Directory can use the proxy


In regular proxy mode, authentication and peek+splice works fine.
Note that peek+splice does not require Squid CA certificates on the clients.


- Block certains urls

Is that possible with squid+ufwdbguard?


ufdbGuard works always, independent if Squid uses interception or not.
The issue is the messages that a browser displays for the end user (see earlier 
email).

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Peek'n Splice (ssl_bump) and authentication Somewhat OT: Content Filter with https

2016-06-10 Thread Marcus Kool



On 06/09/2016 11:26 PM, Sergio Belkin wrote:



2016-06-08 20:30 GMT-03:00 Marcus Kool mailto:marcus.k...@urlfilterdb.com>>:



On 06/08/2016 07:53 PM, Sergio Belkin wrote:


Thanks Eliezer, good summary. I've changed the subject to reflect 
better the issue. As far I undestand from documention one can bump https only 
by interception.


No.  ssl-bump works very well with regular proxy mode, i.e. the browsers 
configure the address and port of the proxy or use PAC.

But what about if one Windows user login against an Active Directory, 
will the authenticacion work to use the proxy?

I mean, what I'd want is:

- Only users of an Active Directory can use the proxy


In regular proxy mode, authentication and peek+splice works fine.
Note that peek+splice does not require Squid CA certificates on the clients.




With peek+splice can I block URLs without CA certificates on the clients? Remember, I
mean URLs, not only domains!


No. To block HTTPS URLs one needs ssl_bump with peek+bump mode for all blocked 
URLs (see my message of June 8).
With peek+bump ufdbGuard can block anything you like and produce understandable 
messages to the end user.

Marcus


- Block certains urls

Is that possible with squid+ufwdbguard?


ufdbGuard works always, independent if Squid uses interception or not.
The issue is the messages that a browser displays for the end user (see 
earlier email).

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Redirect after sslbump terminate

2016-06-12 Thread Marcus Kool



On 06/12/2016 12:34 PM, Eng Hooda wrote:

Hello Squid Users,
I have searched for this but I could not find an answer.
After I peek for media streaming sites using sslbump , I terminate the 
connection on match , which produces secure connection failed on the client 
browser .
Is there a way to redirect to another page like an error page or access denied 
page instead ?


Redirecting HTTPS is _only_ possible if you use ssl-bump in the peek+bump mode, 
meaning that all devices must have the Squid CA certificate.
On top of that, there are a number of sites which have several issues and do 
not work with peek+bump.

Marcus


Thank you all in advance.

Best Regards,
Eng Hooda

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid high memory usage

2016-06-15 Thread Marcus Kool



On 06/15/2016 04:30 AM, FredB wrote:

Maybe I'm wrong, but the server is also using a lot of memory for TCP:

cat /proc/net/sockstat
sockets: used 13523
TCP: inuse 8612 orphan 49 tw 31196 alloc 8728 mem 18237
UDP: inuse 14 mem 6
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0

netstat -lataupen | wc -l
38780


yes, and the OS also uses buffers for file system I/O.

I updated the memory FAQ to include these.

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] HTTPS issues with squidguard after upgrading from squid 2.7 to 3.5

2016-06-15 Thread Marcus Kool



On 06/15/2016 04:22 AM, reqman wrote:

Hello all,

I have been running squid 2.7.X alongside squidguard 1.4 on a FreeBSD
8.x box for years. Started out some 10 years ago, with a much older
squid/squidguard/FreeBSD combination.

Having to upgrade to FreeBSD 10.3, I examined my options regarding
squid. 3.5.19 was available, which I assumed would behave the same as
2.7, regarding compatibility. Squidguard 1.4 was also installed.


A great decision to go to Squid 3.5.19, but it is a large leap so
you might expect some compatibility issues.

Squidguard has no support nor maintenance for many years and the patch
for squidguard to become compatible with squid 3.4+ was written by a Squid
developer.
Hence I recommend to install ufdbGuard, which is a fork of squidGuard and
does have support and updates.  ufdbGuard is also 3x faster and uses less
memory, so plenty of reasons to say goodbye to squidGuard.

Marcus


- Squid was configured to behave along the lines of what I had on 2.7.
- For squidguard I used the exact same blocklists and configurations.
Note that I do not employ an URL rewriting in squidguard, only
redirection.
- no SSL-bump or other SSL interception takes place
- the squidguard-related lines on squid are the following:

url_rewrite_program /usr/local/bin/squidGuard
url_rewrite_children 8 startup=4 idle=4 concurrency=0
url_rewrite_access allow all

- In squidGuard.conf, the typical redirect section is like:

  default {
 pass local-ok !block1 !block2 !blockN all
 redirect
301:http://localsite/block.htm?clientaddr=%a+clientname=%n+clientident=%i+srcclass=%s+targetclass=%t+url=%u
 }

I am now experiencing problems that I did not have. Specifically,
access to certain but *not* all HTTPS sites seems to timeout.
Furthermore, I see entries similar to the following in cache.log:

2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128
remote=192.168.2.239:3446 FD 591 flags=1
2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128
remote=192.168.2.239:3448 FD 592 flags=1
2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128
remote=192.168.2.239:3452 FD 594 flags=1
2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128
remote=192.168.2.239:3456 FD 596 flags=1
2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128
remote=192.168.2.239:3454 FD 595 flags=1
2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128
remote=192.168.2.239:3458 FD 597 flags=1
2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128
remote=192.168.2.239:3462 FD 599 flags=1

Searching around, the closest I have come to an answer is the
following: http://www.squid-cache.org/mail-archive/squid-users/201211/0165.html
I am not sure though whether I am plagued by the same issue,
considering that the thread refers to a squid version dated 4 years
ago. And I definitely do not understand what is meant by the
poster's proposal:

"If you can't alter the re-writer to perform redirection you can work
around that by using:

   acl foo ... some test to match the re-written URL ...
   deny_info 302:%s foo
   adapted_http_access deny foo "
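
Presumably, filled in with the redirect target from the squidGuard.conf above,
it would look something like this (untested; "sg_block" is a made-up ACL name):

   # match URLs that squidGuard has rewritten to the block page
   acl sg_block url_regex ^http://localsite/block\.htm
   # reply with a real 302 to that URL instead of rewriting silently
   deny_info 302:%s sg_block
   adapted_http_access deny sg_block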

Can someone help resolve this? Is the 2.7 series supported at all? As it
is, if everything fails, I'll have to go back to it if there's still some
support.

BR,


Michael.-


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] HTTPS issues with squidguard after upgrading from squid 2.7 to 3.5

2016-06-15 Thread Marcus Kool



On 06/15/2016 08:24 AM, reqman wrote:


I have been using squidGuard for 10+ years. Not the best one could
have, but I am accustomed to its use and idiosyncrasies. Furthermore,
it is a package well supported on FreeBSD.

You are mentioning ufdbGuard. Are its lists free for government use?
If not, then I can not use it, since we have very strict purchasing
requirements, even if it costs $1. And of course, I would have to go
through evaluation, the usual learning curve etc.


ufdbGuard is free software.
You can use it with any database you desire...  the free ones, your own or
a commercial one.

There is little learning curve since it is a fork of squidguard and there
is a Reference Manual and email support from URLfilterDB, even for those
who use a free database.

Marcus



Don't get me wrong here, I'm not saying no. I'm just saying that even
though it seems to be easy to say "yes", reality is much different.

M.-

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] HTTPS issues with squidguard after upgrading from squid 2.7 to 3.5

2016-06-16 Thread Marcus Kool



On 06/16/2016 02:19 AM, reqman wrote:

Seems nice. But I did not find any concrete how-to documentation.


There is a Reference Manual at the download section of ufdbGuard:
https://www.urlfilterdb.com/downloads/software_doc.html

There is also a mailing list for ufdbGuard at sourceforge and
you can email the support desk.  We do answer and usually within a few hours.

Marcus


Furthermore, answering SSL requests without any SSL-bumping is an
issue there as well. I'll try to have a better look, time allowing.

M.-

2016-06-15 14:36 GMT+03:00 FredB :




You are mentioning ufdbGuard. Are its lists free for government use?
If not, then I can not use it, since we have very strict purchasing
requirements, even if it costs $1. And of course, I would have to go
through evaluation, the usual learning curve etc.

Don't get me wrong here, I'm not saying no. I'm just saying that even
though it seems to be easy to say "yes", reality is much different.



You can also use E2guardian, a free web url and content filtering proxy
There is a package for Freebsd

Fred
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] HTTPS issues with squidguard after upgrading from squid 2.7 to 3.5

2016-06-16 Thread Marcus Kool



On 06/16/2016 10:21 PM, Eliezer Croitoru wrote:

I have a non-public question but if you can share it will be nice.
What is the user size/capacity of the system?
I am asking since I have seen that many squidGuard based systems have acted
slower than with ICAP.
By slower I mean that the initial squidGuard lookup response caused slower page
display, by milliseconds to a couple of seconds.
I have not researched the exact reasons since I will not try to fix what is 
already fine for many.


squidGuard is slow since each process opens the database and
has a private cache of 15% of the database.
So when a squidGuard process does a URL lookup, it consults that database
cache and may read from disk (or the OS file system cache).

ufdbGuard does it differently: the url_rewriter process is a lightweight
process that forwards a URL lookup to a multithreaded daemon that holds
a single copy of the URL database in memory.  The database format is
also optimised for in-memory lookups.
ufdbGuard does 50,000 URL lookups/second on an
Intel(R) Xeon(R) CPU E5-1620 v2 with 4 cores/8 threads.
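
(As an illustration of that pattern only -- not ufdbGuard's real protocol,
and the daemon address below is an assumption -- a toy forwarder helper
could look like this:)

   #!/usr/bin/env python3
   # Toy sketch of the lightweight-rewriter pattern: the helper does no
   # database work itself; it relays each lookup line from Squid to a
   # long-running daemon that keeps the whole URL database in RAM.
   import socket
   import sys

   DAEMON_ADDR = ("127.0.0.1", 3977)   # assumed daemon host/port

   def main():
       f = socket.create_connection(DAEMON_ADDR).makefile("rw")
       for line in sys.stdin:              # one lookup request per line
           f.write(line)
           f.flush()
           sys.stdout.write(f.readline())  # relay the daemon's verdict
           sys.stdout.flush()              # Squid needs unbuffered replies

   if __name__ == "__main__":
       main()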

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] https antivirus proxy necessary?

2016-06-23 Thread Marcus Kool



On 06/22/2016 11:10 AM, hans.mey...@fn.de wrote:

Do you think it is necessary to have an HTTPS antivirus proxy in addition to
normal client antivirus? We are using Avast Business, which already offers
web protection. Can an additional antivirus proxy significantly raise the
level of protection? In general I think two different antivirus programs see
more than one. On the other hand, an HTTP/HTTPS antivirus proxy is an
additional attack surface, especially because it is costly to build the
latest Squid version with HTTPS support from source on Debian Jessie.
So: set up the proxy or not?


No single antivirus vendor catches all viruses, especially not new or
mutated ones, so it is definitely advisable to use two vendors:
brand-1 on the PCs and brand-2 on the web proxies and email servers.

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Skype Issues

2016-06-30 Thread Marcus Kool



On 06/30/2016 09:10 AM, Amos Jeffries wrote:
...

The on_unsupported_protocol directive is about what its name says: *any*
unsupported protocol. It is not ICQ-specific.

I think the issue here is that Skype traffic looks, at the binary level,
like TLS. Since TLS is a supported protocol, traffic that looks close enough
would be seen as invalid/broken TLS, not as some non-TLS protocol.


Applications may use any protocol that they desire to tunnel through a proxy.
They may use TLS+SMTP, TLS+HTTP, TLS+XYZ, RC4+FOO, SSH, VPN, BAR, TXT and
many others.
Since bumping is intended to only interfere with TLS+HTTP, Squid should bump
_only_ TLS+HTTP and not interfere with all other protocols.

Squid 3.5 finally made a lot of progress with bumping TLS+HTTP and the
missing piece to be able to use it in many environments is a
mechanism to deal with all other protocols (non TLS+HTTP).
The first step is to not break applications. The second step is
to have mechanisms to decide what to do with the other
protocols, since most admins want to block SSH and VPN,
while allowing Skype and BAR.
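
(As a sketch of that first step, using the Squid 4 on_unsupported_protocol
directive Amos mentions above:)

   # tunnel unrecognised protocols instead of terminating them,
   # so applications like Skype keep working
   on_unsupported_protocol tunnel all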

Marcus


Sorry Renato, with that not working I'm not sure where to go next.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-06 Thread Marcus Kool



On 07/06/2016 11:36 AM, Steve Hill wrote:


I'm using a transparent proxy and SSL-peek and have hit a problem with an iOS 
app which seems to be doing broken things with the SNI.

The app is making an HTTPS connection to a server and presenting an SNI with a wildcard 
in it - i.e. "*.example.com".  I'm not sure if this behaviour is actually 
illegal, but it certainly doesn't seem
to make a lot of sense to me.

Squid then internally generates a "CONNECT *.example.com:443" request based on 
the peeked SNI, which is picked up by hostHeaderIpVerify(). Since *.example.com isn't a 
valid DNS name, Squid rejects the
connection on the basis that *.example.com doesn't match the IP address that 
the client is connecting to.

Unfortunately, I can't see any way of working around the problem - 
"host_verify_strict" is disabled, but according to the docs,
"For now suspicious intercepted CONNECT requests are always responded to with an 
HTTP 409 (Conflict) error page."

As I understand it, turning host_verify_strict on causes problems with CDNs 
which use DNS tricks for load balancing, so I'm not sure I understand the 
rationale behind preventing it from being turned
off for CONNECT requests?
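
(For context, a sketch of the kind of peek-and-splice interception setup in
which this shows up; the port and certificate path are placeholders:)

   https_port 3129 intercept ssl-bump cert=/etc/squid/bump.pem
   acl step1 at_step SslBump1
   ssl_bump peek step1
   ssl_bump splice all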


An SNI with a wildcard indeed does not make sense.

Since Squid tries to mimic the behavior of the server and of the client,
it deserves a patch: instead of doing a DNS lookup and then connecting
(based on the result of that lookup?), Squid would simply connect to the
IP address that the client is trying to reach and do the TLS handshake
with the (nonsensical) SNI.
This way it mimics the client a bit better.

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-06 Thread Marcus Kool



On 07/06/2016 10:07 PM, Alex Rousskov wrote:

On 07/06/2016 05:01 PM, Marcus Kool wrote:

On 07/06/2016 11:36 AM, Steve Hill wrote:

I'm using a transparent proxy and SSL-peek and have hit a problem with
an iOS app which seems to be doing broken things with the SNI.

The app is making an HTTPS connection to a server and presenting an
SNI with a wildcard in it - i.e. "*.example.com".  I'm not sure if
this behaviour is actually illegal, but it certainly doesn't seem
to make a lot of sense to me.


[snip]



Q3. What should Squid do when receiving a wildcard SNI?

The first two questions are not really important and each may not even
have a single "correct" answer. I am sure protocol purists can argue
about them forever. The last question is important, which brings us to:


Since Squid tries to mimic the behavior of the server and of the client,
it deserves a patch where instead of doing a DNS lookup and then doing a
connect (based on the result of the DNS lookup?),
Squid simply connects to the IP address that the client tries to connect to
and does the TLS handshake with the SNI (that does not make sense).
This way it mimics the client a bit better.


I believe that is what Squid does already but please correct me if I am
wrong.


Steve said that Squid rejects the connection because of a failed DNS lookup.
So what is Squid doing? Is it doing the following?
- connect to the original IP
- use the presented SNI to the server
- do a DNS lookup and reject


When forming a fake CONNECT request, Squid uses SNI information because
that is what ACLs and adaptation services usually want to see. However,
I hope that intercepting Squid always connects to the intended
destination of the intercepted connection instead of trusting its own
fake CONNECT request.


It is interesting to know if an ICAP server or URL rewriter is called
and with which parameters when the iOS app breaks.


Whether Squid should generate a fake CONNECT with a wildcard host name
is an interesting question:

1. A fake CONNECT targeting a wildcard name may break ACL-driven rules
and adaptation services (at least).


2. A fake CONNECT targeting an IP address instead of a wildcard name may
not give some ACL-driven rules and adaptation services enough
information to make the right decision.

3. A premature rejection of a connection with wildcard SNI does not
allow the admin to make the bump/splice/terminate decision.

#2 is probably the lesser of the three evils, but I wonder whether Squid
should also include the invalid host name as an X-SNI or similar HTTP
header in that CONNECT request, to give advanced ACLs and adaptation
services a better chance to work around known benign issues where the
admin knows the wildcard is not malicious (and to kill wildcard
transactions the admin knows to be malicious!).


I use url_rewrite_extras with "... sni=\"%ssl::>sni\" ..."
so the URL redirector receives both the original IP address and
the peeked SNI value to make its decisions.
I agree that an ICAP service needs X-SNI or perhaps X-Squid-SNI to
also get both the IP address and the SNI value.
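
(As a sketch, such a url_rewrite_extras line could look like the one below;
the key=value layout is the admin's choice, and it assumes %la carries the
original destination IP on intercepted connections:)

   url_rewrite_extras "ip=%>a myip=%la myport=%lp sni=\"%ssl::>sni\""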


A similar question can be asked about SNI names containing unusual
characters. At some point, it would be too dangerous to include SNI
information in the fake CONNECT request because it will interfere with
HTTP rules, but it is not clear where that point is exactly.


To support the weirdest apps Squid might have to simply copy all
unusual characters to present the same parameter values to the server.
But we do not want to break things... so which characters are
so unusual that Squid does not want to accept them?

Marcus


Cheers,

Alex.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-07 Thread Marcus Kool



On 07/07/2016 07:15 AM, Amos Jeffries wrote:

On 7/07/2016 1:55 p.m., Marcus Kool wrote:



On 07/06/2016 10:07 PM, Alex Rousskov wrote:

On 07/06/2016 05:01 PM, Marcus Kool wrote:

On 07/06/2016 11:36 AM, Steve Hill wrote:

I'm using a transparent proxy and SSL-peek and have hit a problem with
an iOS app which seems to be doing broken things with the SNI.

The app is making an HTTPS connection to a server and presenting an
SNI with a wildcard in it - i.e. "*.example.com".  I'm not sure if
this behaviour is actually illegal, but it certainly doesn't seem
to make a lot of sense to me.


[snip]



Q3. What should Squid do when receiving a wildcard SNI?

The first two questions are not really important and each may not even
have a single "correct" answer. I am sure protocol purists can argue
about them forever. The last question is important, which brings us to:


Since Squid tries to mimic the behavior of the server and of the client,
it deserves a patch where instead of doing a DNS lookup and then doing a
connect (based on the result of the DNS lookup?),
Squid simply connects to the IP address that the client tries to
connect to
and does the TLS handshake with the SNI (that does not make sense).
This way it mimics the client a bit better.


I believe that is what Squid does already but please correct me if I am
wrong.


Steve said that Squid rejects the connection because of a failed DNS
lookup.
So what is Squid doing?  Is it doing  the following ?
- connect to the original IP
- use the presented SNI to the server
- do a DNS lookup and reject


No it is doing Host: header verification on the faked CONNECT request
which uses the SNI in the Host: header value. This is not strictly
required for CONNECT messages, but does potentially prevent Squid using
other IPs than the original one the client was contacting.


Squid _has_ the original IP, so why would Squid potentially connect to
another IP?


If the SNI is a valid domain name (i.e. resolves in DNS), then Squid
should continue on past the check.


Does Squid do a DNS lookup or only check the value for "valid" characters?


When forming a fake CONNECT request, Squid uses SNI information because
that is what ACLs and adaptation services usually want to see. However,
I hope that intercepting Squid always connects to the intended
destination of the intercepted connection instead of trusting its own
fake CONNECT request.


It is interesting to know if an ICAP server or URL rewriter is called
and with which parameters when the ios app breaks.


Whether Squid should generate a fake CONNECT with a wildcard host name
is an interesting question:

1. A fake CONNECT targeting an wildcard name may break ACL-driven rules
and adaptation services (at least).

2. A fake CONNECT targeting an IP address instead of a wildcard name may
not give some ACL-driven rules and adaptation services enough
information to make the right decision.

3. A premature rejection of a connection with wildcard SNI does not
allow the admin to make the bump/splice/terminate decision.

#2 is probably the lesser of the three evils, but I wonder whether Squid
should also include the invalid host name as an X-SNI or similar HTTP
header in that CONNECT request, to give advanced ACLs and adaptation
services a better chance to work around known benign issues where the
admin knows the wildcard is not malicious (and to kill wildcard
transactions the admin knows to be malicious!).


I use url_rewrite_extras with "... sni=\"%ssl::>sni\" ..."
so the URL redirector receives both the original IP address and
the peeked SNI value to make its decisions.
I agree that an ICAP service needs X-SNI or perhaps X-Squid-SNI to
also get both the IP address and the SNI value.


That is a problem for the service. Squid can already send anything in:
<http://www.squid-cache.org/Doc/config/adaptation_meta/>.
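
(For example, a sketch -- assuming your Squid release expands logformat
codes such as %ssl::>sni in adaptation_meta values; check the directive's
documentation for your version:)

   # pass the peeked SNI to the ICAP service as an extra header
   adaptation_meta X-SNI "%ssl::>sni" all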


Which problem are you referring to?


Maybe you have mistaken it for the inability of most ICAP services to
send things back to Squid in the same way.


The ICAP server needs both the IP and the SNI, which Squid can be configured
to send. What do you suggest that an ICAP server needs to send back to Squid?


A similar question can be asked about SNI names containing unusual
characters. At some point, it would be too dangerous to include SNI
information in the fake CONNECT request because it will interfere with
HTTP rules, but it is not clear where that point is exactly.


To support the weirdest apps Squid might have to simply copy all
unusual characters to present the same parameter values to the server.


It is being mapped into the HTTP equivalent values, which are the Host:
header and the authority-URI. Only valid FQDN names can make it through the
mapping.


Here things get complicated.
Is it correct for Squid to force apps to follow standards, or should
Squid try to proxy connections for apps when it can?

Marcus


But we do not want to break things... 

Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-07 Thread Marcus Kool



On 07/07/2016 09:23 AM, Amos Jeffries wrote:

On 7/07/2016 11:30 p.m., Marcus Kool wrote:



On 07/07/2016 07:15 AM, Amos Jeffries wrote:

On 7/07/2016 1:55 p.m., Marcus Kool wrote:



On 07/06/2016 10:07 PM, Alex Rousskov wrote:

On 07/06/2016 05:01 PM, Marcus Kool wrote:

On 07/06/2016 11:36 AM, Steve Hill wrote:

I'm using a transparent proxy and SSL-peek and have hit a problem
with
an iOS app which seems to be doing broken things with the SNI.

The app is making an HTTPS connection to a server and presenting an
SNI with a wildcard in it - i.e. "*.example.com".  I'm not sure if
this behaviour is actually illegal, but it certainly doesn't seem
to make a lot of sense to me.


[snip]



Q3. What should Squid do when receiving a wildcard SNI?

The first two questions are not really important and each may not even
have a single "correct" answer. I am sure protocol purists can argue
about them forever. The last question is important, which brings us to:


Since Squid tries to mimic the behavior of the server and of the
client,
it deserves a patch where instead of doing a DNS lookup and then
doing a
connect (based on the result of the DNS lookup?),
Squid simply connects to the IP address that the client tries to
connect to
and does the TLS handshake with the SNI (that does not make sense).
This way it mimics the client a bit better.


I believe that is what Squid does already but please correct me if I am
wrong.


Steve said that Squid rejects the connection because of a failed DNS
lookup.
So what is Squid doing?  Is it doing  the following ?
- connect to the original IP
- use the presented SNI to the server
- do a DNS lookup and reject


No it is doing Host: header verification on the faked CONNECT request
which uses the SNI in the Host: header value. This is not strictly
required for CONNECT messages, but does potentially prevent Squid using
other IPs than the original one the client was contacting.


Squid _has_ the original IP so why would Squid potentially connect to an
other IP ?


Because the inbound and outbound connections are not related. The
outbound is normally done to any of the IPs that the request message
domain/Host header resolves to. It takes special circumstances (such as
failing the Host verification) to tie them together.


Oops. An application that wants to connect to A.B.C.D has an SNI for
foo.example.com which resolves to A.B.C.D and A.B.C.E, and Squid may connect
the stream to A.B.C.E... It is easy to imagine some applications breaking
with this behavior.


If the SNI is a valid domain name (ie resolves in DNS). Then Squid
should continue on past the check.


Does Squid do a DNS lookup or only check the value for "valid" characters?


DNS lookup.


[snip]


A similar question can be asked about SNI names containing unusual
characters. At some point, it would be too dangerous to include SNI
information in the fake CONNECT request because it will interfere with
HTTP rules, but it is not clear where that point is exactly.


To support the weirdest apps Squid might have to simply copy all
unusual characters to present the same parameter values to the server.


It is being mapped into the HTTP equivalent value. Which are Host:
header and authority-URI. Only valid FQDN names can make it through the
mapping.


Here things get complicated.
Is it correct for Squid to force apps to follow standards, or should
Squid try to proxy connections for apps when it can?


Squid isn't enforcing standards here. As Steve's original message says, it
"generates a "CONNECT *.example.com:443" request based on the peeked SNI"
  - which is arguably invalid HTTP syntax, but oh well.

It then is unable to do a DNS lookup for *.example.com to find out what
its IPs are and does the error handling action for a failure to verify
on a CONNECT message.


Yes, the fake CONNECT is dealt with like a regular CONNECT, including the
DNS lookup. I fear that other apps (besides the one iOS app that Steve
refers to) will break because Squid may connect to a different IP than
the one the client/app is requesting.
If Squid uses the original IP to connect, without doing a DNS lookup,
Steve's app will work and potential issues with other apps are
prevented.

Marcus


Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-07 Thread Marcus Kool



On 07/07/2016 10:49 AM, Yuri wrote:


A similar question can be asked about SNI names containing unusual
characters. At some point, it would be too dangerous to include SNI
information in the fake CONNECT request because it will interfere with
HTTP rules, but it is not clear where that point is exactly.


To support the weirdest apps Squid might have to simply copy all
unusual characters to present the same parameter values to the server.


It is being mapped into the HTTP equivalent value. Which are Host:
header and authority-URI. Only valid FQDN names can make it through the
mapping.


Here things get complicated.
Is it correct for Squid to force apps to follow standards, or should
Squid try to proxy connections for apps when it can?


Squid isn't enforcing standards here. As Steve's original message says, it
"generates a "CONNECT *.example.com:443" request based on the peeked SNI"
  - which is arguably invalid HTTP syntax, but oh well.

It then is unable to do a DNS lookup for *.example.com to find out what
its IPs are and does the error handling action for a failure to verify
on a CONNECT message.


Yes, the fake CONNECT is dealt with like a regular CONNECT, including the
DNS lookup. I fear that other apps (besides the one iOS app that Steve
refers to) will break because Squid may connect to a different IP than
the one the client/app is requesting.
If Squid uses the original IP to connect, without doing a DNS lookup,
Steve's app will work and potential issues with other apps are
prevented.



Interesting, Marcus. Does this mean that a CDN may resolve to different IPs
at different points in time, and that this makes client connections
through Squid impossible?


It all depends on the app/client: if it uses a servername/SNI that
resolves to multiple IP addresses but needs to connect to the one
that it specifically wants to CONNECT to, the app can fail since
Squid might choose another IP address to connect to.

Or, apps might become slow, since reconnecting to the same server
they connected to before might be faster.
I think it is best to prevent issues and that Squid should connect
to the IP that the client is trying to connect to.

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Recommended Multi-CPU Configuration

2016-08-02 Thread Marcus Kool

Hi Michael,

Can you share with us what you ended up with?

Thanks
Marcus

On 06/18/2015 12:28 AM, Michael Pelletier wrote:

Which one would be good for capacity/load? I have a very, very large
environment: 220,000 users on 8 Gbit/s to the Internet. I am running a load
balancer, ipvsadm (Direct Routing), with 20 proxies
behind it. I am interested in handling load.

Michael

On Wed, Jun 17, 2015 at 9:31 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:

On 18/06/2015 8:53 a.m., Michael Pelletier wrote:
> Hello,
>
> I am looking to add some more power to Squid. I have seen two different
> types of configurations to do this:
>
> 1. Adding workers directive equal to the number of cpus. Then adding a
> special wrapper around the AUFS disk cache so that the correct worker can
> only access the correct cache. Yes, I know rock is multi cpu capable.
>
> 2. Using the split configuration from the Squid Web page. This involved a
> front end and multiple backend squid servers on the same server.
> http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem
>
> My question is, which one is recommended? What are the pros and cons of
> each?
>

Both and neither. #1 improves bandwidth savings. #2 improves raw speed.
Pick your poison.

These are example configurations only. For real high performance, multiple
machines in a mix of the two setups is even better.
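
(In squid.conf terms the two options look roughly like the sketch below;
paths and sizes are placeholders. Option 2 is ordinary Squid instances on
separate ports, wired together with cache_peer as shown on the wiki page
above.)

   # Option 1: SMP workers in one Squid instance
   workers 4
   # rock cache_dirs can be shared between workers...
   cache_dir rock /var/spool/squid/rock 10000
   # ...while AUFS needs one directory per worker, e.g.:
   # cache_dir aufs /var/spool/squid/aufs${process_number} 10000 16 256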

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org 
http://lists.squid-cache.org/listinfo/squid-users






___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-03 Thread Marcus Kool



On 08/03/2016 12:30 AM, Amos Jeffries wrote:



If that's not fast enough, you may also wish to patch in a larger value
for HTTP_REQBUF_SZ in src/defines.h to 64KB, with a matching increase to
read_ahead_gap in squid.conf. That has had some mixed results though:
faster traffic, but also some assertions being hit.


I remember the thread about increasing the request buffer to 64K and it
looked so promising.
Is there any evidence that setting HTTP_REQBUF_SZ to 16K is stable in 3.5.x?
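
(For reference, the experiment amounts to something like the sketch below --
an illustration of the idea, not a tested patch:)

   /* src/defines.h */
   #define HTTP_REQBUF_SZ 16384    /* stock value is 4096 */

   # squid.conf: matching read-ahead increase
   read_ahead_gap 16 KB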

Marcus


You may find that memory becomes your bottleneck at higher speeds.
8-16GB sounds like a lot for most uses, but when you have enough
connections active to drive Gbps (with 4-6x 64KB I/O buffers) there are
is lot of parallel pressures on the RAM.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-03 Thread Marcus Kool



On 08/03/2016 10:27 AM, Amos Jeffries wrote:

On 3/08/2016 9:45 p.m., Marcus Kool wrote:



On 08/03/2016 12:30 AM, Amos Jeffries wrote:



If that's not fast enough, you may also wish to patch in a larger value
for HTTP_REQBUF_SZ in src/defines.h to 64KB, with a matching increase to
read_ahead_gap in squid.conf. That has had some mixed results though:
faster traffic, but also some assertions being hit.


I remember the thread about increasing the request buffer to 64K and it
looked so promising.
Is there any evidence that setting HTTP_REQBUF_SZ to 16K is stable in 3.5.x?



It has not had much testing other than Nathan's use, so I'm a bit
hesitant to call it stable. But just raising the 4KB limit to 64K
or less should not have much negative effect other than extra RAM
per transaction for buffering (bumped x8 from 256KB per client
connection to 2MB).


I am about to configure an array of Squid servers to process 50 Gbit/s of
traffic, and the performance increase that Nathan originally reported is
significant... So if I understand it correctly, raising it to 16K in 3.5.20
will most likely cause no issues. I will give it a try.

Thanks
Marcus


We got a bit ambitious and made the main buffers dynamic and effectively
unlimited for Squid-4. But that hit an issue, so has been pulled out
while Nathan figures out how to avoid it.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance not able to drive a 1Gbps internet link

2016-08-04 Thread Marcus Kool



On 08/04/2016 10:08 AM, Heiler Bemerguy wrote:


Sorry Amos, but I've tested modifying JUST these two sysctl parameters and
the difference is huge.

Without the maximum TCP buffers set to 8MB I got a 110KB/s download speed;
with an 8MB kernel buffer I got a 9.5MB/s download speed (via Squid, of
course).

I think it has to do with the maximum TCP window size the kernel can set
on a connection.
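
(The post does not name the two parameters, so the following Linux sysctl
lines are an assumption about the usual candidates for an 8MB ceiling, not
a copy of Heiler's settings:)

   net.core.rmem_max = 8388608
   net.core.wmem_max = 8388608
   net.ipv4.tcp_rmem = 4096 87380 8388608
   net.ipv4.tcp_wmem = 4096 65536 8388608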


With these tuning parameters it is always important to look at the
bandwidth*latency product.
I see that you are from Brazil, and I know from experience that latencies
to Europe are 230+ ms while latencies to the USA vary between 80 and 200 ms.
I believe that the large variation in latency is due to the limited
international capacity of Brazil (the Level3 link from the SP-IX to the
USA is 90+% utilized for most of the day).
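
(A worked example of that product: per-connection throughput is bounded by
window/RTT, so at 230 ms RTT

   8 MB   / 0.23 s ~= 35 MB/s
   256 KB / 0.23 s ~= 1.1 MB/s

which is consistent in direction with the jump from 110KB/s to 9.5MB/s
reported above.)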

Marcus

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

