Re: [squid-users] IPv4 addresses go missing - markAsBad wrong?

2024-02-12 Thread NgTech LTD
What distro are you using?

On Mon, Feb 12, 2024 at 13:47 Stephen Borrill <sq...@borrill.org.uk> wrote:

> On 16/01/2024 14:37, Alex Rousskov wrote:
> > On 2024-01-16 06:01, Stephen Borrill wrote:
> >> The problem is no different with 6.6. Is there any more debugging I
> >> can provide, Alex?
> >
> > Yes, but I need to give you a patch that adds that (temporary) debugging
> > first (assuming I fail to reproduce the problem in the lab). The ball is
> > in my court (unless somebody else steps in). Unfortunately, I do not have
> > any free time for any of that right now. If you do not hear from me
> > sooner, please ping me again on or after February 8, 2024.
>
> PING!
>
> I will get 6.7 compiled up so we can add debugging to it quickly. It
> would be good if we could get something in place this week as it is
> school holidays next week in the UK and so there will be little
> opportunity to test until afterwards.
>
> >> On 10/01/2024 12:40, Stephen Borrill wrote:
> >>> On 09/01/2024 15:42, Alex Rousskov wrote:
>  On 2024-01-09 05:56, Stephen Borrill wrote:
> > On 09/01/2024 09:51, Stephen Borrill wrote:
> >> On 09/01/2024 03:41, Alex Rousskov wrote:
> >>> On 2024-01-08 08:31, Stephen Borrill wrote:
>  I'm trying to determine why squid 6.x (seen with 6.5) connected
>  via IPv4-only periodically fails to connect to the destination
>  and then requires a restart to fix it (reload is not sufficient).
> 
>  The problem appears to be that a host that has one address each
>  of IPv4 and IPv6 occasionally has its IPv4 address go missing as
>  a destination. On closer inspection, this appears to happen when
>  the IPv6 address (not the IPv4) address is marked as bad.
> 
> > ipcache.cc(990) have: [2001:4860:4802:32::78]:443 at 0 in
> > 216.239.38.120 #1/2-0
> 
> 
>  Thank you for sharing more debugging info!
> >>>
> >>> The following seemed odd too. It finds an IPv4 address (this host does
> >>> not have IPv6), puts it in the cache and then says "No DNS records":
> >>>
> >>> 2024/01/09 12:31:24.020 kid1| 14,4| ipcache.cc(617) nbgethostbyname:
> >>> schoolbase.online
> >>> 2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(313) ipcacheRelease:
> >>> ipcacheRelease: Releasing entry for 'schoolbase.online'
> >>> 2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(670)
> >>> ipcache_nbgethostbyname_: ipcache_nbgethostbyname: MISS for
> >>> 'schoolbase.online'
> >>> 2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(480) ipcacheParse: 1
> >>> answers for schoolbase.online
> >>> 2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no
> >>> 20.54.32.34 in [no cached IPs]
> >>> 2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no
> >>> 20.54.32.34 in [no cached IPs]
> >>> 2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(549) updateTtl: use
> >>> first 69 from RR TTL 69
> >>> 2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(535) addGood:
> >>> schoolbase.online #1 20.54.32.34
> >>> 2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(253) forwardIp:
> >>> 20.54.32.34
> >>> 2024/01/09 12:31:24.020 kid1| 44,2| peer_select.cc(1174) handlePath:
> >>> PeerSelector72389 found conn564274 local=0.0.0.0
> >>> remote=20.54.32.34:443 HIER_DIRECT flags=1, destination #1 for
> >>> schoolbase.online:443
> >>> 2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(459) latestError:
> >>> ERROR: DNS failure while resolving schoolbase.online: No DNS records
> >>> 2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(586)
> >>> ipcacheHandleReply: done with schoolbase.online: 20.54.32.34 #1/1-0
> >>> 2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(236) finalCallback:
> >>> 0x1b7381f38  lookup_err=No DNS records
> >>>
> >>> It seemed to happen about the same time as the other failure, so
> >>> perhaps another symptom of the same problem.
> >>>
>  The above log line is self-contradictory AFAICT: It says that the
>  cache has both IPv6-looking and IPv4-looking addresses at the same
>  cache position (0) and, judging by the corresponding code, those two
>  IP addresses are equal. This is not possible (for those specific IP
>  address values). The subsequent Squid behavior can be explained by
>  this (unexplained) conflict.
> 
>  I assume you are running official Squid v6.5 code.
> >>>
> >>> Yes, compiled from source on NetBSD. I have the patch I refer to here
> >>> applied too:
> >>>
> https://lists.squid-cache.org/pipermail/squid-users/2023-November/026279.html
> >>>
>  I can suggest the following two steps for going forward:
> 
>  1. Upgrade to the latest Squid v6 in hope that the problem goes away.
> >>>
> >>> I have just upgraded to 6.6.
> >>>
>  2. If the problem is still there, patch the latest Squid v6 to add
>  more debugging in hope to explain what is going on. This may take a
>  few iterations, and it will take me some time to produce the
>  necessary debugging patch.
> >>>
> >>> Unfortunately, I don't have

Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-05 Thread NgTech LTD
Depends on the config structure.
If you can send me a private email with the config (sensitive details
reduced), it will help to understand the scenario.

Eliezer

On Wed, Jun 5, 2024 at 17:31 Akash Karki (CONT) <akash.ka...@capitalone.com> wrote:

> Hi Team,
>
> We are running on squid ver 4.15 and want to update to n-1 of the latest
> ver (I believe 6.9 is the latest ver).
>
> I want to understand if we can go straight from 4.15 to 6.x (n-1 of the
> latest version) without any intermediary steps, or do we have to update to
> an intermediary version first and then move to the n-1 version, 6.9?
>
> Kindly send us the detailed guidance!
>
> On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) <
> akash.ka...@capitalone.com> wrote:
>
>> Hi Team,
>>
>> We are running on squid ver 4.15 and want to update to n-1 of the latest
>> ver (I believe 6.9 is the latest ver).
>>
>> I want to understand if we can go straight from 4.15 to 6.x (n-1 of the
>> latest version) without any intermediary steps, or do we have to update to
>> an intermediary version first and then move to the n-1 version, 6.9?
>>
>> Kindly send us the detailed guidance!
>>
>> --
>> Thanks & Regards,
>> Akash Karki
>>
>>
>> Save Nature to Save yourself :)
>>
>
>
> --
> Thanks & Regards,
> Akash Karki
> UK Hawkeye Team
> *Slack : *#uk-monitoring
> *Confluence : *UK Hawkeye
> 
>
> Save Nature to Save yourself :)
> --
>
>
> The information contained in this e-mail may be confidential and/or
> proprietary to Capital One and/or its affiliates and may only be used
> solely in performance of work or services for Capital One. The information
> transmitted herewith is intended only for use by the individual or entity
> to which it is addressed. If the reader of this message is not the intended
> recipient, you are hereby notified that any review, retransmission,
> dissemination, distribution, copying or other use of, or taking of any
> action in reliance upon this information is strictly prohibited. If you
> have received this communication in error, please contact the sender and
> delete the material from your computer.
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>


Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-14 Thread NgTech LTD
Hey Amos,

Ok, so with the tools we have available, can we take this case and maybe
write a brief summary of changes between the Squid feature versions?

I can't guarantee a time limit, but it would be very helpful for the
community to get feedback in such cases.

If anyone has done this kind of task, please share the details with us so
others will be able to benefit from your invested time.

Thanks,
Eliezer

* I am well aware...

On Fri, Jun 14, 2024 at 11:36 Amos Jeffries <squ...@treenet.co.nz> wrote:

>
> Regarding the OP question:
>
> Upgrade for all Squid-3 is to:
>   * read release notes of N thru M versions (as-needed) about existing
> feature changes
>   * install the new version
>   * run "squid -k parse" to identify mandatory changes
>   * fix all "FATAL" and "ERROR" identified
>   * run with new version
>
> ... look at all logged "NOTICE", "UPGRADE" etc, and the Release Notes
> new feature additions to work on operational improvements possible with
> the new version.
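
The checklist above can be sketched as a shell session (a hypothetical sketch;
paths and the install step vary by distro, and this assumes the new binary is
already installed):

```shell
# 3. parse the old config with the new binary to find mandatory changes
squid -k parse -f /etc/squid/squid.conf 2>&1 | tee parse.log

# 4. anything FATAL or ERROR must be fixed before starting the new version
grep -E 'FATAL|ERROR' parse.log

# 5-6. after starting, review advisory output for optional cleanups
grep -E 'NOTICE|UPGRADE|WARNING' parse.log
```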
>
>
> HTH
> Amos
>
>
> On 10/06/24 19:43, ngtech1ltd wrote:
> >
> > @Alex and @Amos, can you try to help me compile a menu list of
> > functionalities that Squid-Cache can be used for?
> >
>
> The Squid wiki ConfigExamples section does that. Or at least is supposed
> to, with examples per use-case.
>
>
> FYI, this line of discussion you have is well off-topic for Akash's
> original question and I think is probably just adding confusion.
>
>
> Cheers
> Amos


Re: [squid-users] IPTABLES - Can't redirect HTTPS traffic to external Squid

2024-07-30 Thread NgTech LTD
Hey,

The DNAT rule should be done on the Squid box itself.
You will need to re-route the relevant traffic over the IPsec tunnel to the
Squid IP.
It's possible to do that over IPIP or GRE tunnels.
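
A GRE-based variant of this might look like the following on the gateway (a
hypothetical sketch; the gateway address 203.0.113.10 and table/mark numbers
are examples, only 172.31.0.1 and 192.168.60.90 come from the thread):

```shell
# bring up a GRE tunnel from the gateway to the proxy
ip tunnel add gre1 mode gre local 203.0.113.10 remote 172.31.0.1 ttl 64
ip link set gre1 up

# mark the client's web traffic and policy-route it into the tunnel
iptables -t mangle -A PREROUTING -s 192.168.60.90 -p tcp \
  -m multiport --dports 80,443 -j MARK --set-mark 1
ip rule add fwmark 1 table 100
ip route add default dev gre1 table 100
```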

Eliezer

On Tue, Jul 30, 2024 at 15:41 Bolinhas André <andre.bolin...@articatech.com> wrote:

> I have an external proxy server connected by VPN (IPsec) to my main branch,
> and I'm trying to redirect all users' HTTP / HTTPS traffic to this proxy.
> Scenario: Users -> Gateway (Main Branch) -> IPsec -> Squid Proxy
> (transparent mode)
>
> In my Gateway (Main Branch) I have this test iptables rule, which is
> forwarding all the TCP / UDP traffic to the Proxy server.
>
> iptables -t nat -I PREROUTING -s 192.168.60.90 -p tcp -j DNAT 
> --to-destination 172.31.0.1
> iptables -t nat -I PREROUTING -s 192.168.60.90 -p udp -j DNAT 
> --to-destination 172.31.0.1
>
> On the Squid proxy server I have the following rules
>
> iptables -t nat -I PREROUTING -s 192.168.60.90/32 -p tcp -m tcp --dport 443 
> -m comment --comment ArticaSquidTransparent -j REDIRECT --to-ports 8081
> iptables -t nat -I PREROUTING -s 192.168.60.90/32 -p tcp -m tcp --dport 80 -m 
> comment --comment ArticaSquidTransparent -j REDIRECT --to-ports 8080
>
> Everything is working correctly, HTTP traffic is OK, DNS is also working;
> the only exception is the HTTPS traffic. I can see the HTTPS traffic inside
> the squid access.log but on the client side I get a timeout
>
> 1722265740.867  1 192.168.60.90 TCP_TUNNEL/200 0 CONNECT cnn.com:443 - 
> HIER_DIRECT/51.210.183.2:443 - mac="00:00:00:00:00:00" 
> webfilterpolicy:%200%0D%0A exterr="-|-"
>
> Can anyone help me understand if I'm missing some iptables rule to handle
> the HTTPS traffic?
>
> Sent from Nine 


Re: [squid-users] IPTABLES - Can't redirect HTTPS traffic to external Squid

2024-07-30 Thread NgTech LTD
Hey,

Sorry, I misunderstood the scenario.
For now let's assume the packets are routed to the proxy properly, but let's
try to understand: how do you route the traffic to the proxy?

Also, what is defined on the proxy's http_port?

Are you using Artica proxy?
Where do you implement the iptables rules?

Eliezer

On Tue, Jul 30, 2024 at 23:54 Bolinhas André <andre.bolin...@articatech.com> wrote:

>
> Hi
>
> Do you mean use this
>
> iptables -t nat -I PREROUTING -s 192.168.60.90/32 -p tcp -m tcp --dport
> 443 -m comment --comment ArticaSquidTransparent -j DNAT --to-destination
> 172.31.0.1:25976
>
> iptables -t nat -I PREROUTING -s 192.168.60.90/32 -p tcp -m tcp --dport
> 80 -m comment --comment ArticaSquidTransparent -j DNAT --to-destination
> 172.31.0.1:52406
>
> Instead this
>
> iptables -t nat -I PREROUTING -s 192.168.60.90/32 -p tcp -m tcp --dport
> 443 -m comment --comment ArticaSquidTransparent -j REDIRECT --to-ports 25976
>
> iptables -t nat -I PREROUTING -s 192.168.60.90/32 -p tcp -m tcp --dport
> 80 -m comment --comment ArticaSquidTransparent -j REDIRECT --to-ports 52406
>
> ?
>
> Do I also need some kind of
>
> -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
>
> ?
>
> Best regards
> Sent from Nine <http://www.9folders.com/>
> --
> *From:* NgTech LTD
> *Sent:* Tuesday, July 30, 2024 14:44
> *To:* Bolinhas André
> *Cc:* squid-users@lists.squid-cache.org
> *Subject:* Re: [squid-users] IPTABLES - Can't redirect HTTPS traffic to
> external Squid
>
>
>
> Hey,
>
> The DNAT rule should be done on the Squid box itself.
> You will need to re-route the relevant traffic over the IPsec tunnel to
> the Squid IP.
> It's possible to do that over IPIP or GRE tunnels.
>
> Eliezer
>
> On Tue, Jul 30, 2024 at 15:41 Bolinhas André <andre.bolin...@articatech.com> wrote:
>
>> I have an external proxy server connected by VPN (IPsec) to my main
>> branch, and I'm trying to redirect all users' HTTP / HTTPS traffic to this
>> proxy.
>> Scenario: Users -> Gateway (Main Branch) -> IPsec -> Squid Proxy
>> (transparent mode)
>>
>> In my Gateway (Main Branch) I have this test iptables rule, which is
>> forwarding all the TCP / UDP traffic to the Proxy server.
>>
>> iptables -t nat -I PREROUTING -s 192.168.60.90 -p tcp -j DNAT 
>> --to-destination 172.31.0.1
>> iptables -t nat -I PREROUTING -s 192.168.60.90 -p udp -j DNAT 
>> --to-destination 172.31.0.1
>>
>> On the Squid proxy server I have the following rules
>>
>> iptables -t nat -I PREROUTING -s 192.168.60.90/32 -p tcp -m tcp --dport 443 
>> -m comment --comment ArticaSquidTransparent -j REDIRECT --to-ports 8081
>> iptables -t nat -I PREROUTING -s 192.168.60.90/32 -p tcp -m tcp --dport 80 
>> -m comment --comment ArticaSquidTransparent -j REDIRECT --to-ports 8080
>>
>> Everything is working correctly, HTTP traffic is OK, DNS is also
>> working; the only exception is the HTTPS traffic. I can see the HTTPS
>> traffic inside the squid access.log but on the client side I get a timeout
>>
>> 1722265740.867  1 192.168.60.90 TCP_TUNNEL/200 0 CONNECT cnn.com:443 - 
>> HIER_DIRECT/51.210.183.2:443 - mac="00:00:00:00:00:00" 
>> webfilterpolicy:%200%0D%0A exterr="-|-"
>>
>> Can anyone help me understand if I'm missing some iptables rule to handle
>> the HTTPS traffic?
>>
>> Sent from Nine <http://www.9folders.com/>


[squid-users] Threat feed like utility

2024-08-17 Thread NgTech LTD
I am testing a couple of options and I wanted to see what tool provides
a "threat feed"-like capability for URL filtering with Squid.

The available open-source tools for URL filtering are SquidGuard (which is
ancient) and ufdbGuard.
There are also other tools out there which are not open source.
FortiGate devices have threat feeds which can be wildcards (dstdomain-like),
full URLs, or IP addresses: a couple of types of feeds.

In the Squid world I would call it an ACL feed, which can be used in the
context of a real ACL or by external software.

An example feed can be seen at:
<https://git.ngtech.co.il/NgTech-LTD/youtube-urls-feed>

Usually YouTube does a decent (not the best) job with content filtering, but
there are also other reasons for filtering, like age-rating systems.
We can generate a per-age rating feed in a DB and then create a profile
which will include the allowed material for the specific age.
A profile can be built from a couple of categories, which are basically URL
patterns and domains.

I am working on a technical video which will explain how SquidGuard works
internally, and with that anyone can build their own external helper freely
in any language and with any DB they like.
There are other aspects which ufdbGuard implements, but as far as URL
filtering goes, the systems are pretty simple to implement.
A rate of 50k URL checks per second is easy to quote, but harder to
understand technically, and in my opinion this needs to be demystified.
There are technical specs for queries and requests per second, and these
have limits.
Currently I have 64GB of RAM for the DB and it seems to perform well, but it
cannot reach 3k queries per second.
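
The external-software side of such an ACL feed can be sketched as a Squid
external_acl_type helper. This is a hypothetical, non-concurrent shell sketch
that reads `%URI %SRC` pairs and answers OK/ERR per line; a real deployment
would use the concurrency channel-ID protocol and a DB lookup instead of a
hard-coded pattern:

```shell
# Decide whether a URL is allowed; the helper's main loop calls this
# once per request line received from Squid.
check_url() {
  case "$1" in
    *.youtube.com/*|*.ytimg.com/*) echo "OK" ;;   # allowed categories (example)
    *)                             echo "ERR" ;;  # everything else denied
  esac
}

# Helper main loop (reads "%URI %SRC" lines from Squid on stdin):
#   while read uri src; do check_url "$uri"; done
```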



Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


[squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-19 Thread NgTech LTD
I am testing Squid 6.10 on Fedora 40 (their package),
and it seems that Squid is unable to bump clients (ESNI/ECH?).

I have tried a couple of iterations of peek, stare, and bump, and I am not
sure what the reason is. My config:
shutdown_lifetime 3 seconds
external_acl_type whitelist-lookup-helper ipv4 ttl=10 children-max=10 children-startup=2 \
    children-idle=2 concurrency=10 %URI %SRC /usr/local/bin/squid-conf-url-lookup.rb
acl whitelist-lookup external whitelist-lookup-helper
acl ytmethods method POST GET
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16         # RFC 1918 local private network (LAN)
acl localnet src fc00::/7               # RFC 4193 local private network range
acl localnet src fe80::/10              # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access deny to_localhost
http_access deny to_linklocal
acl tubedoms dstdomain .ytimg.com .youtube.com .youtu.be
http_access allow ytmethods localnet tubedoms whitelist-lookup
http_access allow localnet
http_access deny all
http_port 3128
http_port 13128 ssl-bump tls-cert=/etc/squid/ssl/cert.pem
tls-key=/etc/squid/ssl/key.pem \
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
http_port 23128 tproxy ssl-bump tls-cert=/etc/squid/ssl/cert.pem
tls-key=/etc/squid/ssl/key.pem \
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
http_port 33128 intercept ssl-bump tls-cert=/etc/squid/ssl/cert.pem
tls-key=/etc/squid/ssl/key.pem \
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/spool/squid/ssl_db -M 4MB
sslcrtd_children 5
acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ERR_TOO_BIG
acl serverTalksFirstProtocol squid_error ERR_REQUEST_START_TIMEOUT
on_unsupported_protocol tunnel foreignProtocol
on_unsupported_protocol tunnel serverTalksFirstProtocol
on_unsupported_protocol respond all
acl monitoredSites ssl::server_name .youtube.com .ytimg.com
acl monitoredSitesRegex ssl::server_name_regex \.youtube\.com \.ytimg\.com
acl serverIsBank ssl::server_name .visa.com
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump bump all
strip_query_terms off
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern -i (/cgi-bin/|\?)  0    0%      0
refresh_pattern .               0       20%     4320
logformat ssl_custom_format %ts.%03tu %6tr %>a %Ss/%03>Hs %sni
access_log daemon:/var/log/squid/access.log ssl_custom_format
##EOF
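
For reference, a peek-then-bump variant of the `ssl_bump bump all` line above,
using the step ACLs already defined in this config, might look like the
following (a sketch, not a tested configuration):

```
# peek at the TLS client hello first, then decide per destination
ssl_bump peek step1
ssl_bump splice serverIsBank      # leave bank traffic encrypted end-to-end
ssl_bump stare step2
ssl_bump bump all
```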

access.log from before:
1724028804.797486 192.168.78.15 TCP_TUNNEL/200 17764 CONNECT
40.126.31.73:443 - ORIGINAL_DST/40.126.31.73 - -
1724028805.413  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028806.028  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028806.028  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028806.029  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028806.030  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028806.085 57 192.168.78.15 TCP_TUNNEL/200 4513 CONNECT
104.18.72.113:443 - ORIGINAL_DST/104.18.72.113 - -
1724028806.086 56 192.168.78.15 TCP_TUNNEL/200 4513 CONNECT
104.18.72.113:443 - ORIGINAL_DST/104.18.72.113 - -
1724028806.086 56 192.168.78.15 TCP_TUNNEL/200 4512 CONNECT
104.18.72.113:443 - ORIGINAL_DST/104.18.72.113 - -
1724028806.208  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028806.213  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028806.338  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028806.469  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028806.596  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- HIER_NONE/- - -
1724028807.006  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request
- H

Re: [squid-users] squid5.5 restart failure due to domain list duplication

2024-09-10 Thread NgTech LTD
If you need a helper that will resolve this issue (i.e., clean up the
list), it's pretty simple to write one for you.
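
Such a cleanup pass can be sketched in shell (a hypothetical sketch: it keeps
the broadest entries and drops exact duplicates and entries that are
subdomains of another entry, which is what triggers the ERROR/FATAL below):

```shell
# Read a dstdomain list on stdin, print it with duplicate and subdomain
# entries removed. Shorter (broader) domains are processed first, so a
# parent like "example.com" suppresses "a.example.com".
dedupe_domains() {
  awk '{ print length, $0 }' | sort -n | cut -d' ' -f2- | awk '
    { d = $0; sub(/^\./, "", d) }               # normalize Squid leading-dot form
    {
      for (p in kept)
        if (d == p || substr(d, length(d) - length(p)) == "." p)
          next                                  # duplicate or subdomain: drop
      kept[d] = 1
      print
    }'
}
# usage: dedupe_domains < domains.txt > domains.clean
```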

Eliezer

On Thu, Sep 5, 2024 at 8:53 YAMAGUCHI NOZOMI (JIT ICC) <nozomi.yamaguchi...@jalinfotec.co.jp> wrote:

> To whom it may concern,
>
> If there are duplicate domains in the list of domains used, restarting
> Squid causes the process to stop.
> Below is the error output:
> ERROR: 'a.example.com' is a subdomain of 'example.com'
> FATAL: /etc/squid/squid.conf
> I don't think the same thing happened with my previous Squid 3.5.
>
> I have a few questions.
> ・Is it possible to configure the process not to stop even if there are
> duplicates in the domain list?
> ・Are there any other user actions besides duplicate domains that would
> trigger a process stop?
>
> Any pointers would be really helpful.
>
> Thanks in advance.
>
> Regards,
> Nichole


Re: [squid-users] Trusted first verification regarding cross root cert

2020-06-29 Thread NgTech LTD
Upgrading to OpenSSL 1.1 on a running OS is a challenge for any sysadmin.

Eliezer

On Mon, Jun 29, 2020, 13:30  wrote:

> Hi Amos,
>
> >Ah. This is a feature of OpenSSL v1.1. Apparently your OpenSSL v1.0 has
> >had the feature *partially* backported to it.
> >I suggest you upgrade to Squid-4 and build against OpenSSL v1.1 where
> >this "feature" is the default behaviour.
>
> Yes, exactly. However, currently I am using CentOS 7, whose openssl package
> version is still 1.0.
> Upgrading openssl to v1.1.1 is challenging for me. Could you please
> implement the trusted_first option in squid-4? ...
>
> Regards,
> --
> Mikio Kishi
>
>
> On Mon, Jun 29, 2020 at 7:05 PM Amos Jeffries 
> wrote:
>
>> On 29/06/20 7:29 pm, mikio.kishi wrote:
>> > Hi Amos,
>> >
>> > Thank you for your reply and I apologize for the missing information.
>> > The following is the detailed one.
>> >
>> >> * Squid version
>> > * squid version 3.5.26 (probably, ver4.X also might have same issue)
>> > * OpenSSL 1.0.2k
>> >
>> >> * details of the chain being delivered to Squid
>> >> * details of the expected cross-signing chain(s).
>> >
>> > There are so many websites which are facing this issue.
>> > For instance, "sbv.gov.vn:443 ".
>> >
>> > # openssl s_client -connect sbv.gov.vn:443 
>> > -servername sbv.gov.vn  -showcerts -verify 5 -state
>> > verify depth is 5
>>
>> ...
>> >
>> > Could you please add the trusted_first option on squid ?
>> >
>>
>> Ah. This is a feature of OpenSSL v1.1. Apparently your OpenSSL v1.0 has
>> had the feature *partially* backported to it.
>>
>> I suggest you upgrade to Squid-4 and build against OpenSSL v1.1 where
>> this "feature" is the default behaviour. Squid-3 is no longer supported
>> for code updates.
>>
>>
>> Amos


Re: [squid-users] High memory usage under load with caching disabled, memory is not being freed even with no load

2020-08-05 Thread NgTech LTD
I think that mgr:info or another cache manager page contains the number of
requests per second, etc.
Also, netstat or ss -ntp might give some basic understanding of this
server's load.

Are you using dynamic memory on the Hyper-V hypervisor?
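
For reference, a quick way to pull those numbers (a sketch assuming
squidclient is installed and the cache manager is reachable locally):

```shell
squidclient mgr:info | grep -i 'requests per'   # request-rate lines
ss -ntp | wc -l                                 # rough count of open TCP connections
```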

Eliezer

On Wed, Aug 5, 2020, 19:59 Ivan Bulatovic  wrote:

> Hi Alex,
>
> Thank you very much for your help.
>
> I opened a bug on bugs.squid-cache.org
> (https://bugs.squid-cache.org/show_bug.cgi?id=5071).
>
> Best regards,
> Ivan
>
> On Mon, Aug 3, 2020 at 10:02 PM Alex Rousskov
>  wrote:
> >
> > On 8/3/20 9:11 AM, Ivan Bulatovic wrote:
> >
> > > Looks like squid has some serious memory issues when under heavy load
> > > (90 servers that crawl Internet sites).
> >
> > > Maximum Resident Size: 41500720 KB
> >
> > If the above (unreliable) report matches your observations using system
> > tools like "top", then it is indeed likely that your Squid is suffering
> > from a memory leak -- 41GB is usually too much for most non-caching
> > Squid instances.
> >
> > Identifying the leak may take some time, and I am not volunteering to do
> > the necessary legwork personally, but the Squid Project does fix
> > virtually all runtime leaks that we know about. If you want to speed up
> > the process, one of the best things you can do is to run Squid under
> > valgrind with a good suppression file. This requires building Squid with
> > a special ./configure option. Several testing iterations may be
> > necessary. If you are willing to do this, please file a bug report and
> > somebody will guide you through the steps.
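
The valgrind run described might look like the following (a hypothetical
sketch; `--with-valgrind-debug` is the relevant ./configure option, and the
suppression-file path is an example):

```shell
# rebuild Squid with valgrind hooks (keep your other ./configure options)
./configure --with-valgrind-debug
make && make install

# run a single non-daemonized worker under valgrind
valgrind --leak-check=full --suppressions=/etc/squid/valgrind.supp \
  squid -N -f /etc/squid/squid.conf
```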
> >
> >
> > > It just eats up memory, and
> > > does not free it up even days after it is being used (with no load on
> > > the proxy for days).
> >
> > Some memory retention is expected by default. See
> > http://www.squid-cache.org/Doc/config/memory_pools/
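
For example, pooled-memory retention can be bounded or disabled in squid.conf
(a sketch using the directives documented on the page linked above; the 64 MB
limit is an arbitrary example):

```
# limit idle pooled memory kept around for reuse...
memory_pools_limit 64 MB
# ...or disable pooling entirely
# memory_pools off
```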
> >
> > Unfortunately, AFAICT, your mgr:mem output does not show any obvious
> > leaks -- all numbers are very small. If something is leaking a lot, then
> > it is probably not pooled by Squid.
> >
> >
> > HTH,
> >
> > Alex.
> >
> >
> > > On Mon, Jul 20, 2020 at 10:46 PM Ivan Bulatovic wrote:
> > >>
> > >> Hi all,
> > >>
> > >> I am trying to configure squid to run as a forward proxy with no
> > >> caching (cache deny all) with an option to choose the outgoing IP
> > >> address based on the username. So all squid has to do is to use a
> > >> certain outgoing IP address for a certain user, return the data from
> > >> the server to that user and cache nothing.
> > >>
> > >> For that I created a special authentication helper and used the ACLs
> > >> and tcp_outgoing_address to create a lot of users and outgoing IP
> > >> addresses (about 260 at the moment). Example (not the real IP I use,
> > >> of course):
> > >>
> > >> acl use_IP1 proxy_auth user1
> > >> tcp_outgoing_address 1.2.3.4   use_IP1
> > >>
> > >> I also configured the squid to use 4 workers, but this happens even
> > >> when I use only one worker (default)
> > >>
> > >> And this works. However, under heavy load, Squid eats all of the RAM
> > >> and then starts going to swap. And the memory usage does not drop when
> > >> I remove all the load from squid (I shut down all clients).
> > >>
> > >> I left it to see if the memory will be freed but even after leaving it
> > >> for an hour the info page reports this:
> > >> Cache information for squid:
> > >> Hits as % of all requests:  5min: 0.0%, 60min: 0.0%
> > >> Hits as % of bytes sent:5min: 0.0%, 60min: 1.1%
> > >> Memory hits as % of hit requests:   5min: 0.0%, 60min:
> 0.0%
> > >> Disk hits as % of hit requests: 5min: 0.0%, 60min: 100.0%
> > >> Storage Swap size:  0 KB
> > >> Storage Swap capacity:   0.0% used, 100.0% free
> > >> Storage Mem size:   0 KB
> > >> Storage Mem capacity:0.0% used, 100.0% free
> > >> Mean Object Size:   0.00 KB
> > >> Requests given to unlinkd:  0
> > >>
> > >> Resource usage for squid:
> > >> UP Time:255334.875 seconds
> > >> CPU Time:   7122.436 seconds
> > >> CPU Usage:  2.79%
> > >> CPU Usage, 5 minute avg:0.05%
> > >> CPU Usage, 60 minute avg:   37.66%
> > >> Maximum Resident Size: 41500720 KB
> > >> Page faults with physical i/o: 1003410
> > >>
> > >> And here is the listing of free and top commands (with no load on the
> server):
> > >>
> > >> # free -h
> > >>   totalusedfree  shared  buff/cache
>  available
> > >> Mem:11G 10G791M676K491M
>   1.0G
> > >> Swap:   11G5.5G6.5G
> > >>
> > >> # top
> > >> top - 14:12:32 up 3 days,  1:30,  1 user,  load average: 0.00, 0.00,
> 0.00
> > >> Tasks: 177 total,   1 running, 102 sleeping,   0 stopped,   0 zombie
> > >> %Cpu0  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0
> si,  0.0 st
> > >> %Cpu1  :  0.0 us,  0.0 s

Re: [squid-users] High memory usage under load with caching disabled, memory is not being freed even with no load

2020-08-05 Thread NgTech LTD
Hey Ivan,

From what I remember, there is a calculation for how much memory per
connection Squid should use.
Another thing is that Squid is not returning memory to the OS once it has
taken it.
Amos knows about this and might be able to respond.

Eliezer

On Wed, Aug 5, 2020, 20:34 Ivan Bulatovic  wrote:

> Hi Eliezer,
>
> In the original message I sent to the squid-users mail list, I
> attached listings from mgr:info and mgr:mem. The server is definitely
> using a lot of connections (close to 200K connections), which is why I
> increased the open files limits in linux as well as in squid.conf. And
> there are a lot of requests per second, when the server is running. I
> could understand that it needs a lot of memory for in-transit cache,
> but that memory should be later released  back to OS, once the
> requests load goes down. However, that is not the case, even days
> after there is no load on the server, it stays at 11GB of RAM and 5.5
> GB of swap used. The second I restart the squid process, everything
> goes to normal, memory is released. That is why I suspect there is
> some memory leak somewhere.
>
> The server is a Ubuntu 18.04 LTS VM (running on Hyper-V 2019 server),
> with 8 virtual processors and 12GB of RAM (although I can increase
> that if that is the problem, but I thought that without caching this
> would be more than enough).
>
> I am not using dynamic memory on Hyper-V (it is turned off for this VM).
>
> Best regards,
> Ivan
>
> On Wed, Aug 5, 2020 at 7:14 PM NgTech LTD  wrote:
> >
> > I think that mgr:info or another cache manager page contains the number
> > of requests per second, etc.
> > Also, netstat or ss -ntp might give some basic understanding of this
> > server's load.
> >
> > are you using dynamic memory on the hyper-v hypervisor?
> >
> > Eliezer
> >
> > On Wed, Aug 5, 2020, 19:59 Ivan Bulatovic 
> wrote:
> >>
> >> Hi Alex,
> >>
> >> Thank you very much for your help.
> >>
> >> I opened a bug on bugs.squid-cache.org
> >> (https://bugs.squid-cache.org/show_bug.cgi?id=5071).
> >>
> >> Best regards,
> >> Ivan
> >>
> >> On Mon, Aug 3, 2020 at 10:02 PM Alex Rousskov
> >>  wrote:
> >> >
> >> > On 8/3/20 9:11 AM, Ivan Bulatovic wrote:
> >> >
> >> > > Looks like squid has some serious memory issues when under heavy
> load
> >> > > (90 servers that crawl Internet sites).
> >> >
> >> > > Maximum Resident Size: 41500720 KB
> >> >
> >> > If the above (unreliable) report matches your observations using
> system
> >> > tools like "top", then it is indeed likely that your Squid is
> suffering
> >> > from a memory leak -- 41GB is usually too much for most non-caching
> >> > Squid instances.
> >> >
> >> > Identifying the leak may take some time, and I am not volunteering to
> do
> >> > the necessary legwork personally, but the Squid Project does fix
> >> > virtually all runtime leaks that we know about. If you want to speed
> up
> >> > the process, one of the best things you can do is to run Squid under
> >> > valgrind with a good suppression file. This requires building Squid
> with
> >> > a special ./configure option. Several testing iterations may be
> >> > necessary. If you are willing to do this, please file a bug report and
> >> > somebody will guide you through the steps.
> >> >
> >> >
> >> > > It just eats up memory, and
> >> > > does not free it up even days after it is being used (with no load
> on
> >> > > the proxy for days).
> >> >
> >> > Some memory retention is expected by default. See
> >> > http://www.squid-cache.org/Doc/config/memory_pools/
> >> >
> >> > Unfortunately, AFAICT, your mgr:mem output does not show any obvious
> >> > leaks -- all numbers are very small. If something is leaking a lot,
> then
> >> > it is probably not pooled by Squid.
> >> >
> >> >
> >> > HTH,
> >> >
> >> > Alex.
> >> >
> >> >
> >> > > On Mon, Jul 20, 2020 at 10:46 PM Ivan Bulatovic wrote:
> >> > >>
> >> > >> Hi all,
> >> > >>
> >> > >> I am trying to configure squid to run as a forward proxy with no
> >> > >> caching (cache deny all) with an option to choose the outgoing IP
> >> > >> address based on the usernam
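Alex's memory_pools pointer in this thread corresponds to a small squid.conf fragment. A minimal sketch of those directives (the 64 MB cap is an illustrative value, not a recommendation):

```
# Keep pooled-memory accounting on, but cap how much unused memory
# the pools may hold for later reuse (64 MB is only an example).
memory_pools on
memory_pools_limit 64 MB
```

Note this only bounds Squid's own pools; memory retained after a load spike by something outside the pools, as suspected in this thread, is a separate issue.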

[squid-users] NgTech REPO is up for now.

2020-12-23 Thread NgTech LTD
Hey,

The ngtech repo is up and running.
I cannot guarantee that network bursts will not interrupt the service
from time to time.

However The repo now is at:
http://ngtech.co.il/repo/

There is no HTTPS at all on this service, so if the browser forces
you to use HTTPS I recommend curl or wget.
If someone needs a copy of the repo for CentOS it can be found at:
http://linuxsoft.cern.ch/mirror/www1.ngtech.co.il/repo/centos/7/x86_64/

The new release includes the Fedora 33 RPMs for Squid 4.13.
I will try to release an Ansible usage example in case someone
wants to install Squid on Debian.
Maybe later I will add CentOS/RHEL/Fedora playbooks.

Eliezer

* Maybe I will add a story in the next release.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How do I rotate access.log?

2020-12-29 Thread NgTech LTD
Hey Roee,

On what OS version can it be tested? Also, is the package from the
distribution or self-compiled?

Eliezer

On Tue, Dec 29, 2020 at 5:36 PM roee klinger  wrote:
>
> Hey,
>
> I know there is plenty of information on this online but for some reason, 
> this feature is simply not working for me. I have set logfile_rotate to 10 
> like so:
>
> logfile_rotate 10
>
>
> However, when I run "squid -k rotate" only the cache.log file rotates. I am 
> using a custom log format and have also tried setting it like so according to 
> the documentation:
>
> logformat  %ts.%03tu %6tr %>a %>lp %Ss/%03>Hs % %mt
> access_log daemon:/var/log/squid/access.log logformat= rotate=10
>
>
> However, running "squid -k rotate" still does nothing for the access.log file.
> I have also checked the proxy user has the proper permissions but it's still 
> not working, any tips on what is going on and how to get this to work?
>
> Best regards,
> Roee Klinger
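The logformat and access_log lines quoted above appear to have lost their name token in the archiving. A working shape, with an illustrative format name and format codes, looks like this:

```
# Define a named format, reference it by name on the access_log line,
# and enable rotation both globally and per log ("mylog" is illustrative).
logfile_rotate 10
logformat mylog %ts.%03tu %6tr %>a %>lp %Ss/%03>Hs %<st %rm %ru
access_log daemon:/var/log/squid/access.log logformat=mylog rotate=10
```

After a `squid -k reconfigure`, `squid -k rotate` should then rotate access.log as well.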


[squid-users] Anyone has experience with Windows clients DNS timeout

2020-12-29 Thread NgTech LTD
I have seen this issue on Windows clients over the past.
Windows nslookup shows that the query has timed out after 2 seconds.
On Linux and xBSD I have researched this issue and have seen that
the DNS server is doing a recursive lookup, which sometimes takes
from 7 to 10+ seconds.
When I pre-warm the DNS cache and the results are cached, it takes
less than 500 ms for a response to be on the client side and then
everything works fine.

I understand that the Windows DNS client times out.
When using a forward proxy with Squid or any other, it works as expected
since the DNS resolution is done on the proxy server.
However, for this issue I believe that this timeout should be increased
instead of moving to DNS over HTTPS.

I would like to hear if anyone has any resolution for this issue on
the Windows clients side.

Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
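On the proxy side, the relevant timeout and caching behaviour is tunable in squid.conf. A hedged sketch (all values illustrative) for coping with slow recursive lookups:

```
# Give slow recursive lookups more total time, and keep good answers
# cached longer so repeat lookups stay fast (values are examples).
dns_timeout 1 minute
positive_dns_ttl 24 hours
negative_dns_ttl 30 seconds
```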


Re: [squid-users] Anyone has experience with Windows clients DNS timeout

2020-12-30 Thread NgTech LTD
> >
> > Klaus Westkamp
> >
> >
> > On 30/12/2020 09:07, L.P.H. van Belle wrote:
> > > Hi Eliezer,
> > >
> > > Sorry, I'm not fully agreeing with Amos here.
> > >
> > > If your DNS is taking 7-10 seconds, I would investigate why the DNS is
> > that slow.
> > > Something is off, that simple.
> > >
> > >
> > > A small example of my DNS resolving to the Internet and my LAN DNS servers.
> > >
> > > time dig a www.google.nl @8.8.8.8   (Internet DNS)
> > > real    0m0.115s
> > >
> > > real    0m0.031s   (LAN DNS, lookup 1)
> > > real    0m0.016s   (LAN DNS, lookup 2, the cached one)
> > >
> > > So, in my opinion a 7-10 second timeout is really off.
> > > In the last we..
> > >
> > > Is the LAN DNS set as an authoritative server?
> > > Are the PCs correctly registering in the DNS with their primary DNS
> > domain?
> > >
> > > In resolv.conf, make sure the primary DNS domain is first:
> > > primary.dnsdomain.tld = output of $(hostname -d)
> > >
> > > search primary.dnsdomain.tld  (optional extra, other.dnsdomain.tld
> > dnsdomain.tld )
> > > nameserver 192.168.1.1
> > > nameserver 192.168.1.2
> > > nameserver 192.168.1.3
> > > nameserver 192.168.1.4
> > > nameserver 192.168.1.5
> > >
> > > # these are the options to look into also (in this order)
> > > options edns0   # allow 4096-byte packets.
> > > options rotate  # if you have more than one DNS server this can
> > help.
> > > options timeout:3
> > > options no-check-names  # don't check for invalid characters such as
> > underscore (_), non-ASCII, or control characters.
> > >
> > >
> > > Check the following:
> > > - The DNS server tries to query the Internet first;
> > > a fix might be resolving (search line) in /etc/resolv.conf.
> > > - IPv4/IPv6: try disabling IPv6 on the Windows clients.
> > > - DNS is non-authoritative where it might need to be set
> > authoritative.
> > > - The DNS server is missing forwarding to the authoritative server.
> > > - Routing and routing orders.
> > > - Are big EDNS (4096-byte) packets allowed?
> > > - Is the firewall allowing UDP and TCP packets on port 53?
> > >
> > > I run 3 Samba-AD DNS servers with Bind9_DLZ.
> > > My proxy runs a Bind9 caching and forwarding setup.
> > > The primary DNS domain is forwarded to the Samba-AD DNS servers.
> > > These are the authoritative servers.
> > >
> > > My slowest query is on average 0.1-0.2 sec (on the Samba DNS);
> > > I checked the last year in my monitoring.
> > > Normal is 0.01-0.03 sec.
> > >
> > > If there are problems in Samba these days, in 80% of all cases it's a
> > resolving setup problem.
> > >
> > > I hope this gave you some ideas.
> > >
> > >
> > > Greetz,
> > >
> > > Louis
> > >


Re: [squid-users] Connection occasionally not ending after adapting response with ICAP

2020-12-30 Thread NgTech LTD
An ICAP tcpdump pcap file might help to understand something.

Eliezer

On Wed, Dec 30, 2020, 16:10 Moti Berger  wrote:

> I have a setup with squid 5.0.4 with ICAP server handling responses. The
> ICAP server redirects based on some parameters of the response.
>
> To test this setup, I use cURL like this:
>
>> curl -k -s --proxy localhost:8000 -o /dev/null -v 
>
>
> Now, for some URLs, cURL hangs and for others it exits after receiving the
> 307 response.
> When it hangs, I see this as the output of cURL (I removed what seemed to
> me as non-related logs from the beginning):
>
>> } [5 bytes data]
>> * TLSv1.3 (OUT), TLS Unknown, Unknown (23):
>> } [1 bytes data]
>> > GET / HTTP/1.1
>> > Host: www.one.co.il
>> > User-Agent: curl/7.58.0
>> > Accept: */*
>> >
>> { [5 bytes data]
>> * TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
>> { [1 bytes data]
>> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
>> { [217 bytes data]
>> * TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
>> { [1 bytes data]
>> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
>> { [217 bytes data]
>> * TLSv1.3 (IN), TLS Unknown, Unknown (23):
>> { [1 bytes data]
>> < HTTP/1.1 307 Temporary Redirect
>> < Location: 
>> < Cache-Control: no-cache, no-store, must-revalidate
>> < Pragma: no-cache
>> < Date: Wed, 30 Dec 2020 13:58:51 GMT
>> < X-Cache: MISS from a0e59ea22cf8
>> < X-Cache-Lookup: MISS from a0e59ea22cf8:3128
>> < Transfer-Encoding: chunked
>> < Via: 1.1 a0e59ea22cf8 (squid/5.0.4)
>> < Connection: keep-alive
>> <
>
>
> Using tcpdump I didn't see squid send any other ICAP requests (besides
> OPTIONS which the ICAP server replied to properly).
> For some of the URLs where it hangs, I saw that running the same cURL
> command with the --compressed switch, makes cURL exit as expected:
>
>> } [5 bytes data]
>> * TLSv1.3 (OUT), TLS Unknown, Unknown (23):
>> } [1 bytes data]
>> > GET / HTTP/1.1
>> > Host: www.one.co.il
>> > User-Agent: curl/7.58.0
>> > Accept: */*
>> > Accept-Encoding: deflate, gzip
>> >
>> { [5 bytes data]
>> * TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
>> { [1 bytes data]
>> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
>> { [217 bytes data]
>> * TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
>> { [1 bytes data]
>> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
>> { [217 bytes data]
>> * TLSv1.3 (IN), TLS Unknown, Unknown (23):
>> { [1 bytes data]
>> < HTTP/1.1 307 Temporary Redirect
>> < Location: 
>> < Cache-Control: no-cache, no-store, must-revalidate
>> < Pragma: no-cache
>> < Date: Wed, 30 Dec 2020 14:00:31 GMT
>> < X-Cache: MISS from a0e59ea22cf8
>> < X-Cache-Lookup: MISS from a0e59ea22cf8:3128
>> < Transfer-Encoding: chunked
>> < Via: 1.1 a0e59ea22cf8 (squid/5.0.4)
>> < Connection: keep-alive
>> <
>> { [5 bytes data]
>> * TLSv1.3 (IN), TLS Unknown, Unknown (23):
>> { [1 bytes data]
>> * Connection #0 to host localhost left intact
>
>
> When I skip the adaptation in REQMOD, I get the page and the connection is
> terminated.
> When the ICAP works in REQMOD and redirects on the same URLs, everything
> seems to work properly.
>
> What could make squid not to terminate the connection? Could it be that it
> still holds connection with the HTTP server?
>
> Thanks


[squid-users] YouTube and other search engines strict enforcement in Squid?

2020-12-30 Thread NgTech LTD
I have seen this article at:
https://support.opendns.com/hc/en-us/articles/227986807-How-to-Enforcing-Google-SafeSearch-YouTube-and-Bing

Which offers a solution at the DNS resolution level, mapping one domain to
another using CNAME records.
The basic example would be:
www.youtube.com
TO
restrict.youtube.com

The restrict.youtube.com host/service requires the destinations to be
legal, i.e. not the CNAME itself.
This is a basic DNS-based solution, and I was wondering how to implement it.
For now the real solution I have found was to install a local BIND
which forwards the queries to an upstream caching service.
In the local BIND we can define these CNAMEs using RPZ, as in the example at:
https://www.cwssoft.com/?p=1577

I have not found another solution other than using a hosts file on the
Squid host and updating it accordingly.
I have found a tiny update script at:
https://discourse.pi-hole.net/t/use-dns-to-force-youtube-into-restricted-mode-and-pi-hole/1996/7

Which seems to do the job, and I am pretty sure it can work well enough for many.

Are there any other known ways to do this?

Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
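As a concrete instance of the hosts-file approach mentioned above, the idea is to pin the YouTube hostnames to whatever restrict.youtube.com resolves to. A sketch (the address below is what restrict.youtube.com resolved to at the time; Google can change it, which is why the update script is needed):

```
# /etc/hosts on the Squid host -- resolve restrict.youtube.com yourself
# and substitute the address before using this.
216.239.38.120  www.youtube.com m.youtube.com youtubei.googleapis.com
```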


Re: [squid-users] PCI Certification compliance lists

2021-01-03 Thread NgTech LTD
I'm trying to figure out what can be done with 5.0.4.
I believe there is either a bug, or a misunderstanding on my part of what
should be done and how things should be configured.

The first thing is to be able to bump all and add exceptions.
The second would be to bump specific sites.
As I noticed in the past, it seems that for a good splice and/or bump I need
the any-of ACL to be used.

It's a bit different than the way Squid ACLs work in general.

Eliezer
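For the first goal ("bump all and add exceptions"), a minimal SslBump sketch for squid 4/5 that does not need any-of, assuming the exception sites (domains below are placeholders) can be matched by TLS SNI:

```
# Peek at the TLS ClientHello first, splice (tunnel untouched) the
# exception list, and bump everything else.
acl step1 at_step SslBump1
acl nobump ssl::server_name .examplebank.com .example-payments.example
ssl_bump peek step1
ssl_bump splice nobump
ssl_bump bump all
```

The second goal is the same recipe with the ACL roles swapped: bump the listed sites and splice the rest.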

On Sun, Jan 3, 2021, 17:06 Amos Jeffries  wrote:

> On 4/01/21 3:12 am, ngtech1ltd wrote:
> > I am looking for domains lists that can be used for squid to be PCI
> > Certified.
> >
> > I have read this article:
> > https://www.imperva.com/learn/data-security/pci-dss-certification/
> >
> > And couple others to try and understand what might a Squid proxy ssl-bump
> > exception rules should contain.
> > So technically we need:
> > - Banks
> > - Health care
> > - Credit Cards(Visa, Mastercard, others)
> > - Payments sites
> > - Antivirus(updates and portals)
> > - OS and software Updates signatures(ASC, MD5, SHAx etc..)
> >
> > * https://support.kaspersky.com/common/start/6105
> > *
> >
> https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-e
> > set-product-with-a-third-party-firewall
> > *
> >
> https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s
> >
> 55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fc
> >
> p&articleId=TS100291&_afrLoop=641093247174514&leftWidth=0%25&showFooter=fals
> >
> e&showHeader=false&rightWidth=0%25¢erWidth=100%25#!%40%40%3FshowFooter%3
> >
> Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2
> >
> 525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3
> > D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9
> >
> >
> > If someone has the documents which instructs what domains to not inspect
> it
> > would also help a lot.
>
>
>
> Are you trying to get Squid certified as a PCI WAF agent?
>   or as security infrastructure agent?
>   or as general networking agent?
>
> These roles matter in regards to the PCI requirement to detect malicious
> transactions.
>
>
> Amos


Re: [squid-users] cache_peer selection based on username

2021-01-10 Thread NgTech LTD
Squid provides ACLs for the login or username:
http://www.squid-cache.org/Doc/config/acl/

Maybe ident as well.
You will need to include a usernames file which contains them.

I believe a note from a helper would do that better.

Eliezer
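A sketch of the ACL-only variant (usernames and peer addresses illustrative), assuming an auth_param authentication scheme is already configured, so no external helper is needed just for the routing:

```
# Route each authenticated user to a dedicated parent proxy.
acl user_a proxy_auth alice
acl user_b proxy_auth bob
never_direct allow all
cache_peer 192.168.8.1 parent 101 0 proxy-only name=proxy1
cache_peer 192.168.8.2 parent 102 0 proxy-only name=proxy2
cache_peer_access proxy1 allow user_a
cache_peer_access proxy1 deny all
cache_peer_access proxy2 allow user_b
cache_peer_access proxy2 deny all
```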

On Sun, Jan 10, 2021, 17:33 roee klinger  wrote:

> Hey,
>
> I am trying to figure out the best way to select cache peers based on the
> client username, I have read extensively but I cannot figure out the best
> way to do it.
>
> so far I have:
>
> external_acl_type user_whitelist_external children-max=20 ttl=300 %>lp %>a
> script.sh
> acl whitelisted_users external user_whitelist_external
> http_access allow whitelisted_users
>
>
> and:
>
> nonhierarchical_direct off
> never_direct allow all
> cache_peer 192.168.8.1 parent 101 0 proxy-only default name=proxy1
> cache_peer_access proxy1 allow whitelisted_users
> cache_peer_access proxy0.2 deny all
> cache_peer 192.168.8.2 parent 102 0 proxy-only default name=proxy2
> cache_peer_access proxy2 allow whitelisted_users
> cache_peer_access proxy0.3 deny all
>
> ideally, script.sh checks if the request is authinticated and if it is, it
> selects the cache peer to use, is there some kind of way to achieve this
> with "Defined keywords" to select which cache peer to use or am I looking
> at this the wrong way?
>
> What would be the best way to accomplish this?


Re: [squid-users] Microsoft store issues with ssl-bump

2021-01-12 Thread NgTech LTD
I'm saying that my config might be wrong, and I will send you a full config
dump which can show you the whole setup, like most vendors have.
I have upgraded Squid in production.

Let me verify first before shouting "bug".
Eliezer

On Tue, Jan 12, 2021, 12:15 Amos Jeffries  wrote:

> On 12/01/21 10:15 pm, Eliezer Croitoru wrote:
> > This works in another proxy which looks at the SNI only without any bump
> > involved.
>
> So you are saying you find a bug with Squid?
>or .. ??
>
>
> Amos


Re: [squid-users] Making destination IP available in ICAP REQMOD request

2021-01-17 Thread NgTech LTD
Hey Moti,

It is a good assumption that the same caching DNS server (not 8.8.8.8 or
1.1.1.1) that the client uses will return the relevant destination IP for
the domain.
It's possible to do such a query in the ICAP service with a low timeout
(2-3 seconds).
Can this be good enough for your use case?

Eliezer
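On the Squid side, the destination IP can be attached to the ICAP request with adaptation_meta (the header name here is illustrative). Note the limitation discussed in the quoted message: %<a is only known once Squid has picked a server, so it is typically empty at REQMOD and populated at RESPMOD:

```
# Attach the server (destination) address to outgoing ICAP requests.
adaptation_meta X-Server-IP "%<a" all
```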

On Mon, Jan 18, 2021, 00:28 Moti Berger  wrote:

> Hi
>
> My goal is to obtain the destination IP when sending an HTTP request for
> my ICAP server so it would be able to decide the kind of adaptation
> required based on it.
>
> Looking at squid (5.0.4) code I discovered the following:
>
> It seems that "everything" starts at ClientRequestContext.
> I've noticed that noteAdaptationAclCheckDone calls startAdaptation which
> calls more methods, eventually getting
> to Adaptation::Icap::ModXact::makeRequestHeaders where it iterates over
> headers defined by the adaptation_meta configurations in squid.conf.
> For each, it calls the 'match' method where it tries to format (and
> assemble) it. There it seems that the value is taken from an AccessLogEntry:
>
>>case LFT_SERVER_IP_ADDRESS:
>> if (al->hier.tcpServer)
>> out = al->hier.tcpServer->remote.toStr(tmp, sizeof(tmp));
>> break;
>>
>
> So the AccessLogEntry object seems to be the key.
> At REQMOD time, I don't get the value of the destination IP.
> Looking further I found that the DNS resolving happens when it's decided
> that the request should be forwarded to the destination server.
>
> So I tracked the flow and it seems to start from FwdState::Start method
> which gets an AccessLogEntryPointer.
> Then it calls methods that eventually do the DNS resolving
> (Dns::nbgethostbyname) and ending in (FwdState::connectStart) which have
> the IP to connect to.
> So it seems that this flow will populate the AccessLogEntry.
> This seems right since during RESPMOD, the same code above
> (in Adaptation::Icap::ModXact::makeRequestHeaders) is running and this time
> the `match` method eventually gets the destination IP.
> I added logs that prints the AccessLogEntryPointer and in the FwdState.cc
> the log says address 0x5592ab521e30*12 and in the Notes.cc the log says
> address 0x5592ab521e30*25.
>
> Two things that I haven't found yet:
> 1. The place where the AccessLogEntry is populated
> 2. Where after the adaptation, the forwarding to the destination server
> occured (assuming it should be forwarded)
>
> I couldn't figure out a way to start the DNS resolving just before
> the startAdaptation starts as it requires all sorts of objects that seem to
> be unavailable there.
> I wonder if you can help me to find a way to do it.
>
> Thanks,
> Moti
>


Re: [squid-users] Adding headers in ICAP server with no preview

2021-01-18 Thread NgTech LTD
I assume that a null body is based on the logic that the ICAP client knows
the progress and the ICAP details well enough to only modify the headers.
It should be tested.
I tried to test it, but I am too busy to test it right now.
Eliezer

On Mon, Jan 18, 2021, 13:46 Moti Berger  wrote:

> Hi
>
> If the ICAP server sets 'Preview: 0' in the OPTIONS it means that when the
> ICAP client sends a request, it should not contain the body.
> This is the REQMOD request:
>
>> F..n...DREQMOD icap://censor-req.proxy:14590/request ICAP/1.0
>> Host: censor-req.proxy:14590
>> Date: Mon, 18 Jan 2021 11:34:54 GMT
>> Encapsulated: req-hdr=0, req-body=222
>> Preview: 0
>> Allow: 204, trailers
>> X-custom-header: data
>>
>> POST http://www.dst-server.com:2/v1/test HTTP/1.1
>> User-Agent: python-requests/2.25.1
>> Accept-Encoding: gzip, deflate
>> Accept: */*
>> Content-Length: 10
>> Content-Type: application/json
>> Host: www.dst-server.com:2
>>
>
> The ICAP 'Encapsulated' header has a req-body even though no 'body' should
> be in this request.
> I wonder why in this case the 'Encapsulated' header doesn't contain
> null-body.
> I could not find any reference to this case in the RFC3507.
> The ICAP server has no way to encapsulate the HTTP request body if it
> didn't get it.
>
> I want to avoid sending the body because the adaptation is body agnostic.
>
>
> On Sun, Jan 17, 2021 at 11:34 PM Alex Rousskov <
> rouss...@measurement-factory.com> wrote:
>
>> On 1/17/21 3:08 PM, Moti Berger wrote:
>> > What should the ICAP response look like?
>>
>> The vast majority off ICAP responses containing an HTTP POST message
>> will look like ICAP header + HTTP header + HTTP body. Please see RFC
>> 3507 and its errata for examples of and discussion about those three
>> components. It should help avoid guessing and developing by examples
>> (which usually leads to bugs, especially where ICAP is involved).
>>
>>
>> > What I do is to reply like this:
>> >
>> > (dI./M..ICAP/1.0 200 OK
>> > ISTag: "SjIzlRA4te41axxcDOoiSl6rBRg4ZK"
>> > Date: Sun, 17 Jan 2021 19:34:12 GMT
>> > Server: BaseICAP/1.0 Python/3.6.12
>> > Encapsulated: req-hdr=0, req-body=360
>> >
>> > POST http://www.dst-server.com:2/v1/test HTTP/1.1
>> > x-new-header: {"key": "value"}
>> > user-agent: python-requests/2.25.1
>> > accept-encoding: gzip, deflate
>> > accept: */*
>> > content-length: 16
>> > content-type: application/json
>> > host: www.dst-server.com:2 
>>
>>
>> FYI: The above incomplete ICAP response promises an HTTP request body,
>> both on the ICAP level (req-body) and on the HTTP level (content-length:
>> 16).
>>
>>
>> > As I said, I use 'Preview: 0' since I don't mind the body. The question
>> > is whether declaring the body starts at X (req-body=X) is OK even though
>> > I don't have a body to send?
>>
>> It is not OK not to send the body. Encapsulated:req-body does more than
>> declaring where the encapsulated headers end. It also promises an
>> embedded HTTP body after those headers. You must encapsulate the body if
>> the HTTP message should have one. You cannot adapt the header of an HTTP
>> message with a body without also sending the HTTP body (virgin or
>> adapted).
>>
>> Preview is pretty much irrelevant in this context -- the ICAP protocol
>> does not care how the ICAP service gets the HTTP body to include in the
>> ICAP response.
>>
>> There are unofficial ICAP extensions that make it possible to tell the
>> ICAP client to reuse the body it has buffered while adapting the header,
>> but you should get the baseline case working before bothering with those
>> extensions -- they are optimizations that are not applicable to some
>> transactions.
>>
>>
>> > I think having req-null=X is bad since it
>> > probably tells squid that I decided the adapted request should have no
>> > body, but that's only a guess.
>>
>> If you meant to say "null-body", then you guessed correctly -- null-body
>> means the adapted HTTP message has no body. That is not what you want to
>> say when adapting most HTTP POST messages.
>>
>>
>> HTH,
>>
>> Alex.
>>
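For reference, the shape Alex describes -- ICAP header, encapsulated HTTP header, then the HTTP body sent as ICAP chunks -- looks roughly like this (the ISTag and offset are illustrative; the req-body offset must equal the byte length of the encapsulated header block):

```
ICAP/1.0 200 OK
ISTag: "example-istag"
Encapsulated: req-hdr=0, req-body=105

POST http://www.dst-server.com:2/v1/test HTTP/1.1
Content-Length: 16
Content-Type: application/json

10
{"key": "value"}
0
```

The encapsulated body is always transferred with chunked encoding on the ICAP level (`10` is the hexadecimal chunk size, `0` the terminating chunk), regardless of how the HTTP message itself encodes it.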


Re: [squid-users] Originserver load balancing and health checks in Squid reverse proxy mode

2021-02-09 Thread NgTech LTD
Maybe it's AppArmor.
The pinger needs to have setuid permission as root;
it's a pinger and needs root privileges, as far as I remember.

Eliezer


On Tue, Feb 9, 2021, 17:03 Chris  wrote:

> Hi,
>
> thank you Amos, this is bringing me into the right direction.
>
> Now I know what I'll have to debug: the pinger.
>
> Cache.log shows:
>
> 2021/02/09 14:49:27| pinger: Initialising ICMP pinger ...
> 2021/02/09 14:49:27| pinger: ICMP socket opened.
> 2021/02/09 14:49:27| pinger: ICMPv6 socket opened
> 2021/02/09 14:49:27| Pinger exiting.
>
> and that last line "pinger exiting" looks like a problem here.
>
> Squid is used as a package from ubuntu bionic, it's configured with
> "--enable-icmp" as stated by squid -v.
>
> Now I explicitly wrote a "pinger_enable on" and the pinger_program path
> (in this case: "/usr/lib/squid/pinger" ) into the squid.conf  (as well
> as icmp_query on) and reconfigured but the cache.log still shows:
>
> "Pinger exiting"
>
> So I don't understand why the pinger is exiting. The pinger_program is
> owned by root and has 0755 execution rights. Normal ping commands do
> work and show the one originserver at ttl=53 and time=50 while the other
> is at ttl=56 and time=155 - so a RTT comparison for weighted-round-robin
> should work here.
>
> Any hints on how I can find out why the pinger is exiting? Right now I'm
> debuging with debug_options ALL,1 44,3 15,8 but don't see a reason why
> the pinger exits.
>
> The Originservers are defined by (with icp/htcp disabled):
>
> cache_peer [ipv4_address_srv1] parent [http_port] 0 no-digest
> no-netdb-exchange weighted-round-robin originserver name=srv1
> forceddomain=[domainname]
>
> cache_peer [ipv4_address_srv2] parent [http_port] 0 no-digest
> no-netdb-exchange weighted-round-robin originserver name=srv2
> forceddomain=[domainname]
>
>
> Thank you for your help,
>
> Chris
>
>
>
>
>
> On 09.02.21 04:23, Amos Jeffries wrote:
> > On 9/02/21 3:40 am, Chris wrote:
> >> Hi all,
> >>
> >> I'm trying to figure out the best way to use squid (version 3.5.27)
> >> in reverse proxy mode in regard to originserver health checks and
> >> load balancing.
> >>
> >> So far I had been using the round-robin originserver cache peer
> >> selection algorithm while using weight to favor originservers with
> >> closer proximity/lower latency.
> >>
> >
> > Ok.
> >
> >
> >> The problem: if one cache_peer is dead it takes ages for squid to
> >> choose the second originserver. It does look as if (e.g. if one
> >> originserver has a weight of 32, the other of 2) squid tries the dead
> >> server several times before accessing the other one.
> >>
> >
> > The DEAD check by default requires 10 failures in a row to trigger.
> > This is configurable with the connect-fail-limit=N option.
> >
> >
> >> Now instead of using round-robin plus weight it would be best to use
> >> weighted-round-robin. But as I understand it, this wouldn't work with
> >> originserver if (as it's normally the case) the originserver won't
> >> handle icp or htcp requests. Did I miss sth. here? Would
> >> background-ping work?
> >
> > Well, kind of.
> >
> > ICP/HTCP is just a protocol. Most origin servers do not support them,
> > but some do. Especially if the server is not a true origin but a
> > reverse-proxy.
> >
> >
> >>
> >> I tried weighted-round-robin and background-ping on originservers but
> >> got only an evenly distributed request handling even if ones
> >> originservers rtt would be less than half of the others. But then
> >> again, those originservers won't handle icp requests.
> >
> > RTT is retrieved from ICMP data primarily. Check your Squid is built
> > with --enable-icmp, the pinger helper is operational, and that ICMP
> > Echo traffic is working on all possible network routes between your
> > Squid and the peer server(s).
> >
> >
> >>
> >> So what's the best solution to a) choose the originserver with the
> >> lowest rtt and b) still have a fast switch if one of the
> >> originservers switches into dead state?
> >
> >
> > Check whether the RTT is actually being measured properly by Squid
> > (debug_options ALL,1 44,3 15,8). If the peers are fast enough
> > responding or close enough in the network RTT could come out as a 0
> > value or some N value equal for both peer. ie. neither being "closer".
> >
> >
> > Amos
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid ssl-bump with icap returns 503

2021-03-04 Thread NgTech LTD
Would it be possible to dump some ICAP traffic so we can understand what
might be causing this issue, if anything?

Eliezer

On Thu, 4 Mar 2021, 12:36, Niels Hofmans wrote:

> Hi guys,
>
> I’m asking here but since I’m not too comfortable with a mailing list,
> it’s also on serverfault.com:
> https://serverfault.com/questions/1055663/squid-icap-not-working-if-using-tls-interception-but-both-work-separately
>
> I have an odd issue that squid will return a HTTP 503 when I try to do
> ICAP for an ssl-bumped HTTPS website. HTTP website works fine.
> Any ideas?
>
> Config:
>
> visible_hostname proxy
> forwarded_for delete
> via off
> httpd_suppress_version_string on
> logfile_rotate 0
> cache_log stdio:/dev/stdout
> access_log stdio:/dev/stdout
> cache_store_log stdio:/dev/stdout
> dns_v4_first on
> cache_dir ufs /cache 100 16 256
> pid_filename /cache/squid.pid
> mime_table /usr/share/squid/mime.conf
> http_port 0.0.0.0:3128
> https_port 0.0.0.0:3129 \
> generate-host-certificates=on dynamic_cert_mem_cache_size=10MB \
> tls-cert=/etc/squid/ssl/squid.crt tls-key=/etc/squid/ssl/squid.key
> ssl_bump peek all
> ssl_bump bump all
> quick_abort_min 0
> quick_abort_max 0
> quick_abort_pct 95
> pinger_enable off
> icap_enable on
> icap_service_failure_limit -1
> icap_service service_req reqmod_precache bypass=0 icap://10.10.0.119:1344/
> icap_preview_enable on
> adaptation_access service_req allow all
> cache_mem 512 mb
> dns_nameservers 1.1.1.1 1.0.0.1
> cache_effective_user proxy
> sslcrtd_program /usr/lib/squid/security_file_certgen -s /cache/ssl_db -M
> 4MB
> sslcrtd_children 8 startup=1 idle=1
> sslproxy_cert_error allow all
> http_access allow all
>
> Log line HTTPS when it doesn’t work:
> 1614853306.542 40 172.17.0.1 NONE/503 0 CONNECT //ironpeak.be:443 -
> HIER_NONE/- -
>
> < HTTP/1.1 503 Service Unavailable
> < Server: squid
> < Mime-Version: 1.0
> < Date: Thu, 04 Mar 2021 10:36:05 GMT
> < Content-Type: text/html;charset=utf-8
> < Content-Length: 1849
> < X-Squid-Error: ERR_DNS_FAIL 0
>
>
> Log line HTTP when it does work:
>   -1 1614851916 text/plain 60/60 GET
> http://ironpeak.be/blog/big-sur-t2rminator/
> 1614853320.743 SWAPOUT 00 0002 F7A390D89822E9BA831C47E1B4CDD0A8  301
> 1614853320-1 1614853320 text/plain 60/60 GET
> http://ironpeak.be/blog/big-sur-t2rminator/
> 1614853320.748302 172.17.0.1 TCP_REFRESH_MODIFIED/301 1647 GET
> http://ironpeak.be/blog/big-sur-t2rminator/ - HIER_DIRECT/104.21.60.47
> text/plain
>
> Example CLI command used:
> ALL_PROXY="https://127.0.0.1:3129"; curl -vvv --proxy-insecure
> http://ironpeak.be/
>
> Command used to start squid:
>
> exec /usr/sbin/squid -f /etc/squid/squid.conf --foreground -YCd 1
>
> Package info:
> Package: squid-openssl
> Version: 4.13-5
>
> Many thanks!
> Regards,
> Niels Hofmans
>
> SITE   https://ironpeak.be
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid full request logging

2021-03-04 Thread NgTech LTD
Hey Niels,

Take a peek at:
https://github.com/andybalholm/redwood

I am using it in production; it was written because of Squid's limitations.
Squid is great, but take a peek and see how it works for you.
I have 2 servers in an HA cluster and it works great.

An example I wrote to filter YouTube traffic is at:
https://github.com/elico/yt-classification-service-example

Let me know if it helps you or gives you any direction.

On Thu, 4 Mar 2021, 23:33, Niels Hofmans wrote:

> Hi Alex,
>
> Thanks for the feedback. Although I am not proficient in C for writing an
> ecap service, is there some binding available online for Go?
> This was the reason I originally opted for an ICAP service since I can
> abstract Go behind the HTTP ICAP layer.
> Now I understand this has its limitations, but AFAIK a preview cap at
> 100kb would be sufficient per request.
> But this will slow down my current setup greatly, as I’m currently sending
> -only- the headers.
>
> Which would you prefer: a) using Go for the eCAP adapter, or b) using two
> ICAP services?
> One would validate the headers and return OK or NOT (bypass=0), while the
> other only pushes the 1kb request/response to a queue.
> Ideally those two would be contacted simultaneously while only the first
> one is blocking.
> ...just thinking aloud though.
>
> Regards,
> Niels Hofmans
>
> SITE   https://ironpeak.be
> BTW   BE0694785660
> BANK BE76068909740795
>
> On 4 Mar 2021, at 22:23, Alex Rousskov 
> wrote:
>
> On 3/4/21 2:52 PM, Niels Hofmans wrote:
>
> is it possible to do full request/response logging?
>
>
> Squid can log HTTP headers with %>h and %
> Squid cannot log HTTP message bodies.
>
>
> I do not see the appropriate log_format directive in the docs.
> I was hoping not having to do this in my ICAP service since this slows
> down approval of the HTTP request. (Empty preview v.s. a request capped
> at 1MB that needs to be sent over every time)
>
>
> FWIW, an ICAP or eCAP service can start responding to the request
> _before_ the service receives the entire HTTP message body. To get
> things going, all the service needs is HTTP headers (and even that is,
> technically, optional in some cases).
>
> Using an adaptation service is still an overhead, of course, but, very
> few legitimate Squid use cases involve logging message bodies, so there
> is no built-in mechanism optimized for that specific rare purpose
> (yet?). The fastest option available today is probably a dedicated eCAP
> service that refuses to adapt the message but continues to receive (and
> log) the message body.
>
>
> HTH,
>
> Alex.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [SPAM] Squid stops serving requests after squid -k reconfigure

2021-03-30 Thread NgTech LTD
Hey Acid,

First, you should try the latest 4.x release.
A reload of 50 domains every 10 seconds doesn't make sense.
I don't understand the config and the setup.
For 50 sites you just need a basic script; even one in bash with grep will
work for you.
I wrote an example in Ruby a while ago; I will try to find it in the next
week.
Maybe the server is overloaded.

I will try to give you an example later on.
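Since he mentions grep, here is a minimal sketch of the idea (the list path and domains are hypothetical, and this only does exact matching of dstdomain-style entries, not Squid's subdomain semantics):

```shell
# Hypothetical sketch: grep-based lookup against a dstdomain-style list.
# Assumes /tmp/blocked.list holds one ".domain" entry per line.
printf '.example.com\n.ads.example.net\n' > /tmp/blocked.list

check() {
  # Exact fixed-string match on the ".domain" form used by dstdomain ACLs
  if grep -qxF ".$1" /tmp/blocked.list; then echo BLOCK; else echo OK; fi
}

check example.com        # prints BLOCK
check squid-cache.org    # prints OK
```

An external_acl_type helper could wrap the same check, returning OK/ERR per lookup instead of reloading Squid.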

Eliezer


On Tue, 30 Mar 2021, 20:41, acidflash acidflash  wrote:

>
> Hi Eliezer,
>
> I hope you're doing ok. Thanks for the reply. Yeah, currently what I am doing
> is:
>
> include /etc/squid/blockedsites.list
> and adding the ACL's and the denies in the list file. What version do you
> recommend I upgrade to, and is this a known issue? The list is actually
> pretty small, probably no more than 50 sites or so, and that's split across
> 4 or 5 groups (different ACL's). I'll look into ufdbguard and the other
> projects as well, does this sound familiar though? If you think that the
> best path forward is to alleviate the burden off of squid to some external
> tool, I could probably think up a few hacks for that, but would
> obviously prefer to keep it all within squid. Is this occurrence common with
> squid -k reconfigure and dstdomain matching? Thanks for your time. Stay
> safe.
>
> *Sent:* Sunday, March 28, 2021 at 4:42 AM
> *From:* "Eliezer Croitoru" 
> *To:* squid-users@lists.squid-cache.org
> *Subject:* Re: [squid-users] [SPAM] Squid stops serving requests after
> squid -k reconfigure
>
> Hey Acid,
>
>
>
> Haven’t seen you here for a very long time.
>
> The first thing is to upgrade squid if possible…
>
>
>
> It’s better that you don't use squid -k reconfigure for big blacklists.
>
> Instead you should use some external software to match the blacklists.
>
> The most recommended software these days is ufdbguard.
>
> Depending on the size of your blacklist, you might need to find the right
> solution.
>
> The best solution would be to store the list in ram somehow.
>
> Have you tried some kind of rbl server?
>
>
>
> At the time I wrote some code for this, and some of it was merged into:
>
> https://github.com/looterz/grimd
>
>
>
> It has a reload url so you can update the files on disk and send a reload.
>
>
>
> Another service I am using is:
>
> https://github.com/andybalholm/redwood
>
>
>
> Which has a “Classification Service” function.
>
> It’s pretty easy to write a json http client that can run queries against
> this classification service.
>
>
>
> Also, you'd better use a file in the dstdomain ACL, i.e.:
>
> acl Blacklist dstdomain "/var/blacklists/xyx.list"
>
> http_access deny Blacklist
>
>
>
> and inside the xyx.list file just add lines of domains like
>
> .blacklisted-domain.com
>
> .example.com
>
>
>
> Etc..
>
>
>
>
>
> All The Bests,
>
> Eliezer
>
>
>
> 
>
> Eliezer Croitoru
>
> Tech Support
>
> Mobile: +972-5-28704261
>
> Email: ngtech1...@gmail.com
>
> Zoom: Coming soon
>
>
>
>
>
> *From:* squid-users  *On
> Behalf Of *acidflash acidflash
> *Sent:* Saturday, March 27, 2021 10:55 AM
> *To:* squid-users@lists.squid-cache.org
> *Subject:* [SPAM] [squid-users] Squid stops serving requests after squid
> -k reconfigure
>
>
>
> I have gone through the forums, and I haven't found an answer to the
> question, although it has been asked more than once.
>
> I am running squid 3.5.X on Centos 7, the compile options are:
> "configure options:  '--build=x86_64-redhat-linux-gnu'
> '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr'
> '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
> '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
> '--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
> '--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
> '--infodir=/usr/share/info' '--disable-strict-error-checking'
> '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
> '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
> '--with-logdir=$(localstatedir)/log/squid'
> '--with-pidfile=$(localstatedir)/run/squid.pid'
> '--disable-dependency-tracking' '--enable-eui'
> '--enable-follow-x-forwarded-for' '--enable-auth'
> '--enable-auth-basic=DB,LDAP,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,SMB_LM,getpwnam'
> '--enable-auth-ntlm=smb_lm,fake'
> '--enable-auth-digest=file,LDAP,eDirectory'
> '--enable-auth-negotiate=kerberos'
> '--enable-external-acl-helpers=file_userip,LDAP_group,time_quota,session,unix_group,wbinfo_group,kerberos_ldap_group'
> '--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
> '--enable-delay-pools' '--enable-epoll' '--enable-ident-lookups'
> '--enable-linux-netfilter' '--enable-removal-policies=heap,lru'
> '--enable-snmp' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,rock,ufs'
> '--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio'
> '--with-default-user=squid' '--with-dl' '--with-openssl' '--with-pthreads'
> '--disable-arch-native' 'build_alias=x86_64-redhat-linux-gnu'
> 'host_alias=x86_

Re: [squid-users] Cache Peers and traffic handling

2021-04-14 Thread NgTech LTD
It's not clear what the deciding factor is for selecting a specific cache
peer; this will affect any advice.
Is it based only on the username?

Eliezer

On Wed, 14 Apr 2021, 9:29, koshik moshik  wrote:

> Thank you! Yes, it works fine with 5 peers. So, what would be the best
> solution to handle 5000 peers?
>
> On Mon, Apr 12, 2021 at 6:03 PM Alex Rousskov <
> rouss...@measurement-factory.com> wrote:
>
>> On 4/10/21 5:03 PM, koshik moshik wrote:
>>
> >> > I am trying to run a Squid proxy server with about 5000 cache peers. I
>> > am running a dedicated server with 6 cores and 32GB RAM on Ubuntu 16.
>> >
>> >
>> > Could you tell me what else is needed / not needed in my squid.config? I
>> > am encountering a high CPU usage and would like to create a very
>> > efficient proxy server.
>>
>> IIRC, Squid code is not optimized for handling a large number of
>> cache_peers: Several cache peer selection steps involve linear searches.
>>
>> I do not know what exactly causes high CPU usage in your environment but
>> it could be those linear searches. You can test that (indirectly) by
>> decreasing the number of cache_peers from 5000 to, say, 5. That is a
>> weak test, of course, because other cache_peer-related overheads could
>> be to blame, but I would start there.
>>
>>
>> HTH,
>>
>> Alex.
>>
>>
>>
>> > Down below you can find my squid.config(I deleted the other cache_peer
>> > lines):
>> >
>> > ---
>> >
>> > http_port 3128
>> >
>> > dns_v4_first on
>> >
>> > acl SSL_ports port 1-65535
>> >
>> > acl Safe_ports port 1-65535
>> >
>> > acl CONNECT method CONNECT
>> >
>> > http_access deny !Safe_ports
>> >
>> > http_access deny CONNECT !SSL_ports
>> >
>> > auth_param basic program /usr/lib/squid/basic_ncsa_auth
>> /etc/squid/.htpasswd
>> >
>> > auth_param basic children 5
>> >
>> > auth_param basic realm Squid Basic Authentication
>> >
>> > auth_param basic credentialsttl 5 hours
>> >
>> > acl password proxy_auth REQUIRED
>> >
>> > http_access allow password
>> >
>> > #http_access deny all
>> >
>> > cache allow all
>> >
>> > never_direct allow all
>> >
>> > ident_access deny all
>> >
>> >
>> >
>> >
>> >
>> > cache_mem 1 GB
>> >
>> > maximum_object_size_in_memory 16 MB
>> >
>> >
>> >
>> >
>> >
>> > # Leave coredumps in the first cache dir
>> >
>> > coredump_dir /var/spool/squid
>> >
>> >
>> > #Rules to anonymize http headers
>> >
>> > forwarded_for off
>> >
>> > request_header_access Allow allow all
>> >
>> > request_header_access Authorization allow all
>> >
>> > request_header_access WWW-Authenticate allow all
>> >
>> > request_header_access Proxy-Authorization allow all
>> >
>> > request_header_access Proxy-Authenticate allow all
>> >
>> > request_header_access Cache-Control allow all
>> >
>> > request_header_access Content-Encoding allow all
>> >
>> > request_header_access Content-Length allow all
>> >
>> > request_header_access Content-Type allow all
>> >
>> > request_header_access Date allow all
>> >
>> > request_header_access Expires allow all
>> >
>> > request_header_access Host allow all
>> >
>> > request_header_access If-Modified-Since allow all
>> >
>> > request_header_access Last-Modified allow all
>> >
>> > request_header_access Location allow all
>> >
>> > request_header_access Pragma allow all
>> >
>> > request_header_access Accept allow all
>> >
>> > request_header_access Accept-Charset allow all
>> >
>> > request_header_access Accept-Encoding allow all
>> >
>> > request_header_access Accept-Language allow all
>> >
>> > request_header_access Content-Language allow all
>> >
>> > request_header_access Mime-Version allow all
>> >
>> > request_header_access Retry-After allow all
>> >
>> > request_header_access Title allow all
>> >
>> > request_header_access Connection allow all
>> >
>> > request_header_access Proxy-Connection allow all
>> >
>> > request_header_access User-Agent allow all
>> >
>> > request_header_access Cookie allow all
>> >
>> > request_header_access All deny all
>> >
>> >
>> >
>> >
>> >
>> > #
>> >
>> > # Add any of your own refresh_pattern entries above these.
>> >
>> > #
>> >
> >> > #refresh_pattern ^ftp: 1440 20% 10080
> >> >
> >> > #refresh_pattern ^gopher: 1440 0% 1440
> >> >
> >> > #refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> >> >
> >> > #refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
> >> >
> >> > #refresh_pattern . 0 20% 4320
>> >
>> >
>> > 
>> >
>> > acl me proxy_auth ye-1
>> >
> >> > cache_peer my.proxy.com parent 3128 0
>> > login=user1:password1 no-query name=a1
>> >
>> > cache_peer_access a1 allow me
>> >
>> > cache_peer_access a1 deny all
>> >
>> >
>> > ___
>> > squid-users mailing list
>> > squid-users@lists.squid-cache.org
>> > http://lists.squid-cache.org/listinfo/squid-users
>> >
>>

Re: [squid-users] Problems with whatsapp

2021-05-30 Thread NgTech LTD
Hey,

can you please share your squid.conf (excluding sensitive details) so we can
try to recommend a solution?

On Mon, 31 May 2021, 4:03, Alex Irmel Oviedo Solis  wrote:

> Good night, I'm having problems with a transparent squid proxy (with
> squidGuard enabled). Whatsapp's web client doesn't work, I tried to add an
> exclusion to SSL Bump following this manual
> https://wiki.squid-cache.org/ConfigExamples/Chat/Whatsapp, but still not
> working.
>
> Is there any way to probe or debug whether this exclusion is working?
>
> --
> *"A shared joy becomes a double joy; a shared sorrow, half a sorrow."*
> --> http://www.alexove.me 
> --> Mobile (Movistar): +51-959-625-001
> --> Follow me on Twitter: http://twitter.com/alexove_pe
> --> Profile: http://fedoraproject.org/wiki/user:alexove
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Fwd: Squid domain block feature is at DNS level ?

2021-07-19 Thread NgTech LTD
Hey,

Squid can intercept both HTTP (port 80) and HTTPS (port 443) traffic.
When it does, it can enforce policy at both the domain (DNS name) and URL
level.
For HTTPS specifically there are technical limitations in some cases.
Depending on the setup, you can test it and make sure it does what you
would expect.

Eliezer

On Tue, 20 Jul 2021, 8:46, Fennex wrote:

> Hello, I'm looking to block some pages. I tried to block domains with a
> feature of my router, but it only works at DNS level. I can bypass it
> using a secure DNS in a browser like Firefox or Brave which accepts this
> "new" feature. I want to know if Squid blocks the domains at DNS level,
> or if it does a DNS lookup and blocks by IP or something similar. Thank
> you.
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Fwd: Getting a squid clients list

2021-08-30 Thread NgTech LTD
Hey Uzee,

You can use squidclient from another machine to access this machine.
I do not remember the exact syntax off the top of my head, but Amos might
know if I am guessing right.
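If memory serves, squidclient takes a host and port, roughly like this (the host name is a placeholder, and the cachemgr ACLs in squid.conf must allow access from that client):

```
squidclient -h squid.example.net -p 3128 mgr:client_list
```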

Eliezer

On Mon, 30 Aug 2021, 14:44, U Zee wrote:

>  I know and sadly installing it is not possible either. Without
> going into the details too much, its a machine with a legacy environment
> where yum and some other tools are broken and people who knew about the
> configs are long gone.
>
> On Monday, August 30, 2021, 01:29:47 PM GMT+3, Amos Jeffries <
> squ...@treenet.co.nz> wrote:
>
>
> On 30/08/21 10:18 pm, U Zee wrote:
> > Thanks Amos. I don't think the clientdb features you mentioned are
> > enabled, I'm getting a command not found.
> > Also I don't see anything configured for logging in squid.conf (I don't
> > know if there is any other place for it)
> >
> > bash-3.00# ps -ef|grep squid
> > root  2467 1  0  2020 ?00:00:00 /usr/squid/sbin/squid
> > nobody2471  2467  0  2020 ?07:49:45 (squid)
> > root 28018 20110  0 13:12 pts/000:00:00 grep squid
> >
> > bash-3.00# /usr/squid/sbin/squid -v
> > Squid Cache: Version 2.6.STABLE13
> > configure options: '--prefix=/usr/squid' 'CC=gcc' 'CFLAGS=-O3 -g'
> >
> > bash-3.00# squidclient mgr:client_list | grep "Address"
> > bash: squidclient: command not found
>
> That is the squidclient tool missing on your machine.
>
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance issues

2021-08-31 Thread NgTech LTD
Hey Marcio,

You will need to add a systemd drop-in that extends the current service
file with a higher file-descriptor limit.

I cannot guide you through it right now, but I hope to be able to write
more later.

If anyone is able to help faster, go ahead.
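For reference, the usual shape of such a drop-in is something like this (the 65535 value is just an example, not a tuned recommendation):

```
# /etc/systemd/system/squid.service.d/limits.conf
[Service]
LimitNOFILE=65535
```

After running 'systemctl daemon-reload' and restarting the service, recent Squid versions can also be told explicitly with max_filedescriptors 65535 in squid.conf.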

Eliezer


On Tue, 31 Aug 2021, 18:05, Marcio B. wrote:

> Hi,
>
> I implemented a Squid server (version 4.6) on Debian and tested it for
> about 40 days. However, when I put it into production today, Internet
> browsing was extremely slow.
>
> In /var/log/syslog I'm getting the following messages:
>
> Aug 31 11:29:19 srvproxy squid[4041]: WARNING! Your cache is running out
> of filedescriptors
>
> Aug 31 11:29:35 srvproxy squid[4041]: WARNING! Your cache is running out
> of filedescriptors
>
> Aug 31 11:29:51 srvproxy squid[4041]: WARNING! Your cache is running out
> of filedescriptors
>
>
> I searched the Internet, but I only found very old information,
> referring to files that don't exist on my Squid server.
>
> The only thing I did was add the following value to the
> /etc/security/limits.conf file:
>
> * - nofile 65535
>
> however this did not solve.
>
> Does anyone have any idea how I could solve this problem?
>
> Regards,
>
> Márcio Bacci
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Multi-clients VPS - Authentication been shared.

2021-11-19 Thread NgTech LTD
I have created an example of how to match usernames to their
tcp_outgoing_address ports:

https://github.com/elico/vagrant-squid-outgoing-addresses

It's better to use a single port with different usernames (if possible).

Let me know what you think about the solution I am offering and whether the
example is understandable.
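The core of that example, as a hypothetical squid.conf sketch (user names, port names, and addresses are placeholders):

```
# Tie an authenticated user to "their" listening port and outgoing address
acl user_a proxy_auth usera
acl port_a myportname 3
tcp_outgoing_address 2001:db8::3d6 port_a

# Refuse user_a on any port other than their own
http_access deny user_a !port_a
http_access allow user_a port_a
```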

Eliezer


On Thu, 18 Nov 2021, 22:56, Graminsta  wrote:

> Tks for the answers.
>
> Considerations:
>
> 1- "Please note that you are allowing authenticated clients to send traffic
> to unsafe ports. For example, they can CONNECT to non-SSL ports. You may
> want to reorder the above rules if that is not what you want."
>
> ANSWER:
> Tks for the advice, I already had it changed.
>
>
> 2- "However, you should also ask yourself another question: "Why am I using
> multiple http_ports if all I care about is who uses which
> tcp_outgoing_address?". The listening ports have virtually nothing to do
> with tcp_outgoing_address..."
>
> ANSWER:
> Because I have to route each http_port to specific tcp_outgoing_address.
> I have several customers per VPS.
> Each one uses like 10 different ports to direct connections through
> different IPv6s.
>
> 3- "Use http_access to deny authenticated users connected to wrong ports."
>
> ANSWER:
> So, in this scenario, how can I prevent users in the same users list from
> accessing ports that do not belong to them?
> How do I deny this in http_access rules?
>
> Marcelo
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> behalf of squid-users-requ...@lists.squid-cache.org
> Sent: Tuesday, 16 November 2021 15:23
> To: squid-users@lists.squid-cache.org
> Subject: squid-users Digest, Vol 87, Issue 19
>
> Send squid-users mailing list submissions to
> squid-users@lists.squid-cache.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.squid-cache.org/listinfo/squid-users
> or, via email, send a message with subject or body 'help' to
> squid-users-requ...@lists.squid-cache.org
>
> You can reach the person managing the list at
> squid-users-ow...@lists.squid-cache.org
>
> When replying, please edit your Subject line so it is more specific than
> "Re: Contents of squid-users digest..."
>
>
> Today's Topics:
>
>1. Multi-clients VPS - Authentication been shared. (Graminsta)
>2. Re: Too many ERROR: Collapsed forwarding queue overflow for
>   kid2 at 1024 items (Lou?ansk? Luk??)
>3. Re: Stable Squid Version for production on Linux (David Touzeau)
>4. Re: Too many ERROR: Collapsed forwarding queue overflow for
>   kid2 at 1024 items (Alex Rousskov)
>5. Re: Multi-clients VPS - Authentication been shared.
>   (Alex Rousskov)
>
>
> --
>
> Message: 1
> Date: Tue, 16 Nov 2021 13:53:52 -0300
> From: "Graminsta" 
> To: 
> Subject: [squid-users] Multi-clients VPS - Authentication been shared.
> Message-ID: <005e01d7db0a$8ab2ab80$a0180280$@graminsta.com.br>
> Content-Type: text/plain; charset="us-ascii"
>
> Hello friends,
>
>
>
> I'm using these user authentication lines in squid.conf based on user's
> authentication list:
>
>
>
> auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/users
>
> auth_param basic children 5
>
> auth_param basic realm Squid proxy-caching web server
>
> auth_param basic credentialsttl 2 hours
>
> auth_param basic casesensitive off
>
>
>
> http_access allow localhost
>
> acl clientes proxy_auth REQUIRED
>
> http_access allow clientes
>
> http_access deny !Safe_ports
>
> http_access deny CONNECT !SSL_ports
>
> http_access allow localhost manager
>
> http_access deny manager
>
> http_access deny all
>
>
>
> #List of outgoings (all IPs are fake)
>
> http_port 181.111.11.111:4000 name=3
>
> acl ip3 myportname 3
>
> tcp_outgoing_address 2804:1934:2E1::3D6 ip3
>
>
>
> http_port 181.111.11.112:4001 name=4
>
> acl ip4 myportname 4
>
> tcp_outgoing_address 2804:1934:3a8::3D7 ip4
>
>
>
> The problem is that everyone who is in the users file is allowed to use every
> tcp_outgoing_address.
>
> If a smarter client scans for open IPs and ports, they will be able to find
> these outgoing addresses.
>
>
>
> How can I restrict each user to their own tcp_outgoing_address output?
>
>
>
> Tks.
>
> Marcelo
>
>
> --
>
> Message: 2
> Date: Tue, 16 Nov 2021 18:00:56 +0100
> From: Lou?ansk? Luk?? 
> To: "Alex Rousskov" , "Squid Users"
> 
> Subject: Re: [squid-users] Too many ERROR: Collapsed forwarding queue
> overflow for kid2 at 1024 items
> Message-ID:
> <72dd5d5cf661b5459dc08a060bf26b530108

Re: [squid-users] squid url_rewrite_program how to return a kind of TCP reset

2022-01-30 Thread NgTech LTD
You can try to use deny_info with a customized error page template, or an
ICAP service that will respond with a different page.
I think that redirecting to an external website is a good choice;
many commercial products use this technique.
If you want the traffic of that website to bypass Squid entirely, you can
also do that with a couple of iptables lines.
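For completeness, the deny_info variant can be done without going through the rewriter at all; a hypothetical squid.conf fragment (the domain is a placeholder):

```
# Reset tracker/ad connections instead of redirecting them
acl trackers dstdomain .tracker.example
http_access deny trackers
deny_info TCP_RESET trackers
```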

All The Bests,
Eliezer

On Mon, 31 Jan 2022, 2:21, David Touzeau wrote:

> Hi
>
> I have built my own squid url_rewrite_program
>
> protocol requires answering with
>
> # OK status=301|302 url=
> Or
> # OK rewrite-url="http://blablaba"; 
>
> In my case, especially for trackers/ads i would like to say to browsers:
> "Go away !" without need them to redirect.
>
> Sure i can use these methods but...
>
> 1) 127.0.0.1 - browser is in charge of getting out
> 
> OK status=302 url="http://127.0.0.1";  But this ain't
> clean or polished.
>
>
> 2) 127.0.0.1 - Squid is in charge of getting out
> 
> OK rewrite-url="http://127.0.0.1";  But this is even less
> clean or polished.
> Squid complains in the logs about an unreachable URL and pollutes the events
>
>
> 3) Redirect to a dummy page with a deny acl
> 
> OK status=302 url="http://dummy.com"; 
> acl dummy dstdomain dummy.com
> http_access deny dummy
> deny_info TCP_RESET dummy
>
> But it makes 2 connections to the squid for just stopping queries.
> It seems not really optimized.
>
> I note that, for several reasons, I cannot switch to an external_acl
>
> Is there a way / idea ?
>
>
> Regards
>
>
>
>
>
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Getting SSL Connection Reset Randomly but rarely

2022-01-30 Thread NgTech LTD
What version of Amazon Linux are you using: 1 or 2?
Version 2 has support for Squid 4.17.
There are a couple of possible causes for these resets, and not all of them
are on the Squid side.

Eliezer

On Thu, 27 Jan 2022, 5:59, Usama Mehboob  wrote:

> Hi, I have squid 3.5 running on Amazon Linux and it works fine for the most
> part, but sometimes I see logs from my webapp clients reporting connection
> timeouts etc. Upon checking the cache logs, I see these
> statements.
>
>
> 2022/01/23 03:10:01| Set Current Directory to /var/spool/squid
> 2022/01/23 03:10:01| storeDirWriteCleanLogs: Starting...
> 2022/01/23 03:10:01|   Finished.  Wrote 0 entries.
> 2022/01/23 03:10:01|   Took 0.00 seconds (  0.00 entries/sec).
> 2022/01/23 03:10:01| logfileRotate: daemon:/var/log/squid/access.log
> 2022/01/23 03:10:01| logfileRotate: daemon:/var/log/squid/access.log
> 2022/01/23 10:45:52| Error negotiating SSL connection on FD 170: (104)
> Connection reset by peer
> 2022/01/23 12:14:07| Error negotiating SSL on FD 139:
> error::lib(0):func(0):reason(0) (5/-1/104)
> 2022/01/23 12:14:07| Error negotiating SSL connection on FD 409: (104)
> Connection reset by peer
> 2022/01/25 01:12:04| Error negotiating SSL connection on FD 24: (104)
> Connection reset by peer
>
>
>
> I am not sure what is causing it, is it because squid is running out of
> gas? my instance has 16gb of Ram and 4VCPU. I am using SSL BUMP to use
> squid as a transparent proxy within AWS Vpc.
>
> Below is the config file
> --ConfigFile-
>
> visible_hostname squid
>
> #
> # Recommended minimum configuration:
> #
>
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7   # RFC 4193 local private network range
> acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
> machines
>
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> ###acl Safe_ports port 21 # ftp testing after blocking itp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
>
> #
> # Recommended minimum Access Permission configuration:
> #
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
>
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> #http_access allow CONNECT SSL_ports
>
> # Only allow cachemgr access from localhost
> http_access allow localhost manager
> http_access deny manager
>
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> #http_access deny to_localhost
>
> #
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #
>
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
>
> # And finally deny all other access to this proxy
>
> # Squid normally listens to port 3128
> #http_port 3128
> http_port 3129 intercept
> https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
> http_access allow SSL_ports #-- this allows every https website
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
> ssl_bump peek step1 all
>
> # Deny requests to proxy instance metadata
> acl instance_metadata dst 169.254.169.254
> http_access deny instance_metadata
>
> # Filter HTTP Only requests based on the whitelist
> #acl allowed_http_only dstdomain .veevasourcedev.com .google.com .pypi.org
> .youtube.com
> #acl allowed_http_only dstdomain .amazonaws.com
> #acl allowed_http_only dstdomain .veevanetwork.com .veevacrm.com .
> veevacrmdi.com .veeva.com .veevavault.com .vaultdev.com .veevacrmqa.com
> #acl allowed_http_only dstdomain .documentforce.com  .sforce.com .
> force.com .forceusercontent.com .force-user-content.com .lightning.com .
> salesforce.com .salesforceliveagent.com .salesforce-communities.com .
> salesforce-experience.com .salesforce-hub.com .salesforce-scrt.com .
> salesforce-sites.com .site.com .sfdcopens.com .sfdc.sh .trailblazer.me .
> trailhead.com .visualforce.com
>
>
> # Filter HTTPS requests based on the whitelist
> acl allowed_https_sites ssl::server_name .pypi.org .pythonhosted.org .
> tfhub.dev .gstatic.com .googleapis.com
> acl allowed_https_sites ssl::server_name .amazonaws.com
> acl allowed_h

Re: [squid-users] Tune Squid proxy to handle 90k connection

2022-01-30 Thread NgTech LTD
I would recommend starting with 0 caching.
However, for choosing the right solution you must give more details.
For example, there is IBM research showing that for about 90k
connections you can use VMs on top of such hardware with the Apache web
server. If you have other requirements for the proxy besides the 90k
requests, it would be wise to mention them.

Do you need any specific acls?
Do you need authentication?
etc..

For a simple forward proxy I would suggest using a simpler solution and,
if possible, not logging anything as a starting point.
Any local disk I/O will slow down the machine.

About URL categorization: I do not have experience with ufdbGuard at such
scale, but 90k rps would be pretty heavy for any software to handle...
It's doable to implement such a setup, but it will require testing.
Will you use SSL bump in this setup?

If I have all the technical and spec/requirement details I might be
able to suggest something better.
Take into account that each Squid worker can handle about 3k rps tops (in
my experience), and it's juggling between two sides, so... 3k is really
3k+3k+external_acls+dns...

I believe that in this case an example configuration from the Squid
developers might be useful.

Eliezer


On Tue, Jan 25, 2022, 18:42, André Bolinhas <
andre.bolin...@articatech.com>:

> Any tip about my last comment?
>
> -Mensagem original-
> De: André Bolinhas 
> Enviada: 21 de janeiro de 2022 16:36
> Para: 'Amos Jeffries' ;
> squid-users@lists.squid-cache.org
> Assunto: RE: [squid-users] Tune Squid proxy to handle 90k connection
>
> Thanks Amos
> Yes, you are right, I will put a second box with HAProxy in front to
> balance the traffic.
> About the sockets, I can't double them because it is a physical machine. Do
> you think disabling hyperthreading in the BIOS will help, given that we
> have other services inside the box that work multi-threaded, like Unbound
> DNS?
>
> Just a few more questions:
> 1. The server has 92 GB of RAM; do you think adding swap will help Squid's
> performance?
> 2. Right now we are using Squid 4.17; do you recommend upgrading or
> downgrading to any specific version?
> 3. We need categorization; for this we are using an external helper. Do you
> recommend keeping this approach with ACLs, or moving to some kind of
> ufdbGuard service?
>
> Best regards
> -Mensagem original-
> De: squid-users  Em Nome De
> Amos Jeffries
> Enviada: 21 de janeiro de 2022 16:05
> Para: squid-users@lists.squid-cache.org
> Assunto: Re: [squid-users] Tune Squid proxy to handle 90k connection
>
> Sorry for the slow reply. Responses inline.
>
>
> On 14/01/22 05:44, André Bolinhas wrote:
> > Hi
> > ~80k request per second  10k users
>
>
> Test this, but you may need a second machine to achieve the full 80k RPS.
>
> Recent Squid releases do not have any detailed analysis, but older
> Squid-3.5 was only achieving ~15k RPS under lab conditions; more
> realistically, expect under 10k RPS/worker on real traffic.
>   That means (IME) this machine is quite likely to hit its capacity
> somewhere under 70k RPS.
>
>
> > CPU info:
> > CPU(s) 16
> > Threads per core 2
> > Cores per socket 8
>
> With this CPU you will be able to run 7 workers. Set up affinity of one
> core per worker (the "kidN" processes of Squid), leaving one core for the
> OS and additional processing needs - this matters at peak loading.
>
> CPU "threads" tend not to be useful for Squid. Under high loads Squid
> workers will consume all available cycles on their core, not leaving any
> for the fancy "thread" core sharing features to pretend there is another
> core available. YMMV. One of the tests to try when tuning is to turn off
> the CPU hyperthreading and see what effect it has (if any).
>
>
> > Sockets 1
> > Intel Xeon Silver 4208  @ 2.10GHz
> >
>
> Okay. Doable, but for best performance you want as high GHz rating on the
> cores as your budget can afford. The amount of "lag" Squid adds to traffic
> and RPS performance/parallelism directly correlates with how fast the CPU
> core can run cycles.
>
>
>
> HTH
> Amos
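The one-core-per-worker affinity advice above maps onto squid.conf roughly
like this — a sketch for the 8-core CPU under discussion, not a verified
recommendation:

```
# 7 workers, each pinned to its own core; core 8 is left for the OS
workers 7
cpu_affinity_map process_numbers=1,2,3,4,5,6,7 cores=1,2,3,4,5,6,7
```

The process numbers correspond to Squid's "kidN" worker processes.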
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [ext] Re: Absolute upper limit for filedescriptors in squid-6?

2022-02-02 Thread NgTech LTD
Hey Ralph,

Did you try to configure the squid proxy systemd service and squid.conf
with the mentioned max fd?

Thanks,
Eliezer

On Wed, Feb 2, 2022, 16:17, Ralf Hildebrandt <
ralf.hildebra...@charite.de>:

> > I hope somebody will change/fix the related ./configure functionality
> and/or
> > message wording. Most humans will be confused by the self-contradictory
> > output shared by Ralf. File descriptor limits are a complicated subject,
> > but we can do better!
>
> And apparently, my squid is running just fine with
> --with-filedescriptors=262144 -- that is up to now :)
>
> Ralf Hildebrandt
> Charité - Universitätsmedizin Berlin
> Geschäftsbereich IT | Abteilung Netzwerk
>
> Campus Benjamin Franklin (CBF)
> Haus I | 1. OG | Raum 105
> Hindenburgdamm 30 | D-12203 Berlin
>
> Tel. +49 30 450 570 155
> ralf.hildebra...@charite.de
> https://www.charite.de
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Vulnerabilities with squid 4.15

2022-02-10 Thread NgTech LTD
Hey Robert,

First: your question is not silly.
The answer will differ based on the complexity of the upgrade process.
What OS are you using, and did you compile Squid from sources or
install it from a specific package?
Also, what is your Squid setup's purpose?

Eliezer

On Thu, Feb 10, 2022, 20:56, robert k Wild <
robertkw...@gmail.com>:

> Hi all,
>
> Is there any security vulnerabilities with squid 4.15, should I update to
> 4.17 or is it OK to still use as my squid proxy server
>
> Sorry for silly question
>
> Thanks,
> Rob
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Dynamic delay pools in squid?

2022-03-16 Thread NgTech LTD
Hey,

Have you tried qos on the os/routing level?

Eliezer
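For completeness, Squid's own delay pools can give a static approximation
of this: a class 2 pool caps the aggregate and each client individually,
but the per-client share is fixed rather than divided by the number of
connected users. The subnet and per-client rate below are example values:

```
# 6 Mb/s aggregate for standard users (750000 bytes/s),
# each client individually capped at 1 Mb/s (125000 bytes/s)
acl standard_users src 192.168.1.0/24
delay_pools 1
delay_class 1 2
delay_access 1 allow standard_users
delay_access 1 deny all
# delay_parameters <pool> <aggregate restore/max> <per-client restore/max>
delay_parameters 1 750000/750000 125000/125000
```

A truly dynamic (6Mb / N_users) split is not something delay pools do
natively, which is why OS-level QoS is suggested above.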

On Wed, Mar 16, 2022, 16:36, Alberto Montes de Oca <
snip3...@gmail.com>:

> Hi guys, I'd like to implement some bandwidth management using squid delay
> pools, but so far I can't find any solution/example to do it dynamically,
> in my case what I want to accomplish is this:
> I have a 10Mb/s Internet connection, I want to use let's say 6Mb/s for the
> standard users, and reserve 4Mb/s for IT users, servers, etc. I want to
> split the 6Mb/s bandwidth between the connected users at a given time,
> something like (6Mb / N_users) where N_users is the amount of users
> connected at a time. If there are 10 users connected they'll get more
> bandwidth than if there were 30 users connected. I don't want to split the
> bandwidth with a fixed percent for each user. Can this be done?
> Thanks for the help
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Is Squid 5.5 considered stable?

2022-04-25 Thread NgTech LTD
Hey,

I have been using 5.5 in production since it came out, but I have yet to
find a specific issue with it.
My setup is small, so I cannot say too much.
If some admins can share their cache manager output with the project, we
can try to identify specific abnormal memory leaks.
I have been working on a script that will dump and convert the cache
manager pages to JSON.

Eliezer

On Mon, Apr 25, 2022, 21:41, Dave Dykstra wrote:

> On Thu, Apr 14, 2022 at 11:21:54PM +1200, Amos Jeffries wrote:
> > Subject: [squid-announce] Squid 5.5 is available
> ...
> >   Users of Squid-4 holding back due to earlier release issues
> >   are encouraged to test this version for upgrade.
>
> This doesn't seem to me to be a resounding endorsement.  Are there any
> other known significant issues with squid-5.5 not present in squid-4.x?
> For example, are the memory leaks fixed?  That isn't clear to me.
>
> Dave
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how to put the destination ip to an external acl helper ?

2022-07-19 Thread NgTech LTD
But which one of them?
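As a sketch of the approach discussed below (the helper path is an example;
note that %<a, the server-IP logformat code, may still be "-" at ACL-check
time if no server connection has been made yet):

```
# pass both the destination FQDN (%DST) and, where known,
# the resolved server IP (logformat code %<a) to the helper
external_acl_type dst_info %DST %<a /usr/local/bin/dst_helper.sh
acl dst_ok external dst_info
http_access allow dst_ok
```

Since Squid 4, logformat %-codes can be mixed with the classic
external_acl_type format tokens this way.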

בתאריך יום ד׳, 20 ביולי 2022, 0:59, מאת Amos Jeffries ‏:

> On 20/07/22 00:05, Dieter Bloms wrote:
> > Hello,
> >
> > I wrote a little external acl helper and want squid to put the
> > destination fqdn _and_ the destination ip to it.
> >
> > I found the parameter %DST and this is filled with the destination fqdn.
> >
> > Is there also a parameter for the destination ip squid wants to connect
> to ?
> >
>
> In modern Squid you use the logformat macros in external_acl_type format.
>
>
> Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] logfileHandleWrite: daemon:/var/log/squid/access.log: error writing ((32) Broken pipe)

2022-09-07 Thread NgTech LTD
Good one, Alex.

For this specific use case you need a special rotate script which knows
the conf files and loops over them.
Later on I will try to see if I have one of these on my servers.
Basically you will need an array of config files and a loop over them.

The pid shouldn't be relevant for a rotate operation, but it depends on
the nature of the system. (On a 24/7 system you should know about a service
that is down way before the logrotate happens.)
If you have a set of config files, you can generate a set of postrotate
commands rather than one special script.
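A sketch of such a postrotate block looping over multiple instance
configs (the paths are illustrative, untested):

```
postrotate
    for conf in /etc/squid/squid1.conf /etc/squid/squid2.conf; do
        test -x /usr/sbin/squid && /usr/sbin/squid -f "$conf" -k rotate
    done
endscript
```

Each instance is told to rotate against its own config file, so each
logging daemon reopens its own log.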

Let me know if this solution might fit for your use case.

Eliezer

On Wed, Sep 7, 2022, 3:53, Alex Rousskov <
rouss...@measurement-factory.com>:

>  > pid_filename /var/run/squid2.pid
>
>  >   postrotate
>  >   test ! -e /var/run/squid.pid || ... /usr/sbin/squid -k rotate
>  >   endscript
>
> I spotted one more (potentially critical) problem: Your Squid
> configuration sets pid_filename to /var/run/squid2.pid but your
> logrotate configuration assumes Squid uses /var/run/squid.pid.
>
> IMHO, in general, it is best not to guess where Squid has its PID if you
> are using "squid -k ...". If you want to test whether Squid is currently
> running, try using "squid -k check" instead.
>
>
> HTH,
>
> Alex.
>
>
>
> On 9/6/22 20:45, Alex Rousskov wrote:
> > On 9/6/22 18:02, roee klinger wrote:
> >> it seems that the logs has filled over 100GB of log data, since I made
> >> a configuration mistake (I think?) by setting this:
> >>
> >> logfile_rotate 0
> >
> > This is correct setting when using an external log rotation tool like
> > the logrotate daemon. More on that below.
> >
> >
> >> If I remember and read correctly, this means that the rotation of the
> >> files is disabled and they will just keeping increasing
> >> in size if left unchecked.
> >
> > To be more precise, this means that you are relying on an external tool
> > to rename the log files. With this setting, Squid rotate command closes
> > the access log and opens a new one (under the same name). While that
> > might sound useless, it is the right (and necessary) thing for Squid to
> > do when combined with the correct external log rotation setup.
> >
> >
> >> I have now gone ahead and changed all the configuration file to this
> >> setting:
> >>
> >> logfile_rotate 1
> >>
> >> So now it should rotate once daily, and on the next rotation it should
> >> be deleted, and this is all handled by logrotate on Debian-based
> >> machines?
> >
> > AFAIK, if you are using an external (to Squid) tool like logrotate, you
> > should be setting logfile_rotate to zero.
> >
> >
> >> This is my / cat /etc/logrotate.d/squid:
> >> ➜ / cat /etc/logrotate.d/squid
> >> #
> >> # Logrotate fragment for squid.
> >> #
> >> /var/log/squid/*.log {
> >>   daily
> >>   compress
> >>   delaycompress
> >>   rotate 2
> >>   missingok
> >>   nocreate
> >>   sharedscripts
> >>   prerotate
> >>   test ! -x /usr/sbin/sarg-reports || /usr/sbin/sarg-reports daily
> >>   endscript
> >>   postrotate
> >>   test ! -e /var/run/squid.pid || test ! -x /usr/sbin/squid ||
> >> /usr/sbin/squid -k rotate
> >>   endscript
> >> }
> >
> > This is not my area of expertise, but the above configuration does not
> > look 100% correct to me: sarg-reports execution failures should have no
> > effect on log rotation but does (AFAICT). There may be other problems
> > (e.g., I do not know whether your /usr/sbin/squid finds the right Squid
> > configuration file). I hope sysadmin experts on this mailing list will
> > help you polish this.
> >
> > You should be able to test whether the above is working (e.g., by asking
> > logrotate to rotate). Testing is critical even if you do end up getting
> > expert log rotation help on this list (this email is not it!).
> >
> >
> > HTH,
> >
> > Alex.
> >
> >
> >> Is there a way for me to set it so it just get deleted every 24 or 12
> >> hours without the archive first?
> >>
> >> Thanks,
> >> Roee
> >> On 6 Sep 2022, 16:28 +0300, Alex Rousskov
> >> , wrote:
> >>> On 9/6/22 07:41, roee klinger wrote:
> >>>
>  It is also important to know that I am running multiple Squid instances
>  on the same machine, they are all getting the error at the same time
> >>>
> >>> What external event(s) happen at that time? Something is probably
> >>> sending a signal to the logging daemon process. It would be good to know
> >>> what that something (and that signal) is. Your syslog or cache.log might
> >>> contain more info. Analyzing the timing/schedule of these problems may
> >>> also be helpful in identifying the trigger.
> >>>
> >>>
>  Is a possible workaround that might be just replacing the line with
>  this?
> >>>
>  access_log /var/log/squid/access2.log
> >>>
> >>> As you know, this configuration (in this deprecated spelling or with an
> >>> explicit "stdio:" prefix) will result in Squid workers writing to the
> >>> log file directly instead of asking the logging daemon. This will

[squid-users] maintenance period for ngtech www services

2023-05-23 Thread NgTech LTD
Hey List,

I have started working on a couple of things in my web services.
The services will be reachable only locally (IL), and later this week
they will be available again for the rest of the world.

Sorry for the inconvenience (it's a surprise for me too).
If you need something, just email me.

Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid: blocking all requests to plain ip addresses

2023-11-06 Thread NgTech LTD
Do you need to block access to all plain ip addresses or specific ones?
What if you will want to allow specific ones but deny all the others?

Eliezer
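One commonly suggested sketch for the "deny all plain IPs" case matches
the destination host against a regex — shown here untested, and it covers
dotted-quad IPv4 literals only, not IPv6 literals:

```
# match requests whose host part is a raw IPv4 address
acl raw_ip dstdom_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$
http_access deny raw_ip
```

Allowing specific IPs while denying the rest would then be an `http_access
allow` rule for a dst ACL placed before the deny.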

On Mon, Nov 6, 2023, 12:45, Christian Metzger wrote:

> Hello,
> is the above feature available, and if yes, how do I configure it?
> This feature should be available in all modes of no-, white- and
> blacklisting.
> This feature is important for security, and it's available in big
> commercial proxies.
> Best regards, Chris
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid hangs and dies and can not be killed - needs system reboot

2023-12-18 Thread NgTech LTD
Hey Amish,

I want to replicate this issue on a local VM.
Can you give us some details on the version of Arch and the relevant
settings for recreating the issue?
How did you install Arch and Squid?

Thanks,
Eliezer

On Mon, Dec 18, 2023, 16:36, Amish wrote:

> Hello,
>
> I use Arch Linux and today I updated squid from squid 5.7 to squid 6.6.
>
> After the update from 5.7 to 6.6, squid starts but then reaches Dead
> state in a minute or two.
>
> # ps aux | grep squid
> root 601  0.0  0.2  73816 22528 ?Ss   12:59   0:02
> /usr/bin/squid -f /etc/squid/btnet/squid.btnet.conf --foreground -sYC
> proxy604  0.0  0.0  0 0 ?D12:59   0:03 [squid]
> proxy607  0.0  0.0  11976  7424 ?S12:59   0:00
> (security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
> proxy608  0.0  0.0  11976  7168 ?S12:59   0:00
> (security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
> proxy609  0.0  0.0  11712  5632 ?S12:59   0:00
> (security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
> proxy610  0.0  0.0  11712  5376 ?S12:59   0:00
> (security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
> proxy611  0.0  0.0  11712  5504 ?S12:59   0:00
> (security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
> proxy622  0.0  0.0   6116  3200 ?S12:59   0:00
> (logfile-daemon) /var/log/squid/access.log
>
> And then all requests get stuck. Notice the D (uninterruptible sleep)
> state of squid.
>
> I use multiple ports for multiple purposes. (It all worked fine in squid
> 5.7)
>
> Dec 18 12:59:10 mumbai squid[601]: Starting Authentication on port
> [::]:3128
> Dec 18 12:59:10 mumbai squid[601]: Disabling Authentication on port
> [::]:3128 (interception enabled)
> Dec 18 12:59:10 mumbai squid[601]: Starting Authentication on port
> [::]:8081
> Dec 18 12:59:10 mumbai squid[601]: Disabling Authentication on port
> [::]:8081 (interception enabled)
> Dec 18 12:59:12 mumbai squid[601]: Starting Authentication on port
> [::]:8082
> Dec 18 12:59:12 mumbai squid[601]: Disabling Authentication on port
> [::]:8082 (interception enabled)
> Dec 18 12:59:12 mumbai squid[601]: Starting Authentication on port
> [::]:8083
> Dec 18 12:59:12 mumbai squid[601]: Disabling Authentication on port
> [::]:8083 (interception enabled)
> Dec 18 12:59:13 mumbai squid[601]: Starting Authentication on port
> [::]:8084
> Dec 18 12:59:13 mumbai squid[601]: Disabling Authentication on port
> [::]:8084 (interception enabled)
> Dec 18 12:59:13 mumbai squid[601]: Starting Authentication on port
> [::]:3136
> Dec 18 12:59:13 mumbai squid[601]: Disabling Authentication on port
> [::]:3136 (interception enabled)
> Dec 18 12:59:13 mumbai squid[601]: Starting Authentication on port
> [::]:3137
> Dec 18 12:59:13 mumbai squid[601]: Disabling Authentication on port
> [::]:3137 (interception enabled)
> ...
> Dec 18 12:59:29 mumbai squid[604]: Adaptation support is on
> Dec 18 12:59:29 mumbai squid[604]: Accepting NAT intercepted HTTP Socket
> connections at conn19 local=[::]:3128 remote=[::] FD 27 flags=41
> listening port: 3128
> Dec 18 12:59:29 mumbai squid[604]: Accepting SSL bumped HTTP Socket
> connections at conn21 local=[::]:8080 remote=[::] FD 28 flags=9
> listening port: 8080
> Dec 18 12:59:29 mumbai squid[604]: Accepting NAT intercepted SSL bumped
> HTTPS Socket connections at conn23 local=[::]:8081 remote=[::] FD 29
> flags=41
> listening port: 8081
> Dec 18 12:59:29 mumbai squid[604]: Accepting SSL bumped HTTP Socket
> connections at conn25 local=[::]:8092 remote=[::] FD 30 flags=9
> listening port: 8092
> Dec 18 12:59:29 mumbai systemd[1]: Started Squid Web Proxy Server.
> Dec 18 12:59:29 mumbai squid[604]: Accepting SSL bumped HTTP Socket
> connections at conn27 local=[::]:8093 remote=[::] FD 31 flags=9
> listening port: 8093
> Dec 18 12:59:29 mumbai squid[604]: Accepting SSL bumped HTTP Socket
> connections at conn29 local=[::]:8094 remote=[::] FD 32 flags=9
> listening port: 8094
> Dec 18 12:59:29 mumbai squid[604]: Accepting NAT intercepted SSL bumped
> HTTPS Socket connections at conn31 local=[::]:8082 remote=[::] FD 33
> flags=41
> listening port: 8082
> Dec 18 12:59:29 mumbai squid[604]: Accepting NAT intercepted SSL bumped
> HTTPS Socket connections at conn33 local=[::]:8083 remote=[::] FD 34
> flags=41
> listening port: 8083
> Dec 18 12:59:29 mumbai squid[604]: Accepting NAT intercepted SSL bumped
> HTTPS Socket connections at conn35 local=[::]:8084 remote=[::] FD 35
> flags=41
> listening port: 8084
> Dec 18 12:59:29 mumbai squid[604]: Accepting NAT intercepte

Re: [squid-users] SMP + Ssl-Bump squid-tls_session_cache.shm

2020-05-23 Thread NgTech LTD
can you send the output of:
squid -v

Eliezer

On Sun, May 24, 2020, 06:31 Joshua Bazgrim 
wrote:

> Squid 4.9
> Ubuntu 18.04.03
>
> I'm trying to implement ssl-bumping into the frontend of a squid smp
> setup, but I keep getting the following error:
> FATAL: Ipc::Mem::Segment::open failed to
> shm_open(/squid-tls_session_cache.shm): (2) No such file or directory
>
> shm is working correctly and generating/reading from other squid shm
> files, but not properly generating this file upon start-up in SMP mode.
>
> My ssl-bump configuration works fine in non-smp mode.
> I'm guessing it's some sort of race condition to do with improperly set up
> config files for ssl-bumping, but I'm unsure how to correct it.
>
> Thanks in advance
>
> ## squid.conf #
>
> debug_options ALL,3
> #
> # Recommended minimum configuration:
> #
>
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
> acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
> acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
> acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged)
> machines
> acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
> acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
> acl localnet src fc00::/7   # RFC 4193 local private network range
> acl localnet src fe80::/10   # RFC 4291 link-local (directly plugged)
> machines
>
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
>
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
>
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
>
> # Only allow cachemgr access from localhost
> #http_access allow localhost manager
> #http_access deny manager
>
> # Set cache user
> cache_effective_user nobody
>
> workers 3
> if ${process_number} = 1
> include /etc/squid/frontend.conf
> else
> include /etc/squid/backend.conf
> endif
>
> http_access deny all
>
> #
> # Add any of your own refresh_pattern entries above these.
> #
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
>
>
> ### frontend.conf - some names changed/omitted##
> # Squid normally listens to port 3128
> http_port 3128 ssl-bump \
> cert=/etc/squid/ssl_cert/mycert.pem \
> key=/etc/squid/ssl_cert/mycert.pem \
> generate-host-certificates=on \
> dynamic_cert_mem_cache_size=4mb
>
> # Where to look for ssl cert
> sslcrtd_program /usr/lib/squid/security_file_certgen -s
> /var/lib/squid/ssl_db -M 4MB
> acl step1 at_step SslBump1
> ssl_bump peek step1
> ssl_bump bump all
>
> # Enable URL Params
> strip_query_terms off
>
> # add user authentication and similar options here
> http_access allow manager localhost
> http_access deny manager
>
> http_access allow localnet
> http_access allow localhost
>
> # add backends - one line for each additional worker you configured
> # NOTE how the port number matches the kid number
> cache_peer localhost parent 4002 0 carp login=PASS name=backend-kid2
> cache_peer localhost parent 4003 0 carp login=PASS name=backend-kid3
>
> #you want the frontend to have a significant cache_mem
> cache_mem 512 MB
>
> # change /tmp to your own log directory, e.g. /var/log/squid
> access_log /var/log/squid/frontend.access.log
> cache_log /var/log/squid/frontend.cache.log
>
> # the frontend requires a different name to the backend(s)
> visible_hostname Squid-Test
>
> ## backend.conf #
> # each backend must listen on a unique port
> # without this the CARP algorithm would be useless
> http_port 400${process_number}
>
> # TODO: Change 512 to larger after testing is done
> cache_dir rock /var/log/squid/cacheRock 512 max-size=32768
>
> # NP: for now AUFS does not support SMP but the CARP algorithm helps
> reduce object duplications
> # TODO: Change 512 to larger after testing is done
> cache_dir aufs /var/log/squid/cache${process_number} 512 128 128
> min-size=32769
>
> # the default maximum cached object size is a bit small
> # you want the backend to be able to cache some fairly large objects
> maximum_object_size 512 MB
>
> # you want the backend to have a small cache_mem
> cache_mem 4 MB
>
> # the backends require a different name to frontends, but can share one
> # this prevents forwarding loops between backends while allowing
> # frontend to forward via the backend
> visible_hostname Squid-Test$

Re: [squid-users] reflecting on Squid Project Status with regard to "Joshua 55" vulnerabilities

2024-10-31 Thread NgTech LTD
Hey Jonathan,

I cannot speak for the whole Squid community; however, if someone in the
pfSense community doesn't want to maintain or use Squid, that is his own
choice.
If there is an issue it can be researched, and there is so much information
about this specific "issue" that it's odd nobody bothered to respond to it.

The reason for the log output is widely known, and there are a couple of
ways to resolve this.
I wrote a patch to override this behaviour in the past, but I am no longer
supporting it.
The main reason I no longer support overriding this fix is that there are
many bad actors using Squid for their own gain while sacrificing some
Internet connectivity security aspects.
It is recommended to use a shared DNS service for both the clients and the
proxy server to avoid such issues.
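In squid.conf terms, pointing the proxy at the same resolver the clients
use is a single directive (the resolver address below is an example):

```
# use the clients' resolver so Squid and the clients
# see identical DNS answers for each hostname
dns_nameservers 192.168.1.1
```

When Squid and the clients resolve a host identically, the "Host header
forgery detected" condition should not be triggered by split DNS.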

My general recommendation is to use Squid on a Linux-based OS if possible.

There are other firewall projects which might be a better choice for your
use case if you really need the proxy.
In my setup I am using Mikrotik as a router and firewall for a 1gbps line
and a tiny x86 server for all other services.
It's more efficient and practical compared to netgate in my scenario.

Yours,
Eliezer

On Thu, Oct 31, 2024, 21:32, Jonathan Lee <
jonathanlee...@gmail.com>:

> Hello, thank you for the update Francesco; there is also some chatter
> about bugs within the Netgate community. Is this also related to the fixes
> in V7 (please see the Redmine attached)?
>
> I admit I have a bias and an assumption that Big Tech does not like
> Squid functional, and that most of what is listed below was done within a
> political aspect to generate confusion within the firewall community. So
> much so that the package was considered an issue and Netgate started to
> recommend Squid's removal. I have stood by this package and continue to, as
> it works beautifully.
>
> This Redmine should have been more concise and simplified within its
> notes; it seems to just generate confusion. I do not have issues like this,
> and that is where I start to question what this is related to. Can someone
> who has a higher-level knowledge of Squid please respond to this Redmine
> for verification? I would hate to see this removed for some simple reason
> like a PHP issue that causes configuration issues.
>
> Bug #14390: Squid: SECURITY ALERT: Host header forgery detected - pfSense
> Packages - pfSense bugtracker 
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [External Sender] Re: Squid service not restarting properly

2024-09-25 Thread NgTech LTD
Hey Vivek,

I am maintaining the RPMs for CentOS and other RPM-based distributions.
The page you are looking for is:
Squid on CentOS | Squid Web Cache wiki (squid-cache.org)


If you need an RPM specifically for RHEL, I will need to spin up my VM for
that later on.
I believe that Rocky and Alma should be compatible with RHEL, so you can try
the Rocky or Alma repositories I am publishing and then give us feedback.
Take a peek at what Distributions I currently support:
Index of Repo (ngtech.co.il) 

Specifically, what you are probably looking for is:
Index of X86 64 (ngtech.co.il)


https://www.ngtech.co.il/repo/rocky/8/x86_64/squid-6.10-1.el8.x86_64.rpm
https://www.ngtech.co.il/repo/rocky/8/x86_64/squid-helpers-6.10-1.el8.x86_64.rpm

My RPMs are separated into two parts, i.e. squid and squid-helpers, and you
should probably install both.
You can download them and use dnf or yum localinstall
./squid-6.10-1.el8.x86_64.rpm ./squid-helpers-6.10-1.el8.x86_64.rpm

However, on RHEL 8 based distributions, first disable the squid module
so you will be able to install the packages without any hiccups.
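Put together, the install steps above look roughly like this on a RHEL 8
based system (package file names as in the links above; untested here):

```
dnf -y module disable squid
dnf -y localinstall ./squid-6.10-1.el8.x86_64.rpm \
    ./squid-helpers-6.10-1.el8.x86_64.rpm
```

Disabling the distribution's squid module first prevents it from shadowing
or conflicting with the locally installed packages.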

Let me know if it resolved your issue (remember to back up the config and
clean up the systemd and other Squid-related files before installing the
packages).

Yours,
Eliezer

On Wed, Sep 25, 2024 at 10:35 AM Vivek Saurabh (CONT) <
vivek.saur...@capitalone.com> wrote:

> Hi  Eliezer,
> Thank you for your reply. I am trying to install it in a RHEL8 server and
> I have self compiled it taking reference from the doc -
> https://wiki.squid-cache.org/KnowledgeBase/RedHat, however, this hasn't
> been of much help to start the service properly. I downloaded the tar.gz
> file from https://www.squid-cache.org/Versions/v6/squid-6.9.tar.gz and
> placed it within the server, unzipped it and executed ./configure command
> with the options.
>
> *Regards*,
> Vivek Saurabh
> *Slack*:  #uk-monitoring
> 
> *Confluence*: UK Hawkeye
> 
>
>
> On Tue, Sep 24, 2024 at 10:32 PM  wrote:
>
>> Hey Vivek,
>>
>>
>>
>> What OS are you using?
>>
>> Did you installed squid from the OS repository or you self compiled it?
>>
>> With more details we might be able to help you understand what to do.
>>
>>
>>
>> Eliezer
>>
>>
>>
>> *From:* squid-users  *On
>> Behalf Of *Vivek Saurabh (CONT)
>> *Sent:* Tuesday, September 24, 2024 2:35 PM
>> *To:* squid-users@lists.squid-cache.org
>> *Subject:* [squid-users] Squid service not restarting properly
>>
>>
>>
>> Hi Team,
>>
>>
>>
>> I have installed squid -v6.9 but the service is not restarting using
>> the systemctl command. However, when I run this only with the execstart
>> line in the service script, it works fine. Can you please advise me on this
>> issue?
>>
>>
>> *Regards*,
>>
>> Vivek Saurabh
>>
>>
>> --
>>
>>
>>
>>
>> The information contained in this e-mail may be confidential and/or
>> proprietary to Capital One and/or its affiliates and may only be used
>> solely in performance of work or services for Capital One. The information
>> transmitted herewith is intended only for use by the individual or entity
>> to which it is addressed. If the reader of this message is not the intended
>> recipient, you are hereby notified that any review, retransmission,
>> dissemination, distribution, copying or other use of, or taking of any
>> action in reliance upon this information is strictly prohibited. If you
>> have received this communication in error, please contact the sender and
>> delete the material from your computer.
>>
>>
>>
>>
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>>
>> https://lists.squid-cache.org/listinfo/squid-users
>>
> --
>

Re: [squid-users] Upcoming changes on the methods used to distribute Squid

2025-01-06 Thread NgTech LTD
OK, so I have just seen that the squid-cache page is no longer parsable
for some reason by my Ruby script, so I changed the source of the squid
version to be the latest GitHub release.
If someone wants to write their own build scripting based on the latest
release of squid, the following script may be of some help:
squid-latest/get-latest-from-github-releases.sh at main · elico/squid-latest
<https://github.com/elico/squid-latest/blob/main/get-latest-from-github-releases.sh>

it just works...
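The SQUID_MAJOR_MINOR tag convention described in the quoted announcement below maps mechanically onto the tarball URL; a small sketch (the function name is mine, not part of any existing tooling):

```python
# Sketch: derive the release tarball URL from a git release tag,
# following the SQUID_MAJOR_MINOR naming convention.
def tag_to_tarball_url(tag):
    version = tag[len("SQUID_"):].replace("_", ".")  # SQUID_6_12 -> 6.12
    return ("https://github.com/squid-cache/squid/releases/download/"
            "%s/squid-%s.tar.bz2" % (tag, version))

print(tag_to_tarball_url("SQUID_6_12"))
# The latest tag itself can be obtained with the 'gh' tool or from
# https://api.github.com/repos/squid-cache/squid/releases/latest
```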

Eliezer

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


On Mon, Jan 6, 2025 at 6:23 PM Francesco Chemolli 
wrote:

> It already is working, please test it.
>
> URLs look like:
>
> https://github.com/squid-cache/squid/releases/download/SQUID_6_12/squid-6.12.tar.bz2
>
> SQUID_6_12 is the release git tag. They are by convention named
> SQUID_MAJOR_MINOR
>
> The next release file would be named
>
> https://github.com/squid-cache/squid/releases/download/SQUID_6_13/squid-6.13.tar.bz2
>
> There are several ways to track releases, the one I would find the most
> convenient is via the
> 'gh' tool (https://cli.github.com/) . Its 'release' subcommand is very
> powerful. See https://cli.github.com/manual/gh_release
>
> On Mon, Jan 6, 2025 at 3:07 PM NgTech LTD  wrote:
>
>> Hey Francesco,
>>
>> Thank you for the big effort.
>> I had the next git working for the past 2 years now:
>> https://github.com/elico/squid-latest
>>
>> I have been using it to release my binary builds.
>> I hope that the new releases github format will help to automate squid
>> builds in the long run.
>> Will it be ready for the 6.13 release?
>> If it is, then I will update my builds and git to work with the releases
>> page.
>>
>> Thanks,
>> Eliezer
>> 
>> Eliezer Croitoru
>> Tech Support
>> Mobile: +972-5-28704261
>> Email: ngtech1...@gmail.com
>>
>>
>> On Sat, Jan 4, 2025 at 4:52 PM Francesco Chemolli 
>> wrote:
>>
>>> Hi Squid Users,
>>>there are some ongoing changes on how we distribute the squid
>>> sources; some of them have already happened, some more will happen in the
>>> upcoming weeks.
>>>
>>> The end state we are aiming to settle on is to distribute Squid via
>>> Github Releases (https://github.com/squid-cache/squid/releases) .
>>>
>>> Each Squid release has been and will continue to be tagged in git with
>>> the SQUID_MAJ_MIN tag, which will be the official release point. Signed
>>> release tarballs will be made available as Github release assets. These are
>>> already available at https://github.com/squid-cache/squid/releases for
>>> every squid version from 1.0.0alpha to 6.12.
>>> We will no longer provide patches, these can be obtained from git.
>>>
>>> We have decommissioned the rsync and ftp distribution points on
>>> www.squid-cache.org, and are no longer advertising Squid mirrors on the
>>> website. We are very thankful to Squid mirror operators and volunteers for
>>> their continued support through the years.
>>>
>>> In the next few weeks we will rework the "Download" section of the squid
>>> website (https://www.squid-cache.org/Versions/) to point to Github for
>>> downloading instead of self-hosting tarballs, patches etc.
>>>
>>> Our plans moving forward:
>>> - we will restart announcing new releases to the squid-announce mailing
>>> list
>>>   see https://www.squid-cache.org/Support/mailing-lists.html
>>> - anyone wanting to track Squid releases can:
>>>   - use git tags
>>>   - use the 'gh' tool from github (https://cli.github.com/)
>>>   - rely on the 'releases' github page:
>>> https://github.com/squid-cache/squid/releases
>>>   - to only track the latest supported release:
>>> https://github.com/squid-cache/squid/releases/latest
>>>
>>> Any feedback is welcome
>>>
>>> --
>>> Francesco Chemolli
>>> Squid Software Foundation
>>> ___
>>> squid-users mailing list
>>> squid-users@lists.squid-cache.org
>>> https://lists.squid-cache.org/listinfo/squid-users
>>>
>>
>
> --
> Francesco Chemolli
> Squid Software Foundation
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Upcoming changes on the methods used to distribute Squid

2025-01-06 Thread NgTech LTD
Hey Francesco,

Thank you for the big effort.
I had the next git working for the past 2 years now:
https://github.com/elico/squid-latest

I have been using it to release my binary builds.
I hope that the new GitHub releases format will help to automate squid
builds in the long run.
Will it be ready for the 6.13 release?
If it is, then I will update my builds and git to work with the releases
page.

Thanks,
Eliezer

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


On Sat, Jan 4, 2025 at 4:52 PM Francesco Chemolli 
wrote:

> Hi Squid Users,
>there are some ongoing changes on how we distribute the squid sources;
> some of them have already happened, some more will happen in the
> upcoming weeks.
>
> The end state we are aiming to settle on is to distribute Squid via Github
> Releases (https://github.com/squid-cache/squid/releases) .
>
> Each Squid release has been and will continue to be tagged in git with the
> SQUID_MAJ_MIN tag, which will be the official release point. Signed release
> tarballs will be made available as Github release assets. These are already
> available at https://github.com/squid-cache/squid/releases for every
> squid version from 1.0.0alpha to 6.12.
> We will no longer provide patches, these can be obtained from git.
>
> We have decommissioned the rsync and ftp distribution points on
> www.squid-cache.org, and are no longer advertising Squid mirrors on the
> website. We are very thankful to Squid mirror operators and volunteers for
> their continued support through the years.
>
> In the next few weeks we will rework the "Download" section of the squid
> website (https://www.squid-cache.org/Versions/) to point to Github for
> downloading instead of self-hosting tarballs, patches etc.
>
> Our plans moving forward:
> - we will restart announcing new releases to the squid-announce mailing
> list
>   see https://www.squid-cache.org/Support/mailing-lists.html
> - anyone wanting to track Squid releases can:
>   - use git tags
>   - use the 'gh' tool from github (https://cli.github.com/)
>   - rely on the 'releases' github page:
> https://github.com/squid-cache/squid/releases
>   - to only track the latest supported release:
> https://github.com/squid-cache/squid/releases/latest
>
> Any feedback is welcome
>
> --
> Francesco Chemolli
> Squid Software Foundation
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 2FA with Google Authenticator and squid login

2025-02-02 Thread NgTech LTD
What I was talking about is using both the auth helper and the external
ACL helper.
The password is static, but the authorization itself is done via some push
or another TOTP method that authorizes the login for a specific amount
of time.
And indeed it will somewhat degrade the connection to 1FA for a period of
time, but it will protect against a couple of specific attacks.
So, if the proxy connection is encrypted inside a tunnel, then it's OK.

As for a directly accessible proxy over plain HTTP, it will be vulnerable
to many auth attacks.
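The "authorize the login for a specific amount of time" idea can be sketched as a small in-memory window cache that an external ACL helper would consult. Everything here (names, the 300-second window) is illustrative, not an existing squid helper:

```python
# Sketch: after a successful push/TOTP verification, mark the user as
# authorized; subsequent requests pass 1FA-style until the window expires.
# All names are illustrative, not part of any Squid API.
import time

WINDOW = 300  # seconds a completed 2FA authorization stays valid (assumption)
_authorized = {}  # user -> timestamp of the last successful 2FA check

def mark_authorized(user, now=None):
    _authorized[user] = time.time() if now is None else now

def is_authorized(user, now=None):
    now = time.time() if now is None else now
    ts = _authorized.get(user)
    return ts is not None and (now - ts) < WINDOW
```

An external ACL helper would answer OK while `is_authorized` holds and ERR otherwise, which triggers the re-authorization flow.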

Thanks,
Eliezer

On Mon, Feb 3, 2025 at 7:10, Amos Jeffries wrote:

> On 3/02/25 00:43, NgTech LTD wrote:
> > What would make  a 2fa in squid case?
> >
>
>
> When receiving a new login attempt the authentication (auth_param)
> helper should initiate whatever side-channel token delivery is needed.
> Then return "ERR" to Squid as usual.
>
>
> Replace the login challenge error message with a login page to receive
> that token and deliver it to a server that marks the client as logged
> in. (Both ERR_ACCESS_DENIED and ERR_CACHE_ACCESS_DENIED. Either new
> templates or a deny_info 401/407 - I'm not sure which will work best)
>
>
> Somewhat like how the SQL_session helper works in "active mode" session,
> but through the auth_param helpers instead of external ACL sessions.
>
>
> HTH
> Amos
>
>
> > Thanks,
> > Eliezer
> >
> > On Sun, Feb 2, 2025 at 13:22, Amos Jeffries  wrote:
> >
> > On 2/02/25 07:43, ngtech1ltd wrote:
> >  > Hey,
> >  >
> >  > I was wondering if anyone have implemented any 2FA with squid.
> >  >
> >  > IE a simple forward proxy that implements an external ACL helper
> > that
> >
> > Ah, that would not be "authentication".
> >
> >
> > 2FA is done through Squid auth_param and authentication helpers same
> as
> > "normal" (1FA) authentication. It is just a slightly different bunch
> of
> > steps the auth system performs in the background outside of Squid.
> >
> >
> > Cheers
> > Amos
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > https://lists.squid-cache.org/listinfo/squid-users
> >
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > https://lists.squid-cache.org/listinfo/squid-users
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Can SQUID change the destination address from ip to hostname?

2025-02-05 Thread NgTech LTD
Hey,

Not unless you have access to the domains and IP addresses resolved by the
DNS resolver. With these you can try (not 100% reliable) to find the
relevant domain, if it's not a multi-tenant VPS, CDN, or WAF provider.
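For completeness, a best-effort PTR lookup is nearly a one-liner; as this thread notes, it often fails or returns an unrelated name for shared or CDN-hosted IPs (the function name is illustrative):

```python
# Sketch: best-effort IP -> hostname mapping via the PTR record.
# Unreliable by design: CDNs, WAFs, and multi-tenant hosts reuse IPs,
# and many addresses simply have no PTR record at all.
import socket

def ip_to_hostname(ip):
    try:
        return socket.gethostbyaddr(ip)[0]  # first PTR name, if any
    except OSError:
        return None  # no PTR record, or the lookup failed
```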

Eliezer

On Wed, Feb 5, 2025 at 12:01, Matus UHLAR - fantomas  wrote:

> On 04.02.25 21:22, Foxy Lady wrote:
> >I try again asking the ML if
> >
> >- can Squid change the destination address of a call?
> >
> >I mean, something like
> >
> > $destination $Host
> >
> >original $destination is 104.26.9.59:443
> >$Host is the original HTTP Header "Host: x" alias "api.myip.com:443"
> >
> >So, a call to
> >https://104.26.9.59:443/test
> >will become
> >https://api.myip.com:443/test
>
> Hello,
>
> I guess that the URL redirector could do that.
> http://www.squid-cache.org/Doc/config/url_rewrite_program/
>
> Note that it's extremely unreliable, because IP=>hostname mapping is a
> wild
> guess, because after client asks for IP address, you don't really know
> what
> hostname they want.
> So my recommendation is: don't do that.
>
> Note that SOCKS5 protocol supports DNS resolution at server level
>
>On Tuesday, February 4, 2025 at 06:17, Foxy Lady wrote:
> >
> >> Hi again.
> >> I finally found a tool (great tool, "GO Simple Tunnel") which can
> serves both HTTP(S)/SOCKS5 Proxy with dns resolution, so chaining it in the
> middle of Squid, Squid receives the destination in format of "domain.fqdn"
> and not ip "x.x.x.x",
> >>
> >> TCP_TUNNEL/200 4118 CONNECT api.myip.com:443 username HIER_DIRECT/
> 104.26.9.59 - "Go-http-client/1.1" [User-Agent:
> Go-http-client/1.1\r\nProxy-Authorization: Basic
> bWFyY286dHIwdHQwbGE=\r\nProxy-Connection: keep-alive\r\nHost:
> api.myip.com:443\r\n] [HTTP/1.1 200 Connection established\r\n\r\n]
> >>
> >> 👍👍👍
> >>
> >>
> >> Sent with Proton Mail secure email.
> >>
> >>
> >> On Monday, February 3, 2025 at 22:22, Foxy Lady foxy_lady_1...@proton.me wrote:
> >>
> >> > Sorry, resend the post in txt and correct some parts.
> >> > The question is: can i force SQUID to do a reverse dns lookup and
> maintain the Host Header inside (where's a ptr record is found), also if i
> can't find a ptr record? SOCKS5 works with 1st level tcp, so send ip
> addresses, i need some tool, or Squid workaround, which can force a reverse
> dns. I know, it's quite impossible if a ptr record is not found in dnses,
> but... i try..
> >> >
> >> > --
> >> >
> >> > Hi all.
> >> > As in subject.
> >> > SQUID server has its own dns resolver.
> >> >
> >> > Can SQUID change the destination address from ip to hostname?
> >> >
> >> > CLIENT > SQUID > DESTINATION
> >> >
> >> > 192.168.178.2 TCP_TUNNEL/200 4120 CONNECT api.myip.com:443 -
> HIER_DIRECT/104.26.8.59
> >> >
> >> > CLIENT > SOCKS5 PROXY > SQUID > DESTINATION
> >> >
> >> > 192.168.178.50 TCP_TUNNEL/200 4126 CONNECT 104.26.9.59:443 -
> HIER_DIRECT/104.26.9.59
> >> >
> >> > I would need,
> >> >
> >> > CLIENT > SOCKS5 PROXY > SQUID > DESTINATION
> >> >
> >> > 192.168.178.50 TCP_TUNNEL/200 4126 CONNECT api.myip.com:443 -
> HIER_DIRECT/104.26.9.59
>
> --
> Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> The early bird may get the worm, but the second mouse gets the cheese.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 2FA with Google Authenticator and squid login

2025-02-02 Thread NgTech LTD
What would make a 2FA in squid's case?

Thanks,
Eliezer

On Sun, Feb 2, 2025 at 13:22, Amos Jeffries wrote:

> On 2/02/25 07:43, ngtech1ltd wrote:
> > Hey,
> >
> > I was wondering if anyone have implemented any 2FA with squid.
> >
> > IE a simple forward proxy that implements an external ACL helper that
>
> Ah, that would not be "authentication".
>
>
> 2FA is done through Squid auth_param and authentication helpers same as
> "normal" (1FA) authentication. It is just a slightly different bunch of
> steps the auth system performs in the background outside of Squid.
>
>
> Cheers
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Kids control and time limit function

2025-03-16 Thread NgTech LTD
I was wondering if there is a ready-to-use solution with a web UI for kid
time limits.
I am using MikroTik's kid-control, which is very nice, and I was wondering
if anyone has implemented a similar function for squid with an external
ACL helper.
The src options are by:
* username
* src ip address
* src mac address

There should be a schedule in a DB per user ID.
The external ACL helper should cache results for about 30 seconds, and the
table should be keyed by hour and day of the week.
This way it would be pretty simple to build a web UI to manage the schedule.
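A minimal sketch of such a helper, with the schedule table hard-coded in place of the DB (the user name and slots are made up; the squid side would wire it in via external_acl_type with %LOGIN):

```python
#!/usr/bin/env python3
# Sketch of an external_acl_type helper enforcing a per-user weekly
# schedule. Squid is assumed to pass %LOGIN, one token per line; in a
# real setup the ALLOWED table would be read from SQLite/MySQL.
import sys
from datetime import datetime

# ALLOWED[user] = set of (weekday, hour) slots; weekday 0 = Monday
ALLOWED = {
    "kid1": {(d, h) for d in range(7) for h in range(16, 21)},  # 16:00-20:59 daily
}

def is_allowed(user, now):
    return (now.weekday(), now.hour) in ALLOWED.get(user, set())

def main():
    for line in sys.stdin:  # one lookup key per line from squid
        user = line.strip()
        sys.stdout.write("OK\n" if is_allowed(user, datetime.now()) else "ERR\n")
        sys.stdout.flush()  # squid expects one unbuffered reply per line

if __name__ == "__main__" and not sys.stdin.isatty():
    main()  # run the helper loop only when stdin is a pipe (as under squid)
```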

I would like to get some input on things that kid control might be good at
doing.

Thanks,

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] windows updates

2025-03-16 Thread NgTech LTD
Hey,

Did you manage to find a solution for your use case?
Let me know if you need assistance with this issue.

Eliezer

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


On Tue, Mar 4, 2025 at 1:57 AM Doug Tucker 
wrote:

> I have read through everything I can find on this subject but still cannot
> seem to get around the issue of windows updates not working through the
> squid transparent proxy.  No matter what I try I continue to see this in
> the cache log and windows update will not connect.
>
> 2025/03/03 23:26:55 kid5| Error negotiating SSL on FD 25:
> error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify
> failed (1/-1/0)
>
> I tried adding the info from the following doc to no avail.
>
> https://wiki.squid-cache.org/SquidFaq/WindowsUpdate
>
>
> The relevant parts of my squid.conf:
>
> #Handling HTTPS requests
> https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
> acl SSL_port port 443
> http_access allow SSL_port
> acl allowed_https_sites ssl::server_name "/etc/squid/allowed-sites.txt"
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
> ssl_bump peek step1 all
> ssl_bump peek step2 allowed_https_sites
> ssl_bump splice step3 allowed_https_sites
> ssl_bump terminate step2 all
>
> #windows update
> acl DiscoverSNIHost at_step SslBump1
> acl NoSSLIntercept ssl::server_name_regex -i "/etc/squid/url.nobump"
> ssl_bump splice NoSSLIntercept
> ssl_bump peek DiscoverSNIHost
> ssl_bump bump all
>
> I ran tcpdump and added every url i could find to the allowed-sites.txt
> and added the 2 sites recommended to the url.nobump.  If anyone has gotten
> this to work any help would be appreciated.
>
>
>
>
>
>
> *Doug Tucker*
> Sr. Director of Networking and Linux Operations
>
> *o:* 817.975.5832
> *e: *doug.tuc...@navigaglobal.com
>
>
> Newscycle Solutions is now Naviga. Learn more.
>
>
> CONFIDENTIALITY NOTICE: The contents of this email message and any
> attachments are intended solely for the addressee(s) and may contain
> confidential and/or privileged information and may be legally protected
> from disclosure. If you are not the intended recipient of this message or
> their agent, or if this message has been addressed to you in error, please
> immediately alert the sender by reply email and then delete this message
> and any attachments. If you are not the intended recipient, you are hereby
> notified that any use, dissemination, copying, or storage of this message
> or its attachments is strictly prohibite
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] windows updates

2025-03-16 Thread NgTech LTD
I have not tried it with SSL-Bump, but with a regular proxy that blocks
everything except the following list of dstdomain entries:
.delivery.mp.microsoft.com (http)
.dsp.mp.microsoft.com (http)
.download.windowsupdate.com (http)
static.edge.microsoftapp.net (HTTPS-connect)

Windows updates work just fine.
And as I wrote before, there are two channels: Secure for communication and
plain HTTP for data transfer.
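A sketch of how that list could be expressed in squid.conf (rule order relative to the rest of your config matters, and the CONNECT acl is normally already defined in the default config):

```
# Plain-HTTP data-transfer hosts
acl wu_http dstdomain .delivery.mp.microsoft.com .dsp.mp.microsoft.com .download.windowsupdate.com
# HTTPS (CONNECT) communication channel
acl wu_https dstdomain static.edge.microsoftapp.net
acl CONNECT method CONNECT

http_access allow CONNECT wu_https
http_access allow !CONNECT wu_http
http_access deny all
```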

If you need more help let me know.


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


On Sun, Mar 16, 2025 at 4:34 PM Doug Tucker 
wrote:

> No, no one responded.
>
> Doug Tucker
> Sr. Director of Networking and Linux Operations
> doug.tuc...@navigaglobal.com
> ------
> *From:* NgTech LTD 
> *Sent:* Sunday, March 16, 2025 2:38:35 AM
> *To:* Doug Tucker 
> *Cc:* squid-users@lists.squid-cache.org
> *Subject:* Re: [squid-users] windows updates
>
> Hey,
>
> Did you manage to find a solution for your use case?
> Let me know if you need assistance with this issue.
>
> Eliezer
> 
> Eliezer Croitoru
> Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1...@gmail.com
>
>
> On Tue, Mar 4, 2025 at 1:57 AM Doug Tucker 
> wrote:
>
> I have read through everything I can find on this subject but still cannot
> seem to get around the issue of windows updates not working through the
> squid transparent proxy.  No matter what I try I continue to see this in
> the cache log and windows update will not connect.
>
> 2025/03/03 23:26:55 kid5| Error negotiating SSL on FD 25:
> error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify
> failed (1/-1/0)
>
> I tried adding the info from the following doc to no avail.
>
> https://wiki.squid-cache.org/SquidFaq/WindowsUpdate
>
>
> The relevant parts of my squid.conf:
>
> #Handling HTTPS requests
> https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
> acl SSL_port port 443
> http_access allow SSL_port
> acl allowed_https_sites ssl::server_name "/etc/squid/allowed-sites.txt"
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
> ssl_bump peek step1 all
> ssl_bump peek step2 allowed_https_sites
> ssl_bump splice step3 allowed_https_sites
> ssl_bump terminate step2 all
>
> #windows update
> acl DiscoverSNIHost at_step SslBump1
> acl NoSSLIntercept ssl::server_name_regex -i "/etc/squid/url.nobump"
> ssl_bump splice NoSSLIntercept
> ssl_bump peek DiscoverSNIHost
> ssl_bump bump all
>
> I ran tcpdump and added every url i could find to the allowed-sites.txt
> and added the 2 sites recommended to the url.nobump.  If anyone has gotten
> this to work any help would be appreciated.
>
>
>
>
>
>
> *Doug Tucker*
> Sr. Director of Networking and Linux Operations
>
> *o:* 817.975.5832
> *e: *doug.tuc...@navigaglobal.com
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Kids control and time limit function

2025-03-16 Thread NgTech LTD
There is an issue with the squid config file: there is a limit to how many
times I can trigger a reload.
I would like to configure access by the minute with a DB or some external
config file.

For now I am trying a MySQL DB, and the code works fine with both MySQL and
SQLite.
I have a web UI, and X's Grok 3 helps me to implement what I was thinking
about.

So the filtering option would be either a username, a MAC address, a src IP,
or some tag that might be based on authentication via a RADIUS server on a
PPP connection.

I am unsure whether ISPs would use squid and collect user-related metadata
such as authentication.
If a RADIUS PPP login registered the username and the IP, and on logout
unregistered the username from the IP, it would be pretty nice and would
allow such a web UI to manage access at the ISP level.




Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

On Sun, Mar 16, 2025 at 17:48, Jonathan Lee  wrote:

> This would block everything during a time frame
>
> acl block_hours time 00:30-05:00
> ssl_bump terminate all block_hours
> http_access deny all block_hours
>
> Squid’s time directive is what you need.
>
> Sent from my iPhone
>
> On Mar 16, 2025, at 01:52, NgTech LTD  wrote:
>
> 
> I was wondering if there is a ready to use solution with web-ui for kid
> time limit.
> I am using mikrotik kid-control which is very nice and I was wondering if
> anyone have implemented a similar function for squid with an external-acl
> helper.
> The src options are by:
> * username
> * src ip address
> * src mac address
>
> There should be a schedule in a DB per user ID.
> The external ACL helper should cache for about 30 seconds and the table
> should be by an hour and day of the week.
> This way it would be pretty simple to build a web ui to manage the
> schedule.
>
> I would like to get some input on things that kid control might be good
> doing.
>
> Thanks,
> 
> Eliezer Croitoru
> Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1...@gmail.com
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] windows updates

2025-03-16 Thread NgTech LTD
I will try to look at it later on.
From what I remember, Windows updates use both HTTP and HTTPS.
The communication channel was encrypted, but the transfer channel was plain
HTTP.




Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

On Sun, Mar 16, 2025 at 16:34, Doug Tucker  wrote:

> No, no one responded.
>
> Doug Tucker
> Sr. Director of Networking and Linux Operations
> doug.tuc...@navigaglobal.com
> ----------
> *From:* NgTech LTD 
> *Sent:* Sunday, March 16, 2025 2:38:35 AM
> *To:* Doug Tucker 
> *Cc:* squid-users@lists.squid-cache.org
> *Subject:* Re: [squid-users] windows updates
>
> Hey,
>
> Did you manage to find a solution for your use case?
> Let me know if you need assistance with this issue.
>
> Eliezer
> 
> Eliezer Croitoru
> Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1...@gmail.com
>
>
> On Tue, Mar 4, 2025 at 1:57 AM Doug Tucker 
> wrote:
>
> I have read through everything I can find on this subject but still cannot
> seem to get around the issue of windows updates not working through the
> squid transparent proxy.  No matter what I try I continue to see this in
> the cache log and windows update will not connect.
>
> 2025/03/03 23:26:55 kid5| Error negotiating SSL on FD 25:
> error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify
> failed (1/-1/0)
>
> I tried adding the info from the following doc to no avail.
>
> https://wiki.squid-cache.org/SquidFaq/WindowsUpdate
>
>
> The relevant parts of my squid.conf:
>
> #Handling HTTPS requests
> https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
> acl SSL_port port 443
> http_access allow SSL_port
> acl allowed_https_sites ssl::server_name "/etc/squid/allowed-sites.txt"
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
> ssl_bump peek step1 all
> ssl_bump peek step2 allowed_https_sites
> ssl_bump splice step3 allowed_https_sites
> ssl_bump terminate step2 all
>
> #windows update
> acl DiscoverSNIHost at_step SslBump1
> acl NoSSLIntercept ssl::server_name_regex -i "/etc/squid/url.nobump"
> ssl_bump splice NoSSLIntercept
> ssl_bump peek DiscoverSNIHost
> ssl_bump bump all
>
> I ran tcpdump and added every url i could find to the allowed-sites.txt
> and added the 2 sites recommended to the url.nobump.  If anyone has gotten
> this to work any help would be appreciated.
>
>
>
>
>
>
> *Doug Tucker*
> Sr. Director of Networking and Linux Operations
>
> *o:* 817.975.5832
> *e: *doug.tuc...@navigaglobal.com
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Kids control and time limit function

2025-03-17 Thread NgTech LTD
I know this ACL.
However, managing the ACLs statically is not the same as doing it
dynamically.
Managing squid.conf, or any squid.conf-formatted file, is kind of an issue
when using a web UI.
It's much easier for me to handle an SQLite/MySQL table for that.
If I set the times to static hourly slots, i.e. 0-1, 1-2, 2-3, coupled with
the day, it's pretty easy to handle.
And if the web UI is simple enough, you can manage the times on the fly
with a margin of error of a minute.
It's kids, not a business, so it's enough for my use case.
I just need to allow them to surf or block them as needed, and also in a
planned manner.
There is another factor which I have considered: a timer-based external_acl
helper.
A helper that checks the current state and then allows or denies based on
the timer, and in time the scheduler will send a command to the timer.

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


On Mon, Mar 17, 2025 at 3:27 AM Jonathan Lee 
wrote:

>
>   acl aclname time [day-abbrevs] [h1:m1-h2:m2]
> # [fast]
> #  day-abbrevs:
> # S - Sunday
> # M - Monday
> # T - Tuesday
> # W - Wednesday
> # H - Thursday
> # F - Friday
> # A - Saturday
> #  h1:m1 must be less than h2:m2
>
>
>
> You can add any ACL with time based needs…
>
> Is this what you're looking for?
>
> On Mar 16, 2025, at 08:41, Jonathan Lee  wrote:
>
> This would block everything during a time frame
>
> acl block_hours time 00:30-05:00
> ssl_bump terminate all block_hours
> http_access deny all block_hours
>
> Squid’s time directive is what you need.
>
> Sent from my iPhone
>
> On Mar 16, 2025, at 01:52, NgTech LTD  wrote:
>
> 
> I was wondering if there is a ready to use solution with web-ui for kid
> time limit.
> I am using mikrotik kid-control which is very nice and I was wondering if
> anyone have implemented a similar function for squid with an external-acl
> helper.
> The src options are by:
> * username
> * src ip address
> * src mac address
>
> There should be a schedule in a DB per user ID.
> The external ACL helper should cache for about 30 seconds and the table
> should be by an hour and day of the week.
> This way it would be pretty simple to build a web ui to manage the
> schedule.
>
> I would like to get some input on things that kid control might be good
> doing.
>
> Thanks,
> 
> Eliezer Croitoru
> Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1...@gmail.com
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users