[squid-users] Squid as an education tool

2024-02-08 Thread ngtech1ltd
Hey Everybody,

I am just releasing the latest 6.7 RPMs and binaries while running a couple of tests,
and I was wondering whether the following has already been done.
Looking at proxies, in most cases they are used as policy enforcers rather than
education tools.
I believe education should be one of the top priorities compared to enforcing
policies.
The nature of a policy depends on the environment and the risks, but ultimately,
understanding the meaning of the policy contributes a lot to the cooperation of the
user or employee.

I have yet to see a solution like the following:
Each user has a profile, and when a policy block is received, the user is prompted
with an option to temporarily allow the specific site or domain.
Also, I have not seen an implementation which allows the user to disable or lower
the policy strictness for a short period of time.

I am looking for such implementations, if they already exist, to run education
sessions with teenagers.

Thanks,
Eliezer  

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid as an education tool

2024-02-10 Thread ngtech1ltd
Hey Francesco and others,

First, thanks for the direction.

I was thinking about using generic, readily available tools as much as possible.
Also, for education purposes it does not need to be an intercept proxy (with or
without bump), which simplifies some aspects of the setup.

I will try to write the general specs of the project from my point of view.
Since the goal is education rather than policy enforcement, we can start by defining
the age of the kids as low as 5-6 or even lower.
Due to the age of the kids there is a baseline policy that must be enforced, i.e. a
couple of standard, well-known categories.
With this in mind, we need a DB setup that will host these categories and will be
performant enough for high load, e.g. schools.
Since the law in most if not all countries prohibits nudity to a degree and also
prohibits the depiction of reproductive activities in both animals and humans, it is
pretty clear that any exception to these should only be possible for professional
staff that is allowed by law to open these doors in very special cases (which I know
exist).
There are also other activities and categories which are known to be harmful for
specific ages and which should be blocked by policy.
We can divide the filtering policy into levels: domains, URLs, and content inside a
page or a dynamic app which is either embedded or sourced in another way.
Domains and URLs are the best-known levels and are commonly filtered; many tools are
available to enforce and block them.
There are issues with systems and sites which are not based on static content, and
with others whose content is streamed inside WebSockets or delivered in another way,
such as content that is chunked over multiple URLs or customized requests and
responses.

In this specific project I want to address only the basics: domains and maybe URLs.
Due to the above, and the fact that the Internet is far ahead of 1985 or 2000, the
depth of the education session is restricted; the proxy will be used only to
demonstrate that there are bad actors on the Internet.
There are also other categories that many would probably like to add to the list,
such as malware sites.

I believe the right way is to use a forward proxy which uses usernames to
authenticate and identify the user.
This makes the whole setup a bit simpler to build, and it relies on the kids or
teenagers actively participating in the setup and agreeing to the terms of use based
on their trust in the teachers and parents.
We also need to show some trust in the kids to allow them to be open in the session.

From my point of view the architecture should be something like this:
* Proxy
* DB (SQL or another)
* Users Web portal (app)
* Admins Web Portal (app)
* Blockpages (static content with a touch of JS)
* A set of external helpers (auth, dstdomain matcher, time limit, dns rbl 
checker)
* Audit system

The assumption is that only authenticated users can use the proxy, i.e. no username,
no Internet, even for Windows and AV updates.
We also assume that the admins of the proxy do not need to override the basic
policies because they have access to unrestricted Internet.
Authentication can be done using the existing tools with a MySQL DB which can be
integrated with the web portal (not AD or LDAP).
The DB for the dstdomain/URL blacklists should be fast enough to allow near-real-time
updates, with a TTL in the range of 5 to 10 seconds.
Every domain which should be blocked by the policy is a "must bump" one, while a
domain allowed by the policy gets "no bump".
There are a couple of layers of blacklists and whitelists (first match from left to
right):
  top-level (never allowed), campus-wide customized blacklist (for testing),
campus-wide customized whitelist (for testing), user customized blacklist, user
customized whitelist, campus-wide blacklist, campus-wide whitelist.
A sketch of how this could be wired in squid.conf is shown below.
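
As an illustration only, a minimal squid.conf sketch of how the authentication and
list layers could be wired together. The basic_db_auth helper ships with Squid
(exact flags may differ per version), while the list-lookup helper, its path and the
DB layout are my own placeholders, not an existing tool:

# Basic authentication against a MySQL users table (flags are indicative only)
auth_param basic program /usr/lib64/squid/basic_db_auth --user squid --password secret --table users
auth_param basic realm Education Proxy
acl authed proxy_auth REQUIRED

# Hypothetical helper that checks the layered lists in the DB, first match wins,
# and answers OK (blocked) or ERR (not blocked) for the given user and destination
external_acl_type list_lookup ttl=10 children-max=10 %LOGIN %DST /usr/local/bin/list-lookup.rb
acl blocked_by_lists external list_lookup

http_access deny !authed
http_access deny blocked_by_lists
http_access allow authed localnet
http_access deny all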

The user can manage his own lists via the web portal, but not the top-level and
campus lists.
There is also a section in the web portal which allows the user to contact the
content administrators about any non-user-customized lists, such as the top-level
and campus-wide ones.
The expectation from the content administrators is to really understand the users
interacting with them and not just enforce the policy.
The content admins are also required to have above-average technical knowledge about
how the Internet works, on both the IP level and the application level, for example
how TLS and firewall piercing work.
The expectation is that all changes to any of the lists will be logged in the audit
log.
Also, any "action" in the web portal will be logged in the audit log.
The audit is required by law to prevent bad actions from being done in an
unsupervised manner.
Due to this, the proxy structure and config are fixed and cannot be changed by
anyone, not even the sysadmins.
To allow the system to be effective the only option to access the DB is using 
an au

[squid-users] Basic Squid-Cache docker containers

2024-02-11 Thread ngtech1ltd
Hey Everyone,

As part of the project I am currently working on, I needed a basic squid-cache
container.
I looked for one on Docker Hub and wasn't able to find a container image with the
newest version of Squid.

Due to this I have created 3 container images:
AlmaLinux 8 based
Debian 12 based
Ubuntu 22.04 based

The images can be found at:
https://hub.docker.com/u/elicro

and are:
elicro/almasquid:6.7
or 
elicro/almasquid:latest

elicro/debiansquid:6.7
or
elicro/debiansquid:latest

elicro/ubuntusquid:6.7
or
elicro/ubuntusquid:latest
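
Pulling and running one of them as a plain forward proxy should look roughly like
this (a sketch; I am assuming the image exposes the default Squid port 3128):

docker pull elicro/debiansquid:6.7
docker run -d --name squid -p 3128:3128 elicro/debiansquid:6.7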

I will be happy to hear any response regarding these containers.

Yours,
Eliezer

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Anyone built Squid for multiarch, ie arm and arm64?

2024-02-12 Thread ngtech1ltd
I have a couple of RouterOS devices which support containers with the following CPU
arches:
• x86_64
• arm64
• armv6
• armv7

I was wondering whether someone has bothered compiling Squid containers for these
arches.

I know that there are packages for Debian and Ubuntu, but these are Squid 5.x rather
than 6.x.
I am almost sure that publishing containers for these arches would benefit someone,
but I am not certain; a multi-arch build sketch is below.
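
For reference, building and pushing such a multi-arch image could be done with
Docker buildx, roughly like this (a sketch only; the tag and the Dockerfile location
are placeholders):

docker buildx create --name multiarch --use
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 \
  -t elicro/debiansquid:6.7 --push .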

Thanks,
Eliezer


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid Docker container

2024-02-28 Thread ngtech1ltd
I started working on the docker containers of squid-cache these days.
The first one is at:
https://hub.docker.com/r/elicro/debian12squid/tags

but it's not ready to use as is yet; for now it only has the build steps with the
binaries in place.
I need to add the supervisord daemon and maybe a couple of other things.
These containers will not be plain Squid as is, but will include all the tools I use
daily, like Ruby and a couple of other packages.

The distros I am targeting are:
* debian
* ubuntu
* AlmaLinux
* CentOS
* Oracle Linux
* Alpine linux
And maybe openSUSE, but I am not really sure about that yet.

The architectures I plan to build these containers for are:
* x86_64
* arm/v6
* arm/v7
* arm64

I chose these because they are the most commonly used architectures on container
platforms such as Docker/Podman/others.
My original plan was to build a container with just the binaries, but it is so
simple to build and publish multi-stage images with Docker that I don't mind letting
the CPU and RAM work a couple more minutes so that a couple of other platforms will
benefit from my time. A rough sketch of the multi-stage idea is below.
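
A minimal sketch of the multi-stage idea, assuming a Debian 12 base; the build
details, paths and the supervisord setup are placeholders and not the actual
Dockerfile I use:

# --- build stage ---
FROM debian:12 AS build
RUN apt-get update && apt-get install -y build-essential libssl-dev wget
# download, configure and compile Squid here (details omitted)

# --- runtime stage ---
FROM debian:12
COPY --from=build /usr/local/squid /usr/local/squid
RUN apt-get update && apt-get install -y supervisor ruby && rm -rf /var/lib/apt/lists/*
EXPOSE 3128
CMD ["/usr/local/squid/sbin/squid", "-N"]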

Any recommendations and comments are more than welcome.

Eliezer

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-dev] Using AWS and a SQUID server to create Residential Proxies

2024-03-12 Thread ngtech1ltd
Hey Edwin,

The best place to start is squid-users; please do not send emails to all the
available lists.

Squid-Cache is an open source project which you can use on any Linux OS (and a
couple of others), and the project does not publish any official AWS products in any
marketplace.
There are IT service providers which offer their services based on Squid-Cache.
To use Squid as a residential proxy you would need more than just AWS knowledge.
You are more than welcome to ask and continue this thread here in the public list,
but just so you know, you will probably need more knowledge.

Eliezer

From: squid-dev  On Behalf Of trance 
eastaf
Sent: Friday, March 8, 2024 2:28 AM
To: squid-users@lists.squid-cache.org; i...@squid-cache.org; 
squid-...@lists.squid-cache.org
Subject: [squid-dev] Using AWS and a SQUID server to create Residential Proxies

Hello Team Squid,

I saw your proxy servers on Aws Marketplace.

How do i use them to create many rotating residential proxies on AWS, can you 
guide me step by step, please?


Thank you, 
Edwin



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Recommended squid settings when using IPS-based domain blocking

2024-03-13 Thread ngtech1ltd
Hey Jason,

I can try to build Squid 6.8 for RHEL 9; would that help you test it as a solution?

Eliezer

From: squid-users  On Behalf Of 
Jason Marshall
Sent: Wednesday, March 6, 2024 4:49 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Recommended squid settings when using IPS-based domain 
blocking

Good morning,

We have been using squid (version squid-5.5-6.el9_3.5) under RHEL9 as a simple 
pass-through proxy without issue for the past month or so. Recently our 
security team implemented an IPS product that intercepts domain names known to 
be associated with malware and ransomware command and control. Once this was in 
place, we started having issues with the behavior of squid.

Through some troubleshooting, it appears that what is happening is that when a
user's machine makes a request through squid for one of these bad domains, the
request is dropped by the IPS, squid waits for the DNS timeout, and then all
requests made to squid after that result in NONE_NONE/500 errors, and it never seems
to recover until we do a restart or reload of the service.

Initially the dns_timeout was set for 30 seconds. I reduced this, thinking that 
perhaps requests were building up or something along those lines. I set it to 5 
seconds, but that just got us to a failure state faster.

I also found the negative_dns_ttl setting and thought it might be having an 
effect, but setting this to 0 seconds resulted in no change to the behavior.

Are there any configuration tips that anyone can provide that might work better 
with dropped/intercepted DNS requests? My current configuration is included 
here:
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16         # RFC 1918 local private network (LAN)

acl localnet src fc00::/7               # RFC 4193 local private network range
acl localnet src fe80::/10              # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 443         # https
acl Safe_ports port 9191        # papercut
http_access deny !Safe_ports
http_access allow localhost manager
http_access deny manager

http_access allow localnet
http_access allow localhost
http_access deny all
http_port 0.0.0.0:3128
http_port 0.0.0.0:3129
cache deny all
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
debug_options rotate=1 ALL,2
negative_dns_ttl 0 seconds
dns_timeout 5 seconds

Thank you for any help that you can provide.

Jason Marshall

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid stops responding after 12 browser tabs opened

2024-03-13 Thread ngtech1ltd
Hey,

I should have built the newest version of Squid for Debian 11, but for some reason I
didn't build and publish it.
I am using tar.gz packages and not .deb ones.
I will try to build one later on.

Eliezer

-Original Message-
From: squid-users  On Behalf Of 
nuit...@earthlink.net
Sent: Tuesday, March 5, 2024 11:54 AM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid stops responding after 12 browser tabs opened

Squid 4.17 compiled on Debian 11

Squid works, but, after 12 to 17 browser tabs are opened to any web site,
subsequent tabs fail to load web site content. At the same time, telnet:80
through Squid also fails. Yet, network traffic that bypasses Squid
successfully communicates with the target web site.

Ultimately, after about two minutes, unresponsive browser tabs usually
finish loading web site content.

Adjusting cache_mem makes no difference.

This is remarked:
#cache_dir ufs /var/spool/squid 100 16 256
Tried to enable it, but squid -z fails with permission issues, so keeping
cache_dir enabled breaks squid entirely.

Any thoughts, please?

With apologies if this violates etiquette (hey, I did
--enable-http-violations!), this Squid server is taking on greater
importance, and while it's infinitely fascinating, I recognize my limits, so
I'm open to experienced help managing and maintaining it, and probably
building a newer version. :-)

Thanks!


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Manipulating request headers

2024-03-13 Thread ngtech1ltd
Hey Ben,

There is another option, which is to use an ICAP server to modify the headers and
strip the br part if it exists.
It depends on the load on the server, but you can edit only the headers and skip the
preview, which removes some unneeded overhead.

Take a peek at the example:
https://github.com/elico/bgu-icap-example
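
Wiring such a service into squid.conf looks roughly like this (a sketch; the service
name and the ICAP URL are placeholders for wherever the example server listens):

icap_enable on
icap_service headers_fix reqmod_precache bypass=off icap://127.0.0.1:1344/request
adaptation_access headers_fix allow all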


Eliezer

From: squid-users  On Behalf Of Ben 
Goz
Sent: Monday, March 11, 2024 5:00 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Manipulating request headers

By the help of God.

Hi all,
I'm using squid with ssl-bump. I want to remove the br encoding from the
Accept-Encoding request header; currently I'm doing it using the following
configuration:
request_header_access Accept-Encoding deny all
request_header_add Accept-Encoding gzip,deflate

Is there a more gentle way of doing it?

Thanks,
Ben

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid stops responding after 12 browser tabs opened

2024-03-13 Thread ngtech1ltd
OK, so I have built 6.8 for Debian 11, but NIS support has been removed.

https://www.ngtech.co.il/repo/debian/11/x86_64/

https://www.ngtech.co.il/repo/debian/11/x86_64/squid-6.8-64-bin-stripped-only.tar

I have yet to publish an installation script for it, but there are a couple of
binaries and shared folders.
It should be a "drop-in" replacement for the Squid version currently installed on
Debian 11.
First back up your squid config, then untar the tarball into a specific folder (with
the -C option) and make sure you sync the files into the current locations.

Let me know if you need some help.
Eliezer

-Original Message-
From: squid-users  On Behalf Of 
nuit...@earthlink.net
Sent: Tuesday, March 5, 2024 11:54 AM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid stops responding after 12 browser tabs opened

Squid 4.17 compiled on Debian 11

Squid works, but, after 12 to 17 browser tabs are opened to any web site,
subsequent tabs fail to load web site content. At the same time, telnet:80
through Squid also fails. Yet, network traffic that bypasses Squid
successfully communicates with the target web site.

Ultimately, after about two minutes, unresponsive browser tabs usually
finish loading web site content.

Adjusting cache_mem makes no difference.

This is remarked:
#cache_dir ufs /var/spool/squid 100 16 256
Tried to enable it, but squid -z fails with permission issues, so keeping
cache_dir enabled breaks squid entirely.

Any thoughts, please?

With apologies if this violates etiquette (hey, I did
--enable-http-violations!), this Squid server is taking on greater
importance, and while it's infinitely fascinating, I recognize my limits, so
I'm open to experienced help managing and maintaining it, and probably
building a newer version. :-)

Thanks!


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Dynamic ACL with local auth

2024-05-06 Thread ngtech1ltd
Hey Albert,

The right way to do it is to use an external acl helper that will use some kind of
database for the settings; a sketch is shown below.
The other option is to use a reloadable ACLs file.
But you need to clarify the exact goal if you want more than basic advice.
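
As an illustration only (the helper path and its behaviour are assumptions, not an
existing tool), the external acl approach with htpasswd authentication would look
something like this in squid.conf, with a helper that receives the authenticated
username and destination domain and answers OK or ERR from a database:

auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/htpasswd
auth_param basic realm proxy
acl authed proxy_auth REQUIRED

# hypothetical helper: gets "user domain" per request, answers OK or ERR
external_acl_type user_acl ttl=60 %LOGIN %DST /usr/local/bin/user-acl-lookup
acl allowed_dst external user_acl

http_access allow authed allowed_dst
http_access deny all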

Eliezer

-Original Message-
From: squid-users  On Behalf Of 
Albert Shih
Sent: Monday, May 6, 2024 11:49 AM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Dynamic ACL with local auth

Hi everyone, 


I would like to know how (if it's possible) to create ACLs dynamically.

What I am trying to do is to have people authenticated (user1, user2, user3,
etc.) and then, for each user, create a set of ACLs. The problem is that I cannot
define the set of ACLs once and for all; it changes dynamically over time.

I can put the set of ACLs in anything: a static file, a MySQL DB, etc.

Performance is not an issue (not a lot of users), but I really would like not to
have to restart squid each time the ACL static file changes.

The authentication would be through htpasswd.

What would be the best way to do it?

Regards.


-- 
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
lun. 06 mai 2024 10:44:28 CEST
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Dynamic ACL with local auth

2024-05-08 Thread ngtech1ltd
Hey Albert,

In general, it is preferable to use an external ACL rather than reloading the squid
conf.
It will probably require using an external acl helper with the authenticated
username as a detail which is sent to the helper.
Let's take an example squid.conf for the "project" (say, for example.org).
On what ports does squid listen? 80 and 443?
Is it a reverse proxy, or a forward proxy which is defined in the client browser?

An "auto" reload of squid can be done using a couple of systemd triggers; a sketch
is shown below.
If that is enough for you, I can try to research how it can be done and we will go
on from there.
If you wish to choose the "dark" path of external_acl helper development, I will
also be happy to try and assist you in my spare time (which is not a lot these days).
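
For example, a sketch of the systemd-trigger idea (the unit names and the ACL file
path are assumptions): a .path unit watches the ACL file and a matching oneshot
.service runs "squid -k reconfigure" whenever it changes.

# /etc/systemd/system/squid-acl-reload.path
[Path]
PathModified=/etc/squid/dynamic.acl
[Install]
WantedBy=multi-user.target

# /etc/systemd/system/squid-acl-reload.service
[Service]
Type=oneshot
ExecStart=/usr/sbin/squid -k reconfigure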

Eliezer

-Original Message-
From: Albert Shih  
Sent: Wednesday, May 8, 2024 10:55 AM
To: ngtech1...@gmail.com
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Dynamic ACL with local auth

On 06/05/2024 at 12:21:10+0300, ngtech1...@gmail.com wrote:
Hi, 

> 
> The right way to do it is to use an external acl helper that will use some 
> kind of database for the settings.

Ok. I will check that. 

> The other option is to use a reloadable ACLs file.

But does this reload need a restart of the service?

> But you need to clarify exactly the goal if you want more then a basic advise.

Well, it's a pretty simple task. I need to build a squid server to allow/deny
people access to some data (websites) because those websites don't support
authentication.

But the "allow/deny" access rules are managed somewhere else, through
another application.

So the goal is to have some «thing» that is going to retrieve the «permissions»
of the user and apply the ACLs on squid.

It's not «ultra dynamic»; the modification of the permissions will occur from
time to time. So even a reload will do... if the reload doesn't need a
shutdown of squid.

Thanks. 

Regards

-- 
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
mer. 08 mai 2024 09:51:00 CEST

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Any ideas for a project and\or research with AI about squid-cache?

2024-06-09 Thread ngtech1ltd
Hey Everyone,

I was wondering if there are specific things which can be worked on with an AI as a
testing project, to challenge an AI.
I am looking for a set of projects which a beginner squid-cache admin can try to
implement to certify himself with real-world experience.

What are the most common use cases of squid-cache these days?
* Forward proxy
* Reverse proxy
* Public proxy services with authentication
* Caching
* Authentication proxy against a DB
* Authentication proxy against LDAP and/or AD
* Radius authentication
* Multi factor authentication
* Captive portal
* SSL SNI inspection
* Traffic classification (based on APPS list)
* Url Filtering
* Domain based Filtering
* Internet usage time limit (30 minutes or any other) based on login or actual
traffic.
* Outbound IP address selection
Etc

Please help me to fill the list.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://www.ngtech.co.il/


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Any ideas for a project and\or research with AI about squid-cache?

2024-06-09 Thread ngtech1ltd
Hey Jonathan,

First of all, thanks for the response.
I think that all squid-users know that AI has been around for a very long time.
However, since it's a tool of the current times, I want to be familiar with its
capabilities.
The AI tools which are published these days give specific responses to specific
requirements and needs, alongside the growth of IT in the world.

The Squid-Cache users list is a place where you can ask a question and a human with
emotions, sensitivity and knowledge tries to help.
This is one of the only places where I don't even remember someone responding with
"google it" or something similar.
The question I am asking myself is:
I and others will not be here at some point in the future, and we try to document
and leave things behind us for the future.
You could say that we are working here with UDP and not TCP, i.e. we are sending
what we have into the world with the hope that it will reach others, help them, and
make them happy and lively.

I believe that it's possible to learn things about Squid-Cache with the existing
tools in an interactive way.
There is a lot of documentation on Squid-Cache, but some of it is old and some is
just plain wrong.

I have some spare time here and there and I want to write a set of challenges for
proxy admins.
I am thinking about it as something like:
What might certify a Squid-Cache admin as capable of being a successful admin?

The first thing that most certifications do is make sure there is "knowledge", or
that the admin can implement specific use cases.
I believe that above the knowledge and technical capabilities there is a whole other
level which might be lost when some of us are no longer here.
I would be happy if the AI tools were able to grasp from the mailing list threads
something more than just the technical aspect of things.

Do you think a SoundBlaster 16 ISA card on a 386 or Pentium can convey that?

Squid indeed is a very complex piece of software!!

How can we attract some new Squid users to the list, or get them to try and complete
a couple of challenges?
Also, there are new versions of Linux distros around, and these are a great
playground for testing.

I will try to see if the AI can summarize the functionality of Squid-Cache (other
than the cache itself).
I wrote a couple of caching tools in the past 10 years and I think that the new AI
tools might be able to find a couple of things which I missed, and maybe offer
better solutions for a couple of things.

Maybe some external_acl helpers, or maybe converting existing tools to Rust or
Golang.
Maybe these tools will even be able to offer some ideas on how to fix specific
bugs.
Even if they will not write the whole code, the fact that what someone else wrote
somewhere on the internet reaches the prompt user means we might be able to
understand how much a single document can affect the end result.

Let's try to follow on this thread later on.

Thanks,
Eliezer

-Original Message-
From: Jonathan Lee  
Sent: Sunday, June 9, 2024 7:43 PM
To: ngtech1...@gmail.com
Cc: squid-users 
Subject: Re: [squid-users] Any ideas for a project and\or research with AI 
about squid-cache?

I hate to tell you this, but the AI that you know of has been around for many years.
Anyone remember the SoundBlaster 16 ISA card software Dr. Sbaitso?
All AI is just adapted, improved 1980s ideas. It's not new, it's been here for
years; still just if-else code with more data analytics.

Anyway, I use the proxy for checking URL requests and blocking them if needed,
inspecting HTTPS with antivirus software, caching content and having the ability to
scan it before it hits users and block it.
Web acceleration.
I primarily use it for inspection and security.
Squid could simply block out all requests to AI if you wanted; I have it set to
block some.
The CCPA in California provides legal avenues for user privacy; not many web
analytics companies honor do-not-track requests, so they can simply be blocked.

Squid is very complex software.

> On Jun 9, 2024, at 03:10, ngtech1...@gmail.com wrote:
> 
> Hey Everyone,
> 
> I was wondering if there are specific things which can be worked on with an 
> AI as a testing project to challenge an AI.
> I am looking for a set of projects which a beginner squid-cache admin can try 
> to implement to certify himself with real world experience.
> 
> What are the most common use cases of squid-cache these days?
> * Forward proxy
> * Reverse proxy
> * Public proxy services with authentication
> * Caching
> * Authentication proxy against a DB
> * Authentication proxy against LDAP and/or AD
> * Radius authentication
> * Multi factor authentication
> * Captive portal
> * SSL SNI inspection
> * Traffic classification (based on APPS list)
> * Url Filtering
> * Domain based Filtering
> * Internet Usage time limit (30 minutes or any other) based on login or 
> actual traffic.
> * Outband IP address selection
> Etc
> 
> Please help me to fill the list.
> 
> Thanks,
> Eliezer
> 
> 
> Eliezer Croitoru
> NgTech, Tech Support
> Mobile

Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-10 Thread ngtech1ltd
Hey Akash,
(Is this your first name?)
 
There are ways to test the config step by step with docker containers, but it
depends on the config size and complexity; see the sketch below.
Even if you cannot share the squid.conf, you can still summarize it to a degree.
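
For example, a minimal sketch that only parses an existing config with a newer Squid
inside a container, without serving any traffic (the image is one of mine; I am
assuming squid is on the image's PATH and that the config is mounted at this path):

docker run --rm -v /etc/squid/squid.conf:/etc/squid/squid.conf:ro \
  elicro/debiansquid:6.7 squid -k parse -f /etc/squid/squid.conf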
There are 2 types of proxy services which can be implemented by Squid:
*   Forward
*   Reverse
 
With these there are tons of bricks which can be piled up to achieve functionality.
@Alex and @Amos, can you try to help me compile a menu of functionalities that
Squid-Cache can be used for? Ie as a forward proxy, etc.
 
I believe that the project can offer a set of generic recipes for use cases that
every support case will be able to refer to.
Currently there are many questions which have been answered, but not all of them are
documented well enough to cover the use cases.
I can take this 4.15-to-6.9 upgrade as an example project and document it for the
simplicity of others.
 
I will try to take a peek at the release notes from 4.15 till 6.9 to understand 
if there are specific things to be aware of for my specific use case.
 
The first thing I think should be mentioned is the list of deprecated helpers.
 
Thanks,
Eliezer
 
From: squid-users  On Behalf Of 
Akash Karki (CONT)
Sent: Wednesday, June 5, 2024 5:31 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Upgrade path from squid 4.15 to 6.x
 
Hi Team,
 
We are running squid ver 4.15 and want to update to n-1 of the latest ver (I believe
6.9 is the latest ver).
 
I want to understand if we can go straight from 4.15 to 6.x (n-1 of the latest
version) without any intermediary steps, or do we have to update to an intermediary
version first and then move to the n-1 version of 6.9?
 
Kindly send us the detailed guidance!
 
On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) akash.ka...@capitalone.com wrote:
Hi Team,
 
We are running on squid ver 4.15 and want to update to n-1 of the latest ver(I 
believe 6.9 is the latest ver).
 
I want to understand if we can go straight from 4.15 to 6.x (n-1 of latest 
version) without any intermediary steps or do we have to  update to 
intermediary first and then move to the n-1 version of 6.9?
 
Kindly send us the detailed guidance!

 
-- 
Thanks & Regards,
Akash Karki
 
 
Save Nature to Save yourself :) 


 
-- 
Thanks & Regards,
Akash Karki
UK Hawkeye Team
Slack : #uk-monitoring
Confluence : UK Hawkeye
 
Save Nature to Save yourself :)


 

The information contained in this e-mail may be confidential and/or proprietary 
to Capital One and/or its affiliates and may only be used solely in performance 
of work or services for Capital One. The information transmitted herewith is 
intended only for use by the individual or entity to which it is addressed. If 
the reader of this message is not the intended recipient, you are hereby 
notified that any review, retransmission, dissemination, distribution, copying 
or other use of, or taking of any action in reliance upon this information is 
strictly prohibited. If you have received this communication in error, please 
contact the sender and delete the material from your computer.




 
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Version squid-5.7-150400.3.6.1.x86_64 -- Squid is crashing continusly

2024-07-18 Thread ngtech1ltd
Hey Anitha,

There are a couple of missing details.
Is it a brand new proxy? What OS are you using? Which distro?
It looks like a very simple forward proxy setup.
When is the proxy crashing? At startup? After a while?

Thanks,
Eliezer

From: squid-users  On Behalf Of
M, Anitha (CSS)
Sent: Thursday, July 18, 2024 7:24 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid Version squid-5.7-150400.3.6.1.x86_64 -- Squid
is crashing continusly

Hi Team, 

We are seeing that squid is continuously crashing with signal 6. Any known issues
with this version? Please help. Attached is the squid.conf file we are using.

regards,
Anitha

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-19 Thread ngtech1ltd
Hey Alex,

Sorry for the confusion,
and we are back on squid-users.

I have tested this issue with Windows 11 as a client in both intercept and TPROXY
modes.
I can try to test it using another client such as Linux or Windows 10, but I assume
the issue is the same.
I sniffed some packets on the proxy to see what might be wrong, and I see that there
is an SNI, so I am not sure how to look at the issue.
I was thinking that maybe it's something with the OpenSSL version (3.x.x) on Fedora,
but then I installed both 5.9 and 6.10 on AlmaLinux 8 and the result is the same.

I will describe my setup, which might give some background.
I have a very big lab...
In front of the Internet connection there are a couple of NGFW devices and RouterOS.
Mikrotik RouterOS is the edge and all the others are used with PBR accordingly.
The proxy sits in a different segment of the network, and I have tried a couple of
methods to intercept the traffic with squid.
The only kind which works with Squid and the existing equipment, and does not cause
some weird loop, is an Ethernet-level tunnel, i.e. not:
* GRE
* IPIP
and a couple of others.

The only ones which work fine are:
* EoIP (Mikrotik, which is based on GRE)
* VxLAN

There are two methods to intercept the traffic (a rough sketch of both is below):
* PBR+DNAT on the squid box
* PBR+TPROXY on the squid box
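
A rough sketch of both interception methods on the Squid box, assuming the ssl-bump
ports from my test config (33128 for intercept, 23128 for tproxy); the ingress
interface name, marks and table numbers are arbitrary:

# PBR+DNAT (REDIRECT): terminates the client connection, outgoing connection uses the proxy IP
iptables -t nat -A PREROUTING -i vxlan0 -p tcp --dport 443 -j REDIRECT --to-ports 33128

# PBR+TPROXY: keeps the original client IP, needs the routing glue below
iptables -t mangle -A PREROUTING -i vxlan0 -p tcp --dport 443 \
  -j TPROXY --tproxy-mark 0x1/0x1 --on-port 23128
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100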

Since the intercept method terminates the connection and creates a new one with the
IP of the proxy, it's very simple to even use GRE and IPIP.
But with tproxy, to allow the traffic to be identified as a packet which is not yet
in the routing stack, the Linux OS needs to tag it somehow.
For that, the default "salt" for the packet hash in the routing stack is the source
and destination MAC address.
Due to this, the only methods which allow using tproxy are the above-mentioned
tunnels. (Maybe I will post a video on it later with a demo.)

The Mikrotik RouterOS device re-routes the traffic from the LAN interface into the
VxLAN interface directly to the proxy machine, which has a static or dynamic route
to the LAN subnet via the other side of the VxLAN tunnel, which is the edge RouterOS
device.
I want to gather a set of configurations and tests for this setup to verify what
might cause this issue and, if possible, to resolve it.
To me it seems that if my FortiGate and CheckPoint devices are able to intercept the
traffic and "bump" it, there is no reason why squid should not be able to do the
same.

I will later send you a private link to the pcaps in a zip file so you will be able
to inspect this issue at the network level and see whether there are details which
can help us understand what causes this specific issue.

I want to say that bumping works fine on non-intercepted connections, and that I
have tested the interception with the two available methods, i.e.:
* DNAT redirect
* Tproxy

Thanks,
Eliezer Croitoru

-Original Message-
From: Alex Rousskov  
Sent: Monday, August 19, 2024 7:18 PM
To: NgTech LTD 
Subject: Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump 
SSL Traffic

Eliezer, please move this thread back to squid-users mailing list 
instead of emailing me personally. When you do so, please clarify 
whether all 12 access.log records correspond to this single curl request 
(if not, please only share access.log record(s) that do correspond). --Alex.

On 2024-08-19 12:03, NgTech LTD wrote:
> This is the output of curl on windows 11 desktop:
> C:\Users\USER>curl https://www.youtube.com/ -k -v -o 1.txt
> % Total % Received % Xferd  Average Speed   Time    Time     Time  Current
>                              Dload  Upload   Total   Spent    Left  Speed
> 0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
> * Host www.youtube.com:443 was resolved.
> * IPv6: 2a00:1450:4001:800::200e, 2a00:1450:4001:80e::200e, 
> 2a00:1450:4001:81c::200e, 2a00:1450:4001:809::200e
> * IPv4: 142.250.185.78, 142.250.185.110, 142.250.185.142, 
> 142.250.186.174, 142.250.185.174, 142.250.184.238, 142.250.185.238, 
> 142.250.185.206, 142.250.181.238, 142.250.186.46, 142.250.186.78, 
> 172.217.16.142, 216.58.212.174, 216.58.206.46, 172.217.23.110, 
> 216.58.212.142
> *   Trying 142.250.185.78:443...
> * Connected to www.youtube.com (142.250.185.78) port 443
> * schannel: disabled automatic use of client certificate
> * ALPN: curl offers http/1.1
> * ALPN: server accepted http/1.1
> * using HTTP/1.x
>  > GET / HTTP/1.1
>  > Host: www.youtube.com 
>  > User-Agent: curl/8.8.0
>  > Accept: */*
>  >
> * Request completely sent off
> * schannel: remote party requests renegotiation
> * schannel: renegotiating SSL/TLS connection
> * schannel: SSL/TLS connection renegotiated
> * schannel: failed to decrypt data, need more data
> < HTTP/1.1 200 OK
> < Content-Type: text/html; charset=utf-8
> < X-Content-Type-Options: nosniff
> < Cache-Control: no-cache, no-store

Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-19 Thread ngtech1ltd
Attached is a gist with all the technical details (the email was too long):

https://gist.githubusercontent.com/elico/bc5189e74aacf1f902f767fc1902d3a4/raw/afe876f5d46d2789d48b41dab7a73c7a6fd40be1/sslbump-issue-5.9.txt



Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Monday, August 19, 2024 10:59 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump 
SSL Traffic



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-20 Thread ngtech1ltd
Attached is a link to the pcap file that might shed some light on the issue from a
technical perspective:
https://cloud.hisstory.org.il/apps/maps/s/Mw8Cb8QLYto83rK

Eliezer

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-23 Thread ngtech1ltd
OK, so the issue was that the http_port was used for ssl-bump with intercept, while
the only port which can really intercept SSL connections is:

https_port

So I believe that there should be a warning about such a line in the cache log:
when an http_port line combines intercept and ssl-bump, there should be a warning.
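
In other words, for intercepted TLS the port line from my config would need to be
something like this (a sketch, with the same cert paths):

https_port 33128 intercept ssl-bump tls-cert=/etc/squid/ssl/cert.pem \
    tls-key=/etc/squid/ssl/key.pem generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB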

 

Thanks,

Eliezer

 

From: NgTech LTD  
Sent: Monday, August 19, 2024 10:48 AM
To: Squid Users 
Subject: Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

 

I am testing Squid 6.10 on Fedora 40 (their package).
And it seems that Squid is unable to bump clients (ESNI/ECH)?

I had a couple of iterations of peek, stare and bump and I am not sure what the
reason for that is:
shutdown_lifetime 3 seconds
external_acl_type whitelist-lookup-helper ipv4 ttl=10 children-max=10 
children-startup=2 \
children-idle=2 concurrency=10 %URI %SRC 
/usr/local/bin/squid-conf-url-lookup.rb
acl whitelist-lookup external  whitelist-lookup-helper
acl ytmethods method POST GET
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16         # RFC 1918 local private network (LAN)
acl localnet src fc00::/7               # RFC 4193 local private network range
acl localnet src fe80::/10              # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access deny to_localhost
http_access deny to_linklocal
acl tubedoms dstdomain .ytimg.com .youtube.com .youtu.be
http_access allow ytmethods localnet tubedoms whitelist-lookup
http_access allow localnet
http_access deny all
http_port 3128
http_port 13128 ssl-bump tls-cert=/etc/squid/ssl/cert.pem 
tls-key=/etc/squid/ssl/key.pem \
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
http_port 23128 tproxy ssl-bump tls-cert=/etc/squid/ssl/cert.pem 
tls-key=/etc/squid/ssl/key.pem \
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
http_port 33128 intercept ssl-bump tls-cert=/etc/squid/ssl/cert.pem 
tls-key=/etc/squid/ssl/key.pem \
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib64/squid/security_file_certgen -s 
/var/spool/squid/ssl_db -M 4MB
sslcrtd_children 5
acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ERR_TOO_BIG
acl serverTalksFirstProtocol squid_error ERR_REQUEST_START_TIMEOUT
on_unsupported_protocol tunnel foreignProtocol
on_unsupported_protocol tunnel serverTalksFirstProtocol
on_unsupported_protocol respond all
acl monitoredSites ssl::server_name .youtube.com .ytimg.com
acl monitoredSitesRegex ssl::server_name_regex \.youtube\.com \.ytimg\.com
acl serverIsBank ssl::server_name .visa.com  
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump bump all
strip_query_terms off
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
logformat ssl_custom_format %ts.%03tu %6tr %>a %Ss/%03>Hs %sni
access_log daemon:/var/log/squid/access.log ssl_custom_format
##EOF

 

access.log from before:
1724028804.797    486 192.168.78.15 TCP_TUNNEL/200 17764 CONNECT
40.126.31.73:443 - ORIGINAL_DST/40.126.31.73 - -
1724028805.413  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request - 
HIER_NONE/- - -
1724028806.028  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request - 
HIER_NONE/- - -
1724028806.028  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request - 
HIER_NONE/- - -
1724028806.029  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request - 
HIER_NONE/- - -
1724028806.030  0 192.168.78.15 NONE_NONE/000 0 - error:invalid-request - 
HIER_NONE/- - -
1724028806.085 57 192.168.78.15 TCP_TUNNEL/200 4513 CONNECT 
104.18.72.113:

[squid-users] Rocky 8 new repo

2024-08-28 Thread ngtech1ltd
Hey List,

After some time, work, and testing, I have started maintaining the Rocky Squid-Cache
packaging at:
https://www.ngtech.co.il/repo/rocky/8/

So far the tests are showing very good results in real usage.

https://www.nethserver.org/
are using Rocky Linux, and their 7 release is pretty good; I am waiting to
see what the result for 8 will be.



Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] A periodic update

2024-09-02 Thread ngtech1ltd
Hey Everybody,

Since https://cachevideos.com/ is no longer in development, due to YouTube and other
vendors' usage of tokens and VBR streaming:
Are there any specific video sites which are good candidates for caching?
Can we cache Vimeo or any other specific sites without using ICAP or eCAP, i.e.
using plain StoreID?

Maybe Facebook?

We are talking about an SSL-Bump setup; a StoreID sketch is below.
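
For reference, a plain StoreID setup would look roughly like this, using the
storeid_file_rewrite helper shipped with Squid; the rewrite rule is only an
illustrative placeholder, not a working rule for any current video CDN:

store_id_program /usr/lib64/squid/storeid_file_rewrite /etc/squid/storeid.rules
store_id_children 5 startup=1 idle=1
# /etc/squid/storeid.rules - tab separated: URL regex, then the canonical store URL
# ^https?://cdn[0-9]+\.example\.com/(.*)    http://video-cdn.squid.internal/$1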

I have seen:
https://www.marasystems.com/products/cachemara.html

But it seems that they are probably not using Squid anymore, or at least not a
vanilla Squid.

I was working on URL filtering, and from what I see, ufdbGuard prefers to use
storage rather than CPU (specifically for YouTube).
I prefer using a bit more CPU rather than disk.

I have started working on Squid Nuggets, with the hope of publishing some videos
about a couple of nice things in squid.
One of them is a breakdown of squidGuard internals and a conversion to
human-readable code, i.e. Ruby.

Eliezer

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] RFC: Removal of ESI Support from Squid

2024-09-08 Thread ngtech1ltd
Hey Jonathan,

The issues and the comparison between 5.x and 6.x can be tested and verified.
The ESI-related code can be disabled in these tests, and I think that the subject
you are talking about is different from the subject of this thread.
I will be happy to try and assist with testing these performance issues since I am
currently using 6.10 in production.
Eliezer


-Original Message-
From: squid-users  On Behalf Of 
Jonathan Lee
Sent: Saturday, September 7, 2024 7:30 PM
To: Amos Jeffries 
Cc: Squid Developers ; 
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] RFC: Removal of ESI Support from Squid

I use bump/splice, with split ACLs and access lists that match MAC addresses, plus
cachemgr. I hate to admit I am using 5.8, because 6.6 has issues, with so many
errors showing, and is so much slower. I do not want to reissue all my certificates;
it works perfectly for what I need in my mini firewall.

It took many, many hours of config and years of testing and changes to get it to
this level of performance. I was very happy when 6.6 was released, but it broke a
lot of the PHP GUI tools. Again, 5.8 is like an everything bagel; it's protected
behind a firewall so I am less concerned as well. I am not protecting a massive
corporate environment, it is just home use.

Sent from my iPhone

> On Sep 7, 2024, at 08:52, Amos Jeffries  wrote:
> 
> Hi all,
> 
> The ESI (Edge Side Includes) feature of Squid has a growing number of unfixed 
> bugs, more than a few are turning into security issues.
> 
> Also, the current Squid developers do not have spare brain cycles to maintain 
> everything and v7 is seeing a lot more effort to prune away old and unused 
> mechanisms in Squid.
> 
> 
> As such this is a callout to see how much use there is for this feature.
> 
> 
>  DO you need ESI in Squid?  Yes or No.
> 
>   Speak Now, or face regrets at upgrade time.
> 
> 
> 
> Thank You
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access internal resources via hostname

2024-09-17 Thread ngtech1ltd
Hey Josh,

Configuring Squid is not a simple task in some cases.
I used to think it's a pretty simple piece of software to configure, and indeed,
with the right background and labs you can achieve specific goals easily and
quickly.
However, over the years I have encountered enough situations to understand that it
might not be easy for everybody.

This is the main reason this mailing list exists; if you need help, we are here to
try and help you.
I have seen that Amos and Alex gave you suggestions and I hope these help you.

If you need more help, I will be happy to give you some of my time via Zoom to see
and try to better understand the scenario and the issues.

Yours,
Eliezer

-Original Message-
From: squid-users  On Behalf Of
Piana, Josh
Sent: Monday, September 16, 2024 9:58 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname

Antony, 

So those two rules were definitely not the way to go; thank you to those who
clarified that for me.

I'll remove them. 

This is really frustrating. I've been trying to get a working Squid
configuration for weeks now and it is literally a 5 minute process for most
people. 

I'll keep looking and see what else could be blocking traffic. 

-Original Message-
From: squid-users  On Behalf Of
Antony Stone
Sent: Monday, September 16, 2024 2:23 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname

[You don't often get email from antony.st...@squid.open.source.it. Learn why
this is important at https://aka.ms/LearnAboutSenderIdentification ]

Caution: This email originated from outside of Hexcel. Do not click links or
open attachments unless you recognize the sender and know the content is
safe.


On Monday 16 September 2024 at 20:06:41, Piana, Josh wrote:

> How I understand the rules are as follows:
> > http_access deny !localnet
>
> This denies HTTP traffic to what I defined as "localnet".

No; firstly the "localnet" ACL is defined by *source* address, therefore
"localnet" matches traffic *from* your local network.

Secondly the ! negates this, therefore "!localnet" matches any source
address which is *not* in your local network.

Therefore "http_access deny !localnet" denies any access from an address not
in your local network.

> > http_access allow localnet

This then allows access from any address which *is* in your local network.

Now, having matched all traffic not from your local network, and all traffic
which is from your local network, you have accounted for all possible
traffic, therefore any other rules have no effect.


Hope this helps,


Antony.

--
Because it messes up the order in which people normally read text.
> Why is top-posting such a bad thing?
> > Top-posting.
> > > What is the most annoying way of replying to e-mail?

   Please reply to the list; please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Anyone has experience with Windows clients DNS timeout

2021-01-02 Thread ngtech1ltd
Hey Amos,

For an intercept setup we still need to resolve before squid touches the packets.
There are registry keys for this purpose; however, we first need to identify the
issue.
The basic way to verify it is to use "set debug" in nslookup against a fully "cold"
DNS recursor, as in the sketch below.

I was thinking about writing a PowerShell script that will do that, but for now it's
not really important.
More important than that is a good sysadmin.

Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon




-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Wednesday, December 30, 2020 6:15 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Anyone has experience with Windows clients DNS 
timeout

On 30/12/20 9:02 am, NgTech LTD wrote:
> I have seen this issue on Windows clients over the past.
> Windows nslookup shows that the query has timed out after 2 seconds.
> On Linux and xBSD I have researched this issue and have seen that:
> the DNS server is doing a recursive lookup and it takes from 7 to 10++
> seconds sometimes.
> When I pre-warm the DNS cache and the results are cached, it takes
> less than 500 ms for a response to be on the client side and then
> everything works fine.
> 
> I understand that the Windows DNS client times out.
> When using a forward proxy with squid or any other, it works as expected
> since the DNS resolution is done on the proxy server.
> However for this issue I believe that this timeout should be increased
> instead of moving to DNS over HTTPS.


The DNS timeout in Squid is 30sec for exactly this type of reason. 2 
seconds is far too short to *guarantee* a recursive resolver is able to 
perform all the work and many round-trip lookups that are needed.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SSL-BUMP 5.0.4 not working as expected

2021-01-02 Thread ngtech1ltd
I am trying to configure 5.0.4 with ssl-bump to bump only a set of domains.

I am unsure about the right way it should be done.

The basic constraint is POLICY vs a set of rules.

*   Should I bump all connections, with exceptions?
*   Should I bump nothing except the exceptions?
*   Based on server_name regex and/or server_name domains?

 

 

Squid Cache: Version 5.0.4-20201125-r5fadc09ee

Service Name: squid

 

This binary uses OpenSSL 1.1.1g FIPS  21 Apr 2020. For legal restrictions on
distribution see https://www.openssl.org/source/license.html

 

configure options:  '--build=x86_64-redhat-linux-gnu'
'--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--exec_prefix=/usr'
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--disable-dependency-tracking' '--enable-follow-x-forwarded-for'
'--enable-auth'
'--enable-auth-basic=DB,LDAP,NCSA,PAM,POP3,RADIUS,SASL,SMB,getpwnam,fake'
'--enable-auth-ntlm=fake' '--enable-auth-digest=file,LDAP,eDirectory'
'--enable-auth-negotiate=kerberos,wrapper'
'--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group,LDAP_group,d
elayer,file_userip,SQL_session,unix_group,session,time_quota'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--enable-linux-netfilter'
'--enable-removal-policies=heap,lru' '--enable-snmp'
'--enable-storeio=aufs,diskd,ufs,rock' '--enable-wccpv2' '--enable-esi'
'--enable-security-cert-generators' '--enable-security-cert-validators'
'--enable-icmp' '--with-aio' '--with-default-user=squid'
'--with-filedescriptors=16384' '--with-dl' '--with-openssl'
'--enable-ssl-crtd' '--with-pthreads' '--with-included-ltdl'
'--disable-arch-native' '--without-nettle'
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu'
'CC=gcc' 'CFLAGS=-O2  -fexceptions -g -grecord-gcc-switches -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1  -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
'LDFLAGS=-Wl,-z,relro -Wl,--as-needed  -Wl,-z,now
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' 'CXX=g++' 'CXXFLAGS=-O2
-fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security
-Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1  -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC'
'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
'LT_SYS_LIBRARY_PATH=/usr/lib64:' --enable-ltdl-convenience

 

 

I have tried the next set of rules:

## START

acl step1 at_step SslBump1

acl step2 at_step SslBump2

acl step3 at_step SslBump3

 

acl NoBump_server_regex ssl::server_name_regex -i
/etc/squid/server-regex.nobump

acl NoBump_server_name ssl::server_name /etc/squid/server-name.nobump

 

acl NoBump_ALL_regex ssl::server_name_regex -i
/etc/squid/all_server-regex.nobump

 

acl MustBump_server_regex ssl::server_name_regex -i
/etc/squid/must_server-regex.bump

acl MustBump_server_name ssl::server_name /etc/squid/must_server-name.bump

 

 

ssl_bump peek step1

 

ssl_bump splice NoBump_server_regex

ssl_bump splice NoBump_server_name

 

ssl_bump bump MustBump_server_regex

ssl_bump bump MustBump_server_name

 

ssl_bump splice NoBump_ALL_regex

 

ssl_bump bump all

##END

 

 

 

But the NoBump rules are not applied.

I tried to understand why squid is bumping despite the explicit splice
action.

 

Thanks,

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SSL-BUMP 5.0.4 not working as expected

2021-01-03 Thread ngtech1ltd
Hey Amos,

I forgot about the "".
I am attaching /etc/squid/ and inside a txt log dump from cache.log of the 
minute which 2 or more transactions happening.

I think I'm doing something wrong in the config but not 100% sure.

Link to config and output:
https://1drv.ms/u/s!AoiLG1Jyh7JqqEmrmgzPM5dRFUvK?e=adVJOe

Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Sunday, January 3, 2021 9:12 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SSL-BUMP 5.0.4 not working as expected

On 3/01/21 9:08 am, ngtech1ltd wrote:
> I am trying to configure 5.0.4 with sslbump to bump only a set of domains.
> 
> I am unsure about the right way it should be done.
> 
> The basic constrains are POLICY vs a set of rules.
> 
>   * Should I bump all connections with exceptions?
>   * Should I bump non else then the exceptions?
>   * Based on server_name regex and/or server_name domains
>

In regards to policy:

Security best-practice is to reject as early as possible. So transactions 
that the early bump steps indicate are going to forbidden places should be 
rejected immediately on that detection.
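
For illustration, a minimal sketch of that early rejection in squid.conf (the 
deny-list file name here is only an example):

acl step1 at_step SslBump1
acl banned_sni ssl::server_name "/etc/squid/banned.domains"

ssl_bump peek step1
ssl_bump terminate banned_sni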

For transactions which appear to be not-bad, there is no "best" way. 
That depends on your specific setup needs and the side-effects of making 
a wrong decision.

I prefer to advise bump'ing at step 3 where the most information is 
available for checks and correction of client claims.


...
> I have tried the next set of rules:
> 
> ## START
> 
> acl step1 at_step SslBump1
> 
> acl step2 at_step SslBump2
> 
> acl step3 at_step SslBump3
> 
> acl NoBump_server_regex ssl::server_name_regex -i 
> /etc/squid/server-regex.nobump
> 
> acl NoBump_server_name ssl::server_name /etc/squid/server-name.nobump
> 
> acl NoBump_ALL_regex ssl::server_name_regex -i 
> /etc/squid/all_server-regex.nobump
> 
> acl MustBump_server_regex ssl::server_name_regex -i 
> /etc/squid/must_server-regex.bump
> 
> acl MustBump_server_name ssl::server_name /etc/squid/must_server-name.bump
> 
> ssl_bump peek step1
> 
> ssl_bump splice NoBump_server_regex
> 
> ssl_bump splice NoBump_server_name
> 
> ssl_bump bump MustBump_server_regex
> 
> ssl_bump bump MustBump_server_name
> 
> ssl_bump splice NoBump_ALL_regex
> 
> ssl_bump bump all
> 
> ##END
> 
> But the BoBump are not applied.
> 
> I tried to understand why squid is bumping despite the explicit splice 
> action.

Note that all these splice/bump rules are being applied at step2. There 
is no step3 taking place.


Does your actual config have the required "" marks around those filenames?

Without that, all your ACLs will never match (the SNI is compared against the 
file name itself) and the last "bump all" below will be applied.
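
For illustration, with the quoting added the file-based ACLs would look like 
this (a minimal sketch using the same file names as the posted config):

acl NoBump_server_regex ssl::server_name_regex -i "/etc/squid/server-regex.nobump"
acl NoBump_server_name ssl::server_name "/etc/squid/server-name.nobump"
acl MustBump_server_regex ssl::server_name_regex -i "/etc/squid/must_server-regex.bump"
acl MustBump_server_name ssl::server_name "/etc/squid/must_server-name.bump"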


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL-BUMP 5.0.4 not working as expected

2021-01-03 Thread ngtech1ltd
Comments below

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Sunday, January 3, 2021 9:12 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SSL-BUMP 5.0.4 not working as expected

On 3/01/21 9:08 am, ngtech1ltd wrote:
> I am trying to configure 5.0.4 with sslbump to bump only a set of domains.
> 
> I am unsure about the right way it should be done.
> 
> The basic constrains are POLICY vs a set of rules.
> 
>   * Should I bump all connections with exceptions?
>   * Should I bump non else then the exceptions?
>   * Based on server_name regex and/or server_name domains
>

In regards to policy:

Security best-practice is to reject as early as possible. So transactions 
that the early bump steps indicate are going to forbidden places should be 
rejected immediately on that detection.

For transactions which appear to be not-bad, there is no "best" way. 
That depends on your specific setup needs and the side-effects of making 
a wrong decision.

I prefer to advise bump'ing at step 3 where the most information is 
available for checks and correction of client claims.


# How to do that? I tried to read the docs at:
https://wiki.squid-cache.org/Features/SslPeekAndSplice

But couldn't understand or grasp how to implement what you are talking about.
#
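
For anyone hitting the same wall, a minimal sketch of the step-3 approach being 
described (peek at step 1, splice the exceptions, stare at step 2, bump at 
step 3; the exceptions file name is illustrative):

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
acl nobump ssl::server_name "/etc/squid/server-name.nobump"

ssl_bump peek step1
ssl_bump splice nobump
ssl_bump stare step2
ssl_bump bump step3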

...
> I have tried the next set of rules:
> 
> ## START
> 
> acl step1 at_step SslBump1
> 
> acl step2 at_step SslBump2
> 
> acl step3 at_step SslBump3
> 
> acl NoBump_server_regex ssl::server_name_regex -i 
> /etc/squid/server-regex.nobump
> 
> acl NoBump_server_name ssl::server_name /etc/squid/server-name.nobump
> 
> acl NoBump_ALL_regex ssl::server_name_regex -i 
> /etc/squid/all_server-regex.nobump
> 
> acl MustBump_server_regex ssl::server_name_regex -i 
> /etc/squid/must_server-regex.bump
> 
> acl MustBump_server_name ssl::server_name /etc/squid/must_server-name.bump
> 
> ssl_bump peek step1
> 
> ssl_bump splice NoBump_server_regex
> 
> ssl_bump splice NoBump_server_name
> 
> ssl_bump bump MustBump_server_regex
> 
> ssl_bump bump MustBump_server_name
> 
> ssl_bump splice NoBump_ALL_regex
> 
> ssl_bump bump all
> 
> ##END
> 
> But the BoBump are not applied.
> 
> I tried to understand why squid is bumping despite the explicit splice 
> action.

Note that all these splice/bump rules are being applied at step2. There 
is no step3 taking place.


Does your actual config have the required "" marks around those filenames?

Without that, all your ACLs will never match (the SNI is compared against the 
file name itself) and the last "bump all" below will be applied.

# I didn't understand how to separate the different steps and build the right 
config that will either bump or splice.
I want to be able to bump or splice based on my ACLs and I couldn't make this happen.
Either I'm really confused or I didn't understand how to do that.
With other software I was able to do that and more, which is probably why this is 
so hard for me.

Thanks,
Eliezer


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] PCI Certification compliance lists

2021-01-03 Thread ngtech1ltd
I am looking for domain lists that can be used for Squid to be PCI
certified.

I have read this article:
https://www.imperva.com/learn/data-security/pci-dss-certification/

And a couple of others, to try to understand what a Squid proxy's ssl-bump
exception rules should contain.
So technically we need:
- Banks
- Health care
- Credit Cards(Visa, Mastercard, others)
- Payments sites
- Antivirus(updates and portals)
- OS and software Updates signatures(ASC, MD5, SHAx etc..)

* https://support.kaspersky.com/common/start/6105
*
https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-e
set-product-with-a-third-party-firewall
*
https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s
55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fc
p&articleId=TS100291&_afrLoop=641093247174514&leftWidth=0%25&showFooter=fals
e&showHeader=false&rightWidth=0%25¢erWidth=100%25#!%40%40%3FshowFooter%3
Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2
525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3
D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9


If someone has documents that specify which domains should not be inspected, it
would also help a lot.
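
For context, a domain list like that would typically plug into the bumping rules
as a splice exception; a rough sketch, with an illustrative file name:

acl step1 at_step SslBump1
acl pci_nobump ssl::server_name "/etc/squid/pci-exempt.domains"

ssl_bump peek step1
ssl_bump splice pci_nobump
ssl_bump bump all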

Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Setting up a transparent http and https proxy server using squid 4.6

2021-01-03 Thread ngtech1ltd
Hey,

 

I am missing a bit of the context, like:

Did you compile Squid yourself? Is it from the OS repository?

Squid -v might help a bit to understand what you have enabled in your Squid.

 

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email:   ngtech1...@gmail.com

Zoom: Coming soon

 

 

From: squid-users  On Behalf Of jean 
francois hasson
Sent: Thursday, December 31, 2020 11:10 AM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Setting up a transparent http and https proxy server 
using squid 4.6

 

Hi,

I am trying to create for my home network a transparent proxy to implement 
filtering rules based on website names mainly.

I have been looking at using a Raspberry pi 3B+ running pi OS. I configured it 
to be a Wifi access point using RaspAP quick install. The Wifi network on which 
the filtering option is to be implemented is with IP 10.3.141.xxx. The router 
is at address 10.3.141.1.

I have the following squid.conf file which I tried to create based on different 
mails, websites and blogs I read :

acl SSL_ports port 443 #https
acl SSL_ports port 563 # snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http

#Le réseau local
acl LocalNet src 10.3.141.0/24

acl bump_step1 at_step SslBump1
acl bump_step2 at_step SslBump2
acl bump_step3 at_step SslBump3

#Définition des autorisations
http_access deny !Safe_ports
#http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow LocalNet
http_access deny all

#Définition des ports d'écoute
http_port 8080
http_port 3128 intercept
https_port 3129 intercept ssl-bump \
  tls-cert=/etc/squid/cert/example.crt \
  tls-key=/etc/squid/cert/example.key \
  generate-host-certificates=on  dynamic_cert_mem_cache_size=4MB

sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/ssl_db -M 4MB
sslcrtd_children 5

ssl_bump peek all
acl tls_whitelist ssl::server_name .example.com
ssl_bump splice tls_whitelist
ssl_bump terminate all

coredump_dir /var/spool/squid

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

cache_dir ufs /cache 400 16 256
cache_access_log /var/log/squid/access.log
cache_effective_user proxy

If I set up on a device connected to the access point a proxy manually ie 
10.3.141.1 on port 8080, I can access the internet. If I put the following 
rules for iptables to use in files rules.v4 :

*nat
-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 
10.3.141.1:3128
-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
-A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 
10.3.141.1:3129
-A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3129
-A POSTROUTING -s 10.3.141.0/24 -o eth0 -j MASQUERADE
COMMIT
Now, if I remove the manual proxy configuration of the device connected to the 
access point, I can't connect to the internet. If I leave the manual proxy 
configuration it does work and there is activity logged in 
/var/log/squid/access.log.

Please let me know what might be wrong in my configuration if possible.

Best regards,

JF

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] PCI Certification compliance lists

2021-01-04 Thread ngtech1ltd
Hey David.

 

Indeed it should be done with the local websites; however, these sites are 
pretty static.

Would it be OK to publish these lists online as a file or files?

 

The main issue is that ssl-bump requires a couple of "fast" ACLs.

I believe it should be a "fast" ACL by default, but we also need the option to use 
an external helper, as with many other functions.

If I can choose between "fast" as the default and the ability to run a "slow" 
external ACL helper, I can choose what is right for my environment.

 

Currently I cannot program a helper that will decide programmatically whether a 
CONNECT connection should be spliced or bumped.

It forces me to reload this list manually, which might take a couple of seconds.

 

Thanks,

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com  

Zoom: Coming soon

 

 

From: squid-users  On Behalf Of 
David Touzeau
Sent: Monday, January 4, 2021 10:23 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] PCI Certification compliance lists

 

Hi Eliezer, 

I can help you by giving a list but 

Just by using "main domains": 

*   Banking/transactions: 27 646 websites.
*   AV software and updates sites (fw, routers...): 133 295 websites


I can give you the lists, but they are incomplete, and loading huge databases would 
decrease Squid performance.
Perhaps it is better for the Squid administrator to fill in their own list 
according to their country or company activity.





On 03/01/2021 at 15:12, ngtech1...@gmail.com wrote:

I am looking for domains lists that can be used for squid to be PCI
Certified.
 
I have read this article:
https://www.imperva.com/learn/data-security/pci-dss-certification/
 
And couple others to try and understand what might a Squid proxy ssl-bump
exception rules should contain.
So technically we need:
- Banks
- Health care
- Credit Cards(Visa, Mastercard, others)
- Payments sites
- Antivirus(updates and portals)
- OS and software Updates signatures(ASC, MD5, SHAx etc..)
 
* https://support.kaspersky.com/common/start/6105
*
https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-e
set-product-with-a-third-party-firewall
*
https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s
55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fc
p&articleId=TS100291&_afrLoop=641093247174514&leftWidth=0%25&showFooter=fals
e&showHeader=false&rightWidth=0%25¢erWidth=100%25#!%40%40%3FshowFooter%3
Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2
525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3
D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9
 
 
If someone has the documents which instructs what domains to not inspect it
would also help a lot.
 
Thanks,
Eliezer
 

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Zoom: Coming soon
 
 
 
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Setting up a transparent http and https proxy server using squid 4.6

2021-01-04 Thread ngtech1ltd
Try as a test to remove:

ssl_bump terminate all

I.e. use only the next bump rules:

### START
# TLS/SSL bumping definitions
acl tls_s1_connect at_step SslBump1
acl tls_s2_client_hello at_step SslBump2
acl tls_s3_server_hello at_step SslBump3

ssl_bump peek tls_s1_connect
ssl_bump splice all
### END

The above is from an example in the ufdbguard manual.

 

Let me know if you are still having issues in full splice mode.

 

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com  

Zoom: Coming soon

 

 

From: jean francois hasson  
Sent: Monday, January 4, 2021 8:51 AM
To: ngtech1...@gmail.com
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Setting up a transparent http and https proxy server 
using squid 4.6

 

Hi,

Thank you for looking at my question.

I downloaded the squid 4.6 source code from 
http://ftp.debian.org/debian/pool/main/s/squid/ and selected 
squid_4.6.orig.tar.gz, squid_4.6-1+deb10u4.debian.tar.xz and 
squid_4.6-1+deb10u4.dsc. I modified the debian/rules file by adding to 
DEB_CONFIGURE_EXTRA_FLAGS the following --with-openssl, --enable-ssl and 
--enable-ssl-crtd.

The squid -v output is :

Squid Cache: Version 4.6
Service Name: squid
Raspbian linux

This binary uses OpenSSL 1.0.2q  20 Nov 2018. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html

configure options:  '--build=arm-linux-gnueabihf' '--prefix=/usr' 
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man' 
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' 
'--libexecdir=${prefix}/lib/squid' '--srcdir=.' '--disable-maintainer-mode' 
'--disable-dependency-tracking' '--disable-silent-rules' 'BUILDCXXFLAGS=-g -O2 
-fdebug-prefix-map=/home/pi/build/squid/squid-4.6=. -fstack-protector-strong 
-Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wl,-z,relro 
-Wl,-z,now -Wl,--as-needed -latomic' 'BUILDCXX=arm-linux-gnueabihf-g++' 
'--with-build-environment=default' '--enable-build-info=Raspbian linux' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--libexecdir=/usr/lib/squid' '--mandir=/usr/share/man' '--enable-inline' 
'--disable-arch-native' '--enable-async-io=8' 
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap' 
'--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client' 
'--enable-follow-x-forwarded-for' 
'--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB' 
'--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper' 
'--enable-auth-ntlm=fake,SMB_LM' 
'--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,time_quota,unix_group,wbinfo_group'
 '--enable-security-cert-validators=fake' 
'--enable-storeid-rewrite-helpers=file' '--enable-url-rewrite-helpers=fake' 
'--enable-eui' '--enable-esi' '--enable-icmp' '--enable-zph-qos' 
'--enable-ecap' '--disable-translation' '--with-swapdir=/var/spool/squid' 
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid' 
'--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' 
'--with-gnutls' '--with-openssl' '--enable-ssl' '--enable-ssl-crtd' 
'--enable-linux-netfilter' 'build_alias=arm-linux-gnueabihf' 
'CC=arm-linux-gnueabihf-gcc' 'CFLAGS=-g -O2 
-fdebug-prefix-map=/home/pi/build/squid/squid-4.6=. -fstack-protector-strong 
-Wformat -Werror=format-security -Wall' 'LDFLAGS=-Wl,-z,relro -Wl,-z,now 
-Wl,--as-needed -latomic' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 
'CXX=arm-linux-gnueabihf-g++' 'CXXFLAGS=-g -O2 
-fdebug-prefix-map=/home/pi/build/squid/squid-4.6=. -fstack-protector-strong 
-Wformat -Werror=format-security'

When I run openssl version I get 1.1.1d.

I hope it helps.

Best regards,

JF

On 03/01/2021 at 21:55, ngtech1...@gmail.com wrote:

Hey,

 

I am missing a bit of the context, like:

Did you self compiled squid? Is it from the OS repository?

Squid -v might help a bit to understand what you do have enabled in your Squid.

 

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com  

Zoom: Coming soon

 

 

From: squid-users   
 On Behalf Of jean francois hasson
Sent: Thursday, December 31, 2020 11:10 AM
To: squid-users@lists.squid-cache.org 
 
Subject: [squid-users] Setting up a transparent http and https proxy server 
using squid 4.6

 

Hi,

I am trying to create for my home network a transparent proxy to implement 
filtering rules based on website names mainly.

I have been looking at using a Raspberry pi 3B+ running pi OS. I configured it 
to be a Wifi access point using RaspAP quick install. The Wifi network on which 
the filtering option is to be implemented is with IP 10.3.141.xxx. The router 
is at address 10.3.141.1.

I have the

Re: [squid-users] PCI Certification compliance lists

2021-01-04 Thread ngtech1ltd
Thanks David,

 

I don't understand something:

1490677018.addr

Are these integers representing IP addresses? 

 

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com  

Zoom: Coming soon

 

 

From: David Touzeau  
Sent: Monday, January 4, 2021 3:25 PM
To: ngtech1...@gmail.com; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] PCI Certification compliance lists

 


Hi Eliezer:

http://articatech.net/tmpf/categories/banking.gz
http://articatech.net/tmpf/categories/cleaning.gz




On 04/01/2021 at 10:27, ngtech1...@gmail.com wrote:

Hey David.

 

Indeed it should be done with the local websites however, These sites are 
pretty static.

Would it be OK to publish theses lists online as a file/files?

 

The main issue is that ssl-bump requires couple “fast” acls.

I believe it should be a “fast” acl but we also need the option to use an 
external helper like for many other function.

If I can choose between “fast” as default and the ability to run a “slow” 
external acl helper I can
choose what is right for/in my environment.

 

Currently I cannot program a helper that will decide if a CONNECT connection 
should be spliced or bumped programmatically.

It forces me to reload this list manually which might take couple seconds.

 

Thanks,

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com  

Zoom: Coming soon

 

 

From: squid-users   
 On Behalf Of David Touzeau
Sent: Monday, January 4, 2021 10:23 AM
To: squid-users@lists.squid-cache.org 
 
Subject: Re: [squid-users] PCI Certification compliance lists

 

Hi Eiezer, 

I can help you by giving a list but 

Just by using "main domains": 

1.  Banking/transcations : 27 646 websites.
2.  AV sofwtare and updates sites (fw, routers...) :  133 295 websites


I can give it to you the lists , they are incomplete and it should decrease 
squid performance by loading huge databases.
Perhaps it is better for the Squid administrator to fill it's own list 
according it's country or company activity.






On 03/01/2021 at 15:12, ngtech1...@gmail.com wrote:

I am looking for domains lists that can be used for squid to be PCI
Certified.
 
I have read this article:
https://www.imperva.com/learn/data-security/pci-dss-certification/
 
And couple others to try and understand what might a Squid proxy ssl-bump
exception rules should contain.
So technically we need:
- Banks
- Health care
- Credit Cards(Visa, Mastercard, others)
- Payments sites
- Antivirus(updates and portals)
- OS and software Updates signatures(ASC, MD5, SHAx etc..)
 
* https://support.kaspersky.com/common/start/6105
*
https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-e
set-product-with-a-third-party-firewall
*
https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s
55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fc
p&articleId=TS100291&_afrLoop=641093247174514&leftWidth=0%25&showFooter=fals
e&showHeader=false&rightWidth=0%25¢erWidth=100%25#!%40%40%3FshowFooter%3
Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2
525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3
D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9
 
 
If someone has the documents which instructs what domains to not inspect it
would also help a lot.
 
Thanks,
Eliezer
 

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Zoom: Coming soon
 
 
 
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users

 

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Setting up a transparent http and https proxy server using squid 4.6

2021-01-04 Thread ngtech1ltd
Just take into account that it will not filter any https/ssl sites this way.
You will need to create an acl to allow only exceptions to be spliced.

Try to look at the ufdbguard manual at:
https://www.urlfilterdb.com/files/downloads/ReferenceManual.pdf

at section 3.3.2, Squid Example Configuration, SSL-Bump peek+splice
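
As a rough sketch of what "only exceptions spliced" looks like (essentially the 
structure of the original config; the whitelist file name is illustrative and 
would hold every site that should stay reachable without decryption):

acl tls_whitelist ssl::server_name "/etc/squid/allowed_sni.txt"
ssl_bump peek all
ssl_bump splice tls_whitelist
ssl_bump terminate all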

All The Bests,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: mailto:ngtech1...@gmail.com
Zoom: Coming soon


From: jean francois hasson  
Sent: Monday, January 4, 2021 4:19 PM
To: ngtech1...@gmail.com
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Setting up a transparent http and https proxy server 
using squid 4.6

Hi,
Doing the change below works. I can now access ebay.fr through the raspberry pi.
Best regards,
JF
On 04/01/2021 at 13:04, ngtech1...@gmail.com wrote:
Try as test to remove:
ssl_bump terminate all
 
Ie use only the next bump rules:
### START
# TLS/SSL bumping definitions
acl tls_s1_connect at_step SslBump1
acl tls_s2_client_hello at_step SslBump2
acl tls_s3_server_hello at_step SslBump3
 
ssl_bump peek tls_s1_connect
ssl_bump splice all
### END
The above is from an example at ufdbguard manual.
 
Let me know if you are still having issues in full splice mode.
 
Eliezer
 

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: mailto:ngtech1...@gmail.com
Zoom: Coming soon
 
 
From: jean francois hasson mailto:jfhas...@club-internet.fr 
Sent: Monday, January 4, 2021 8:51 AM
To: mailto:ngtech1...@gmail.com
Cc: mailto:squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Setting up a transparent http and https proxy server 
using squid 4.6
 
Hi,
Thank you for looking at my question.
I downloaded the squid 4.6 source code from 
http://ftp.debian.org/debian/pool/main/s/squid/ and selected 
squid_4.6.orig.tar.gz, squid_4.6-1+deb10u4.debian.tar.xz and 
squid_4.6-1+deb10u4.dsc. I modified the debian/rules file by adding to 
DEB_CONFIGURE_EXTRA_FLAGS the following --with-openssl, --enable-ssl and 
--enable-ssl-crtd.
The squid -v output is :
Squid Cache: Version 4.6
Service Name: squid
Raspbian linux

This binary uses OpenSSL 1.0.2q  20 Nov 2018. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html

configure options:  '--build=arm-linux-gnueabihf' '--prefix=/usr' 
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man' 
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' 
'--libexecdir=${prefix}/lib/squid' '--srcdir=.' '--disable-maintainer-mode' 
'--disable-dependency-tracking' '--disable-silent-rules' 'BUILDCXXFLAGS=-g -O2 
-fdebug-prefix-map=/home/pi/build/squid/squid-4.6=. -fstack-protector-strong 
-Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wl,-z,relro 
-Wl,-z,now -Wl,--as-needed -latomic' 'BUILDCXX=arm-linux-gnueabihf-g++' 
'--with-build-environment=default' '--enable-build-info=Raspbian linux' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--libexecdir=/usr/lib/squid' '--mandir=/usr/share/man' '--enable-inline' 
'--disable-arch-native' '--enable-async-io=8' 
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap' 
'--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client' 
'--enable-follow-x-forwarded-for' 
'--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB' 
'--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper' 
'--enable-auth-ntlm=fake,SMB_LM' 
'--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,time_quota,unix_group,wbinfo_group'
 '--enable-security-cert-validators=fake' 
'--enable-storeid-rewrite-helpers=file' '--enable-url-rewrite-helpers=fake' 
'--enable-eui' '--enable-esi' '--enable-icmp' '--enable-zph-qos' 
'--enable-ecap' '--disable-translation' '--with-swapdir=/var/spool/squid' 
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid' 
'--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' 
'--with-gnutls' '--with-openssl' '--enable-ssl' '--enable-ssl-crtd' 
'--enable-linux-netfilter' 'build_alias=arm-linux-gnueabihf' 
'CC=arm-linux-gnueabihf-gcc' 'CFLAGS=-g -O2 
-fdebug-prefix-map=/home/pi/build/squid/squid-4.6=. -fstack-protector-strong 
-Wformat -Werror=format-security -Wall' 'LDFLAGS=-Wl,-z,relro -Wl,-z,now 
-Wl,--as-needed -latomic' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 
'CXX=arm-linux-gnueabihf-g++' 'CXXFLAGS=-g -O2 
-fdebug-prefix-map=/home/pi/build/squid/squid-4.6=. -fstack-protector-strong 
-Wformat -Werror=format-security'
When I run openssl version I get 1.1.1d.
I hope it helps.
Best regards,
JF
On 03/01/2021 at 21:55, ngtech1...@gmail.com wrote:
Hey,
 
I am missing a bit of the context, like:
Did you self compiled squid? Is it from the OS repository?
Squid -v might help a bit to understand what you do have enabled in your Squid.
 
Eliezer
 

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: mailto:ngtech1.

Re: [squid-users] PCI Certification compliance lists

2021-01-04 Thread ngtech1ltd
Thanks Alex,

So for now the next should work, per the docs at:
http://www.squid-cache.org/Versions/v5/cfgman/ssl_bump.html

I just noticed that I didn't put "helper" in the right context, as you wrote in 
another email.
This way we can reload lists automatically on a change without reloading the 
whole Squid.
For it to work we just need a single helper service which supports threading and 
concurrency.
To overcome update-related issues we can use a lock/mutex or something similar.
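
As a concrete sketch of that idea (all names, paths and values here are 
illustrative, not a tested setup), squid.conf would define a slow external ACL 
that receives the SNI:

external_acl_type sni_bump_check ttl=60 children-max=1 concurrency=100 %ssl::>sni /usr/local/bin/bump_helper.py
acl do_bump external sni_bump_check
acl step1 at_step SslBump1

ssl_bump peek step1
ssl_bump bump do_bump
ssl_bump splice all

and the helper itself could be a small concurrent program that re-reads its 
domain list whenever the file changes, so list updates take effect without 
touching Squid:

#!/usr/bin/env python3
# Illustrative external ACL helper for the idea discussed above (a sketch, not
# a tested, production-ready implementation).
# Protocol (with concurrency enabled): Squid sends "channel-ID SNI" per line
# and expects "channel-ID OK" or "channel-ID ERR" back.
import os
import sys

LIST_FILE = "/etc/squid/bump.domains"   # hypothetical list of domains to bump
domains = set()
mtime = 0.0

def refresh():
    """Re-read the domain list if the file changed since the last request."""
    global domains, mtime
    try:
        m = os.stat(LIST_FILE).st_mtime
    except OSError:
        return
    if m != mtime:
        with open(LIST_FILE) as f:
            domains = {line.strip().lstrip(".").lower()
                       for line in f
                       if line.strip() and not line.startswith("#")}
        mtime = m

def matches(sni):
    """True if the SNI or any of its parent domains is listed."""
    parts = sni.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in domains for i in range(len(parts)))

def main():
    for line in sys.stdin:
        refresh()
        fields = line.strip().split(None, 1)
        if not fields:
            continue
        cid = fields[0]
        sni = fields[1] if len(fields) > 1 else ""
        verdict = "OK" if sni and matches(sni) else "ERR"
        sys.stdout.write("%s %s\n" % (cid, verdict))
        sys.stdout.flush()

if __name__ == "__main__":
    main()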

Thanks Again,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: Alex Rousskov  
Sent: Monday, January 4, 2021 4:48 PM
To: squid-users@lists.squid-cache.org
Cc: ngtech1...@gmail.com
Subject: Re: [squid-users] PCI Certification compliance lists

On 1/4/21 4:27 AM, ngtech1...@gmail.com wrote:
> The main issue is that ssl-bump requires couple “fast” acls.

It does not: The ssl_bump directive supports both fast and slow ACLs.

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] There is the problems with instagram images and videos

2022-06-14 Thread ngtech1ltd
Hey,

You have reduced the conf.
We are not trying to guess.
You can either share your entire configuration or not ask at all.
We cannot help you if we are missing parts of the configuration
(leaving aside the IP addresses and confidential information).
You should share both squid.conf and every included config file, and also the 
relevant access.log details.

Eliezer

-Original Message-
From: squid-users  On Behalf Of 
simwin
Sent: Wednesday, 15 June 2022 0:17
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] There is the problems with instagram images and 
videos


В Tue, 14 Jun 2022 17:52:06 +0200
Matus UHLAR - fantomas  пишет:

> >В Tue, 14 Jun 2022 16:57:05 +0200
> >Matus UHLAR - fantomas :  
> >> if a browser prohibits this, squid can't do anything with it.
> >> Have you tried without proxy?  
> 
> On 14.06.22 18:01, simwin wrote:
> >Yes, it works fine without squid proxy and with ssh-tunnel proxy from my
> >localhost, but with danted proxy I have the same problem - no images and no
> >videos  
> 
> as first turn off all attempts to tune your squid and go with default 
> config.
> Then, you can trace which configuration directive causes it.

Ok. My default config see below, the result is the same - no intagram images, no
videos :(

$ grep -vE '^$|^#' /etc/squid/squid.conf

auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/internet_users
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
auth_param basic casesensitive on
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16         # RFC 1918 local private network (LAN)
acl localnet src fc00::/7               # RFC 4193 local private network range
acl localnet src fe80::/10              # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
include /etc/squid/conf.d/*
http_access allow localhost
http_access deny all
http_port xxx.xxx.xxx.xxx:3128 (hidden)
coredump_dir /var/spool/squid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Reloading squid service results in connection resets

2022-06-14 Thread ngtech1ltd
Hey Matt,

 

Can you please verify what is the size of the squid.conf and all of the related 
files?

How long does it take to reload the configuration?

I do not know the exact details, but it's recommended that you upgrade to the 
latest 4.x or 5.x.
If this scenario is reproducible in another environment, it's possible that the 
issue itself can be narrowed down to some degree.

I have here a local virtual server with Oracle Enterprise Linux 8 which runs 
squid 5.15 with lots of helpers in a quite complex config that gets reconfigured 
often enough for me to say that this is a weird issue.

 

From my experience it's possible to implement a helper that can be reconfigured 
without any Squid reload or reconfiguration, and I believe it's the right way to 
do things.

It's a better and faster path than fixing the Squid source code and 
reconfiguration sequence, or writing another proxy.

(just to be clear)

 

I am working on a series of Zoom meetings, “Squid-Cache from 0 to hero”, and I am 
trying to collect use cases that I will be able to discuss and for which I can 
demonstrate a practical solution.

 

Please feel free to shed more light on the scenario so I would be able to maybe 
use your case for the benefit
of the Squid-Users community.

 

Thanks,

Eliezer

 

*   If anyone else has a scenario and use case, please flood the 
Squid-Users list and we will try to help you
*   The Squid-Cache community encourages questions, even the hardest or the 
dumbest; we like them all ;(

 

From: squid-users  On Behalf Of 
Toler, Matt
Sent: Tuesday, 14 June 2022 1:36
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Reloading squid service results in connection resets

 

Hello!

 

I hope you all are well. We’ve run into a troublesome issue and hope to get 
some guidance. 

 

We have an automated workflow that will reload the squid configuration if any 
changes are made. In our use case the changes are dynamic and happen often. We 
were able to determine that frequency isn’t the problem as the issue can be 
manually reproduced in the absence of any automation or significant load.

 

Environment:

OS: RHEL 7.9

Kernel: 3.10.0-1160.62.1.el7.x86_64

Squid: 4.14

 

In our test case we’re using the AWS CLI to get information from EC2 instances. 
While looping “ec2 describe-instances” through the squid via a load balancer 
everything works great until we reload the service. Most times the command will 
pause and return the requested output after a few more seconds. However, more 
times than is tolerable when the service is reloading the command will return 
error “Failed to connect to proxy URL” to the client. With the AWS CLI debug 
enabled before the “Failed to connect to proxy URL” is thrown we see 
“ConnectionResetError: [Errno 104] Connection reset by peer”. We then ran 
packet captures and were able to determine that the reset was coming from the 
server running squid at the time the service was reloading. This connection is 
not logged by squid at all which was confusing to us that we had any client 
connection issues as the squid logs are clean. We are reloading the service 
with systemd but were able to reproduce the issue with “squid -k reconfigure” 
as well. 

 

Our current plan is to upgrade our servers to RHEL8 and compile squid 5.6 in 
hopes that this issue will go away but we may need to go so far as 
programmatically removing a given squid node from the Load balancer before any 
service reload.   

 

That said, it was our understanding that reloading the service would not 
disturb any old connections and new connections would receive the new 
configuration. We were wondering if anyone else has encountered any issues with 
connection resets of client traffic during squid service reload? Or may 
otherwise have any thoughts on this issue. Please let me know if I can provide 
any further detail.

 

Thanks in advance.

 

Regards,

Matt   

 

 

 

 

 

 

   

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] There is the problems with instagram images and videos

2022-06-14 Thread ngtech1ltd
Hey,

Two things:

First, you have lots of TCP_TUNNEL_ABORTED and I am not sure if the client or 
the server is the cause for these.

Second, when you share the conf you can clean it up with the grep, but please pay 
attention and make sure that it stays human readable; mail agents tend to handle 
squid.conf in such a way that it is no longer readable later on.
So just share the tidy squid.conf in a clear plain-text format that does not wrap 
lines; if lines wrap it is hard to read.

##

Since you are not using any SSL-BUMP it's a pretty straightforward setup, and I 
believe that the latest stable Squid-Cache version should be used so the 
developers can help you and others with such an issue.
Can you please verify what browser or user agent is being used in your setup?
Also, can you show us the version of Squid and tell us whether it's self-compiled 
or from a package?
Please share as many details as possible about the host.

# squid -v

Should give us the relevant details.
What OS are you using?
I have a support script that might help us to make sure we have all the 
relevant details.

Thanks,
Eliezer


-Original Message-
From: squid-users  On Behalf Of 
simwin
Sent: Wednesday, 15 June 2022 1:23
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] There is the problems with instagram images and 
videos

В Wed, 15 Jun 2022 00:36:55 +0300
:

> Hey,
> 
> You have reduced the conf.
> We are not trying to guess
> You can either share your entire configuration or just to not ask.
> We cannot try to help you if we are missing parts of the configuration.
> ( leaving aside the ip addresses and confidential information)
> You should share both squid.conf and every included config file and also the
> relevant access.log details.

OK, I understand it. Please see links below:

My access.log - https://sharetext.me/z1mbjxkogv
My squid.conf - https://sharetext.me/wd7rdcpqax
and conf.d/debian.conf - https://sharetext.me/vdo9im5epe

> $ grep -vE '^$|^#' /etc/squid/squid.conf
> 
> auth_param basic program /usr/lib/squid/basic_ncsa_auth
> /etc/squid/internet_users acl auth_users proxy_auth REQUIRED
> http_access allow auth_users
> auth_param basic casesensitive on
> acl localnet src 0.0.0.1-0.255.255.255# RFC 1122 "this" network (LAN)
> acl localnet src 10.0.0.0/8   # RFC 1918 local private network
> (LAN) acl localnet src 100.64.0.0/10  # RFC 6598 shared address
> space (CGN) acl localnet src 169.254.0.0/16   # RFC 3927 link-local
> (directly plugged) machines acl localnet src 172.16.0.0/12#
> RFC 1918 local private network (LAN) acl localnet src 192.168.0.0/16
> # RFC 1918 local private network (LAN) acl localnet src fc00::/7
> # RFC 4193 local private network range acl localnet src fe80::/10
> # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443
> acl Safe_ports port 80# http
> acl Safe_ports port 21# ftp
> acl Safe_ports port 443   # https
> acl Safe_ports port 70# gopher
> acl Safe_ports port 210   # wais
> acl Safe_ports port 1025-65535# unregistered ports
> acl Safe_ports port 280   # http-mgmt
> acl Safe_ports port 488   # gss-http
> acl Safe_ports port 591   # filemaker
> acl Safe_ports port 777   # multiling http
> acl CONNECT method CONNECT
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost manager
> http_access deny manager
> include /etc/squid/conf.d/*
> http_access allow localhost
> http_access deny all
> http_port xxx.xxx.xxx.xxx:3128 (hidden)
> coredump_dir /var/spool/squid
> refresh_pattern ^ftp: 144020% 10080
> refresh_pattern ^gopher:  14400%  1440
> refresh_pattern -i (/cgi-bin/|\?) 0   0%  0
> refresh_pattern . 0   20% 4320
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] There is the problems with instagram images and videos

2022-06-14 Thread ngtech1ltd
Hey,

I just compiled the newest version of Squid for Debian 11(bullseye) at:
https://www.ngtech.co.il/repo/debian/11/x86_64/

However, you need to know how to install it and I cannot work on the installer 
now.
It also doesn't include all of my patches yet.


From what I have seen at:
https://packages.debian.org/bullseye/squid

The current version in bullseye is 4.13, so you'd better try 5.6 first before 
anything else.

Eliezer

* Tomorrow I will try to publish the installer (I have it somewhere in my local 
repos but have yet to find it; it was an Ansible one as far as I remember)

-Original Message-
From: squid-users  On Behalf Of 
simwin
Sent: Wednesday, 15 June 2022 2:15
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] There is the problems with instagram images and 
videos


В Wed, 15 Jun 2022 01:35:45 +0300
 пишет:

> whether it's a self compiled or from a packges?
It debian 11 default apt packages 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] There is the problems with instagram images and videos

2022-06-14 Thread ngtech1ltd
We will be in touch tomorrow (I'm in IST so it’s +2 UTC), I assume you are in a 
different TZ.
In what TZ are you?

Eliezer

-Original Message-
From: squid-users  On Behalf Of 
simwin
Sent: Wednesday, 15 June 2022 2:35
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] There is the problems with instagram images and 
videos

Once more all together in raw txt format without lines breaking:

access.log - https://pastebin.mozilla.org/LQidBnKn/raw
squid.conf - https://pastebin.mozilla.org/mwO0yiRb/raw 
conf.d/debian.conf - https://sharetext.me/vdo9im5epe
squid -v - https://pastebin.mozilla.org/F0w2r623/raw
Browsers: 
Firefox 101 for Linux, 
Chrome Linux 102.0.5005.115 for Linux and Windows, 
Chromium 102.0.5005.61 for Linux 

OS Debian 11 64 bit.
Squid from Debian 11 default package


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid-Cache 5.6 RPMs are out

2022-06-14 Thread ngtech1ltd
Hey Everybody,
 
Since 5.6 was recently published (and not all the masters have picked it up yet) 
I have built RPMs for:
CentOS 7,8
Oracle Enterprise Linux 7,8
Amazon Enterprise Linux 2
 
All of the above include a couple of my personal patches.
Feel free to pick the SRPMS and look at the sources.
 
All The Bests,
Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] There is the problems with instagram images and videos

2022-06-15 Thread ngtech1ltd
Hey,

Let's sum things up:
Squid-Cache works all over the world and you are having trouble in a specific 
environment.
The main thing to do now is to find the difference, at the setup level, between 
your environment and others.

First, you referenced a danted SOCKS5 proxy.
So the issue is that your setup is more complex than a simple forward proxy.
I do not remember how SOCKS5 works and also do not remember anything about a 
danted SOCKS5 proxy.
I will try to learn more about such setups and will try to respond later on.

It's not a Squid issue, but please point me to a tutorial that you might have 
used, or one that would let me reproduce such a setup.
Once I have more details about the setup I might be able to respond the right way 
and be a bit smarter.

Eliezer

-Original Message-
From: squid-users  On Behalf Of 
simwin
Sent: Wednesday, 15 June 2022 19:16
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] There is the problems with instagram images and 
videos


andre.bolin...@articatech.com:

> TCP_TUNNEL/200 means that the proxy is able to establish a correct connection
> with the destination, do you have any firewall, antivirus, ad-blocker in
> between that could block the traffic?  

No antivirus, no firewall and no ad-blocker - I've checked it!. 

I've made ssh tunnel (ssh -D 0.0.0.0:) and make sock5 proxy to my server -
it works fine! But with danted sock5 proxy I have the same problem - no
instagram images, no videos.
 
Also it may be a provider issue - all Instagram traffic is blocked in our
country. 

That is why I need to know - does Squid work with Instagram (and Twitter videos)
for anyone from another country with a default Squid config?

В Wed, 15 Jun 2022 19:01:30 +0300
simwin :

> Plus squid -v:
> 
> configure options:  '--prefix=' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
> '--libexecdir=/usr/libexec/squid' '--libdir=/usr/lib/squid'
> '--mandir=/usr/share/man/man8' '--sysconfdir=/etc/squid'
> '--with-default-user=proxy' '--with-pidfile=/run/squid.pid'
> '--with-logdir=/var/log/squid'
> 
> В Wed, 15 Jun 2022 18:48:00 +0300
> simwin  пишет:
> 
> > With the latest stable squid-5.6-20220607-rfca8b79b5 the result is the same
> > - no instagram photos and videos :(
> > 
> > The squid configs is default, please see all info below:
> > 
> > $ grep -vE '^$|^#' /etc/squid/squid.conf
> > 
> > acl localnet src 0.0.0.1-0.255.255.255  
> > acl localnet src 10.0.0.0/8 
> > acl localnet src 100.64.0.0/10  
> > acl localnet src 169.254.0.0/16 
> > acl localnet src 172.16.0.0/12  
> > acl localnet src 192.168.0.0/16 
> > acl localnet src fc00::/7   
> > acl localnet src fe80::/10  
> > acl SSL_ports port 443
> > acl Safe_ports port 80  # http
> > acl Safe_ports port 21  # ftp
> > acl Safe_ports port 443 # https
> > acl Safe_ports port 70  # gopher
> > acl Safe_ports port 210 # wais
> > acl Safe_ports port 1025-65535  # unregistered ports
> > acl Safe_ports port 280 # http-mgmt
> > acl Safe_ports port 488 # gss-http
> > acl Safe_ports port 591 # filemaker
> > acl Safe_ports port 777 # multiling http
> > 
> > auth_param basic program /usr/libexec/squid/basic_ncsa_auth
> > /etc/squid/internet_users
> > 
> > acl auth_users proxy_auth REQUIRED
> > http_access allow auth_users
> > auth_param basic casesensitive on
> > http_access deny !Safe_ports
> > http_access deny CONNECT !SSL_ports
> > http_access allow localhost manager
> > http_access deny manager
> > http_access allow localnet
> > http_access allow localhost
> > http_access deny all
> > http_port xxx.xxx.xxx.xxx:
> > coredump_dir /var/cache/squid
> > refresh_pattern ^ftp:   144020% 10080
> > refresh_pattern ^gopher:14400%  1440
> > refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
> > refresh_pattern .   0   20% 4320
> > 
> > Full squid.conf - https://pastebin.mozilla.org/JKSiBuvU/raw
> > Firefox 101 console errors - https://pastebin.mozilla.org/0Osvw45J/raw
> > Squid access.log - https://pastebin.mozilla.org/pOsXtMBW/raw
> > OS Debian 11
> > 
> > 2All: Please answer: does squid works with instagram (and twitter videos)
> > for anyone?!
> > 
> > В Wed, 15 Jun 2022 12:14:22 +0300
> > simwin  пишет:
> >   
> > > В Wed, 15 Jun 2022 02:59:29 +0300
> > > :
> > > 
> > > > I just compiled the newest version of Squid for Debian 11(bullseye) at:
> > > > https://www.ngtech.co.il/repo/debian/11/x86_64/ 
> > > > However you need to know how to install it and I cannot work on the
> > > > installer now. It's also doesn't include all of my patches yet.
> > > > From what I have seen at:
> > > > https://packages.debian.org/bullseye/squid
> > > > The current version at bullseye is 4.13 so you'd better try first 5.6
> > > > before any other things.  
> > > 
> > > That is the good idea! 
> > > 
> > > I'm already in trying t

Re: [squid-users] Logrotate question

2022-06-16 Thread ngtech1ltd
Rob,
 
It will differ depending on whether you implement and use log rotation manually 
or with the logrotate tool.
What OS are you using?
 
Eliezer
 
From: squid-users  On Behalf Of 
robert k Wild
Sent: Wednesday, 15 June 2022 20:19
To: Squid Users 
Subject: [squid-users] Logrotate question
 
Hi all,
 
ATM to clear the logs, I do this in crontab, every 3 months
 
0 0 1 */3 * echo "" > /usr/local/squid/var/logs/access.log and do the same for 
cache log
 
It works but I want to really use log rotate ie
 
0 0 1 */3 * /usr/local/squid/sbin/squid -k rotate
 
I hear logrotate keeps 10 files by default, so does that mean I will have 10 
access logs etc.? And will it keep the file the same, i.e. won't change the 
size or compress it to save space?
 
Thanks,
Rob
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Logrotate question

2022-06-16 Thread ngtech1ltd
Hey Rob,
 
First, there is a difference between rotation and deletion.
If it's not a loaded system then 3 months is OK, but… in most use cases it's 
better to rotate every day and delete after 3 months.
You have the choice to compress the files or to leave them in plain text, but 
that is only a matter of conserving resources.
 
Let me see, I will look at my CentOS 7 system and will try to find the right 
way to do it.
 
Eliezer
 
From: robert k Wild  
Sent: Thursday, 16 June 2022 11:28
To: Eliezer Croitoru 
Cc: Squid Users 
Subject: Re: [squid-users] Logrotate question
 
Thanks Eliezer
 
I have centos 7 and I want it to rotate every 3 months as we need to keep logs 
for every 3 months.
 
Thanks,
Rob
 
On Thu, 16 Jun 2022, 08:11 , mailto:ngtech1...@gmail.com> > wrote:
Rob,
 
It will be different how you implement and use logrotate manually or with the 
logrotate tools.
What OS are you using?
 
Eliezer
 
From: squid-users mailto:squid-users-boun...@lists.squid-cache.org> > On Behalf Of robert k Wild
Sent: Wednesday, 15 June 2022 20:19
To: Squid Users mailto:squid-users@lists.squid-cache.org> >
Subject: [squid-users] Logrotate question
 
Hi all,
 
ATM to clear the logs, I do this in crontab, every 3 months
 
0 0 1 */3 * echo "" > /usr/local/squid/var/logs/access.log and do the same for 
cache log
 
It works but I want to really use log rotate ie
 
0 0 1 */3 * /usr/local/squid/sbin/squid -k rotate
 
I hear log rotate keeps 10 files by default so does that mean I will have 10 
access logs etc and also will it keep the file the same ie won't change the 
size or compress it to save space
 
Thanks,
Rob
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Logrotate question

2022-06-16 Thread ngtech1ltd
You should combine them both.
I am checking this for you right now…
 
Eliezer
 
From: squid-users  On Behalf Of 
robert k Wild
Sent: Thursday, 16 June 2022 12:32
To: Squid Users 
Subject: Re: [squid-users] Logrotate question
 
Cool, Thanks all, il try the logrotate program instead of using squids one
 
Thanks guys :)
 
On Thu, 16 Jun 2022, 10:26 Matus UHLAR - fantomas, mailto:uh...@fantomas.sk> > wrote:
On 16.06.22 10:23, robert k Wild wrote:
>So I can use the package logrotate instead of the squid one

squid packages in debian comes configured for rotating logs with logrotate.
- logfile_rotate is set to 0
- logrotate config file tells when/how to rotate

perhaps it's the same with centos.


>On Thu, 16 Jun 2022, 10:22 Matus UHLAR - fantomas,  >
>wrote:
>
>> On 16.06.22 09:53, robert k Wild wrote:
>> >All I know is I need to keep a record of up to 3 months, worth of logs,
>> due
>> >to gdpr, how would you say I go about this
>>
>> keeping 3 months of log is very different from rotating each 3 months.
>> configure logrotate to rotate daily and keep 92 days worth of logs.
>>
>> I believe centos squid package comes with logrotate configured, should be
>> in
>> /etc/logrotate.d/squid


-- 
Matus UHLAR - fantomas, uh...@fantomas.sk   ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
My mind is like a steel trap - rusty and illegal in 37 states.
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Logrotate question

2022-06-16 Thread ngtech1ltd
Hey Rob,
 
The next is the file:
 
 
From: squid-users  On Behalf Of 
robert k Wild
Sent: Thursday, 16 June 2022 13:27
To: Squid Users 
Subject: Re: [squid-users] Logrotate question
 
Cool, so I will rotate daily and delete after 91 days, thanks guys
 
On Thu, 16 Jun 2022, 11:14 Matus UHLAR - fantomas, mailto:uh...@fantomas.sk> > wrote:
On 16.06.22 10:54, robert k Wild wrote:
>Basically I want to keep logs for 3 months then rotate so it overwrites
>them with another 3 months, if that makes sense

in fact, it does not.

I guess you are supposed to keep 3 months of logs, which mean, you always 
need to have 3 months of logs available.

Each day, you can delete log files over 3 months old.

If you rotated the log once every 3 months, you would have a single file with 3 
months of logs in it, and could remove it 3 months after rotating, when the 
first logs would be 6 months old.

As we already told you, rotate daily and remove old logs after 92 days.
and use logrotate config.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk   ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux - It's now safe to turn on your computer.
Linux - Teraz mozete pocitac bez obav zapnut.
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Logrotate question

2022-06-16 Thread ngtech1ltd
Oops,
 
The next is the file: /etc/logrotate.d/squid
##START
/var/log/squid/*.log {
weekly
rotate 5
compress
notifempty
missingok
nocreate
sharedscripts
postrotate
  # Asks squid to reopen its logs. (logfile_rotate 0 is set in squid.conf)
  # errors redirected to make it silent if squid is not running
  /usr/sbin/squid -k rotate 2>/dev/null
  # Wait a little to allow Squid to catch up before the logs is compressed
  sleep 1
endscript
}
##END
 
So you need to change the rotate count to 92+ and also change the Squid number of 
logs to the same number; a daily variant of the stanza is sketched below.

Let me know if it's helpful.
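
For reference, a daily variant of that stanza could look roughly like this 
(keeping logfile_rotate 0 in squid.conf, as the stock file assumes, with the 
~3 months of retention discussed above):

/var/log/squid/*.log {
daily
rotate 92
compress
delaycompress
notifempty
missingok
nocreate
sharedscripts
postrotate
  /usr/sbin/squid -k rotate 2>/dev/null
  sleep 1
endscript
}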
 
Eliezer
 
From: ngtech1...@gmail.com  
Sent: Thursday, 16 June 2022 14:00
To: 'robert k Wild' ; 'Squid Users' 

Subject: RE: [squid-users] Logrotate question
 
Hey Rob,
 
The next is the file:
 
 
From: squid-users mailto:squid-users-boun...@lists.squid-cache.org> > On Behalf Of robert k Wild
Sent: Thursday, 16 June 2022 13:27
To: Squid Users mailto:squid-users@lists.squid-cache.org> >
Subject: Re: [squid-users] Logrotate question
 
Cool, so I will rotate daily and delete after 91 days, thanks guys
 
On Thu, 16 Jun 2022, 11:14 Matus UHLAR - fantomas, <uh...@fantomas.sk> wrote:
On 16.06.22 10:54, robert k Wild wrote:
>Basically I want to keep logs for 3 months then rotate so it overwrites
>them with another 3 months, if that makes sense

in fact, it does not.

I guess you are supposed to keep 3 months of logs, which mean, you always 
need to have 3 months of logs available.

Each day, you can delete log files over 3 months old.

If you rotated lof once in 3 months, you would have single file with 3 
months of logs in it, and could remove it 3 months after rotating, when 
first logs would be 6 months old.

As we already told you, rotate daily and remove old logs after 92 days.
and use logrotate config.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk   ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux - It's now safe to turn on your computer.
Linux - Teraz mozete pocitac bez obav zapnut.
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Logrotate question

2022-06-16 Thread ngtech1ltd
How did you install Squid on CentOS 7?
From my packages, the OS default, self-compiled, or another source?
 
Eliezer
 
From: robert k Wild  
Sent: Thursday, 16 June 2022 14:05
To: Eliezer Croitoru 
Cc: Squid Users 
Subject: Re: [squid-users] Logrotate question
 
Oops sorry you did say that, sorry I didn't see that at first
 
On Thu, 16 Jun 2022, 12:04 robert k Wild, mailto:robertkw...@gmail.com> > wrote:
I imagine Eliezer that's what I need to put in logrotate.conf file
 
On Thu, 16 Jun 2022, 12:01 , mailto:ngtech1...@gmail.com> > wrote:
Oops,
 
The next is the file: /etc/logrotate.d/squid
##START
/var/log/squid/*.log {
weekly
rotate 5
compress
notifempty
missingok
nocreate
sharedscripts
postrotate
  # Asks squid to reopen its logs. (logfile_rotate 0 is set in squid.conf)
  # errors redirected to make it silent if squid is not running
  /usr/sbin/squid -k rotate 2>/dev/null
  # Wait a little to allow Squid to catch up before the logs is compressed
  sleep 1
endscript
}
##END
 
So you need to change the rotate to 92+ and also change the squid number of 
logs to the same number.
 
Let me know if you it’s helpful.
 
Eliezer
 
From: ngtech1...@gmail.com   mailto:ngtech1...@gmail.com> > 
Sent: Thursday, 16 June 2022 14:00
To: 'robert k Wild' mailto:robertkw...@gmail.com> >; 
'Squid Users' mailto:squid-users@lists.squid-cache.org> >
Subject: RE: [squid-users] Logrotate question
 
Hey Rob,
 
The next is the file:
 
 
From: squid-users mailto:squid-users-boun...@lists.squid-cache.org> > On Behalf Of robert k Wild
Sent: Thursday, 16 June 2022 13:27
To: Squid Users mailto:squid-users@lists.squid-cache.org> >
Subject: Re: [squid-users] Logrotate question
 
Cool, so I will rotate daily and delete after 91 days, thanks guys
 
On Thu, 16 Jun 2022, 11:14 Matus UHLAR - fantomas, mailto:uh...@fantomas.sk> > wrote:
On 16.06.22 10:54, robert k Wild wrote:
>Basically I want to keep logs for 3 months then rotate so it overwrites
>them with another 3 months, if that makes sense

in fact, it does not.

I guess you are supposed to keep 3 months of logs, which mean, you always 
need to have 3 months of logs available.

Each day, you can delete log files over 3 months old.

If you rotated lof once in 3 months, you would have single file with 3 
months of logs in it, and could remove it 3 months after rotating, when 
first logs would be 6 months old.

As we already told you, rotate daily and remove old logs after 92 days.
and use logrotate config.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk   ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux - It's now safe to turn on your computer.
Linux - Teraz mozete pocitac bez obav zapnut.
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Logrotate question

2022-06-16 Thread ngtech1ltd
Since this one is from yum install it’s very simple to just change the config 
files of squid and logrotate.
 
If you need more assistance let me know.
 
Eliezer
 
From: robert k Wild  
Sent: Thursday, 16 June 2022 14:52
To: Eliezer Croitoru 
Cc: Squid Users 
Subject: Re: [squid-users] Logrotate question
 
Self compiled from source with others ie
 
Squidclamav
Cicap
Cicap modules
 
And clamav but did this one via yum install
 
 
On Thu, 16 Jun 2022, 12:27 , mailto:ngtech1...@gmail.com> > wrote:
How did you installed squid on CentOS 7?
>From my packages or the OS default or self compiled or another source?
 
Eliezer
 
From: robert k Wild mailto:robertkw...@gmail.com> > 
Sent: Thursday, 16 June 2022 14:05
To: Eliezer Croitoru mailto:ngtech1...@gmail.com> >
Cc: Squid Users mailto:squid-users@lists.squid-cache.org> >
Subject: Re: [squid-users] Logrotate question
 
Oops sorry you did say that, sorry I didn't see that at first
 
On Thu, 16 Jun 2022, 12:04 robert k Wild, mailto:robertkw...@gmail.com> > wrote:
I imagine Eliezer that's what I need to put in logrotate.conf file
 
On Thu, 16 Jun 2022, 12:01 , mailto:ngtech1...@gmail.com> > wrote:
Oops,
 
The next is the file: /etc/logrotate.d/squid
##START
/var/log/squid/*.log {
weekly
rotate 5
compress
notifempty
missingok
nocreate
sharedscripts
postrotate
  # Asks squid to reopen its logs. (logfile_rotate 0 is set in squid.conf)
  # errors redirected to make it silent if squid is not running
  /usr/sbin/squid -k rotate 2>/dev/null
  # Wait a little to allow Squid to catch up before the logs is compressed
  sleep 1
endscript
}
##END
 
So you need to change the rotate to 92+ and also change the squid number of 
logs to the same number.
 
Let me know if you it’s helpful.
 
Eliezer
 
From: ngtech1...@gmail.com   mailto:ngtech1...@gmail.com> > 
Sent: Thursday, 16 June 2022 14:00
To: 'robert k Wild' mailto:robertkw...@gmail.com> >; 
'Squid Users' mailto:squid-users@lists.squid-cache.org> >
Subject: RE: [squid-users] Logrotate question
 
Hey Rob,
 
The next is the file:
 
 
From: squid-users mailto:squid-users-boun...@lists.squid-cache.org> > On Behalf Of robert k Wild
Sent: Thursday, 16 June 2022 13:27
To: Squid Users mailto:squid-users@lists.squid-cache.org> >
Subject: Re: [squid-users] Logrotate question
 
Cool, so I will rotate daily and delete after 91 days, thanks guys
 
On Thu, 16 Jun 2022, 11:14 Matus UHLAR - fantomas, mailto:uh...@fantomas.sk> > wrote:
On 16.06.22 10:54, robert k Wild wrote:
>Basically I want to keep logs for 3 months then rotate so it overwrites
>them with another 3 months, if that makes sense

in fact, it does not.

I guess you are supposed to keep 3 months of logs, which mean, you always 
need to have 3 months of logs available.

Each day, you can delete log files over 3 months old.

If you rotated lof once in 3 months, you would have single file with 3 
months of logs in it, and could remove it 3 months after rotating, when 
first logs would be 6 months old.

As we already told you, rotate daily and remove old logs after 92 days.
and use logrotate config.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk   ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux - It's now safe to turn on your computer.
Linux - Teraz mozete pocitac bez obav zapnut.
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Logrotate question

2022-06-16 Thread ngtech1ltd
So just create the file I sent you before, or extract it from the Squid RPM using 
"rpm2cpio squid…rpm | cpio -dimv" in some tmp dir.
You will just need to copy the file into the proper location, disable the cron job 
you have created, and, if the squid binary sits in a different folder, 
change the path to the squid binary in the Squid logrotate file accordingly.
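A rough sketch of those steps, assuming a hypothetical RPM file name and that the self-compiled binary lives under /usr/local/squid/sbin (adjust both to your setup):

mkdir /tmp/squid-rpm && cd /tmp/squid-rpm
rpm2cpio /path/to/squid-<version>.rpm | cpio -dimv
cp ./etc/logrotate.d/squid /etc/logrotate.d/squid
# point logrotate at the self-compiled binary instead of /usr/sbin/squid
sed -i 's|/usr/sbin/squid|/usr/local/squid/sbin/squid|' /etc/logrotate.d/squid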
 
All The Bests,
Eliezer
 
From: robert k Wild  
Sent: Thursday, 16 June 2022 15:24
To: Eliezer Croitoru 
Cc: Squid Users 
Subject: Re: [squid-users] Logrotate question
 
No squid isn't sorry it is compiled from source, I forgot to add it sorry about 
that
 
On Thu, 16 Jun 2022, 13:19 , mailto:ngtech1...@gmail.com> > wrote:
Since this one is from yum install it’s very simple to just change the config 
files of squid and logrotate.
 
If you need more assistance let me know.
 
Eliezer
 
From: robert k Wild mailto:robertkw...@gmail.com> > 
Sent: Thursday, 16 June 2022 14:52
To: Eliezer Croitoru mailto:ngtech1...@gmail.com> >
Cc: Squid Users mailto:squid-users@lists.squid-cache.org> >
Subject: Re: [squid-users] Logrotate question
 
Self compiled from source with others ie
 
Squidclamav
Cicap
Cicap modules
 
And clamav but did this one via yum install
 
 
On Thu, 16 Jun 2022, 12:27 , mailto:ngtech1...@gmail.com> > wrote:
How did you installed squid on CentOS 7?
>From my packages or the OS default or self compiled or another source?
 
Eliezer
 
From: robert k Wild mailto:robertkw...@gmail.com> > 
Sent: Thursday, 16 June 2022 14:05
To: Eliezer Croitoru mailto:ngtech1...@gmail.com> >
Cc: Squid Users mailto:squid-users@lists.squid-cache.org> >
Subject: Re: [squid-users] Logrotate question
 
Oops sorry you did say that, sorry I didn't see that at first
 
On Thu, 16 Jun 2022, 12:04 robert k Wild, mailto:robertkw...@gmail.com> > wrote:
I imagine Eliezer that's what I need to put in logrotate.conf file
 
On Thu, 16 Jun 2022, 12:01 , mailto:ngtech1...@gmail.com> > wrote:
Oops,
 
The next is the file: /etc/logrotate.d/squid
##START
/var/log/squid/*.log {
weekly
rotate 5
compress
notifempty
missingok
nocreate
sharedscripts
postrotate
  # Asks squid to reopen its logs. (logfile_rotate 0 is set in squid.conf)
  # errors redirected to make it silent if squid is not running
  /usr/sbin/squid -k rotate 2>/dev/null
  # Wait a little to allow Squid to catch up before the logs is compressed
  sleep 1
endscript
}
##END
 
So you need to change the rotate to 92+ and also change the squid number of 
logs to the same number.
 
Let me know if you it’s helpful.
 
Eliezer
 
From: ngtech1...@gmail.com   mailto:ngtech1...@gmail.com> > 
Sent: Thursday, 16 June 2022 14:00
To: 'robert k Wild' mailto:robertkw...@gmail.com> >; 
'Squid Users' 
Subject: RE: [squid-users] Logrotate question
 
Hey Rob,
 
The next is the file:
 
 
From: squid-users mailto:squid-users-boun...@lists.squid-cache.org> > On Behalf Of robert k Wild
Sent: Thursday, 16 June 2022 13:27
To: Squid Users mailto:squid-users@lists.squid-cache.org> >
Subject: Re: [squid-users] Logrotate question
 
Cool, so I will rotate daily and delete after 91 days, thanks guys
 
On Thu, 16 Jun 2022, 11:14 Matus UHLAR - fantomas, mailto:uh...@fantomas.sk> > wrote:
On 16.06.22 10:54, robert k Wild wrote:
>Basically I want to keep logs for 3 months then rotate so it overwrites
>them with another 3 months, if that makes sense

in fact, it does not.

I guess you are supposed to keep 3 months of logs, which mean, you always 
need to have 3 months of logs available.

Each day, you can delete log files over 3 months old.

If you rotated lof once in 3 months, you would have single file with 3 
months of logs in it, and could remove it 3 months after rotating, when 
first logs would be 6 months old.

As we already told you, rotate daily and remove old logs after 92 days.
and use logrotate config.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk   ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux - It's now safe to turn on your computer.
Linux - Teraz mozete pocitac bez obav zapnut.
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] There is the problems with instagram images and videos

2022-06-16 Thread ngtech1ltd
Hey,

Take a peek at:
https://www1.ngtech.co.il/wpe/2016/05/02/proxy-per-internet-user-is-it-realistic/

You might find ShadowSocks interesting.

Let me know if one of the proxies in the article is good enough for your use 
case.

Eliezer

-Original Message-
From: squid-users  On Behalf Of 
simwin
Sent: Tuesday, 14 June 2022 17:08
To: squid-users@lists.squid-cache.org
Subject: [squid-users] There is the problems with instagram images and videos

Hi! There are problems with Instagram - I can see only text,
without photos and videos.

These are the errors in the browser console:

16:52:51.024 Subsequent non-fatal errors won't be logged; see
https://fburl.com/debugjs. GyaOhXSa8tR.js:56:563

errorListener
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:56
reportNormalizedError
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:56
reportError
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:56
k
https://static.cdninstagram.com/rsrc.php/v3i7Br4/yh/l/ru_RU/0THwjINDuU0.js?_nc_x=Ij3Wp8lg5Kz:899
a
https://static.cdninstagram.com/rsrc.php/v3i7Br4/yh/l/ru_RU/0THwjINDuU0.js?_nc_x=Ij3Wp8lg5Kz:900
x
https://static.cdninstagram.com/rsrc.php/v3i7Br4/yh/l/ru_RU/0THwjINDuU0.js?_nc_x=Ij3Wp8lg5Kz:896
a
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:220
m
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:147
q
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:147
applyWithGuard
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:56
c
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:56
e
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:143
r
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:143
onmessage
https://static.cdninstagram.com/rsrc.php/v3/yK/r/GyaOhXSa8tR.js?_nc_x=Ij3Wp8lg5Kz:143

16:52:51.026 A request from an outside source is blocked: The Single source
policy prohibits reading a remote resource on
https://graph.instagram.com/logging_client_events . (Reason: CORS request
failed). Status code: (null).

16:53:17.750 A request from an outside source is blocked: The Single source
policy prohibits reading a remote resource on
https://scontent-hel3-1.cdninstagram.com/v/t51.2885-19/82690581_2710077395747525_6629318554367819776_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent-hel3-1.cdninstagram.com&_nc_cat=105&_nc_ohc=ctOErYLKsNAAX9tXH6M&edm=AJ9x6zYB&ccb=7-5&oh=00_AT85lFWnwrBmMMuEgs_q4E2WyNALHSTtKfbcOwA0pfA1eA&oe=62AD7CD2&_nc_sid=cff2a4
. (Reason: CORS request failed). Status code: (null)

My squid config, Debian 11:

grep -vE '^$|^#' /etc/squid/squid.conf

auth_param basic program /usr/lib/squid/basic_ncsa_auth
/etc/squid/internet_users
auth_param basic children 5
auth_param basic realm Sasic Authentication
auth_param basic credentialsttl 8 hours
auth_param basic casesensitive on
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network
(LAN)
acl localnet src 100.64.0.0/10  # RFC 6598 shared address space
(CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly
plugged) machines
acl localnet src 172.16.0.0/12  # RFC 1918 local private network
(LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network
(LAN)
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
include /etc/squid/conf.d/*
http_access allow localhost
http_access deny all
http_port xxx.xxx.xxx.xxx: (hidden) 
sslproxy_cert_error allow all
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/spool/ssl_db -M
20MB
coredump_dir /var/spool/squid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorizatio

Re: [squid-users] The usage of extended SNMPD commands to monitor squid.

2022-06-17 Thread ngtech1ltd
Hey Matus,

The Squid-Cache project to my knowledge doesn't have a developer expert or have 
enough "free" time to maintain the SNMP parts of the code.
Amos and Alex can correct me if I'm wrong.
There were plans to make the cache manager pages in a yaml format to allow 
programs to work with instead of parsing the current format.
I do not know where these plans stand, and I believe that extended SNMPD 
commands might be pretty useful since not everything in the cache manager pages 
is available via SNMP.

It will require some development time and a bit of QA but the relevant things 
are:
* Prometheus (json)
* SNMP
* others (yaml)

It's pretty simple to parse the current cache manager output to a specific degree, 
and converting it to json/yaml is also pretty simple.
I remember that I have some code somewhere that does some of the heavy lifting 
in such a project.
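As a rough illustration of the snmpd "extend" idea (paths and the squidclient port are assumptions, and the output is the raw cache manager page, not json/yaml yet):

# /etc/snmp/snmpd.conf
extend squid_info /bin/bash /usr/local/bin/squid_x_stats.sh info

#!/bin/bash
# /usr/local/bin/squid_x_stats.sh (hypothetical path)
# Print one cache manager page so snmpd can expose it via the extend MIB.
# Caching the output to a file, as discussed elsewhere in this thread, would guard against SNMP floods.
PAGE="${1:-info}"
squidclient -h 127.0.0.1 -p 3128 "mgr:${PAGE}" 2>/dev/null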

I will say something like this:
Let's make a DIFF and see what happens. What do you think?

Eliezer

-Original Message-
From: squid-users  On Behalf Of 
Matus UHLAR - fantomas
Sent: Friday, 17 June 2022 11:30
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] The usage of extended SNMPD commands to monitor 
squid.

On 24.05.22 10:39, Eliezer Croitoru wrote:
>Since the Squid-Cache project doesn't maintain the SNMP part of it as far as
>I know I was thinking about:

Doesn't it?

I mean, some data are already avilable via SNMP.

If there are gauges and counters in squid and they are available via SNMP, 
we can expect them to be correct, am I right?

is it that they are in squid but not available via SNMP?

>Using extended SNMPD ie in /etc/snmp/snmpd.conf
>
>extend squid_x_stats /bin/bash /usr/local/bin/squid_x_stats.sh
>
>while the binary itself probably will be a single command/script that will
>have symlinks to itself with a different name (like what busybox provides
>binaries).
>
>With a set of these commands it would be possible to monitor squid via the
>linux SNMPD and the backend would be a script.
>
>To overcome a DOS from the SNMP side I can build a layer of caching in
>files.
>It would not be like the current squid SNMP tree of-course but as long the
>data is there it can be used in any system that supports it.
>
>I have used nagios/cacti/others to create graphs based on this concept.
>
>I am currently working on the PHP re-testing project and it seems that PHP
>7.4 is not exploding it's memory and crashes compared to older versions.
>
>I still need a more stressed system to test the scripts.
>
>I have created the next scripts for now:
>
>*  Fake helper
>*  Session helper based on Redis
>*  Session helper based on FS in /var/spool/squid/session_helper_fs


-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
10 GOTO 10 : REM (C) Bill Gates 1998, All Rights Reserved!
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



[squid-users] Squid ACLs by DSCP

2022-06-18 Thread ngtech1ltd
Hey,
 
I have been marking different clients with DSCP and have managed to redirect 
traffic to different squid ports based on DSCP.
I am trying to use a single squid port that will read the DSCP of the 
connection as an ACL, is this even possible?
Currently my best shot is to use a couple of squid ports with different ACLs per 
port.
For example, SSL-Bump on one specific port while the other does not bump, or 
allowing unknown protocols, etc.
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
 
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid ACLs by DSCP

2022-06-22 Thread ngtech1ltd
Hey Amos,

I have a set of clients which I want to bump, while others I don't want to bump.
I have 10 classes of clients, each with a different pre-defined class.
If I could read the TOS hex value of the incoming intercepted connection, I could 
make the decisions in the ACLs based on that TOS value.
Since I am using an external helper it's pretty easy to change the rules without 
reloading squid.
Currently what I do is use a couple of squid ports and intercept the traffic to 
the designated port based on the DSCP (TOS) value.

It's a pretty nice combination for my specific use case of about 10 pre-defined 
client classes; a rough sketch is below.
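A rough sketch of that per-port wiring, with two made-up classes (CS1 gets bumped, CS2 does not), made-up port numbers, and option names that vary a bit between Squid versions:

# on the gateway: redirect by DSCP class to different intercept ports
iptables -t nat -A PREROUTING -p tcp --dport 443 -m dscp --dscp-class CS1 -j REDIRECT --to-ports 3129
iptables -t nat -A PREROUTING -p tcp --dport 443 -m dscp --dscp-class CS2 -j REDIRECT --to-ports 3130

# squid.conf: decide bump vs splice by the local port each class was sent to
https_port 3129 intercept ssl-bump generate-host-certificates=on cert=/etc/squid/bump.pem
https_port 3130 intercept ssl-bump generate-host-certificates=on cert=/etc/squid/bump.pem
acl to_bump localport 3129
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump to_bump
ssl_bump splice all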

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Wednesday, 22 June 2022 13:08
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid ACLs by DSCP

On 19/06/22 06:55, ngtech1ltd wrote:
> Hey,
> 
> I have been marking different clients with DSCP and have managed to 
> redirect traffic to different squid ports based on DSCP.
> 
> I am trying to use a single squid port that will read the DSCP of the 
> connection as an ACL, is this even possible?
> 

The so-called DSCP "field" is a re-mapping of the TOS value.

See this table for the TOS hex values for each DSCP service type: 
<https://linuxreviews.org/Type_of_Service_(ToS)_and_DSCP_Values#The_DSCP_and_The_ToS_Byte_Values>


Squid has a fair amount of support for TOS. So the question is more 
whether Squid TOS directives can do what you want.


I do not understand quite what ACLs have to do with what you are 
wanting. Can you clarify what you are trying to have happen in terms of 
traffic flow?


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



[squid-users] APPs definitions

2022-06-23 Thread ngtech1ltd
I have started working on APPs definitions by destination AS, Domains and 
Destination IP addresses.
I am currently working on Netflix related domains.
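As an illustration of the intended format, a hedged snippet for the Netflix case (the domains and the AS2906 prefixes here are examples only and should be verified against current data):

acl app_netflix dstdomain .netflix.com .nflxvideo.net .nflximg.net .nflxext.com .nflxso.net
# destination networks resolved from the Netflix AS (AS2906) can be matched too:
acl app_netflix_nets dst 45.57.0.0/17 198.38.96.0/19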
 
If anyone knows about a specific source that contains such lists please let me 
know.
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-23 Thread ngtech1ltd
Hey David,
 
Just trying to understand something:
Isn't a Fortinet something that should replace Squid?
I assumed that it should do a much better job than Squid in many areas.
What is a Fortinet (I have one…) not covering?
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
David Touzeau
Sent: Thursday, 23 June 2022 19:12
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 
message: truncated record
 
Hi Alex,
is the v5 commit 7a73a54 already included in the latest 5.5,5.6 versions?

This is very unfortunate because WCCP is used by default by Fortinet firewall 
devices. It should be very popular.
Indeed, Fortinet is flooding the market.
I can volunteer for the funding and the necessary testing to be done.
On 23/06/2022 at 14:44, Alex Rousskov wrote:
On 6/21/22 07:43, David Touzeau wrote: 



We are trying to use WCCP with Fortigate without success; Squid version 5.5 
always claims "Ignoring WCCPv2 message: truncated record" 

What can be the cause ? 

The most likely cause are bugs in untested WCCP fixes (v5 commit 7a73a54). 
Dormant draft PR 970 contains unfinished fixes for the problems in that 
previous attempt: 
https://github.com/squid-cache/squid/pull/970 

IMHO, folks that need WCCP support should invest into that semi-abandoned Squid 
feature or risk losing it. WCCP code needs serious refactoring and proper 
testing. There are currently no Project volunteers that have enough resources 
and capabilities to do either. 

https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F
 


HTH, 

Alex. 




We have added a service ID 80 on fortigate 

config system wccp 
 edit "80" 
 set router-id 10.10.50.1 
 set group-address 0.0.0.0 
 set server-list 10.10.50.2 255.255.255.255 
 set server-type forward 
 set authentication disable 
 set forward-method GRE 
 set return-method GRE 
 set assignment-method HASH 
 next 
end 

Squid wccp configuration 

wccp2_router 10.10.50.1 
wccp_version 3 
# tested v4 do the same behavior 
wccp2_rebuild_wait on 
wccp2_forwarding_method gre 
wccp2_return_method gre 
wccp2_assignment_method hash 
wccp2_service dynamic 80 
wccp2_service_info 80 protocol=tcp protocol=tcp flags=src_ip_hash priority=240 
ports=80,443 
wccp2_address 0.0.0.0 
wccp2_weight 1 

Squid claim in debug log 

2022/06/21 13:15:38.780 kid4| 80,6| wccp2.cc(1206) wccp2HandleUdp: 
wccp2HandleUdp: Called. 
2022/06/21 13:15:38.781 kid4| 5,5| ModEpoll.cc(118) SetSelect: FD 38, type=1, 
handler=1, client_data=0, timeout=0 
2022/06/21 13:15:38.781 kid4| 80,3| wccp2.cc(1230) wccp2HandleUdp: Incoming 
WCCPv2 I_SEE_YOU length 112. 
2022/06/21 13:15:38.781 kid4| ERROR: Ignoring WCCPv2 message: truncated record 
 exception location: wccp2.cc(1133) CheckSectionLength 



-- 

___ 
squid-users mailing list 
squid-users@lists.squid-cache.org   
http://lists.squid-cache.org/listinfo/squid-users 

-- 


Technical Support
David Touzeau
Orgerus, Yvelines, France
Artica Tech 

P: +33 6 58 44 69 46 
www: wiki.articatech.com   
www: articatech.net   
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-24 Thread ngtech1ltd
Hey David,
 
I am not sure, and I can spin up my Forti, but from what I remember there are PBR 
functions in the Forti.
Why would WCCP be required? To pass only ports 80 and 443 instead of all 
traffic?
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
David Touzeau
Sent: Friday, 24 June 2022 14:04
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 
message: truncated record
 
Hi Eliezer,
No, Fortinet is good.
In this case, connecting HTTP/HTTPS with WCCP from Fortinet to Squid did not 
work, because Squid refuses to communicate with Fortinet due to the "Ignoring 
WCCPv2 message: truncated record" issue.
With Squid, Fortinet reports that no WCCP server is available.
 
On 23/06/2022 at 18:33, ngtech1...@gmail.com wrote:
Hey David,
 
Just trying to understand something:
Aren’t Fortinet something that should replace squid?
I assumed that it should do a much better job then Squid in many aeras.
What a Fortinet(I have one…) is not covering?
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users   
 On Behalf Of David Touzeau
Sent: Thursday, 23 June 2022 19:12
To: squid-users@lists.squid-cache.org 
 
Subject: Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 
message: truncated record
 
Hi Alex,
is the v5 commit 7a73a54 already included in the latest 5.5,5.6 versions?

This is very unfortunate because WCCP is used by default by Fortinet firewall 
devices. It should be very popular.
Indeed, Fortinet is flooding the market.
I can volunteer for the funding and the necessary testing to be done.
Le 23/06/2022 à 14:44, Alex Rousskov a écrit :
On 6/21/22 07:43, David Touzeau wrote: 



We trying to using WCCP with Fortigate without success Squid version  5.5 
always claim "Ignoring WCCPv2 message: truncated record" 

What can be the cause ? 

The most likely cause are bugs in untested WCCP fixes (v5 commit 7a73a54). 
Dormant draft PR 970 contains unfinished fixes for the problems in that 
previous attempt: 
https://github.com/squid-cache/squid/pull/970 

IMHO, folks that need WCCP support should invest into that semi-abandoned Squid 
feature or risk losing it. WCCP code needs serious refactoring and proper 
testing. There are currently no Project volunteers that have enough resources 
and capabilities to do either. 

https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F
 


HTH, 

Alex. 




We have added a service ID 80 on fortigate 

config system wccp 
 edit "80" 
 set router-id 10.10.50.1 
 set group-address 0.0.0.0 
 set server-list 10.10.50.2 255.255.255.255 
 set server-type forward 
 set authentication disable 
 set forward-method GRE 
 set return-method GRE 
 set assignment-method HASH 
 next 
end 

Squid wccp configuration 

wccp2_router 10.10.50.1 
wccp_version 3 
# tested v4 do the same behavior 
wccp2_rebuild_wait on 
wccp2_forwarding_method gre 
wccp2_return_method gre 
wccp2_assignment_method hash 
wccp2_service dynamic 80 
wccp2_service_info 80 protocol=tcp protocol=tcp flags=src_ip_hash priority=240 
ports=80,443 
wccp2_address 0.0.0.0 
wccp2_weight 1 

Squid claim in debug log 

022/06/21 13:15:38.780 kid4| 80,6| wccp2.cc(1206) wccp2HandleUdp: 
wccp2HandleUdp: Called. 
2022/06/21 13:15:38.781 kid4| 5,5| ModEpoll.cc(118) SetSelect: FD 38, type=1, 
handler=1, client_data=0, timeout=0 
2022/06/21 13:15:38.781 kid4| 80,3| wccp2.cc(1230) wccp2HandleUdp: Incoming 
WCCPv2 I_SEE_YOU length 112. 
2022/06/21 13:15:38.781 kid4| ERROR: Ignoring WCCPv2 message: truncated record 
 exception location: wccp2.cc(1133) CheckSectionLength 



-- 

___ 
squid-users mailing list 
squid-users@lists.squid-cache.org   
http://lists.squid-cache.org/listinfo/squid-users 

-- 


Technical Support
David Touzeau
Orgerus, Yvelines, France
Artica Tech 

P: +33 6 58 44 69 46 
www: wiki.articatech.com   
www: articatech.net   
 



___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid

[squid-users] MS-SQL with squid helpers

2022-06-26 Thread ngtech1ltd
Hey Everybody,
 
I was wondering if someone wrote a set of helpers that works with MS-SQL server 
database?
I have a very big MSSQL Database that contains a set of domains and urls and I 
have a program that runs queries against this DB.
 
If no one has written such helpers, I can manage to write a set of helpers, at least in 
Ruby and GoLang, and maybe a couple of other languages.
I can recommend a nice basic video about MS-SQL that I have seen lately:
https://www.youtube.com/watch?v=h0nxCDiD-zg
 
Very basic but very nice if you already have some SQL and programming 
fundamentals.
 

Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] the free domains blacklists are gone..

2022-06-30 Thread ngtech1ltd
Hey,
 
I have tried to download blacklists from a couple of sites that were publishing these 
in the past, and all of them are gone.
The only free resource I have found was DNS blacklists.
 
I just wrote a dstdomain external helper that can work with a SQL DB, and it 
seems to run pretty nicely.
Until now I have tried MySQL, MariaDB, MSSQL, and PostgreSQL, and all of them work 
pretty nicely.
There is an overhead in storing the data in a DB compared to a plain text file 
but the benefits are worth it.
 
The only lists I have found are for Pihole for example at:
https://github.com/blocklistproject/Lists
 
So now I just need to convert these to dstdomain format and they will work with 
Squid pretty nicely; a rough conversion is sketched below.
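Assuming the downloaded list is in hosts format (one "0.0.0.0 domain" entry per line, comments starting with #), something like this could do it:

awk '!/^#/ && NF >= 2 {print "." $2}' porn.txt | sort -u > porn.dstdomain

acl blocked_domains dstdomain "/etc/squid/porn.dstdomain"
http_access deny blocked_domains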
 
Any recommendations for free lists are welcome.
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid.conf in a DB Mysql

2022-07-09 Thread ngtech1ltd
Hey Marcelo,
 
It’s possible to use a SQL DB (Mysql,MSSQL,Oracle,PostgreSQL…) and a 
programming language to put the rules outside of squid.conf.
It could be a combination of external acl helpers with DB backend and a 
configuration (squid.conf) generator based on a DB.
However, you first need to do some homework and make sure it will be good 
enough for your use case.
As Alex mentioned there are fast and slow ACLs, but for your use case, depending on 
your service size, it's possible that 
you can use simple text files and simple external ACL helpers; a minimal wiring example is below.
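A very rough sketch of that wiring (the helper path and format fields here are made up for illustration; the helper answers OK/ERR per request line while the actual rules live in the DB or a text file):

external_acl_type db_check ttl=60 children-max=10 %SRC %DST /usr/local/bin/sql_acl_helper
acl allowed_by_db external db_check
http_access allow allowed_by_db
http_access deny all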
 
To be able to answer your question in detail you need to prepare a technical 
spec that tries to summarize your use case.
I am working on a series of Zoom meetings that I hope will start next week on 
Thursday evening IST.
I need to prepare the slides and environment for this meeting and it will be 
the first of: Squid 0 to hero
So the first meetings will not touch your use case specifically but I will 
discuss with the participants their area so
we can talk and give demos for specific use cases somewhere in these meetings.
 
I will try to post about these meetings in the coming week with hope that you 
will be able to participate.
 
I believe that for your use case it's better not to use PHP to write 
your helper, despite the fact that
the latest versions of PHP which I have tested are stable and won't stop 
working even after a very, very long run time
(which is what I was told should be tested in these PHP versions).
 
In my production environment I am using many Ruby helpers that are ultimately worth 
their memory consumption.
(Kinkie, it took me a while to grasp that the memory usage differences between 
languages aren't worth worrying about.)
 
I am working on couple examples but I am pretty sure it’s not your use case.
 
All The Bests,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
Marcelo
Sent: Sunday, 10 July 2022 0:08
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid.conf in a DB Mysql
 
Hello,
 
Is it possible to use MySQL and PHP programming to put squid rules outside of 
squid.conf?
I heard about doing it using external ACLs, but I can't find any documentation or a good 
example of it.
 
I would like to “transfer” parameters as ACLs, HTTP_ACCESS, HTTP_PORT, and 
TCP_OUTGOING_ADDRESS from squid.conf to a DB+PHP solution.
 
 
Marcelo
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] 0 2 RO - Squid-Cache Zoom Meetup

2022-07-10 Thread ngtech1ltd
Hey Everybody,

https://www.ngtech.co.il/0-2-ro/index.php/2022/07/11/0-2-ro/

https://www.ngtech.co.il/0-2-ro/wp-content/uploads/2022/07/meeting-01-1024x576.png

Up-coming 0 2 RO Squid-Cache community meetup next week, on 21/07/2022 at 20:30 
IST. The meeting will be in Zoom and I hope that we can meet each other and 
understand what we can do together. The idea is to learn more about Squid-Cache 
as a one-punch tool. It can do a lot of things and we just need to learn 
from the experience of the elders. There is an eighth layer in the OSI model, and 
it is not something you can find in search engines but in each and every one 
of our souls. The computers world is a very special cross-world place; we 
utilize the 7 OSI layers to get to the eighth, and this meetup will give the 
platform for that layer to the Squid-Cache Community.
Put it in your calendar: 

https://www.ngtech.co.il/0-2-ro/wp-content/uploads/2022/07/0-2-ro-Squid-Cache-Community-meetup.ics

Please send me a response for the invite.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sqid uses all RAM / killed by OOM

2022-07-11 Thread ngtech1ltd
Hey Ronny,
 
First to make the data more readable use a top snapshot to illustrate the 
memory usage.
Second, use Squid 5.6 and not 5.2
The issue is not necessarily  because of the Squid version but other things.
We should narrow down the issues as any other Squid issue.
First upgrade to 5.6 and then we need the top snapshot.
After that we need snapshots of cache-manager pages.
 
To dump snapshots of the cache manager pages you can use the next script:
https://gist.githubusercontent.com/elico/8790bdc835d8e9ecbc57e72fc31effc0/raw/60d140b0e772fa4f418779bfc27e4804a345ce23/dump-cache-mgr-to-file.sh
 
I am using it like this:
/usr/local/bin/dump-cache-mgr-to-file.sh 2>&1 | tee "cachemg-data$(date +%Y-%m-%d_%H-%M-%S)"
 
Try to take a couple of dumps with 4.1 and 5.6 (and with 5.2 too if you want) and upload the 
snapshots somewhere in a zip file; a small collection loop is sketched below.
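For example, a hedged little loop that takes a dump every ten minutes for an hour and then zips the results (same paths as above):

for i in $(seq 1 6); do
  /usr/local/bin/dump-cache-mgr-to-file.sh 2>&1 | tee "cachemg-data$(date +%Y-%m-%d_%H-%M-%S)"
  sleep 600
done
zip cachemgr-dumps.zip cachemg-data*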
Just take into account that these snapshots may contain confidential data, so 
you are advised to review them and redact specific
parts before making them public.
 
Just a piece of advice:
A machine that should handle "max_filedescriptors 40960", i.e. above 16k, should 
have more RAM, i.e. 8-16 GB as a starter.
 
All The Bests,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
Ronny Preiss
Sent: Monday, 11 July 2022 9:55
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Sqid uses all RAM / killed by OOM
 
Hello all,
 
I have the following problem with squid 5.2 on ubuntu 22.04.
Squid consumes all ram and the entire SWAP. When swap and ram are completely 
full, the OOM killer strikes and terminates the process.
 
We use three internal child proxy servers with keepalived and haproxy as load 
balancers. From our ISP we use a parent upstream proxy for external internet 
traffic.
As an operating system we have so far Ubuntu 20.04.4 with squid 4.1 in use. 
This constellation works flawlessly.
 
Now I want to update the Server to Ubuntu 22.04 and squid 5.2. But with Ubuntu 
22.04 and squid 5.2 the above mentioned problem with the OOM Killer occurs.
The new machine has only the OS and squid installed.
 
Who can help me with a solution?
 
With kind regards
Ronny
 
Attached the squid configuration and the VMWare specs.
 
### VM Specs ###
OS: Ubuntu 22.04 Server
CPU: 4x (Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz)
RAM: 4 GB
VMWare: ESXi 7.0 U2
 
### CONFIG ###
acl 10.172.xxx.xxx/18   src 10.172.xxx.xxx/18 
 
acl 172.16.xxx.xxx/12   src 172.16.xxx.xxx/12 
 
acl 192.168.xxx.xxx/16   src 192.168.xxx.xxx/16 
 
 
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
 
http_access allow 10.172.xxx.xxx/18   Safe_ports
http_access allow 172.16.xxx.xxx/12   Safe_ports
http_access allow 192.168.xxx.xxx/16   Safe_ports
 
http_access allow localhost manager
http_access allow localhost
http_access deny manager
http_access deny all
 
include /etc/squid/conf.d/*
http_port 10.172.xxx.xxx:3128  
 
cache_peer 10.210.xxx.xxx parent 8080 0
cache_dir ufs /var/spool/squid 3000 16 256
cache_effective_user proxy
cache_effective_group proxy
 
coredump_dir /var/spool/squid
 
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern .   0   20% 4320
 
never_direct allow all
max_filedescriptors 40960
dns_nameservers 10.244.xxx.xxx
 
### DMESG ###
 
[256929.150801] 
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/squid.service,task=squid,pid=26390,uid=13
[256929.150822] Out of memory: Killed process 26390 (squid) total-vm:9691764kB, 
anon-rss:3657748kB, file-rss:2320kB, shmem-rss:0kB, UID:13 pgtables:18932kB 
oom_score_adj:0
[256929.510641] oom_reaper: reaped process 26390 (squid), now anon-rss:0kB, 
file-rss:0kB, shmem-rss:0kB
 
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.x on Centos8 not working

2022-07-11 Thread ngtech1ltd
Hey Ahmad,
 
What is preventing you from using 4.x or 5.x?
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
Ahmad Alzaeem
Sent: Tuesday, 28 June 2022 16:29
To: squid-users@lists.squid-cache.org
Subject: [squid-users] squid 3.x on Centos8 not working
 
 
Hello Folks ,
 
Trying to compile squid 3.x on Centos8 but have an errors below seems in SMBLIB 
.
 
Squid ver :
squid-3.5.28
 
GCC ver :
 
gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/8/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap 
--enable-languages=c,c++,fortran,lto --prefix=/usr --mandir=/usr/share/man 
--infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla 
--enable-shared --enable-threads=posix --enable-checking=release 
--enable-multilib --with-system-zlib --enable-__cxa_atexit 
--disable-libunwind-exceptions --enable-gnu-unique-object 
--enable-linker-build-id --with-gcc-major-version-only 
--with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl 
--disable-libmpx --enable-offload-targets=nvptx-none --without-cuda-driver 
--enable-gnu-indirect-function --enable-cet --with-tune=generic 
--with-arch_32=x86-64 --build=x86_64-redhat-linux
Thread model: posix
gcc version 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
 
we are using ./configure  with default flags  ,  and have the errors below :
 
 
make[2]: Entering directory '/root/squid-3.5.28/lib/rfcnb'
depbase=`echo rfcnb-io.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\
/bin/sh ../../libtool  --tag=CC   --mode=compile gcc -DHAVE_CONFIG_H   -I../.. 
-I../../include -I../../lib -I../../src -I../../include-I../../lib  -Wall 
-Wpointer-arith -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -Wall -g -O2 -MT rfcnb-io.lo -MD 
-MP -MF $depbase.Tpo -c -o rfcnb-io.lo rfcnb-io.c &&\
mv -f $depbase.Tpo $depbase.Plo
libtool: compile:  gcc -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib 
-I../../src -I../../include -I../../lib -Wall -Wpointer-arith -Wwrite-strings 
-Wmissing-prototypes -Wmissing-declarations -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT -Wall -g -O2 -MT rfcnb-io.lo -MD -MP -MF .deps/rfcnb-io.Tpo -c 
rfcnb-io.c  -fPIC -DPIC -o .libs/rfcnb-io.o
libtool: compile:  gcc -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib 
-I../../src -I../../include -I../../lib -Wall -Wpointer-arith -Wwrite-strings 
-Wmissing-prototypes -Wmissing-declarations -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT -Wall -g -O2 -MT rfcnb-io.lo -MD -MP -MF .deps/rfcnb-io.Tpo -c 
rfcnb-io.c -o rfcnb-io.o >/dev/null 2>&1
depbase=`echo rfcnb-util.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\
/bin/sh ../../libtool  --tag=CC   --mode=compile gcc -DHAVE_CONFIG_H   -I../.. 
-I../../include -I../../lib -I../../src -I../../include-I../../lib  -Wall 
-Wpointer-arith -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -Wall -g -O2 -MT rfcnb-util.lo 
-MD -MP -MF $depbase.Tpo -c -o rfcnb-util.lo rfcnb-util.c &&\
mv -f $depbase.Tpo $depbase.Plo
libtool: compile:  gcc -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib 
-I../../src -I../../include -I../../lib -Wall -Wpointer-arith -Wwrite-strings 
-Wmissing-prototypes -Wmissing-declarations -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT -Wall -g -O2 -MT rfcnb-util.lo -MD -MP -MF .deps/rfcnb-util.Tpo -c 
rfcnb-util.c  -fPIC -DPIC -o .libs/rfcnb-util.o
libtool: compile:  gcc -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib 
-I../../src -I../../include -I../../lib -Wall -Wpointer-arith -Wwrite-strings 
-Wmissing-prototypes -Wmissing-declarations -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT -Wall -g -O2 -MT rfcnb-util.lo -MD -MP -MF .deps/rfcnb-util.Tpo -c 
rfcnb-util.c -o rfcnb-util.o >/dev/null 2>&1
depbase=`echo session.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\
/bin/sh ../../libtool  --tag=CC   --mode=compile gcc -DHAVE_CONFIG_H   -I../.. 
-I../../include -I../../lib -I../../src -I../../include-I../../lib  -Wall 
-Wpointer-arith -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -Wall -g -O2 -MT session.lo -MD 
-MP -MF $depbase.Tpo -c -o session.lo session.c &&\
mv -f $depbase.Tpo $depbase.Plo
libtool: compile:  gcc -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib 
-I../../src -I../../include -I../../lib -Wall -Wpointer-arith -Wwrite-strings 
-Wmissing-prototypes -Wmissing-declarations -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT -Wall -g -O2 -MT session.lo -MD -MP -MF .deps/session.Tpo -c 
session.c  -fPIC -DPIC -o .libs/session.o
libtool: compile:  gcc -DHAVE_CONFIG_H -I../

Re: [squid-users] squid 3.x on Centos8 not working

2022-07-11 Thread ngtech1ltd
Hey Ahmad,
 
I really don’t know what to say.
I am not using delay pools so I cannot say anything about that.
 
About DNS IPV4/IPV6 I am not sure what you are referring to.
Can you please refer me to the bug report on these?
It should be testable.
I have not seen anything about this in my environment until now so I am pretty 
confused.
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
From: Ahmad Alzaeem <0xf...@gmail.com> 
Sent: Monday, 11 July 2022 22:53
To: ngtech1...@gmail.com; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid 3.x on Centos8 not working
 
None of the Squid 4.x releases support delay pools.
 
Squid 5.x is full of bugs with DNS IPv4/IPv6 because of the Happy Eyeballs feature.
 
Thanks 
 
 
From: squid-users <squid-users-boun...@lists.squid-cache.org> on behalf of ngtech1...@gmail.com
Date: Monday, July 11, 2022 at 12:37 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid 3.x on Centos8 not working
Hey Ahmad,
 
What is preventing you from using 4.x or 5.x?
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
From: squid-users mailto:squid-users-boun...@lists.squid-cache.org> > On Behalf Of Ahmad Alzaeem
Sent: Tuesday, 28 June 2022 16:29
To: squid-users@lists.squid-cache.org 
 
Subject: [squid-users] squid 3.x on Centos8 not working
 
 
Hello Folks ,
 
Trying to compile squid 3.x on Centos8 but have an errors below seems in SMBLIB 
.
 
Squid ver :
squid-3.5.28
 
GCC ver :
 
gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/8/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap 
--enable-languages=c,c++,fortran,lto --prefix=/usr --mandir=/usr/share/man 
--infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla 
--enable-shared --enable-threads=posix --enable-checking=release 
--enable-multilib --with-system-zlib --enable-__cxa_atexit 
--disable-libunwind-exceptions --enable-gnu-unique-object 
--enable-linker-build-id --with-gcc-major-version-only 
--with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl 
--disable-libmpx --enable-offload-targets=nvptx-none --without-cuda-driver 
--enable-gnu-indirect-function --enable-cet --with-tune=generic 
--with-arch_32=x86-64 --build=x86_64-redhat-linux
Thread model: posix
gcc version 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
 
we are using ./configure  with default flags  ,  and have the errors below :
 
 
make[2]: Entering directory '/root/squid-3.5.28/lib/rfcnb'
depbase=`echo rfcnb-io.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\
/bin/sh ../../libtool  --tag=CC   --mode=compile gcc -DHAVE_CONFIG_H   -I../.. 
-I../../include -I../../lib -I../../src -I../../include-I../../lib  -Wall 
-Wpointer-arith -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -Wall -g -O2 -MT rfcnb-io.lo -MD 
-MP -MF $depbase.Tpo -c -o rfcnb-io.lo rfcnb-io.c &&\
mv -f $depbase.Tpo $depbase.Plo
libtool: compile:  gcc -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib 
-I../../src -I../../include -I../../lib -Wall -Wpointer-arith -Wwrite-strings 
-Wmissing-prototypes -Wmissing-declarations -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT -Wall -g -O2 -MT rfcnb-io.lo -MD -MP -MF .deps/rfcnb-io.Tpo -c 
rfcnb-io.c  -fPIC -DPIC -o .libs/rfcnb-io.o
libtool: compile:  gcc -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib 
-I../../src -I../../include -I../../lib -Wall -Wpointer-arith -Wwrite-strings 
-Wmissing-prototypes -Wmissing-declarations -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT -Wall -g -O2 -MT rfcnb-io.lo -MD -MP -MF .deps/rfcnb-io.Tpo -c 
rfcnb-io.c -o rfcnb-io.o >/dev/null 2>&1
depbase=`echo rfcnb-util.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\
/bin/sh ../../libtool  --tag=CC   --mode=compile gcc -DHAVE_CONFIG_H   -I../.. 
-I../../include -I../../lib -I../../src -I../../include-I../../lib  -Wall 
-Wpointer-arith -Wwrite-strings -Wmissing-prototypes -Wmissing-declarations 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -Wall -g -O2 -MT rfcnb-util.lo 
-MD -MP -MF $depbase.Tpo -c -o rfcnb-util.lo rfcnb-util.c &&\
mv -f $depbase.Tpo $depbase.Plo
libtool: compile:  gcc -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib 
-I../../src -I../../include -I../../lib -Wall -Wpointer-arith -Wwrite-strings 
-Wmissin

Re: [squid-users] 0 2 RO - Squid-Cache Zoom Meetup

2022-07-12 Thread ngtech1ltd
OK, so the meeting will be at the following Zoom link:
https://us02web.zoom.us/j/83973796573?pwd=TTdjY1p1dFBVUDVta1Yxa3N6OEo0dz09

It's public but has restricted login so you will need to be admitted by me.
To prepare for the meetup it would be nice to know who will be participating in 
the meetup.
Please send me a thumbs up if you are willing to participate.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: ngtech1...@gmail.com  
Sent: Monday, 11 July 2022 1:24
To: 'squid-users@lists.squid-cache.org' 
Subject: 0 2 RO - Squid-Cache Zoom Meetup

Hey Everybody,

https://www.ngtech.co.il/0-2-ro/index.php/2022/07/11/0-2-ro/

https://www.ngtech.co.il/0-2-ro/wp-content/uploads/2022/07/meeting-01-1024x576.png

Up-coming 0 2 RO Squid-Cache community meetup next week the 21/07/2022 at 20:30 
IST. The meeting will be in Zoom and I hope that we can meet each other and 
understand what we can do together. The idea is to learn more about Squid-Cache 
as a one punch tool. It can do a lot of things and you we just need to learn 
from the experience of the elders. There is a eight layer in the OSI model and 
this is not something you can find in search engines but in each and every one 
of our souls. The Computers world is a very special cross-world place and we 
utilize the 7 OSI layers to get to the eight and I this meetup will give the 
platform for this layer for the Squid-Cache Community.
Put it in your calendar: 

https://www.ngtech.co.il/0-2-ro/wp-content/uploads/2022/07/0-2-ro-Squid-Cache-Community-meetup.ics

Please send me a response for the invite.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 0 2 RO - Squid-Cache Zoom Meetup

2022-07-13 Thread ngtech1ltd
IST = Israel Time Zone
Which now is UTC+3

IST != India Standard Time


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: ngtech1...@gmail.com  
Sent: Wednesday, 13 July 2022 5:52
To: 'squid-users@lists.squid-cache.org' 
Subject: RE: 0 2 RO - Squid-Cache Zoom Meetup

OK So the meeting will be up in the next zoom link:
https://us02web.zoom.us/j/83973796573?pwd=TTdjY1p1dFBVUDVta1Yxa3N6OEo0dz09

It's public but has restricted login so you will need to be admitted by me.
To prepare for the meetup it would be nice to know who will be participating in 
the meetup.
Please send me thumbs up that you are willing to participate.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: ngtech1...@gmail.com  
Sent: Monday, 11 July 2022 1:24
To: 'squid-users@lists.squid-cache.org' 
Subject: 0 2 RO - Squid-Cache Zoom Meetup

Hey Everybody,

https://www.ngtech.co.il/0-2-ro/index.php/2022/07/11/0-2-ro/

https://www.ngtech.co.il/0-2-ro/wp-content/uploads/2022/07/meeting-01-1024x576.png

Up-coming 0 2 RO Squid-Cache community meetup next week the 21/07/2022 at 20:30 
IST. The meeting will be in Zoom and I hope that we can meet each other and 
understand what we can do together. The idea is to learn more about Squid-Cache 
as a one punch tool. It can do a lot of things and you we just need to learn 
from the experience of the elders. There is a eight layer in the OSI model and 
this is not something you can find in search engines but in each and every one 
of our souls. The Computers world is a very special cross-world place and we 
utilize the 7 OSI layers to get to the eight and I this meetup will give the 
platform for this layer for the Squid-Cache Community.
Put it in your calendar: 

https://www.ngtech.co.il/0-2-ro/wp-content/uploads/2022/07/0-2-ro-Squid-Cache-Community-meetup.ics

Please send me a response for the invite.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] MS-SQL with squid helpers

2022-07-14 Thread ngtech1ltd
There is an MSSQL driver for Linux named:
FreeTDS

It is superior to the ODBC route that Perl mostly uses through the DBI library.
FreeTDS is pretty simple to install, and from my tests it seems like the 
better choice.
I will try to write an example session helper using FreeTDS and Ruby.
For me Ruby is the simplest to write, test and use for such simple and tiny 
projects.
It's not worth writing it in other languages unless there is a server component 
to the service.
It's possible to write more robust server-side code which supports threading 
and parallelism in other languages,
but for most use cases, in my experience, it's cheaper to use a bit more RAM and 
CPU than to write
very good server-side code.
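
Until the Ruby/FreeTDS version is ready, here is a minimal sketch of the same idea
as a Squid external ACL helper in Python (assuming the pymssql module, which is
built on top of FreeTDS, and a hypothetical "sessions" table with "username" and
"active" columns - those names are mine, not from any existing helper):

#!/usr/bin/env python3
# Minimal Squid external ACL helper sketch: reads one %LOGIN token per line
# from stdin and answers OK/ERR based on a lookup in an MSSQL table.
import sys
import pymssql

conn = pymssql.connect(server="127.0.0.1", user="squid",
                       password="secret", database="proxy")

for line in sys.stdin:
    user = line.strip()
    if not user:
        print("ERR", flush=True)
        continue
    cur = conn.cursor()
    cur.execute("SELECT 1 FROM sessions WHERE username = %s AND active = 1",
                (user,))
    print("OK" if cur.fetchone() else "ERR", flush=True)
    cur.close()

On the squid.conf side it would be wired up roughly like this:

external_acl_type mssql_session ttl=60 %LOGIN /usr/local/bin/mssql_session_helper.py
acl in_session external mssql_session
http_access allow in_session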

Amos,
I have also tested MSSQL as a blacklist categories backend and it looks 
promising.
MySQL would be OK, but MSSQL and Oracle take much more load for specific use 
cases.

Working on the helper...

Looking forward to next week's Squid-Cache community meetup.


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Thursday, 14 July 2022 11:57
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] MS-SQL with squid helpers

On 26/06/22 23:27, ngtech1ltd wrote:
> Hey Everybody,
> 
> I was wondering if someone wrote a set of helpers that works with MS-SQL 
> server database?
> 

(I see you went ahead with this already, just responding for anyone in 
future).

Squid ships with several DB helpers, for both authentication and 
external ACL lookups. They are based on Perl so any database which Perl 
provides a DB interface for can be used with those helpers.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] MS-SQL with squid helpers

2022-07-14 Thread ngtech1ltd
While looking for some materials I found the following MSSQL helper, which used 
my repo 😃

https://github.com/flysen/squid




Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Thursday, 14 July 2022 11:57
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] MS-SQL with squid helpers

On 26/06/22 23:27, ngtech1ltd wrote:
> Hey Everybody,
> 
> I was wondering if someone wrote a set of helpers that works with MS-SQL 
> server database?
> 

(I see you went ahead with this already, just responding for anyone in 
future).

Squid ships with several DB helpers, for both authentication and 
external ACL lookups. They are based on Perl so any database which Perl 
provides a DB interface for can be used with those helpers.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] MS-SQL with squid helpers

2022-07-14 Thread ngtech1ltd
So the first helper is the session helper login script example:
https://github.com/elico/squid-mssql-helpers




Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Thursday, 14 July 2022 11:57
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] MS-SQL with squid helpers

On 26/06/22 23:27, ngtech1ltd wrote:
> Hey Everybody,
> 
> I was wondering if someone wrote a set of helpers that works with MS-SQL 
> server database?
> 

(I see you went ahead with this already, just responding for anyone in 
future).

Squid ships with several DB helpers, for both authentication and 
external ACL lookups. They are based on Perl so any database which Perl 
provides a DB interface for can be used with those helpers.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] fool windows into thinking it has internet access

2022-07-21 Thread ngtech1ltd
Take a peek at:
https://docs.microsoft.com/en-us/powershell/module/nettcpip/test-netconnection?view=windowsserver2022-ps
 
This will highlight your issue and will probably help make more sense of what you 
see.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
robert k Wild
Sent: Wednesday, 20 July 2022 18:23
To: Squid Users 
Subject: [squid-users] fool windows into thinking it has internet access
 
hi all,
 
trying to fool windows it has internet access and not just network access
 
looking at this guide
 
https://docs.microsoft.com/en-US/troubleshoot/windows-client/networking/internet-explorer-edge-open-connect-corporate-public-network
 
i have put in my white list
 
.msftconnecttest.com  
.msftncsi.com  
 
i have whitelisted ports 80 and 443 by default
 
on my windows client i have enabled the proxy via settings > proxy > manual 
proxy setup
 
and also i have enabled the winhttp proxy putting this in cmd
 
netsh winhttp set proxy ip_address:port
 
and i get back a connection test text saying "microsoft connect test" if i go to
 
http://www.msftconnecttest.com/connecttest.txt
 
but my network icon is still saying just network access only not internet access
 
can anyone help me out please
 
thanks,
rob

-- 
Regards, 

Robert K Wild.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] fool windows into thinking it has internet access

2022-07-22 Thread ngtech1ltd
Hey Robert,
 
The internet reachability test is composed of a couple of parts.
Only one of them is HTTP.
There is also an ICMP and DNS part to it.
You can customize it in the registry at:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NlaSvc\Parameters\Internet
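
For example, to see the probe targets NCSI currently uses (value names as I
remember them - verify them on your own box before relying on this):

reg query "HKLM\SYSTEM\CurrentControlSet\Services\NlaSvc\Parameters\Internet"

The interesting values are ActiveWebProbeHost/ActiveWebProbePath (the HTTP probe
you already whitelisted), ActiveDnsProbeHost/ActiveDnsProbeContent (the DNS probe)
and EnableActiveProbing.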
 
The Windows internet access check doesn't affect other software; other programs do not 
rely on any of the Windows checks but run such tests by themselves.
 
I hope this helps you.
I can try to test it locally, but it's better that you first make a small 
effort to dump the traffic on the interface with Wireshark
(after you have flushed the DNS cache) to verify what traffic Windows uses to 
test the internet connectivity.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: robert k Wild  
Sent: Friday, 22 July 2022 13:23
To: Eliezer Croitoru 
Cc: Squid Users 
Subject: Re: [squid-users] fool windows into thinking it has internet access
 
so i need to whitelist
 
internetbeacon.msedge.net  
?
 
On Thu, 21 Jul 2022 at 20:53, <ngtech1...@gmail.com> wrote:
Take a peek at:
https://docs.microsoft.com/en-us/powershell/module/nettcpip/test-netconnection?view=windowsserver2022-ps
 
This will highlight your issue and will probably make more sense into what you 
see.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of robert k Wild
Sent: Wednesday, 20 July 2022 18:23
To: Squid Users <squid-users@lists.squid-cache.org>
Subject: [squid-users] fool windows into thinking it has internet access
 
hi all,
 
trying to fool windows it has internet access and not just network access
 
looking at this guide
 
https://docs.microsoft.com/en-US/troubleshoot/windows-client/networking/internet-explorer-edge-open-connect-corporate-public-network
 
i have put in my white list
 
.msftconnecttest.com  
.msftncsi.com  
 
i have whitelisted ports 80 and 443 by default
 
on my windows client i have enabled the proxy via settings > proxy > manual 
proxy setup
 
and also i have enabled the winhttp proxy putting this in cmd
 
netsh winhttp set proxy ip_address:port
 
and i get back a connection test text saying "microsoft connect test" if i go to
 
http://www.msftconnecttest.com/connecttest.txt
 
but my network icon is still saying just network access only not internet access
 
can anyone help me out please
 
thanks,
rob

-- 
Regards, 

Robert K Wild.
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users


-- 
Regards, 

Robert K Wild.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] pros/cons squid vs next generation firewall

2022-07-25 Thread ngtech1ltd
Hey Dieter,

You should differentiate between SMB-level appliances and more advanced ones.
The basic difference is simplicity of management via a web UI.
They also have an API, but you will need developer-level skills for that.
From my experience with CheckPoint, they basically have a large DB of 
applications and threat feeds.
Since you need it for basic ACLs and ICAP virus scanners, it's possible that a 
CheckPoint server might be good
for your use case.
You should really compare the goals and the costs in general.
The SMB versions downgrade any HTTP connection to HTTP/1.x, so don't expect 
these to support HTTP/2.
I don't know the size of your company, but in general the most "famous" vendors 
for NGFW are:
* CheckPoint
* FortiNet
* Palo Alto
* SonicWall

There is a price for each product, and you should compare all of them and also 
different versions of them.
Every product in the industry has its limitations, and I have found weaknesses 
in each and every one of them,
and in many others including Squid.

NGFW is basically a term that Palo Alto invented, and all the others adopted 
the same naming for publicity.
The one big PRO I found in CheckPoint is that their support was very responsive.

Specifically with CheckPoint you should look at the "Known Limitations" page
for the firmware version you would be getting on the appliance or server your 
company might want to purchase.

All The Bests,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of 
Dieter Bloms
Sent: Monday, 25 July 2022 14:22
To: squid-users@lists.squid-cache.org
Subject: [squid-users] pros/cons squid vs next generation firewall

Hello,

I run some Squid proxy servers in conjunction with ICAP virus scanners
and I'm very happy with them. Our company now wants to replace them with
a checkpoint next generation firewall. Do you have some arguments that
speak for the further operation of the Squid proxies?

Thank you for any hint!


-- 
Rgeards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Fwd: Sqid uses all RAM / killed by OOM

2022-07-25 Thread ngtech1ltd
Hey Alex,

Just to clear up the doubts.
Ronny was trying to use 5.2 on Ubuntu 22.04 as an upgrade from 20.04.
The issue was that, for what is probably the same traffic, it consumed a lot more RAM
than the other Squid version did on 20.04.

My first suggestion was to upgrade to the latest 5.6, but since 22.04 uses OpenSSL 3.x,
Squid 5.6 would not compile on it. The referenced patch is for OpenSSL 3.x compatibility
and not for a memory leak.

What I didn't understand is, first: how can 4.17 be compiled on 22.04, and if
it compiles, is there still a memory leak?

I believe it's too soon to upgrade to 22.04 and I would suggest using another OS for now.
From what I have seen, Ubuntu doesn't have more support than other OSes for now, so...

The only thing I can offer is to use some RPM-based OS which can use my 
packages.
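
(If anyone wants to confirm whether a given build really leaks, a simple way to
watch it over time - assuming squidclient and cache manager access are available -
is to run:

squidclient mgr:info | grep -i memory

and compare the numbers after a few hours of the same traffic.)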

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Monday, 25 July 2022 23:05
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Fwd: Sqid uses all RAM / killed by OOM

On 7/25/22 01:59, Ronny Preiss wrote:

> Can someone support me regarding my question about compiling squid 5.6 
> on ubuntu 22.04?

There is probably some misunderstanding: You are expecting some kind of 
a patch for Squid v5.6, but I do not know what patch you are talking 
about. I am aware of one important bug fix that was added to Squid v5 
after v5.6 release[1], but that fix is not targeting any memory leaks 
(it may still fix some as a side effect though).

I also do not recall any known memory leaks in v5.6, but perhaps I have 
forgotten something we fixed in master/v6 long time ago -- until [1], it 
was not possible to run v5 in production deployments I dealt with, so I 
can easily forget or miss some old v5 details.

If you believe that your Squid v5.6 is leaking memory, try [1]. If that 
does not help, you may need to create a bug report on Bugzilla and start 
collecting the necessary details to confirm the leak and identify what 
is leaking.

[1] https://github.com/squid-cache/squid/commit/c999621.diff


HTH,

Alex.



> Since my previous attempts also have the "memory leak" on ubuntu 22.04 
> and squid 5.6 problem again.
> 
> Kind regards Ronny
> 
> -- Forwarded message -
> Von: *Ronny Preiss* mailto:ronny.pre...@gmail.com>>
> Date: Mo., 11. Juli 2022 um 08:54 Uhr
> Subject: Sqid uses all RAM / killed by OOM
> To:  >
> 
> 
> Hello all,
> 
> I have the following problem with squid 5.2 on ubuntu 22.04.
> Squid consumes all ram and the entire SWAP. When swap and ram are 
> completely full, the OOM killer strikes and terminates the process.
> 
> We use three internal child proxy servers with keepalived and haproxy as 
> load balancers. From our ISP we use a parent upstream proxy for external 
> internet traffic.
> As an operating system we have so far Ubuntu 20.04.4 with squid 4.1 in 
> use. This constellation works flawlessly.
> 
> Now I want to update the Server to Ubuntu 22.04 and squid 5.2. But with 
> Ubuntu 22.04 and squid 5.2 the above mentioned problem with the OOM 
> Killer occurs.
> The new machine has only the OS and squid installed.
> 
> Who can help me with a solution?
> 
> With kind regards
> Ronny
> 
> Attached the squid configuration and the VMWare specs.
> 
> ### VM Specs ###
> OS: Ubuntu 22.04 Server
> CPU: 4x (Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz)
> RAM: 4 GB
> VMWare: ESXi 7.0 U2
> 
> ### CONFIG ###
> acl 10.172.xxx.xxx/18  src 10.172.xxx.xxx/18 
> 
> acl 172.16.xxx.xxx/12  src 172.16.xxx.xxx/12 
> 
> acl 192.168.xxx.xxx/16  src 192.168.xxx.xxx/16 
> 
> 
> acl Safe_ports port 80
> acl Safe_ports port 21
> acl Safe_ports port 443
> acl Safe_ports port 70
> acl Safe_ports port 210
> acl Safe_ports port 1025-65535
> acl Safe_ports port 280
> acl Safe_ports port 488
> acl Safe_ports port 591
> acl Safe_ports port 777
> 
> http_access allow 10.172.xxx.xxx/18  Safe_ports
> http_access allow 172.16.xxx.xxx/12  Safe_ports
> http_access allow 192.168.xxx.xxx/16  Safe_ports
> 
> http_access allow localhost manager
> http_access allow localhost
> http_access deny manager
> http_access deny all
> 
> include /etc/squid/conf.d/*
> http_port 10.172.xxx.xxx:3128 
> 
> cache_peer 10.210.xxx.xxx parent 8080 0
> cache_dir ufs /var/spool/squid 3000 16 256
> cache_effective_user proxy
> cache_effective_group proxy
> 
> coredump_dir /var/spool/squid
> 
> refresh_pattern ^ftp:   144020% 10080
> refresh_pattern ^gopher:14400%  1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
> refresh_pattern \/(P

Re: [squid-users] slow TCP_TUNNEL [SOLVED]

2022-07-27 Thread ngtech1ltd
Great!
 
I’m happy you were able to resolve the issue easily.
 
All The Bests,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
Katerina Bubenickova
Sent: Wednesday, 27 July 2022 10:22
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] slow TCP_TUNNEL [SOLVED]
 
Hey Eliezer,
 
Thank you. Now I am ashamed for I have sent unintentionally [solved] answer 
only to Alex yesterday.
 
I tried wireshark, disabled client persistent connections.
but the problem was system run out of filedescriptors.
4096 was not enough . I didn't notice it in the cache.log before
 
So I put into 
/etc/security/limits.conf
 
line 
* - nofile 65535
 
 
and into squid.conf:
max_filedescriptors 65535
 
systemctl restart squid
Now it is working all right.
 
Thank you for your effort,
now see I have a lot to learn.
I have to figure out the difference between  forward and intercept proxy,
what the pinger daemon is and what workers are.
 
Two proxies makes faster performance for us, 
On Debian 11 there is version Version 4.13
load balance is done by bind - two IPs have the name in A record
 
I will remove the `dns_v4_first` 
Thank you all again for your support,
 
 
Katerina
 
 
 
 
 
 
>>> <ngtech1...@gmail.com> 07/26/22 9:40 PM >>>
Hey Katerina,

Let's try to understand the issue first.
The CentOS 6 squid version is what exactly? (squid -v)
What else do you require this proxy to do besides squidGuard?
From what I understand it's a simple forward proxy and not an intercept proxy.
It means that all DNS resolution is done on the proxy.
What localhost DNS daemon are you using? (BIND? Unbound? other?)
What version of Squid are you running on the Debian 11 machine? (squid -v)

The `dns_v4_first` configuration is obsolete.
The first thing to check is whether the pinger is up and running.
From what I remember, Debian didn't like the pinger daemon.
Are you using workers in your configuration?
If you can share your current squid version and squid.conf we might be able to 
help you try to understand what's going on.
The latest version of squid I have published for CentOS 6 was: 
squid-3.5.28-1.el6.x86_64.rpm

For a simple forward proxy you won't need too much hardware, but if you won't be 
using workers this would be the expected behavior.
I believe that if you do not require special configurations we can compare against 
another simple forward proxy just to make sure that
the core of the issue is something with squid.
(It's possible to find the issue without a comparison if enough details 
are published.)

How exactly is the proxy configured on the client side?
Also, is there any form of authentication happening, or plain connections?
How do you load balance between the proxies?

From my experience 700 clients are OK on a single proxy, however...
only if the load is not too high at the network level.
It can be that the maximum limit of open file descriptors has been reached, or a couple 
of other reasons.
Can you share couple pages of your cache manager output?
The next script:
https://gist.githubusercontent.com/elico/8790bdc835d8e9ecbc57e72fc31effc0/raw/60d140b0e772fa4f418779bfc27e4804a345ce23/dump-cache-mgr-to-file.sh

should dump all pages to stdout but don’t send the output on the public list 
please.
I will try to analyze the pages and see if there is something specific I can 
understand from them.
If the above output would not be sufficient I have another more in-depth script 
that should help us
to understand what is the core issue with this VM.

I am waiting for more details so I would be able to try and help you.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of Katerina Bubenickova
Sent: Monday, 25 July 2022 11:40
To: squid-users@lists.squid-cache.org
Subject: [squid-users] slow TCP_TUNNEL

Hi,
We have 2 squid proxies running on Centos 6 which is very old (let's call them 
C1, C2) in DMZ.

I have installed Debian 11 bullseye and squid +squidguard, trying to use the 
same configuration (let's call it D1).
If I use this proxy for 1 pc station, all is ok.
If Iadded the proxy to dns as third proxy (C1+C2 + D1) or use one old proxy and 
the new one (C1+D1) the response of internet was very slow, unusable.

I tryed to fix it without success:
I added directive url_rewrite_children 200
I changed DNS from 8.8.8.8 to localhost
I turned off squid cache
I commented out squidguard
I migrated from one vmware server to another, better,
I added memory (16 GB) and CPU (6) ,
I added directive dns_v4_first on
There are no rules in D1 firewall

We have about 70

Re: [squid-users] regex for normal websites

2022-07-27 Thread ngtech1ltd
I would assume that if you want to match something like dstdomain you would use:
(^(.*\.)?)adobe\.com$
 
Or two regex:
\.adobe\.com$
^adobe\.com$
 
I like https://rubular.com/ very much,
 
which allows you to see the matches visually.
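
A quick way to sanity-check it offline as well - a tiny Python snippet using the
same pattern as above (the host list is just an example):

import re

pattern = re.compile(r'(^(.*\.)?)adobe\.com$')   # the combined pattern above

for host in ("adobe.com", "www.adobe.com", "blahadobe.com"):
    # search() is enough here because the pattern is anchored with ^ and $
    print(host, "MATCH" if pattern.search(host) else "no match")

# adobe.com MATCH, www.adobe.com MATCH, blahadobe.com no match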
 
Eliezer
 
 
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
robert k Wild
Sent: Wednesday, 27 July 2022 21:03
To: Antony Stone 
Cc: Squid Users 
Subject: Re: [squid-users] regex for normal websites
 
Makes sense thanks Antony
 
On Wed, 27 Jul 2022, 18:59 Antony Stone, <antony.st...@squid.open.source.it> wrote:
On Wednesday 27 July 2022 at 19:25:46, robert k Wild wrote:

> nice one thanks Amos
> 
> i dont understand as in regex the terms
> 
> ^ - start of line
> . - any single character
> * - repetition of character before

Correction: zero or more instances of the character before

> $ - end of line
> 
>  so going by this it should be
> 
> ^.*adobe.com$

Well, that means "start of line, something or nothing, then 'adobe.com' and 
end of line".

So, it basically just means "adobe.com at the end of the line".


Thus, the same as "adobe.com$"


Antony.

-- 
This space intentionally has nothing but text explaining why this space has 
nothing but text explaining that this space would otherwise have been left 
blank, and would otherwise have been left blank.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] regex for normal websites

2022-07-28 Thread ngtech1ltd
Hey Robert,
 
The docs at http://www.squid-cache.org/Doc/config/acl/  states:
 
acl aclname ssl::server_name_regex [-i] \.foo\.com ...
  # regex matches server name obtained from various sources [fast]
 
I do not know exactly what that means, but it will not work with a 
helper in most cases.
I have found the following sources in git:
https://github.com/squid-cache/squid/blob/bf95c10aa95bf8e56d9d8d1545cb5a3aafab0d2c/doc/release-notes/release-3.5.sgml#L414
 
New types ssl::server_name  and ssl::server_name_regex
   to match server name from various sources (CONNECT authority 
name,
   TLS SNI domain, or X.509 certificate Subject Name).
 
Which means that the ACL does a set of checks, and not just a domain-name match.
It's also possible that the domain name is not known at the CONNECT stage 
of the connection.
If I remember correctly, browsers may reuse the same exact connection for multiple 
domains, but
I have not seen this yet in production.
With Squid, once you bump the connection to HTTP/1.x you can be 100% sure about the 
Host header of the request.
 
In ServerName.cc, i.e.:
https://github.com/squid-cache/squid/blob/aee3523a768aff4d1e6c1195c4a401b4ef5688a0/src/acl/ServerName.cc#L81
 
 
There is specific logic for what is done and what is matched, but I am not sure 
what would be used in the case of:
*.adobe.com
 
Certificate SAN.
 
Specifically this part about the Common Names, i.e. the SAN:
https://github.com/squid-cache/squid/blob/aee3523a768aff4d1e6c1195c4a401b4ef5688a0/src/acl/ServerName.cc#L105
 
which to my understanding points to:
https://github.com/squid-cache/squid/blob/d146da3bfe7083381ae7ab38640cbfd0d2542374/src/ssl/support.cc#L195
 
doesn't make any sense to me (I didn't try that hard to understand it).
 
If someone is able to make sense of this in a systematic fashion it 
would help.
(I do not see any debugs() usage there or any helpful comment.)
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
robert k Wild
Sent: Wednesday, 27 July 2022 13:52
To: Squid Users 
Subject: Re: [squid-users] regex for normal websites
 
that's the weird thing, when i try this in  "ssl::server_name_regex"
.adobe.com  
 
it doesnt work
 
you mean escape ie the \ character
 
 
 
 
 
On Wed, 27 Jul 2022 at 11:05, Matus UHLAR - fantomas <uh...@fantomas.sk> wrote:
On 27.07.22 10:54, robert k Wild wrote:
>think i got it right but just want to double check with you guys
>
>so in my "ssl::server_name" i had
>.adobe.com  
>
>that worked but i want to mix normal website and regex websites together so
>i just have one list for all

didn't the above work?  AFAIK it should, IIRC domain matching in squid 
matches "domain.com" if you check for ".domain.com".

>i now have this for "ssl::server_name_regex"
>^.*adobe.com  $
>
>it works, so im guessing its right

the dot should be escaped


-- 
Matus UHLAR - fantomas, uh...@fantomas.sk   ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
BSE = Mad Cow Desease ... BSA = Mad Software Producents Desease
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users


 
-- 
Regards, 

Robert K Wild.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] adding cache_control = nocache to http request using squid transparent proxy

2022-07-28 Thread ngtech1ltd
Hey Amos,

I support what you wrote, and I do not know why the service provider wants this,
but there are some cases in which there is a need to lower the cache ratio of the clients.
Usually fast service is what ISPs want, but there are a couple of use cases I have seen
where it makes sense to somehow try to disable caching on the client side.
In these specific cases the relationship between the ISP and the client should be
fully understood by both sides, and the ISP should OVER-COMMIT its service to the client
to compensate for the client limitations.
I would assume it should be some 50-100% over-commit on top of the package.
In recent years I have seen that 1080p HD video usually uses 3Mbps VBR in cases of
real-time transcoding of the video (on both the server and client side), while the
exact same pre-transcoded videos use 6Mbps CBR
(which reduces the client-side CPU and device requirements).

So, if I had a couple of CDNs pushing data into my network while one of them is
overloading my clients' current hardware,
there would be a right to push back on that CDN's performance, since it somehow
indirectly forces my clients to upgrade their hardware.

To my understanding, many ISPs and CDNs won't see their push for better service
coverage as a bad thing, but what's next? We won't be able to pick our noses between
the commercials that keep popping onto our screens, eyes or heads?

I just played this faint little conversation in my head:
Son: Ohh, my head hurts, dad!
Dad: Who hit you?
Son: I don't know, I was sitting in front of the **device** and the video was 4K fps.
Dad: You should use a lower-fps rendering; in my days we used to watch The Matrix at
15-30 fps and it was fun.

Hope it helps to give someone a smile.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Wednesday, 27 July 2022 3:46
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] adding cache_control = nocache to http request using 
squid transparent proxy

On 27/07/22 07:52, muhanad wrote:
> Hello
> 
> I am trying to edit the header of http headers to set the cache_control 
> option to " no-cache" to prevent users from being able to cache the 
> contents

This will not do what you think it does.

The "no-cache" control actually *enables* caching by recipients. It just 
requires a quick revalidation check before the cached content is used.


> even if they are using any type of caching engines. the squid 
> proxy will work in a transparent mode. The traffic is originated from 
> one of our CDNs,

This does not make sense. Just publish the Squid machine IP in DNS 
instead of the CDN server IP. No need for interception.


> also the connection is direct between the clients and 
> the CDN servers, thus the proxy will work in transparent mode with IP 
> spoofing so the in the header the IP address is stays the IP address of 
> the client and  not the proxy server.

This may not do what you think it does. When traffic is arriving *from* 
Internet the source-IP indicates which route to deliver the response 
packets. You do not want the origin server(s) bypassing Squid on the TCP 
SYNACK packets - that will break all traffic.


> PS: We are an ISP company based in Iraq, Baghdad and we are trying to 
> prevent the clients from caching all HTTP data.
> 

Why? This is typically a very bad idea.

All it does is:
  * lower the amount of bandwidth available to your clients
- given them a bad service/experience.
  * increase the traffic delays across your network
- even worse service/experience.
  * encourage other ISP to erase the cache limitations on traffic from 
your servers even on traffic where it is correct
- even worse service/experience.


Even if you are charging clients for bandwidth used. You want to be able 
to service *more* clients as quickly as possible, not scare them away 
with a bad service.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Windows Server 2019-22 Kerberos transparent Windows client authentication help wanted. Try 2

2022-07-30 Thread ngtech1ltd
Hey Everybody,

The last time I tried to test transparent Windows client authentication to AD 
with Kerberos, I failed every test.
The documentation at:
https://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
https://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory

is not sufficient; it only describes the idea, and while the idea is well understood, 
the actual implementation is not well explained
in most of the articles I have tried to learn from.
Last time I tried CentOS, RHEL, Fedora, Oracle, Debian and Ubuntu, and 
failed on all of them.

The latest documents I have seen which seem good to some degree are:
https://support.kaspersky.com/KWTS/6.1/en-US/166440.htm
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/setting-up-squid-as-a-caching-proxy-with-kerberos-authentication
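
For reference, the squid.conf side that those guides converge on boils down to a
few lines; a minimal sketch, assuming the proxy's HTTP/ service principal already
exists in AD and was exported to /etc/squid/HTTP.keytab (the realm/host names here
are placeholders and the helper path varies per distro):

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/HTTP.keytab -s HTTP/proxy.example.lan@EXAMPLE.LAN
auth_param negotiate children 20 startup=0 idle=1
auth_param negotiate keep_alive on

acl kerb_auth proxy_auth REQUIRED
http_access allow kerb_auth

The part that keeps failing for me is not these lines but getting the SPN, the DNS
name and the keytab to agree with each other.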


My next try is for:
https://journeyofthegeek.com/2017/12/30/pfsense-squid-kerberos/

If someone have the knowledge about a specific guide that works for Windows 
Server 2016+ please send me a link.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] regex for normal websites

2022-08-02 Thread ngtech1ltd
I believe it should have been:
^adobe\.com$
^.*\.adobe\.com$
^\*\.adobe\.com$
 
But I don't know the code to this depth.
If I had written the match, I think it would have been something a bit 
different:
*   A match for the SNI
*   A wildcard ("joker") match for the SAN, i.e. a *.adobe.com SAN should catch both 
www.adobe.com and www.www.adobe.com
 
But for some reason it's not like that; I assume the browsers and the libraries 
don't implement it, for an unknown reason.
 
If Alex or anyone else from the Factory knows the details of the ACL, they can 
answer better than me.
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: robert k Wild  
Sent: Tuesday, 2 August 2022 14:51
To: Eliezer Croitoru 
Cc: Squid Users 
Subject: Re: [squid-users] regex for normal websites
 
thanks Eliezer
 
so it should be
 
adobe\.com
 
not
 
.adobe.\com or
 
^.*adobe.com  
 
as the ^.* could include
 
blahadobe.com  
 
 
 
On Thu, 28 Jul 2022 at 08:14, <ngtech1...@gmail.com> wrote:
Hey Robert,
 
The docs at http://www.squid-cache.org/Doc/config/acl/  states:
 
acl aclname ssl::server_name_regex [-i] \.foo\.com ...
  # regex matches server name obtained from various sources [fast]
 
Which and I do not know exactly what it means but it will not work with a 
helper in most cases.
I have found the in the git the next sources:
https://github.com/squid-cache/squid/blob/bf95c10aa95bf8e56d9d8d1545cb5a3aafab0d2c/doc/release-notes/release-3.5.sgml#L414
 
New types ssl::server_name  and ssl::server_name_regex
   to match server name from various sources (CONNECT authority 
name,
   TLS SNI domain, or X.509 certificate Subject Name).
 
Which means that there is a set of checks which the acl does and not just a 
domain name.
It’s also even possible that the domain name is not know in the CONNECT state 
of the connection.
If I remember correctly there is a possibility for browsers to use the same 
exact connection for multiple domains but
I have not seen this yet in production.
With Squid once you bump the connection to HTTP/1.x you can make 100% sure the 
features of the Host header request.
 
At Servername.cc ie:
https://github.com/squid-cache/squid/blob/aee3523a768aff4d1e6c1195c4a401b4ef5688a0/src/acl/ServerName.cc#L81
 
 
There is a specific logic of what is done and what is matched but I am not sure 
what would be used in the case of:
*.adobe.com  
 
Certificate SAN.
 
Specifically This part of the Common Names ie SAN:
https://github.com/squid-cache/squid/blob/aee3523a768aff4d1e6c1195c4a401b4ef5688a0/src/acl/ServerName.cc#L105
 
which to my understanding points to:
https://github.com/squid-cache/squid/blob/d146da3bfe7083381ae7ab38640cbfd0d2542374/src/ssl/support.cc#L195
 
doesn’t make any sense to me.( didn’t tried that much to understand)
 
If someone might be able to make sense of things in a synchronic fashion it 
would help.
(I do not see any debugs usage there or any helping comment )
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of robert k Wild
Sent: Wednesday, 27 July 2022 13:52
To: Squid Users <squid-users@lists.squid-cache.org>
Subject: Re: [squid-users] regex for normal websites
 
that's the weird thing, when i try this in  "ssl::server_name_regex"
.adobe.com  
 
it doesnt work
 
you mean escape ie the \ character
 
 
 
 
 
On Wed, 27 Jul 2022 at 11:05, Matus UHLAR - fantomas <uh...@fantomas.sk> wrote:
On 27.07.22 10:54, robert k Wild wrote:
>think i got it right but just want to double check with you guys
>
>so in my "ssl::server_name" i had
>.adobe.com  
>
>that worked but i want to mix normal website and regex websites together so
>i just have one list for all

didn't the above work?  AFAIK it should, IIRC domain matching in squid 
matches "domain.com" if you check for ".domain.com".

>i now have this for "ssl::server_name_regex"
>^.*adobe.com  $
>
>it works, so im guessing its right

the dot should be escaped


-- 
Matus UHLAR - fantomas, uh...@fantomas.sk   ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
BSE = Mad Cow Desease ... BSA = Mad Software Producents Desease
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listin

Re: [squid-users] xcalloc error when installing squid in container on CentOS 9 host

2022-08-02 Thread ngtech1ltd
I will try to publish a CentOS 9 version later on to make sure it will work on 
a VM.
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
Francesco Chemolli
Sent: Tuesday, 2 August 2022 11:28
To: Frank Ansari 
Cc: Squid Users 
Subject: Re: [squid-users] xcalloc error when installing squid in container on 
CentOS 9 host
 
Hi Frank,
  could you share what does your configuration look like (minus any 
confidential bits)? And I assume you're running the version of squid packaged 
by the distros?
 
 
On Tue, Aug 2, 2022 at 9:22 AM Frank Ansari <nabil1...@gmail.com> wrote:
Hi,
 
I have found a weird issue with CentOS 9.
 
So far I had squid running on a CentOS 8 system within an Alpine Linux 
Container and this has worked.
 
Now I installed CentOS 9 and also latest Alpine Linux with squid 5.5.
 
Squid refuses to start and when I run "squid -z" I get this error:
 
[root@324ae7d5e4db /]# 2022/08/01 08:01:47| FATAL: xcalloc: Unable to allocate 
1073741816 blocks of 432 bytes!

2022/08/01 08:01:47| Squid Cache (Version 5.5): Terminated abnormally. 
CPU Usage: 0.002 seconds = 0.000 user + 0.002 sys
Maximum Resident Size: 31744 KB
Page faults with physical i/o: 0
 
My question: has anybody the same issue? Why is squid asking for 432 GB?
 
This seems to have nothing to do with my squid.conf. Whatever I change there 
has no effect at all.
 
The CentOS 9 VM is running on Proxmox and has 4 GB RAM.
 
I also tried to install Debian 11 and Ubuntu 20 and 22 containers and similar 
errors.
 
My last try was to install a CentOS 9 conatiner on the CentOS 9 host and also 
this gives the same error.
 
I have now installed squid 5.5 directly on the OS but I still curios why it 
refuses to run in any kind of container.
 
 
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users


 
-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] regex for normal websites

2022-08-02 Thread ngtech1ltd
Hey Matus,

The question is not only about matching what the client asks for, but also about 
the request at the lower levels.
The ACLs also check (as I mentioned in the code snippets) the certificate's 
"Subject Alternative Name".
Based on this, it's relevant for a couple of use cases.
For example, if I want to splice a star-domain SAN but not a literal one.
There is a difference between regex and dstdomain by definition, and indeed it's 
not documented enough for my taste.
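
A quick way to see which names a server certificate actually carries in its SAN
(assuming OpenSSL 1.1.1 or newer on the box):

openssl s_client -connect www.adobe.com:443 -servername www.adobe.com </dev/null 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName

That makes it easy to check whether the name you want to splice is a literal SAN
entry or only covered by a wildcard.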

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of 
Matus UHLAR - fantomas
Sent: Tuesday, 2 August 2022 15:18
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] regex for normal websites

On 02.08.22 15:05, ngtech1...@gmail.com wrote:
>I believe it should have been:
>^adobe\.com$
>^.*\.adobe\.com$

\.adobe\.com$ does the same and is more efficient

>^\*\.adobe\.com$

this is for literal "*.adobe.com" (noboty puts * into web browser), but it's 
covered by previous variand.


-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Spam = (S)tupid (P)eople's (A)dvertising (M)ethod
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] filedescriptors on debian/systemd

2022-08-02 Thread ngtech1ltd
Hey,

What's the bug exactly?
The design of systemd is to enforce the FD limit.
This comes from the init level 0 of the design and, due to this,
squid cannot "patch" the kernel at runtime like any other process.
The OS and systemd do not give any API to allow a request for "more FDs".
I assume that these days it would make sense for the kernel or
systemd to implement such a feature, but I don't know if anyone has worked on 
this.
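
Either way, it is easy to verify what limit the running squid actually got
(assuming a systemd-managed service named squid):

systemctl show squid --property=LimitNOFILE
grep "Max open files" /proc/$(pgrep -o squid)/limits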

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of 
Matus UHLAR - fantomas
Sent: Tuesday, 2 August 2022 16:54
To: squid-users@lists.squid-cache.org
Subject: [squid-users] filedescriptors on debian/systemd

Hello,

I have encountered Debian bug 934208:

2022/07/28 16:40:53 kid1| With 1024 file descriptors available
2022/07/29 06:50:18 kid1| WARNING! Your cache is running out of filedescriptors

according to the bug report:

"Under systemd the default is not to have any limitation at all."

which seems not to be true, but:

Unfortunately there are still some bugs that needs to be straightened
out upstream for Squid to use the --with-filedescriptors value when
there is *no* specific upper limit provided by the OS.


limits when I log in (ssh or console) are 1024(soft) and 1048576 (hard) and 
yet squid starts with 1024 FDs.

I have configured:

# cat /etc/systemd/system/squid.service.d/override.conf
[Service]
LimitNOFILE=65536

and after reloading systemd:

# systemctl daemon-reload

and restart squid it seems to work properly:

2022/08/02 15:52:28 kid1| With 65536 file descriptors available


does anyone encounter this bug?
How do you fix it?

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
The early bird may get the worm, but the second mouse gets the cheese.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] regex for normal websites

2022-08-02 Thread ngtech1ltd
Hey Robert,
 
I will test this with latest squid and my Apps helper and will verify.
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: robert k Wild  
Sent: Tuesday, 2 August 2022 15:15
To: Eliezer Croitoru 
Cc: Squid Users 
Subject: Re: [squid-users] regex for normal websites
 
ok i have tested and this works
 
adobe\.com$
 
i found it weird this didnt work
 
\.adobe\.com
 
just curious thats all
 
On Tue, 2 Aug 2022 at 13:05, <ngtech1...@gmail.com> wrote:
I believe it should have been:
^adobe\.com$
^.*\.adobe\.com$
^\*\.adobe\.com$
 
But I don’t know the code to this depth.
If I would have written the match I think it would have been something a bit 
different.
*   A match for SNI
*   A joker match for SAN ie *.adobe.com   SAN should 
catch both www.www.adobe.com  
 
But for some reason it’s not like that, I assume the browsers and the libraries 
doesn’t implement it for an unknown reason.
 
If Alex or anyone else from Factory knows the details of the ACL they can 
answer more then me.
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: robert k Wild <robertkw...@gmail.com>
Sent: Tuesday, 2 August 2022 14:51
To: Eliezer Croitoru <ngtech1...@gmail.com>
Cc: Squid Users <squid-users@lists.squid-cache.org>
Subject: Re: [squid-users] regex for normal websites
 
thanks Eliezer
 
so it should be
 
adobe\.com
 
not
 
.adobe.\com or
 
^.*adobe.com  
 
as the ^.* could include
 
blahadobe.com  
 
 
 
On Thu, 28 Jul 2022 at 08:14, <ngtech1...@gmail.com> wrote:
Hey Robert,
 
The docs at http://www.squid-cache.org/Doc/config/acl/  states:
 
acl aclname ssl::server_name_regex [-i] \.foo\.com ...
  # regex matches server name obtained from various sources [fast]
 
Which and I do not know exactly what it means but it will not work with a 
helper in most cases.
I have found the in the git the next sources:
https://github.com/squid-cache/squid/blob/bf95c10aa95bf8e56d9d8d1545cb5a3aafab0d2c/doc/release-notes/release-3.5.sgml#L414
 
New types ssl::server_name  and ssl::server_name_regex
   to match server name from various sources (CONNECT authority 
name,
   TLS SNI domain, or X.509 certificate Subject Name).
 
Which means that there is a set of checks which the acl does and not just a 
domain name.
It’s also even possible that the domain name is not know in the CONNECT state 
of the connection.
If I remember correctly there is a possibility for browsers to use the same 
exact connection for multiple domains but
I have not seen this yet in production.
With Squid once you bump the connection to HTTP/1.x you can make 100% sure the 
features of the Host header request.
 
At Servername.cc ie:
https://github.com/squid-cache/squid/blob/aee3523a768aff4d1e6c1195c4a401b4ef5688a0/src/acl/ServerName.cc#L81
 
 
There is a specific logic of what is done and what is matched but I am not sure 
what would be used in the case of:
*.adobe.com  
 
Certificate SAN.
 
Specifically This part of the Common Names ie SAN:
https://github.com/squid-cache/squid/blob/aee3523a768aff4d1e6c1195c4a401b4ef5688a0/src/acl/ServerName.cc#L105
 
which to my understanding points to:
https://github.com/squid-cache/squid/blob/d146da3bfe7083381ae7ab38640cbfd0d2542374/src/ssl/support.cc#L195
 
doesn’t make any sense to me.( didn’t tried that much to understand)
 
If someone might be able to make sense of things in a synchronic fashion it 
would help.
(I do not see any debugs usage there or any helping comment )
 
Thanks,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of robert k Wild
Sent: Wednesday, 27 July 2022 13:52
To: Squid Users <squid-users@lists.squid-cache.org>
Subject: Re: [squid-users] regex for normal websites
 
that's the weird thing, when i try this in  "ssl::server_name_regex"
.adobe.com  
 
it doesnt work
 
you mean escape ie the \ character
 
 
 
 
 
On Wed, 27 Jul 2022 at 11:05, Matus UHLAR - fantomas <uh...@fantomas.sk> wrote:
On 27.07.22 10:54, robert k Wild wrote:
>think i got it right but just want to double check with you guys
>
>so in my "ssl::server_name" i had
>.adobe.com  
>
>that worked but i want to mix normal website and regex websites together so
>i just have one list for all

didn't the above work?  AFAIK it should, II

Re: [squid-users] filedescriptors on debian/systemd

2022-08-02 Thread ngtech1ltd
Hey Amos,

I was under the impression that systemd does impose a basic limit, but I can 
test it to verify my doubts.
From my point of view and my testing so far, systemd does impose a basic global 
limit.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Wednesday, 3 August 2022 5:31
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] filedescriptors on debian/systemd

On 3/08/22 01:54, Matus UHLAR - fantomas wrote:
> Hello,
> 
> I have encountered Debian bug 934208:
> 
> 2022/07/28 16:40:53 kid1| With 1024 file descriptors available
> 2022/07/29 06:50:18 kid1| WARNING! Your cache is running out of 
> filedescriptors
> 
> according to the bug report:
> 
> "Under systemd the default is not to have any limitation at all."
> 

To clarify, what that means is that *systemd* does not impose any limit 
by default. Squid when it cannot find a limit sets 1024 as default.



Under systemd the "correct" way to set such a limit is for the admin to 
decide on a limit and configure it. You can do this is two ways:

  1) set the systemd local config like you did:

> 
> # cat /etc/systemd/system/squid.service.d/override.conf
> [Service]
> LimitNOFILE=65536
> 
> and after reloading systemd:
> 
> # systemctl daemon-reload
> 

or, 2)

   set max_filedescriptors in squid.conf


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] regex for normal websites

2022-08-02 Thread ngtech1ltd
Hey Amos,

And just to be clear:

ssl::server_name_regex has the same path as ssl::server_name?
I have not read the code yet, but it seems pretty obvious to me.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Wednesday, 3 August 2022 5:10
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] regex for normal websites

On 3/08/22 05:01, robert k Wild wrote:
> Mmm, maybe I should try
> 
> dstdom_regex
> 
> Instead of
> 
> ssl::server_name_regex
> 
> But when you using ssl bump in your squid.conf, isn't it best to use
> 
> ssl::server_name_regex
> 

Typically yes, or ssl::server_name.


FYI, the two ACL types do exactly the same matching algorithm. They 
differ only in what detail from the traffic they match against:

  * dstdomain matches:
- the domain found in HTTP request-target (aka URL or URI), or
- the reverse-DNS hostname for a raw-IP found in HTTP request-target 
(aka URL or URI).

  * ssl::server_name matches whichever is available from (in order of 
preference):
- the request-target URL domain from decrypted HTTP(S) message, or
- the host name from SSL server certificate AltSubject, or
- the host name from TLS SNI message, or
- the domain from request-target URI of CONNECT request.

... in that order of preference for both.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] regex for normal websites

2022-08-04 Thread ngtech1ltd
Hey Robert,
 
I recorded this video for you:
https://cloud1.ngtech.co.il/static/squid-data/regex-for-robert.mp4
 
This is what I did when I reviewed the question.
I hope it will help you and others use this tool:
https://rubular.com/
 
and squid.
 
If you have any question regarding REGEX here we are welcoming every question.
 
All The Bests and Hope This Helps,
Eliezer
 
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
robert k Wild
Sent: Wednesday, 3 August 2022 14:52
To: Squid Users 
Subject: Re: [squid-users] regex for normal websites
 
thanks Amos for this greatly appreciated
 
On Wed, 3 Aug 2022 at 09:35, Matus UHLAR - fantomas <uh...@fantomas.sk> wrote:
On 03.08.22 14:12, Amos Jeffries wrote:
>IMO, what you are looking for is actually this ACL definition:
>
> acl adobe ssl::server_name .adobe.com  
>
>or its regex equivalent,
>
> acl adobe ssl::server_name_regex (^|\.)adobe\.com$

this is what I was searching for. Squid FAQ says:

https://wiki.squid-cache.org/SquidFaq/SquidAcl#Squid_doesn.27t_match_my_subdomains

www.example.com matches the exact host www.example.com, while .example.com 
matches the entire domain example.com (including example.com alone)


but I wasn't sure if this matching also applies to ssl::server_name.

thanks
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk   ; 
http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I don't have lysdexia. The Dog wouldn't allow that.
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users


-- 
Regards, 

Robert K Wild.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and Epic Games HCapctca

2022-08-04 Thread ngtech1ltd
Hey Adam,

I don't remember where exactly Epic Games is hosted, but it should be spliced.
If you need an app definition I can try to grab one from my local squid.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

From: squid-users  On Behalf Of Adam 
Barnett
Sent: Thursday, 4 August 2022 14:28
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid and Epic Games HCapctca

Hi All, 

I am trying to get squid to allow me to login to Epicgames.com with my epic 
login, i get to the login page and get the hcaptca images and everytime i get 
"invalid response" 

i looked at the headers and the only error that i can see is "The cache 
information is missing from the entry" 

My config looks like so 

workers 2

```
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

http_port 3128 ssl-bump  dynamic_cert_mem_cache_size=16MB  
generate-host-certificates=on cert=/etc/squid/certs/squid-ca-cert-key.pem

sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/spool/squid/ssl 
-M 16MB
dns_nameservers 10.5.1.2 8.8.8.8
visible_hostname foo-proxy-1
forwarded_for truncate
via off

# Send to file
access_log daemon:/var/log/squid/access.log



acl CONNECT method CONNECT
acl local src 10.0.0.0/8
always_direct allow all
request_header_add X-GoogApps-Allowed-Domains "foo.com" all

memory_replacement_policy heap GDSF
maximum_object_size 100 KB
maximum_object_size 1 MB

cache allow all
cache_mem 256 MB
cache_dir rock /var/spool/squid 1024
memory_pools off
cache_swap_low 90
client_persistent_connections on


http_access allow localhost manager
http_access deny manager

# SquidGaurd
url_rewrite_program /usr/bin/squidGuard
```

Any suggestions? 

Thanks

Adam Barnett
Senior SysAdmin beloFX



 
http://www.belofx.com/
http://www.linkedin.com/company/belofx


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and Epic Games HCapctca

2022-08-04 Thread ngtech1ltd
Hey Adam,
 
I recorded a video for you on how I do it at:
https://cloud1.ngtech.co.il/static/squid-data/splice-epic-games.mp4
 
So basically the relevant domains are:
 
epicgames-download1.akamaized.net
.epicgames.com
.unrealengine.com
 
And you can peek at Robert K Wild's mail "regex for normal websites"; it contains the relevant technical details.
If for any reason you need a more detailed answer let me know.
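In squid.conf terms that usually boils down to something like the following sketch (it still needs the peek/splice scaffolding shown later in this thread):

```
acl epic_splice ssl::server_name .epicgames.com .unrealengine.com epicgames-download1.akamaized.net
ssl_bump splice epic_splice
```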
 
Yours,
Eliezer 
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of Adam 
Barnett
Sent: Thursday, 4 August 2022 14:28
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid and Epic Games HCapctca
 
Hi All, 
 
I am trying to get squid to allow me to login to Epicgames.com with my epic 
login, i get to the login page and get the hcaptca images and everytime i get 
"invalid response" 
 
i looked at the headers and the only error that i can see is "The cache 
information is missing from the entry" 
 
My config looks like so 

workers 2

```
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

http_port 3128 ssl-bump  dynamic_cert_mem_cache_size=16MB  
generate-host-certificates=on cert=/etc/squid/certs/squid-ca-cert-key.pem

sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/spool/squid/ssl 
-M 16MB
dns_nameservers 10.5.1.2 8.8.8.8
visible_hostname foo-proxy-1
forwarded_for truncate
via off

# Send to file
access_log daemon:/var/log/squid/access.log



acl CONNECT method CONNECT
acl local src 10.0.0.0/8  
always_direct allow all
request_header_add X-GoogApps-Allowed-Domains "foo.com  " all

memory_replacement_policy heap GDSF
maximum_object_size 100 KB
maximum_object_size 1 MB

cache allow all
cache_mem 256 MB
cache_dir rock /var/spool/squid 1024
memory_pools off
cache_swap_low 90
client_persistent_connections on


http_access allow localhost manager
http_access deny manager

# SquidGaurd
url_rewrite_program /usr/bin/squidGuard
```

Any suggestions? 
 
Thanks




Adam Barnett
Senior SysAdmin beloFX





   
 
 

abarn...@belofx.com
www.belofx.com
LinkedIn
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and Epic Games HCapctca

2022-08-04 Thread ngtech1ltd
You are welcome.
 
I wrote an app that does everything for me, so I just need to dump the database into an ssl::server_name directive.
 
it’s basically:
## START
acl NoBump_server_name ssl::server_name 
"/etc/squid/no-ssl-bump-server-name.list"
 
acl tls_to_splice any-of inspect_only NoBump_src NoBump_server_name 
NoBump_server_regex_by_urls_domain NoBump_server_regex
 
ssl_bump peek app_matcher_helper
ssl_bump peek tls_s1_connect
 
ssl_bump bump app_matcher_helper
ssl_bump bump app_reader_helper
ssl_bump bump deny_note
 
ssl_bump splice app_matcher_helper
ssl_bump splice tls_to_splice
 
ssl_bump stare app_matcher_helper
ssl_bump stare tls_s2_client_hello
 
ssl_bump bump app_matcher_helper
ssl_bump bump tls_to_bump
## END
 
If you want, I can upload a snippet of the whole setup dump in the hope you can make use of it.
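For context, the files referenced by such acl lines are plain text with one ssl::server_name value per line; a hypothetical /etc/squid/no-ssl-bump-server-name.list could simply contain:

```
.epicgames.com
epicgames-download1.akamaized.net
.unrealengine.com
```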
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: Adam Barnett  
Sent: Friday, 5 August 2022 0:26
To: ngtech1...@gmail.com
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid and Epic Games HCapctca
 
Thank you very much (תודה רבה).
It looks like you are using a database and then building the config from that? Any chance you can send me the snippet of the config instead of the DB bits?
 
Thanks again 
 
Adam 
 
On Thu, 4 Aug 2022 at 22:18, ngtech1...@gmail.com wrote:
Hey Adam,
 
I recorded a video for you on how I do it at:
https://cloud1.ngtech.co.il/static/squid-data/splice-epic-games.mp4
 
So basically the relevant domains are:
 
epicgames-download1.akamaized.net  
.epicgames.com  
.unrealengine.com  
 
And you can peek at robert k Wild mail: “regex for normal websites”
 
And it contains the relevant technical details.
If for any reason you need a more detailed answer let me know.
 
Yours,
Eliezer 
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of Adam Barnett
Sent: Thursday, 4 August 2022 14:28
To: squid-users@lists.squid-cache.org 
 
Subject: [squid-users] Squid and Epic Games HCapctca
 
Hi All, 
 
I am trying to get squid to allow me to login to Epicgames.com with my epic 
login, i get to the login page and get the hcaptca images and everytime i get 
"invalid response" 
 
i looked at the headers and the only error that i can see is "The cache 
information is missing from the entry" 
 
My config looks like so 

workers 2

```
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

http_port 3128 ssl-bump  dynamic_cert_mem_cache_size=16MB  
generate-host-certificates=on cert=/etc/squid/certs/squid-ca-cert-key.pem

sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/spool/squid/ssl 
-M 16MB
dns_nameservers 10.5.1.2 8.8.8.8
visible_hostname foo-proxy-1
forwarded_for truncate
via off

# Send to file
access_log daemon:/var/log/squid/access.log



acl CONNECT method CONNECT
acl local src 10.0.0.0/8  
always_direct allow all
request_header_add X-GoogApps-Allowed-Domains "foo.com  " all

memory_replacement_policy heap GDSF
maximum_object_size 100 KB
maximum_object_size 1 MB

cache allow all
cache_mem 256 MB
cache_dir rock /var/spool/squid 1024
memory_pools off
cache_swap_low 90
client_persistent_connections on


http_access allow localhost manager
http_access deny manager

# SquidGaurd
url_rewrite_program /usr/bin/squidGuard
```

Any suggestions? 
 
Thanks




Adam Barnett
Senior SysAdmin beloFX





   
 
 

abarn...@belofx.com
www.belofx.com
LinkedIn
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid and Epic Games HCapctca

2022-08-04 Thread ngtech1ltd
Please don’t bang your head… everybody is here for you.
Sometimes it takes time to respond but you will get your answers.
 
https://www.ngtech.co.il/squid/support-save/support-save-2022-08-05_00-51-47.tar.gz
 
It's not the fastest connection, and the DB dump includes a blacklist since for now it's a production system, but it works well enough for me.
I hope it’s not too much information in the support save file.
 
Let me know if it makes more sense for you.
Also, I am happy that you asked this question, since now others can benefit from the answer 😊
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: Adam Barnett  
Sent: Friday, 5 August 2022 0:44
To: ngtech1...@gmail.com
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid and Epic Games HCapctca
 
Sure, the more the better; I've been banging my head against the wall for a while on this.
 
Adam 
 
On Thu, 4 Aug 2022 at 22:41, ngtech1...@gmail.com wrote:
You are welcome.
 
I wrote an app that does everything for me so I just need to dump the database 
into a:
ssl::server_name directive
 
it’s basically:
## START
acl NoBump_server_name ssl::server_name 
"/etc/squid/no-ssl-bump-server-name.list"
 
acl tls_to_splice any-of inspect_only NoBump_src NoBump_server_name 
NoBump_server_regex_by_urls_domain NoBump_server_regex
 
ssl_bump peek app_matcher_helper
ssl_bump peek tls_s1_connect
 
ssl_bump bump app_matcher_helper
ssl_bump bump app_reader_helper
ssl_bump bump deny_note
 
ssl_bump splice app_matcher_helper
ssl_bump splice tls_to_splice
 
ssl_bump stare app_matcher_helper
ssl_bump stare tls_s2_client_hello
 
ssl_bump bump app_matcher_helper
ssl_bump bump tls_to_bump
## END
 
If you want I can upload a snippet of the whole setup dump with hope you could 
make use of it.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: Adam Barnett <abarn...@belofx.com>
Sent: Friday, 5 August 2022 0:26
To: ngtech1...@gmail.com  
Cc: squid-users@lists.squid-cache.org 
 
Subject: Re: [squid-users] Squid and Epic Games HCapctca
 
Thank you very much (תודה רבה).
It looks like you are using a database and then building the config from that? Any chance you can send me the snippet of the config instead of the DB bits?
 
Thanks again 
 
Adam 
 

Re: [squid-users] regex for normal websites

2022-08-05 Thread ngtech1ltd
OK Robert,
 
I have seen the issue you were having, and indeed it's because Cloudflare detects that there is some kind of MITM in the path.
That's good, but there should be a way to allow such a MITM from the Cloudflare side.
I believe the Cloudflare client should have the ability to allow or disallow a MITM such as Squid, to allow caching on the path.
However, in this specific case EpicGames, like Microsoft, transfers its actual updates over plain HTTP and allows caching, so it's OK.
 
The following squid.conf is working, but I have not tested it with squidGuard enabled.
I can test it with squidGuard next week:
## START
workers 2
 
visible_hostname can-proxy-1
forwarded_for delete
via off
host_verify_strict off
client_dst_passthru on
read_ahead_gap 64 MB
shutdown_lifetime 10 seconds
 
acl fetched_certificate transaction_initiator certificate-fetching
 
acl deny_note note verdict deny
 
acl NoBump_server_name ssl::server_name 
"/etc/squid/no-ssl-bump-server-name.list"
acl dst_quixel ssl::server_name .epicgames.com 
epicgames-download1.akamaized.net .unrealengine.com
acl dst_quixel_dstdomain dstdomain .epicgames.com 
epicgames-download1.akamaized.net .unrealengine.com
 
acl Bump_server_name ssl::server_name "/etc/squid/ssl-bump-server-name.list"
 
acl fetched_certificate transaction_initiator certificate-fetching
 
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
 
acl tls_s1_connect at_step SslBump1
acl tls_s2_client_hello at_step SslBump2
acl tls_s3_server_hello at_step SslBump3
 
acl tls_to_splice any-of NoBump_server_name
acl tls_to_bump any-of Bump_server_name
 
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10  # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly 
plugged) machines
acl localnet src 172.16.0.0/12  # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly 
plugged) machines
 
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
 
http_access deny !Safe_ports
 
http_access deny CONNECT !SSL_ports
 
http_access allow localhost manager
http_access deny manager
 
http_access allow fetched_certificate
http_access allow localnet dst_quixel_dstdomain
 
http_access allow localnet
http_access allow localhost
 
http_access deny all
 
http_port 3128 ssl-bump dynamic_cert_mem_cache_size=16MB 
generate-host-certificates=on cert=/etc/squid/certs/squid-ca-cert-key.pem
sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/spool/squid/ssl 
-M 16MB
tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE
 
ssl_bump peek tls_s1_connect
 
ssl_bump bump deny_note
 
ssl_bump splice dst_quixel
ssl_bump splice tls_to_splice
 
ssl_bump stare tls_s2_client_hello
 
ssl_bump bump tls_to_bump
 
strip_query_terms off
 
coredump_dir /var/spool/squid
 
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
## END
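Before reloading a config like the one above it is worth letting Squid validate it first (standard commands, nothing specific to this setup):

```
squid -k parse        # parse squid.conf and report any syntax or option problems
squid -k reconfigure  # ask the running Squid to re-read the configuration
```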
 
Let me know if it gives enough details for you to understand how to implement 
this.
By the way, a great proxy config you’v got there.
 
Demo of the diff:
https://cloud1.ngtech.co.il/static/squid-data/splice-epic-games-1.mp4
 
And a support-save of the setup:
https://cloud1.ngtech.co.il/static/squid-data/support-save-2022-08-05_14-16-59.tar.gz
 
I have used latest ngtech squid5.6 rpms from my repo.
 
Let me know if you need more assistance with your setup.
 
Yours,
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: robert k Wild  
Sent: Friday, 5 August 2022 13:24
To: Eliezer Croitoru 
Cc: Squid Users 
Subject: Re: [squid-users] regex for normal websites
 
wow thanks Eliezer so much for that video, that website looks awesome, ive 
bookmarked it already
 
On Thu, 4 Aug 2022 at 09:59, ngtech1...@gmail.com wrote:
Hey Robert,
 
I recorded this video for you:
https://cloud1.ngtech.co.il/static/squid-data/regex-for-robert.mp4
 
This is what I did when I reviewed the question.
I hope it will help you and others use this tool:
https://rubular.com/
 
and squid.
 
If you have any question

Re: [squid-users] Trying to recompile squid 4.13 with ./configure CXXFLAGS="-DMAXTCPLISTENPORTS=256"

2022-08-05 Thread ngtech1ltd
Hey Marcelo,

What OS are you using? Debian? Ubuntu?
The `which squid` command will show you which squid binary the `squid -v` output is taken from.
And also, just wondering: why 4.13 and not 4.17?
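For example (generic commands, nothing specific to this build):

```
which squid   # path of the squid binary that will actually be executed
squid -v      # prints the version plus the exact ./configure options it was built with
```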

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of 
marcelorodr...@graminsta.com.br
Sent: Thursday, 4 August 2022 1:17
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Trying to recompile squid 4.13 with ./configure 
CXXFLAGS="-DMAXTCPLISTENPORTS=256"

Some important information.

I am trying to recompile using:

./configure CXXFLAGS="-DMAXTCPLISTENPORTS=1 -g -O2 -fPIE 
-fstack-protector-strong -Wformat -Werror=format-security" 
--build="x86_64-linux-gnu" --prefix="/usr" 
--includedir="${prefix}/include" --mandir="${prefix}/share/man" 
--infodir="${prefix}/share/info" --sysconfdir="/etc" 
--localstatedir="/var" --libexecdir="${prefix}/lib/squid3" --srcdir="." 
--disable-maintainer-mode --disable-dependency-tracking 
--disable-silent-rules BUILDCXXFLAGS="-g -O2 -fPIE 
-fstack-protector-strong -Wformat -Werror=format-security 
-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now" 
--datadir="/usr/share/squid" --sysconfdir="/etc/squid" 
--libexecdir="/usr/lib/squid" --mandir="/usr/share/man" --enable-inline 
--disable-arch-native --enable-async-io="8" 
--enable-storeio="ufs,aufs,diskd,rock" 
--enable-removal-policies="lru,heap" --enable-delay-pools 
--enable-cache-digests --enable-icap-client 
--enable-follow-x-forwarded-for 
--enable-auth-basic="DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB" 
--enable-auth-digest="file,LDAP" 
--enable-auth-negotiate="kerberos,wrapper" 
--enable-auth-ntlm="fake,smb_lm" 
--enable-external-acl-helpers="file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group"
 
--enable-url-rewrite-helpers="fake" --enable-eui --enable-esi 
--enable-icmp --enable-zph-qos --enable-ecap --disable-translation 
--with-swapdir="/var/spool/squid" --with-logdir="/var/log/squid" 
--with-pidfile="/var/run/squid.pid" --with-filedescriptors="65536" 
--with-large-files --with-default-user="proxy" 
--enable-build-info="Ubuntu linux" --enable-linux-netfilter 
build_alias="x86_64-linux-gnu" CFLAGS="-g -O2 -fPIE 
-fstack-protector-strong -Wformat -Werror=format-security -Wall" 
LDFLAGS="-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now" 
CPPFLAGS="-Wdate-time -D_FORTIFY_SOURCE=2" --with-openssl 
--enable-ssl-crtd

Then make and make install from the /build/squid/squid-4.13/ folder, but nothing seems to change when squid -v is used.

I also tried to recompile with this example:

./configure --build="x86_64-linux-gnu" --prefix="/usr" 
--includedir="${prefix}/include" --mandir="${prefix}/share/man" 
--infodir="${prefix}/share/info" --sysconfdir="/etc" 
--localstatedir="/var" --libexecdir="${prefix}/lib/squid3" --srcdir="." 
--disable-maintainer-mode --disable-dependency-tracking 
--disable-silent-rules BUILDCXXFLAGS="-g -O2 -fPIE 
-fstack-protector-strong -Wformat -Werror=format-security 
-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now" 
--datadir="/usr/share/squid" --sysconfdir="/etc/squid" 
--libexecdir="/usr/lib/squid" --mandir="/usr/share/man" --enable-inline 
--disable-arch-native --enable-async-io="8" 
--enable-storeio="ufs,aufs,diskd,rock" 
--enable-removal-policies="lru,heap" --enable-delay-pools 
--enable-cache-digests --enable-icap-client 
--enable-follow-x-forwarded-for 
--enable-auth-basic="DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB" 
--enable-auth-digest="file,LDAP" 
--enable-auth-negotiate="kerberos,wrapper" 
--enable-auth-ntlm="fake,smb_lm" 
--enable-external-acl-helpers="file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group"
 
--enable-url-rewrite-helpers="fake" --enable-eui --enable-esi 
--enable-icmp --enable-zph-qos --enable-ecap --disable-translation 
--with-swapdir="/var/spool/squid" --with-logdir="/var/log/squid" 
--with-pidfile="/var/run/squid.pid" --with-filedescriptors="65536" 
--with-large-files --with-default-user="proxy" 
--enable-build-info="Ubuntu linux" --enable-linux-netfilter 
build_alias="x86_64-linux-gnu" CFLAGS="-g -O2 -fPIE 
-fstack-protector-strong -Wformat -Werror=format-security -Wall" 
LDFLAGS="-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now" 
CPPFLAGS="-Wdate-time -D_FORTIFY_SOURCE=2" 
CXXFLAGS="-DMAXTCPLISTENPORTS=450 -g -O2 -fPIE -fstack-protector-strong 
-Wformat -Werror=format-security"

I used several virtual server sessions and clones, but the CXXFLAGS="-DMAXTCPLISTENPORTS=" value does not appear in the squid -v output.

What is wrong in this rebuilding?


On 2022-08-03 11:12, marcelorodr...@graminsta.com.br wrote:
> Hi,
> 
> I am trying to recompile squid 4.13 using ./configure
> CXXFLAGS="-DMAXTCPLISTENPORTS=256".
> It runs the recompile but the CXXFLAGS= does not even ap

[squid-users] SQL DB squid.conf backend, who was it that asked about it?

2022-08-07 Thread ngtech1ltd
Hey Everybody,
 
I don't remember who it was, but I was asked about using an SQL DB backend for squid.conf.
If the question still stands, I can try to help and give an example of how it's done and how to implement such a feature.
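One common pattern (a generic sketch of one possible approach, with hypothetical file names) is to keep the policy data in SQL and periodically dump it into plain list files that squid.conf references, so Squid itself never talks to the database:

```
# the lists are regenerated from the SQL DB, e.g. by a cron job
acl blocked_domains dstdomain "/etc/squid/generated/blocked_domains.list"
acl nobump_names ssl::server_name "/etc/squid/generated/nobump_server_names.list"

http_access deny blocked_domains
```

For per-request decisions there is also the external_acl_type route, for example the SQL_session helper that already shows up in the configure options earlier in this digest.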
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.8+ intercept

2022-08-10 Thread ngtech1ltd
Hey K,
 
I am not sure about the network topology.
Preferably, Squid should reside on a different network than the clients if it's intercepting the traffic.
Also, I assume it's not a TPROXY setup, so it should be pretty simple and straightforward.
 
I understand why you are asking this question.
Also take into account that MikroTik is now on the 7.4 firmware and it's recommended to use that one.
If you are using any other version, let me know so I can try to make sense of the differences.
I will try to give a DEMO for such a setup and how to make it work.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of M K
Sent: Tuesday, 9 August 2022 22:29
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid 4.8+ intercept
 
Hello,
 
I have a setup like this one:

| Client | => | Router | => Internet
 ||
 \/
  | Squid |
 
...the router is a Mikrotik router capable of all things NAT/Redirect and 
whatnot. Squid server has only one network interface.
Using the router:
- I tried routing traffic to squid server IP.
- I tried destination-NATing from client to server IP, with origin server 
IP-and-port natted to squid IP-and-port, and with origin server IP-only natted 
to squid-IP.
 
I have been struggling for 2 days to setup a working Squid 4.8 or higher 
interception.
Test server is running Ubuntu 18.4.3 and Squid 4.8.
Documentation is either too trimmed-down or extremely outdated.
Any help would be very much appreciated.
 
All best,
K
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.8+ intercept

2022-08-10 Thread ngtech1ltd
Hey Rafael,
 
That document covers the v6 branch of MikroTik RouterOS, and the current stable is 7.4.
If you have the resources to publish a v7 update of the document, it would help others.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of 
Rafael Akchurin
Sent: Tuesday, 9 August 2022 23:54
To: M K 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 4.8+ intercept
 
Hello K, 
 
We use https://docs.diladele.com/tutorials/mikrotik_transparent_squid/index.html
Best regards, 
Rafael



On 9 Aug 2022, at 21:29, M K <mohammed.khal...@gmail.com> wrote:
 
Hello, 
 
I have a setup like this one:

| Client | => | Router | => Internet
 ||
 \/
  | Squid |
 
...the router is a Mikrotik router capable of all things NAT/Redirect and 
whatnot. Squid server has only one network interface.
Using the router:
- I tried routing traffic to squid server IP.
- I tried destination-NATing from client to server IP, with origin server 
IP-and-port natted to squid IP-and-port, and with origin server IP-only natted 
to squid-IP.
 
I have been struggling for 2 days to setup a working Squid 4.8 or higher 
interception.
Test server is running Ubuntu 18.4.3 and Squid 4.8.
Documentation is either too much trim or extremely outdated.
Any help would be very much appreciated.
 
All best,
K
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.8+ intercept

2022-08-10 Thread ngtech1ltd
Hey K,
 
Here a video example on how to implement what you probably want:
https://cloud1.ngtech.co.il/static/squid-data/mikrotik-v7-intercept.mp4
 
If the proxy sits in the same network as the clients, it won't work.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:   ngtech1...@gmail.com
Web:   https://ngtech.co.il/
My-Tube:   https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of M K
Sent: Tuesday, 9 August 2022 22:29
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid 4.8+ intercept
 
Hello,
 
I have a setup like this one:

| Client | => | Router | => Internet
 ||
 \/
  | Squid |
 
...the router is a Mikrotik router capable of all things NAT/Redirect and 
whatnot. Squid server has only one network interface.
Using the router:
- I tried routing traffic to squid server IP.
- I tried destination-NATing from client to server IP, with origin server 
IP-and-port natted to squid IP-and-port, and with origin server IP-only natted 
to squid-IP.
 
I have been struggling for 2 days to setup a working Squid 4.8 or higher 
interception.
Test server is running Ubuntu 18.4.3 and Squid 4.8.
Documentation is either too much trim or extremely outdated.
Any help would be very much appreciated.
 
All best,
K
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.8+ intercept

2022-08-11 Thread ngtech1ltd
Hey Grant,

The issue is very simple: if Squid and the clients sit on the same subnet (not necessarily the same network segment), then Squid will send the traffic back directly to the client.
WCCP does not operate at that level and will not resolve this exact issue in most similar use cases (there are setups where it can, but this is not one of them).

You should never SNAT traffic from the local network to the proxy, since that will cause problems of its own.
What you might want to do is give the proxy a dedicated subnet towards the MikroTik and use policy-based routing to forward the clients' traffic to the proxy.

If you can plug the proxy into another port on the MikroTik device and give it a dedicated subnet, that is much preferable.
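On the Squid host itself, the policy-routed traffic still has to be steered into the intercept listeners; a common sketch (the interface name and port numbers are assumptions) is:

```
# eth0 faces the MikroTik; 3129/3130 are the intercept ports defined in squid.conf
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j REDIRECT --to-ports 3129
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 3130
```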

I believe that WCCP is not an option for Mikrotik so unless you have a specific 
device that supports WCCP, don't bother thinking about it.
Also, in the same breath I can tell you that most commercial services that 
implement MITM have not been using and are not using WCCP.
There are much smarter ways these days than basic WCCP to make sure that the traffic will be passed to the right proxy.

Also, just take a minute and think: what exactly does WCCP give that a MikroTik admin cannot do?
A Mikrotik can be automated in such a way that WCCP would be inferior to what 
Mikrotik can offer.
(To my knowledge)

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: squid-users  On Behalf Of 
Grant Taylor
Sent: Thursday, 11 August 2022 6:48
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 4.8+ intercept

On 8/10/22 3:47 AM, ngtech1...@gmail.com wrote:
> If the proxy sits in the same network that the clients sit it won’t work.

Why not?

Is this because of -- what I call -- the TCP triangle problem?  - 
Meaning that Squid sees the source as the client and replies directly?

If that's the case, you can cheat by SNATing the traffic that's going to 
Squid such that Squid sees the router as the source of the traffic. 
Thus Squid replies to the router which unDNATs it and sends it back to 
the original / real client.

Aside:  Isn't this what WCCP was originally meant to address?  Is WCCP a 
non-starter any more?  Even with TLS bump / monkey in the middle?



-- 
Grant. . . .
unix || die


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid as Reverse Proxy with Parent Proxy, http inbound and https outbound

2022-08-12 Thread ngtech1ltd
Hey Joel,
 
I don’t know if squid would be able to do what you want/need but I know that 
nginx can do some part of what you want.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of Joel 
Howard
Sent: Friday, 12 August 2022 7:28
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid as Reverse Proxy with Parent Proxy, http 
inbound and https outbound
 
Hey Alex,
 
Thanks for the quick and detailed response! I inherited this service recently - 
would you recommend upgrading to 5? My configs are fairly simple, so upgrade 
should be easy.
 
Here's my desired flow - let "reverse" and "parent" represent the IPs of those 
proxies, and "target" represent the target API hostname.

Application sends GET (POST, PUT, etc) http://reverse/some/path
(Note: Application doesn't know target, and couldn't reach it if it did.)

Reverse adds headers to the request
Reverse sends the request to https://target/some/path, using parent as a 
forward proxy.
 
The parent proxy in my test case accepts TCP, although if possible I would like 
to support parent TLS proxies as well - this reverse proxy is deployed in 
different environments where the parent proxy may differ.

I set this up outside of a docker and without trying to force ssl. The config 
below was my first attempt - it works if the reverse proxy has direct internet 
access, but just hangs otherwise; my understanding is that requests that use 
the first cache_peer do not use the second to proxy.
 
# Reverse proxy to google.com
http_port 80 accel vhost defaultsite=www.google.com
cache_peer google.com parent 80 0 no-query originserver forceddomain=www.google.com name=target
request_header_add Joel Joel

# Simplified acl
http_access allow all
cache_peer_access target allow all

# Parent proxy
cache_peer 10.60.4.178 parent 3128 0 no-query default
acl all src 0.0.0.0/0.0.0.0  
never_direct allow all

This was my second attempt, using forceddomain to replace the host header but 
sending the request directly to the parent proxy. This results in the parent 
receiving GET /, which it does not understand (it expects GET target/somepath).
 
# Reverse proxy directly to forward proxy google.com
http_port 80 accel vhost defaultsite=www.google.com
cache_peer 10.60.4.178 parent 3128 0 no-query originserver forceddomain=www.google.com name=parent
request_header_add Joel Joel

# Misc
cache deny all
shutdown_lifetime 1 seconds
 
I suspect this would need a url rewriter to force the url to target - I'm 
failing to get any of the example rewriters working (maybe due to the old squid 
version?) so I haven't been able to test that yet. But I suspect it will fail 
for HTTPS, because the rewritten URL will be sent as GET target/something to 
the parent proxy, instead of CONNECT target/something - I still think I'm 
missing something to get my squid to use the forward as a proxy while itself 
functioning in reverse.
 
I'll rewrite these for squid 5 and try to get URL rewriting working. In the 
meantime, could you let me know if either of these two general approaches is 
remotely correct and if so, what I can do to get further with them?

Thanks so much! If you happen to be on StackOverflow, I've asked the question with a bounty there as well (although less squid-specific).
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.8+ intercept

2022-08-13 Thread ngtech1ltd
Hey K,
 
What RouterOS version are you using?
Also, what rules have you applied?
If there is a very long delay and then a failure, you should verify that the rules you wrote are appropriate for your environment.
You should route packets based on connection marks, and mark only new connections from LAN IP addresses and only on the LAN interface.
As I showed in the demo video it’s very simple to implement.
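Roughly, the MikroTik side of that looks like the following v6-style sketch (addresses, interface name and gateway are placeholders; on RouterOS v7 the routing table also has to be declared first with /routing table add name=to-proxy fib):

```
/ip firewall mangle add chain=prerouting in-interface=bridge-lan src-address=192.168.88.0/24 \
    protocol=tcp dst-port=80,443 connection-state=new \
    action=mark-connection new-connection-mark=web-to-proxy
/ip firewall mangle add chain=prerouting connection-mark=web-to-proxy \
    action=mark-routing new-routing-mark=to-proxy passthrough=no
/ip route add dst-address=0.0.0.0/0 gateway=192.168.99.2 routing-mark=to-proxy
```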
 
Let me know if you are still having issues.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: M K  
Sent: Saturday, 13 August 2022 10:59
To: ngtech1...@gmail.com
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 4.8+ intercept
 
Thank you for your quick reply. The text drawing got mangled by a different font; the Squid server is actually connected to the MikroTik router, not on the same physical link as the client.
 
The MikroTik router sits between the client and squid server.
 
That said, I can confirm that the MikroTik router is effectively able to 
route/DNat client packets going to ports 80 and 443 to squid server. Depending 
on router rules be it route or dnat, the client browser effectively displays 
the error page of squid, or goes into a very long delay then failure.
 
I will retry and let you know.
 
K
On Wed, Aug 10, 2022, 10:08 ngtech1...@gmail.com wrote:
Hey K,
 
I am not sure about the network topology.
Preferably the Squid should reside on another network then the clients if it’s 
intercepting the traffic.
Also, I assume it’s not a TPROXY setup so it should be pretty simple and 
straight forward.
 
I understand why are you asking this question.
Also take into account that Mikrotik is now on 7.4 firmware and it’s 
recommended to use this one.
If you are using any other version let me know so I can try to make sense on 
the differences.
I will try to give a DEMO for such a setup and how to make it work.
 
Eliezer
 

Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com  
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users  On Behalf Of M K
Sent: Tuesday, 9 August 2022 22:29
To: squid-users@lists.squid-cache.org 
 
Subject: [squid-users] Squid 4.8+ intercept
 
Hello,
 
I have a setup like this one:

| Client | => | Router | => Internet
 ||
 \/
  | Squid |
 
...the router is a Mikrotik router capable of all things NAT/Redirect and 
whatnot. Squid server has only one network interface.
Using the router:
- I tried routing traffic to squid server IP.
- I tried destination-NATing from client to server IP, with origin server 
IP-and-port natted to squid IP-and-port, and with origin server IP-only natted 
to squid-IP.
 
I have been struggling for 2 days to setup a working Squid 4.8 or higher 
interception.
Test server is running Ubuntu 18.4.3 and Squid 4.8.
Documentation is either too much trim or extremely outdated.
Any help would be very much appreciated.
 
All best,
K
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4.8+ intercept

2022-08-18 Thread ngtech1ltd
Hey K,

I need your MikroTik config, squid.conf and iptables rules to understand what the issue might be.
You will need to describe your setup in a way I can relate to it.
There is not much of a difference between ports 80 and 443, except that the 443 listener needs ssl-bump settings if you are using it.
The CONNECT port is a simple forward proxy, and it seems your setup is not as simple as you describe.
If you do have NAT, it needs to be applied only on specific interfaces on the MikroTik and the Squid server.
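For reference, the three-listener layout being discussed usually looks roughly like this (port numbers and certificate path are assumptions, in the style of Rafael's guide):

```
http_port 3128                         # explicit (CONNECT) forward proxy
http_port 3129 intercept               # port 80 traffic redirected here
https_port 3130 intercept ssl-bump \
    generate-host-certificates=on dynamic_cert_mem_cache_size=16MB \
    cert=/etc/squid/certs/squid-ca-cert-key.pem
```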

In my case the basic setup has been working for a very long time now, so I cannot easily imagine what's wrong in your case.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/

-Original Message-
From: M K  
Sent: Thursday, 18 August 2022 6:20
To: ngtech1...@gmail.com
Cc: squid-users@lists.squid-cache.org; Rafael Akchurin 

Subject: Re: [squid-users] Squid 4.8+ intercept

Hello Eliezer,

I finally got my setup to work; turned out to be intercepted clients
running into default nat, while my test squid server did not allow
them access, not even through iptables!

Now, I have one last bit to handle, which you did not cover in your
video. I'm using 3 ports for squid like Rafael's guide: one for normal
CONNECT, one for intercepted plain HTTP on 80, and one for intercepted
HTTPs on 443.

The setup works great for TLS addresses (i.e. https://), but browser redirection from plain HTTP to TLS, say from http://cnn.com to https://cnn.com, fails to happen. It just waits and then times out.
What could be done to make it happen?

All best,
K


On Sat, Aug 13, 2022 at 7:57 PM  wrote:
>
> Hey K,
>
>
>
> What RouterOS version are you using?
>
> Also, what rules have you applied?
>
> If there is a very long delay and then a failure you should verify that the 
> rules you wrote are proper to your environment.
>
> You should route packets based on connection marks and mark only new 
> connections from LAN IP addresses and only on the LAN interface.
>
> As I showed in the demo video it’s very simple to implement.
>
>
>
> Let me know if you are still having issues.
>
>
>
> Eliezer
>
>
>
> 
>
> Eliezer Croitoru
>
> NgTech, Tech Support
>
> Mobile: +972-5-28704261
>
> Email: ngtech1...@gmail.com
>
> Web: https://ngtech.co.il/
>
> My-Tube: https://tube.ngtech.co.il/
>
>
>
> From: M K 
> Sent: Saturday, 13 August 2022 10:59
> To: ngtech1...@gmail.com
> Cc: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Squid 4.8+ intercept
>
>
>
> Thank  you for your quick reply. The text-drawing actually changed with 
> different font; the squid server is effectively connected to MikroTik router, 
> not the same physical link as the client.
>
>
>
> The MikroTik router sits between the client and squid server.
>
>
>
> That said, I can confirm that the MikroTik router is effectively able to 
> route/DNat client packets going to ports 80 and 443 to squid server. 
> Depending on router rules be it route or dnat, the client browser effectively 
> displays the error page of squid, or goes into a very long delay then failure.
>
>
>
> I will retry and let you know.
>
>
>
> K
>
> On Wed, Aug 10, 2022, 10:08  wrote:
>
> Hey K,
>
>
>
> I am not sure about the network topology.
>
> Preferably the Squid should reside on another network then the clients if 
> it’s intercepting the traffic.
>
> Also, I assume it’s not a TPROXY setup so it should be pretty simple and 
> straight forward.
>
>
>
> I understand why are you asking this question.
>
> Also take into account that Mikrotik is now on 7.4 firmware and it’s 
> recommended to use this one.
>
> If you are using any other version let me know so I can try to make sense on 
> the differences.
>
> I will try to give a DEMO for such a setup and how to make it work.
>
>
>
> Eliezer
>
>
>
> 
>
> Eliezer Croitoru
>
> NgTech, Tech Support
>
> Mobile: +972-5-28704261
>
> Email: ngtech1...@gmail.com
>
> Web: https://ngtech.co.il/
>
> My-Tube: https://tube.ngtech.co.il/
>
>
>
> From: squid-users  On Behalf Of M K
> Sent: Tuesday, 9 August 2022 22:29
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Squid 4.8+ intercept
>
>
>
> Hello,
>
>
>
> I have a setup like this one:
>
>
> | Client | => | Router | => Internet
>  ||
>  \/
>   | Squid |
>
>
>
> ...the router is a Mikrotik router capable of all things NAT/Redirect and 
> whatnot. Squid server has only one network interface.
>
> Using the router:
>
> - I tried routing traffic to squid server IP.
>
> - I tried destination-NATing from client to server IP, with origin server 
> IP-and-port natted to squid IP-and-port, and with origin server IP-only 
> natted to squid-IP.
>
>
>
> I have been struggling for 2 days to setup a working Squid 4.8 or higher 
> interception.
>
> Test server is running Ubuntu 18.4.3 and Squid 4.8.
>
> Documentation is either too much trim or extremely outda
