Hello,
I'm trying to use rock store with 6.9; is there a limitation on the
cache size? I tried 15000 but no rock db is created with squid -z,
although it works with 1000.
My goal is to use a 200 GB SSD disk.
cache_dir rock /cache 1000 max-swap-rate=250 swap-timeout=350
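For the record, rock sizes well beyond 1000 MB are supported, so for a 200 GB
disk I would expect something like the line below to work (values illustrative,
leaving headroom for the filesystem); if squid -z still creates nothing,
running it as the squid user and reading cache.log usually shows the real error:
cache_dir rock /cache 180000 slot-size=16384 max-swap-rate=250 swap-timeout=350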
Thanks
___
Hello All,
I would like to know if anyone is using Squid 6 with Rock Store in a Docker
container? On my end, it crashes at launch with the following in my squid.conf:
cache_dir rock /var/spool/squid 55000 max-swap-rate=250 swap-timeout=35
I'm using https://hub.docker.com/r/fredbcode/squid
Thanks
I opened a bug here https://bugs.squid-cache.org/show_bug.cgi?id=5394
___
Hello,
I'm struggling with CLOSE_WAIT and Squid in Docker; after some hours I
have thousands of CLOSE_WAIT connections, far more than any other state.
I tried some sysctl options but without success; I guess the CLOSE_WAIT
may be related to my clients (many simultaneous connections).
Maybe this is r
Thanks, I will try with one proxy
For now I'm trying with the latest version of Docker, without any more success.
Do you think a wrong configuration parameter related to CLOSE_WAIT
could be set in Squid?
At the end of the day I have more than 35,000 CLOSE_WAIT connections for
each Squid...
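For reference, these are the squid.conf timeouts I'm experimenting with; a
sketch rather than a known fix, and the values are examples only:
half_closed_clients off            # close as soon as the client half-closes
client_idle_pconn_timeout 1 minute # reclaim idle client connections faster
client_lifetime 1 day              # hard upper bound per client connection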
__
On 07/12/2021 at 08:11, FredB wrote:
Thanks, I will try with one proxy
FYI: the CLOSE_WAIT connections are indeed cleaned up, but I don't know
whether there is a significant impact on my users.
My browser was still connected to a secure website, but I did no
Do you think client_lifetime 1 minute works? (There is no minimal value
in the documentation.)
For testing purposes I'm trying it on a test platform and I'm seeing no
impact; for example, downloading a large file is not interrupted.
There is no error from squid parse, but I found nothing in the debug
output about lifetime
s, but it may be the same.
>
> Needless to say, bugs notwithstanding, too small client_lifetime values
> will kill too many innocent transactions. Please see my first response
Yes, it's just for testing purposes; I'm seeing no impact, but only for
my usage case...
For now I will try client_lifetime
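For reference, the line under test is simply the one below; a testing value
only, since long downloads or tunnels may be cut off mid-transfer even when a
quick test looks clean:
client_lifetime 1 minute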
Hello All
Here are Docker image builds, automatic at each official release.
Amd64 and Arm (64-bit OS only, tested on Raspberry Pi v3 and v4)
https://hub.docker.com/r/fredbcode/squid
Fred
--
Sent from my Android device with K-9 Mail. Please excuse my
brevity.__
Hi,
What is this image's general purpose?
To have a containerized Squid, easy to install and upgrade, and in my
case to run multiple proxies on the same machine.
Enabled options, here:
https://gitlab.com/fredbcode-images/squid/-/blob/master/Dockerfile#L8
Squid is automatically compiled, tested (I will ad
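A typical way to run it looks like this; the published port and the config
path inside the container are assumptions on my part, check the image's
README for the real ones:
docker run -d --name squid \
  -p 3128:3128 \
  -v /srv/squid/squid.conf:/etc/squid/squid.conf \
  fredbcode/squid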
Hi All,
In practice, how do you maintain the CA files? I'm testing SSLBump with
Debian Jessie; the ca-certificates package provides many certificates,
but fewer than the latest Firefox browser.
How do you manage to keep all that in check? When a CA is missing, do
you add the PEM to your system config, or ex
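One option I'm considering is pointing Squid at the distro bundle and letting
the ca-certificates package keep it current; the directive depends on the
Squid version, and the path is the usual Debian one:
# Squid 3.5
sslproxy_cafile /etc/ssl/certs/ca-certificates.crt
# Squid 4 and later
tls_outgoing_options cafile=/etc/ssl/certs/ca-certificates.crt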
FredB
___
Hi Yuri,
200 Mbit/s, more or less 1000-2000 simultaneous users.
I increased the children value because the limit is reached very quickly.
> and only 100 MB on disk?
100 MB per process, no? I think I should reduce this value and instead
increase the maximum number of children.
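For reference, the knobs in question are these; a sketch only, with the
helper path and db location as placeholders for whatever your build uses:
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 16MB
sslcrtd_children 64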
Maybe such load is just impossible
SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
It can be very, very useful for analysis.
Thanks
FredB
___
Sorry, it was just a wrong cut/paste: cache_size=50MB. The previous
result is still the same.
Regarding children, I tried 256; unfortunately Squid is still stuck at 100%.
Regards
Fred
___
I agree; to be honest I started with low values and updated again and
again. I should have posted my previous tests rather than the latest :)
___
3:43 +0100] "GET
https://bugs.squid-cache.org/ HTTP/1.1" 503 353 349 NONE:HIER_NONE
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" -
Cache.log
ssl3_get_server_certificate:certificate verify failed (1/-1/0)
Am I missing something?
Thanks
Hi Eliezer
It's just what I'm seeing and it works well; with the fetched_certificate
rule the first point is now fixed.
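For readers finding this thread later, the rule in question is presumably the
one documented for Squid 4's certificate fetching:
acl fetched_certificate transaction_initiator certificate-fetching
http_access allow fetched_certificate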
___
Now Squid can directly fetch the intermediate CA as a browser does; it's
a very interesting feature to me.
Maybe I'm missing something, but I can see the request from Squid now
(with Squid 4), which is a good point. My sslbump config is very basic,
perhaps too basic:
acl step at_step SslBump1
ssl_bump
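The truncated lines were presumably along these lines, a typical minimal
peek-then-bump setup:
acl step at_step SslBump1
ssl_bump peek step
ssl_bump bump all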
Sorry, wrong topic.
On 15/01/2019 at 18:08, FredB wrote:
> Now squid can get directly the intermediate CA as a browser does, it's
> a very interesting feature to me [...]
Yes it works; my first issue is now resolved.
There is a 200 when the automatic download occurs, so this part is good.
Unfortunately there is still a 503 at the third request; is a specific
bump configuration needed?
- - - [15/Jan/2019:16:33:43 +0100] "GET
http://cert.int-x3.letsencrypt.org/
ownload CA (wget for
example)
Perhaps this is a "bug", because pkix-cert is used by browsers (or
client software) to automatically add CAs:
https://www.iana.org/assignments/media-types/application/pkix-cert
FredB
___
ctory from my system and it seems pretty outdated (Debian 9). Is there a
link somewhere about, for example, using the latest Mozilla CA bundle in
Squid?
FredB
___
Hello all,
I'm playing with Squid 4 and e2guardian as an ICAP server.
I'm seeing something I misunderstand: when an SSL website is blocked,
e2guardian returns an encapsulated "HTTP/1.1 403 Forbidden" header. This
part seems good to me; with an encrypted website a denied or redirection
page can't be ad
Hello Alex
But unfortunately Squid adds a "Connection: keep-alive" header
It is not clear _why_ you consider that header "unfortunate" and the
connection "wasted". That header may or may not be wrong, and the
connection may or may not be reusable, depending on many factors (that
you have not s
As a workaround, you can try disabling client-to-Squid persistent
connections (client_persistent_connections off) or changing your ICAP
service to produce a response with a non-empty 403 body.
You are right, this is a browser bug (Firefox, at least recent versions)
and this issue can be resol
Amos, Alex
I thought you might be interested: there was a bug in Firefox with huge
impact for some configurations.
https://bugzilla.mozilla.org/show_bug.cgi?id=1522093
Regards
Fredb
___
Thanks. There are a lot of impacts here: response time, load average,
etc. Unfortunately we have to wait until FF 66 (and later) is installed
everywhere to fix that...
I'm really surprised that there are not more messages about this.
Fred
___
Hello,
when an SSL website request is dropped by the proxy, with FF the
connection is not cleanly finished.
Example of this here, first message:
https://bugzilla.mozilla.org/show_bug.cgi?id=1522093
___
Yes, here is my usage case:
1 - Squid as an explicit proxy connected to e2guardian with ICAP
2 - e2guardian blocks an SSL website (no bump); a 403 header is returned ->
I tried 302, 307, 200, without any more success
3 - With IE or Chrome the connection is properly dropped, but with FF (61 ->
next 67) the conne
___
Hello
I wonder if I can use NTLM auth without any integration into AD?
Just query the AD for user/password, can I do that?
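What I have in mind is a plain LDAP check, something like this sketch; every
hostname, base DN and path is a placeholder, and I know Basic sends the
password in cleartext to the proxy:
auth_param basic program /usr/lib/squid/basic_ldap_auth \
    -R -b "dc=example,dc=local" \
    -D "cn=squid,cn=Users,dc=example,dc=local" -W /etc/squid/ldappass \
    -f "sAMAccountName=%s" -h dc.example.local
auth_param basic realm proxy
acl ldap_users proxy_auth REQUIRED
http_access allow ldap_users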
Regards
Fred
___
> The SMB_LM helper performs a downgrade attack on the NTLM protocol
> and
> decrypts the resulting username and password. Then logs into AD using
> Basic auth.
> This requires that the client supports the extremely insecure LM
> auth.
> Any sane client will not.
>
> Alternatively, the 'fake'
>
> I have the same issue and racked my brain trying to find a solution.
> Now, I
> see there is no solution for this yet.
>
> I would appreciate so much if this feature were made available in the
> future.
>
> Eduardo Carneiro
>
>
Yes http://bugs.squid-cache.org/show_bug.cgi?id=4607
___
/root/soso/userIP.conf
Give it a try with /tmp:
/tmp/userIP.conf
Fred
___
Now you should use another directory, a less insecure one I mean;
/tmp is r/w for everyone...
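Something along these lines, for example; the directory is my suggestion and
the squid user/group name varies by distro (Debian uses "proxy"):
install -d -o squid -g squid -m 750 /etc/squid/acls
mv /tmp/userIP.conf /etc/squid/acls/
chown squid:squid /etc/squid/acls/userIP.conf
chmod 640 /etc/squid/acls/userIP.conf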
___
If really needed, there is a patch here:
http://bugs.squid-cache.org/show_bug.cgi?id=3792
But as Amos said, this patch is incomplete: the CONNECT XFF header
contents should also be added to the bumped requests.
Fred
___
I do not see this; do you have anything particular? SSLBump maybe? SMP?
___
Hello,
I'm searching for a way to exclude a user (account) or an IP on my LAN.
I can exclude a destination domain from decryption with ssl_bump, but
not all requests from a specific source, maybe because I'm using
x-forwarded-for?
Thanks
Fred
___
> but not all requests from a specific source
> what do you mean here?
I mean no ssl_bump at all for a specific user, no matter the destination.
I tried some acls without success.
>>, maybe because I'm using x-forwarded ?
> x-forwarded-for has nothing to do with this
There is a known bug with s
Hello,
FYI, I'm reading some parts of the code and I found two little spelling errors.
FredB
---
--- src/client_side.cc 2016-10-09 21:58:01.0 +0200
+++ src/client_side.cc 2016-12-14 10:57:12.915469723 +0100
@@ -2736,10 +27
So how can I manage computers without my CA? (e.g. a laptop temporarily
connected)
In my situation I also have some smartphones connected to my squids in
some cases; how can I exclude them from SSLBump?
I already have some ACLs based on authentication (user azerty =
with/without some rules)
FredB
But in practice I don't know how you can do that; just "hello, I want a
subordinate root certificate"?
FredB
___
Thanks Eliezer
Unfortunately my "LAN" is huge, many thousands of people, and the MAC
addresses are not known.
I'm very surprised; am I alone with this? Does nobody need to exclude
some users from SSLBump?
Fredb
___
>
> acl tls_s1_connect at_step SslBump1
>
> acl tls_vip_users fill-in-your-details
>
> ssl_bump splice tls_vip_users # do not peek/bump vip users
> ssl_bump peek tls_s1_connect  # peek at connections of other users
> ssl_bump stare all            # peek
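To fill in the tls_vip_users line for a source-based exclusion, something
like this should do; the addresses and file path are examples:
acl tls_vip_users src 192.0.2.10 192.0.2.11
# or maintain the list in a file:
acl tls_vip_users src "/etc/squid/vip_ips"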
Hello,
I'm debugging e2guardian and I found something in the Squid log: the
X-Forwarded-For IP seems to not always be recorded? I saw nothing
particular with tcpdump, so I made a change in the e2guardian code to
show the header passed.
--- With problem -
E2 Debug:
Apr 10 09:07:49 pro
A delay_pool mixed with an acl like this:
acl ldap_auth proxy_auth REQUIRED
delay_access 1 allow ldap_auth
delay_access 1 deny all
A delay_class 4 should be good.
Fred
___
My answer was only for this point
> Would be necessary for me to do so for including some traffic based
> limitations for each user
I don't know RADIUS with Squid, but I guess you have an acl like this:
acl radius-auth proxy_auth REQUIRED (or something close)
In this case I guess you can e
Hi all,
Is there a way to approximately estimate the CPU/memory "cost" of SSLBump?
What do you see in practice?
Some features are incompatible with SMP, so I'm using a single process;
Squid is using more or less 30-40% of the CPU.
I have approximately 1000 users simultaneously connected
> >
> > I’ve set up a Squid as a transparent child-proxy. Every request is
> > redirected to another Squid with the content filtering add-on
> > e2guardian. I encounter the problem that the transparent child
> > Squid
> > only forwards IP-Addresses to the e2guardian when HTTPS is used and
> > so
You can easily do this with an acl; delay_pools is a very powerful tool.
E.g. a bandwidth of 64k for each identified user, except for acl bp, and
only during the times in acl desk (see the sketch after the acls):
acl my_ldap_auth proxy_auth REQUIRED
acl bp dstdom_regex "/etc/squid/limit"
acl desk time 09:00-12:00
acl
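Completing the sketch along the lines the acls above suggest; class 4 adds a
per-user bucket, and the 64000 values are bytes per second, illustrative only:
delay_pools 1
delay_class 1 4
delay_parameters 1 -1/-1 -1/-1 -1/-1 64000/64000
delay_access 1 allow my_ldap_auth !bp desk
delay_access 1 deny all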
I guess you have an acl with proxy_auth?
Something like acl ldapauth proxy_auth REQUIRED?
Then you can just add http_access allow ldapauth !pdfdoc, and perhaps
http_access allow pdfdoc after it.
Fred
___
>
> auth_param basic program /usr/sbin/squid_ldap_auth -b T=MYDOMAIN -f
> "uid=%s"
> -s sub -h 192.168.1.1 acl password
> auth_param basic children 10
> auth_param basic realm Internetzugang im VERWALTUNGSNETZ FAL-BK:
> Bitte mit
> den Daten aus diesem Netzwerk anmelden!
> acl password proxy_auth
Hello
I migrated my Squid to the latest version, 3.5.16 (from 3.5.10), and now
I have many, many "Vary loop objects" warnings.
What happened? I made no configuration changes.
After 1 hour:
Squid 3.5.16
grep "Vary" /var/log/squid/cache.log | wc -l
18176
Squid 3.5.10
grep "Vary" /var/log/squid/cache.log | wc
> Objet: Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)
>
> intercept ??
No, implicit proxy
> I got an excellent result but not the correct way; it's an old issue.
> Maybe I was not posting the issue in the correct way for the devs to
> understand.
Very recent for me; no problem with
>
> Version 4.0.8 has the same issue after upgrading without cache
> clean-up.
>
Thanks, I will test. I confirm the problem is still present after a while.
E.g. this object seems to never be cleaned/fixed in the cache.
Snip, there are many requests before...
2016/04/04 13:39:11 kid1| varyEvaluateMatch:
>
> Hmm, the code is the same; it must be something else corrupting the
> vary before varyEvaluateMatch()
>
This ?
http://www.squid-cache.org/Versions/v3/3.5/changesets/squid-3.5-14016.patch
___
>
> Thanks I will test, I confirm the problem still present after a while
> Eg: this object seems never cleaned/fixed from cache
>
No more success with a fresh cache; after 5 minutes the messages appear
again and again.
Joe is right, there is a bug somewhere.
>
> I can provide a testing patch, just for testing... not for production
> until they find the right cause.
> But make sure the headers are public for those links; your situation
> might be different...
I will, but later, on a test platform.
For now I will fall back to a previous release.
_
Hi Amos,
I confirm, cleaning the cache (mkfs in my case) does not fix the issue.
Fred
___
>
> As I'm currently updating too: is this a bug, or do I only have to
> clear the old cache directories to prevent these error messages?
>
As far as I know clearing does not help; I tried.
___
Oh sorry.
OK, it seems to work for me.
___
>
> Attached is a patch which I think will fix 3.5.16 (should apply fine
> on
> 4.0.8 too) without needing the cache reset. Anyone able to test it
> please?
>
Resetting the cache is still needed, at least in my case.
Fred
___
Amos, I don't know whether this is related or not, but I have a lot of:
2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:50| Could not parse h
Hello all,
I'm trying to use a server with 64 GB of RAM, but I'm facing a problem:
Squid can't work with more than 50% of the memory.
After that the swap is totally full and the kswapd process goes mad...
I tried vm.swappiness = 0 but had the same problem, perhaps a little
better; I also tried memory_
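FYI, this is the kind of tuning I'm experimenting with; values illustrative,
and the FAQ linked later in this thread notes that Squid's total footprint is
far more than cache_mem alone:
cache_mem 16 GB
memory_pools off            # let freed memory return to the OS
# or cap the pooled idle memory instead:
# memory_pools_limit 512 MB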
Thanks for your answer
> What is cache_mem ?
> See also http://wiki.squid-cache.org/SquidFaq/SquidMemory
>
Currently 25 GB.
I tried different values, but I guess it doesn't matter; the problem is
that the Squid limit is only 50% of RAM.
> > After that the swap is totally full and kswap process gone mad .
Maybe I'm wrong, but the server is also using a lot of memory for TCP:
cat /proc/net/sockstat
sockets: used 13523
TCP: inuse 8612 orphan 49 tw 31196 alloc 8728 mem 18237
UDP: inuse 14 mem 6
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
netstat -lataupen | wc -l
38780
Yes, I guess this is a good lead for me (more or less 2 now...).
Maybe half_closed_clients would help, but unfortunately it crashes
Squid, Bug 4156.
Fred
___
>
> Yes I guess this is a good track for me (more or less 2 now ...)
> Maybe half_closed should be help but unfortunately it crashes squid,
> Bug 4156
>
> Fred
Maybe this is also related to the post "Excessive TCP memory usage",
because I'
>
> You are mentioning ufdbGuard. Are its lists free for government use?
> If not, then I can not use it, since we have very strict purchasing
> requirements, even if it costs $1. And of course, I would have to go
> through evaluation, the usual learning curve etc.
>
> Don't get me wrong here, I
Hello,
I wonder what headers can be seen by Squid for an SSL website? Without
SSLBump, of course.
In my logs I'm seeing User-Agent, Proxy-Authorization and some others,
but when I try to add some new headers it works only with an HTTP
website.
Can't I do that? What are the limitations?
My goal i
Thanks Amos for your answer
Do you think I can use an alternate method to tag my users' requests?
Modifying/adding a header seems a bad idea.
Regards
Fred
___
OK thanks, so I will think about another way...
___
Hello All,
I tried rock store and SMP a long time ago (Squid 3.2, I guess).
Unfortunately I definitively dropped SMP because there are some
limitations (in my case), and I fell back to diskd because there were
many bugs with rock store. FYI, I also switched to aufs without big
differences.
But now with
>
> We use SMP and Rock under the 3.5 series without problems. But I
> don't
> think any of our sites have as high req/sec load as you.
Thanks for your answer.
Can you please describe your load and configuration?
No crashes?
Fred
___
> Set the environment with the below command
> export http_proxy="192.168.1.2:8080"
>
HTTP
> But when I download the website through http it is downloaded:
> wget http://google.com
>
HTTPS
export https_proxy="192.168.1.2:8080"
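As an aside, many tools expect a scheme in the proxy variables; a fuller test
might look like this (same address as in your example):
export http_proxy="http://192.168.1.2:8080"
export https_proxy="http://192.168.1.2:8080"
wget https://google.com   # HTTPS now goes through the proxy as a CONNECT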
Fred
___
Hello,
I saw this in the rock store documentation:
If possible, Squid using Rock Store creates a dedicated kid
process called "disker" to avoid blocking Squid worker(s) on disk
I/O. One disker kid is created for each rock cache_dir. Diskers
are created only when Squid,
>
> --enable-disk-io=AIO,Blocking,DiskThreads,IpcIo,Mmapped
Sorry, wrong: it crashes with diskd only because DiskDaemon is missing.
>
>
> But there is a segfault at start; FYI, same result with diskd...
>
> OK, so I'm trying now --enable-disk-io=yes and there is no more disker
> process. Am I doing somethi
Hi Alex
> Normally, you do not need any ./configure options to enable Rock
> support, including support for a stand-alone disker process. If you
> want
> to enable IpcIo explicitly, you may, but I would first check whether
> it
> was enabled without any --enable-disk-io options:
>
> > $ fgrep Ip
I forgot:
/cache1:
total 212380
drwxrwxrwx 3 squid root 4096 sept. 1 09:00 .
drwxr-xr-x 26 root root 4096 nov. 17 2015 ..
drwxrwxrwx 2 squid root 16384 août 31 09:12 lost+found
-rwxrwxrwx 1 squid squid 13631488 sept. 1 09:14 rock
/cache2:
total 204584
drwxrw
>
> [Unit]
> Description=Squid Web Proxy Server
> After=network.target
>
> [Service]
> Type=simple
> ExecStart=/usr/sbin/squid -sYC -N
Yes, this is the default value:
http://bazaar.launchpad.net/~squid/squid/3.5/view/head:/tools/systemd/squid.service
I guess this is wrong, no?
Fred
__
About this, should I open a bug? Or do you think I missed something?
Maybe I'm wrong, but it seems there is something bad with diskers/rock
store:
Squid without -N + Squid compiled with IpcIo + rock =
>
> FATAL: Rock cache_dir at /cache1/rock failed to open db file: (2) No
> such file or directory
I will take a look, thanks.
But there is no SMP configuration, just rock and Squid with two caches.
___
Just for information: no problem after two weeks.
Unfortunately I can't test with IpcIo now (a problem with systemd), but
rock store is very stable.
___
One thing: Squid restart is very slow because of the time required to
rebuild the cache:
2016/09/13 00:25:34| Took 1498.42 seconds (3972.24 objects/sec). -> Rock
2016/09/13 00:00:51| Took 5.71 seconds (533481.90 objects/sec). -> Diskd
___
Hello Alex, and thank you for the explanations. I forgot to say, but of
course the test is running on the same hardware and the same full caches
(2 SATA drives, 15k rpm, 123 GB of cache each).
I will return to diskd now, because point 2 is annoying for me, but rock
seems very promising.
Hello,
I'm testing SSLBump and it works well; however, I'm seeing something
strange with two proxies and x-forwarded-for enabled on the first: some
requests are logged with the first proxy's address (see the sketch after
the diagram).
user -> squid (forwarded_for on) -> squid (follow_x_forwarded_for allow all) ->
Net
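For completeness, the second proxy would normally trust only the first one
rather than allow all; a sketch, with a placeholder address:
acl proxy1 src 192.0.2.1             # address of the first proxy
follow_x_forwarded_for allow proxy1
follow_x_forwarded_for deny all
log_uses_indirect_client on          # log the original client IP (the default)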
Here is the log from th
>
> Above are bumped requests sent inside the tunnel. Proxy #1 did not
> interact with them, so it has no way to add XFF headers.
>
> The SSL-Bump logic does not yet store some things like indirect
> client
> IP and associate them with the bumped requests.
>
> Amos
>
Ok thank you, there is a
Hello All,
I'm searching for a way to use a secure SSO with Squid; how did you
implement the authentication method with an implicit proxy?
I'm reading a lot of documentation about SAML, but I found nothing about
Squid.
I guess we can only do something with cookies?
Does anyone know if it's possible?
Thanks
I forgot: if possible, a method without Active Directory.
___
> Hi Fred,
> I assume that by "implicit" you mean "transparent" or
> "interception". Short answer, not possible: there is nothing to
> anchor
> cookies to. It could be possible to fake it by having an auxiliary
> website doing standard SAML and feeding a database of associations
> userid-ip. It
>
>
> Proxies only support "HTTP authentication" methods: Basic, Digest,
> NTLM ,etc. So you either have to use one of those, or perhaps "fake"
> the creation of one of those...?
>
>
> eg you mentioned SAML, but gave no context beyond saying you didn't
> want AD. So let's say SAML is a require
Hello,
I found no way to do that, so I changed my mind.
Can I authenticate a user to Squid with a certificate? I'm thinking
about a smart card.
If yes, can the user name be saved in the Squid log file?
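For the log part, I'm assuming something like this could work once the
listening port requests TLS client certificates; the format name is mine, and
it records the certificate subject rather than a classic username:
logformat certlog %ts.%03tu %>a %ssl::>cert_subject "%rm %ru" %>Hs
access_log /var/log/squid/access.log certlog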
Thanks
Fred
___
Hello All,
When Squid is connected to an ICAP server, is there a known list of the
information transmitted?
I'm thinking of the username with Kerberos, or some specific headers.
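For the username part, these directives look relevant; the header name below
is only a common choice, not a requirement:
adaptation_send_client_ip on
adaptation_send_username on
icap_client_username_header X-Authenticated-User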
Regards
Fred
___
> I am aware of folks successfully using certificate-based
> authentication
> in production today, but they are still running v3.3-based code (plus
> many patches). I am not aware of any regressions in that area, but
> since
> there is no adequate regression testing, Amos is right: YMMV.
>
> Alex
Thanks, great. If I understand correctly there is no missing data; the
complete request (headers + body) can be transmitted to an ICAP server?
___
Aufs ?
Fred
___
I have this problem regularly with aufs (for a long time...).
Sorry, I know no solution except purging the cache.
I'm using diskd to avoid this.
Fred
___
Hello,
I wonder if Squid can pass a different login/password to another proxy,
depending on an ACL?
I mean:
1) a client connects to Squid without any identification helper like
NTLM, Basic, etc...
2) an ACL like IP src, browser, header, ... forwards the request to
another Squid with a login/p
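I'm thinking of something like cache_peer plus cache_peer_access; a sketch
with hypothetical upstreams, credentials and client network:
cache_peer parent1.example.com parent 3128 0 no-query login=user1:pass1 name=peer1
cache_peer parent2.example.com parent 3128 0 no-query login=user2:pass2 name=peer2
acl from_lab src 10.1.0.0/16
cache_peer_access peer1 allow from_lab
cache_peer_access peer1 deny all
cache_peer_access peer2 deny from_lab
never_direct allow all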